https://en.wikipedia.org/wiki/Pressure
Pressure
Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure) is the pressure relative to the ambient pressure.

Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m²); similarly, the pound-force per square inch (psi, symbol lbf/in²) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the unit atmosphere (atm) is equal to this pressure, and the torr is defined as 1/760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer.

Definition

Pressure is the amount of force applied perpendicular to the surface of an object per unit area. The symbol for it is "p" or P. The IUPAC recommendation for pressure is a lower-case p. However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style.

Formula

Mathematically:

    p = F / A,

where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface on contact. Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates these two normal vectors:

    dF_n = −p dA = −p n dA.

The minus sign comes from the convention that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation.

It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume.

Units

The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m², or kg·m⁻¹·s⁻²). This name for the unit was added in 1971; before that, pressure in SI was expressed in newtons per square metre. Other units of pressure, such as pounds per square inch (lbf/in²) and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm⁻², or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre ("g/cm²" or "kg/cm²") and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is deprecated in SI. The technical atmosphere (symbol: at) is 1 kgf/cm² (98.0665 kPa, or 14.223 psi).
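To make the defining relation concrete, here is a minimal Python sketch of p = F/A together with two of the unit relations quoted above (the function and constant names are this example's own, not from the article):

    # Pressure from a normal force F spread over an area A: p = F / A.
    def pressure(force_newtons: float, area_m2: float) -> float:
        """Return pressure in pascals (N/m^2)."""
        return force_newtons / area_m2

    PA_PER_PSI = 6894.757        # 1 lbf/in^2 in pascals (approximate)
    TECH_ATM_PA = 98066.5        # technical atmosphere: 1 kgf/cm^2 in pascals

    p = pressure(100.0, 0.01)    # 100 N acting on 0.01 m^2
    print(p)                     # 10000.0 Pa, i.e. 10 kPa
    print(p / PA_PER_PSI)        # about 1.45 psi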
Pressure is related to energy density and may be expressed in units such as joules per cubic metre (J/m³, which is equal to Pa). Mathematically:

    1 J/m³ = 1 N·m/m³ = 1 N/m² = 1 Pa.

Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, except aviation, where the hecto- prefix is commonly used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth.

The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101,325 Pa.

Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation

    p = ρgh,

where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury (or inches of mercury) are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity.

These manometric units are still encountered in many fields. Blood pressure is measured in millimetres (or centimetres) of mercury in most of the world, and lung pressures in centimetres of water are still common. Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar (= 10,000 Pa) and is not the same as a linear metre of depth; 33.066 fsw = 1 atm, so one fsw corresponds to 101,325 Pa / 33.066 ≈ 3,064.3 Pa. The pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft.

Gauge pressure is often given in units with "g" appended, e.g. "kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure; for example, writing "p_g = 100 psi" rather than "p = 100 psig". Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close.
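The hydrostatic relation p = ρgh behind these manometric units can be checked numerically; a small Python sketch, using nominal values for g and the fluid densities (the exact constants are this example's assumptions):

    G = 9.80665                  # standard gravity, m/s^2
    RHO_MERCURY = 13595.1        # kg/m^3, conventional density used for mmHg
    RHO_WATER = 1000.0           # kg/m^3, nominal

    def column_pressure(rho: float, h_metres: float, g: float = G) -> float:
        """Pressure at the base of a fluid column: p = rho * g * h (Pa)."""
        return rho * g * h_metres

    print(column_pressure(RHO_MERCURY, 0.760))   # ~101325 Pa: 760 mmHg = 1 atm
    print(column_pressure(RHO_WATER, 10.0))      # ~98066 Pa: ~10 m of water per bar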
Presently or formerly popular pressure units include the following:
atmosphere (atm);
manometric units: centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury, and the height of an equivalent column of water, including millimetre (mm H₂O), centimetre (cm H₂O), metre, inch, and foot of water;
imperial and customary units: kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch, short ton-force and long ton-force per square inch, and fsw (feet sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression;
non-SI metric units: bar, decibar, millibar, msw (metres sea water, used in underwater diving, particularly in connection with diving pressure exposure and decompression), kilogram-force, or kilopond, per square centimetre (technical atmosphere), gram-force and tonne-force (metric ton-force) per square centimetre, barye (dyne per square centimetre), kilogram-force and tonne-force per square metre, and sthene per square metre (pieze).

Examples

As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density.

Another example is a knife. If the flat edge is used, force is distributed over a larger surface area, resulting in less pressure, and it will not cut. Using the sharp edge, which has less surface area, results in greater pressure, and so the knife cuts smoothly. This is one example of a practical application of pressure.

For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46.7 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred.

Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa, a gas (such as helium) at 200 kPa (gauge) (300 kPa [absolute]) is 50% denser than the same gas at 100 kPa (gauge) (200 kPa [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one.
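The density pitfall described above is easy to demonstrate; a short Python sketch (the 100 kPa atmosphere is the assumption used in the text):

    ATMOSPHERE_KPA = 100.0   # assumed atmospheric pressure

    def absolute(gauge_kpa: float, atm_kpa: float = ATMOSPHERE_KPA) -> float:
        """Convert a gauge pressure to an absolute pressure."""
        return gauge_kpa + atm_kpa

    # For an ideal gas at fixed temperature, density scales with absolute pressure:
    ratio = absolute(200.0) / absolute(100.0)
    print(ratio)             # 1.5 -> 50% denser, not the 2x suggested by 200/100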
Scalar nature

In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because there are an extremely large number of molecules and because the motion of the individual molecules is random in every direction, no motion is detected. When the gas is at least partially confined (that is, not free to expand rapidly), the gas will exhibit a hydrostatic pressure. This confinement can be achieved with either a physical container of some sort, or in a gravitational well such as a planet, otherwise known as atmospheric pressure. In the case of planetary atmospheres, the pressure-gradient force of the gas pushing outwards from higher pressure, lower altitudes to lower pressure, higher altitudes is balanced by the gravitational force, preventing the gas from diffusing into outer space and maintaining hydrostatic equilibrium.

In a physical container, the pressure of the gas originates from the molecules colliding with the walls of the container. The walls of the container can be anywhere inside the gas, and the force per unit area (the pressure) is the same. If the "container" is shrunk down to a very small point (becoming less true as the atomic scale is approached), the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angles) to the surface.

A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via the linear relation F = σA. This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested.

Types

Fluid pressure

Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see the section below.) Fluid pressure occurs in one of two situations: an open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere; or a closed condition, called "closed conduit", e.g. a water line or gas line.

Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli.
Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction; it is inviscid (zero viscosity). The equation for all points of a system filled with a constant-density fluid is

    p/γ + v²/2g + z = const,

where: p is the pressure of the fluid, γ = ρg (density × acceleration of gravity) is the (volume-) specific weight of the fluid, v is the velocity of the fluid, g is the acceleration of gravity, z is the elevation, p/γ is the pressure head, and v²/2g is the velocity head.

Applications

Hydraulic brakes
Artesian well
Blood pressure
Hydraulic head
Plant cell turgidity
Pythagorean cup
Pressure washing

Explosion or deflagration pressures

Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces.

Negative pressures

While pressures are, in general, positive, there are several situations in which negative pressures may be encountered:
When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). For example, abdominal decompression is an obstetric procedure during which negative gauge pressure is applied intermittently to a pregnant woman's abdomen.
Negative absolute pressures are possible. They are effectively tension, and both bulk solids and bulk liquids can be put under negative absolute pressure by pulling on them. Microscopically, the molecules in solids and liquids have attractive interactions that overpower the thermal kinetic energy, so some tension can be sustained. Thermodynamically, however, a bulk material under negative pressure is in a metastable state, and it is especially fragile in the case of liquids, where the negative pressure state is similar to superheating and is easily susceptible to cavitation. In certain situations, the cavitation can be avoided and negative pressures sustained indefinitely; for example, liquid mercury has been observed to sustain large negative pressures in clean glass containers. Negative liquid pressures are thought to be involved in the ascent of sap in plants taller than 10 m (the atmospheric pressure head of water).
The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum).
For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive stress along one surface normal, with a component of negative stress acting along another surface normal. The pressure is then defined as the average of the three principal stresses. The stresses in an electromagnetic field are generally non-isotropic, with the stress normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this.
In cosmology, dark energy creates a very small yet cosmically significant amount of negative pressure, which accelerates the expansion of the universe.

Stagnation pressure

Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by

    p₀ = ½ρv² + p,

where p₀ is the stagnation pressure, ρ is the density, v is the flow velocity, and p is the static pressure.
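A quick numerical illustration of this relation, as a Python sketch (the air density and flow speeds are arbitrary example values):

    # Stagnation pressure p0 = p + (1/2) * rho * v**2: the pressure recovered
    # when the moving fluid is brought to a standstill.
    def stagnation_pressure(static_pa: float, rho: float, v: float) -> float:
        return static_pa + 0.5 * rho * v**2

    RHO_AIR = 1.225                      # kg/m^3, near sea level
    for v in (10.0, 50.0, 100.0):        # flow speeds in m/s
        print(v, stagnation_pressure(100000.0, RHO_AIR, v))
    # The dynamic term grows as v**2, so fast flows stand out clearly.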
The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.

Surface pressure and surface tension

There is a two-dimensional analog of pressure: the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π,

    π = F/l,

and shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite to "pressure".

Pressure of an ideal gas

In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume:

    p = nRT/V,

where: p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, and R is the ideal gas constant. Real gases exhibit a more complex dependence on the variables of state.
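As a numerical check of the ideal gas law, one mole at standard temperature in the classic molar volume gives about one atmosphere (a Python sketch; the constants are standard reference values, not figures from the article):

    R = 8.314462618   # ideal gas constant, J/(mol*K)

    def ideal_gas_pressure(n_mol: float, t_kelvin: float, v_m3: float) -> float:
        """Absolute pressure p = n*R*T/V in pascals."""
        return n_mol * R * t_kelvin / v_m3

    print(ideal_gas_pressure(1.0, 273.15, 22.414e-3))   # ~101325 Pa, about 1 atm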
Vapour pressure

Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure.

Liquid pressure

When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid more dense than water, the pressure would be correspondingly greater. Thus, we can say that depth, density and liquid pressure are directly proportional. The pressure due to a liquid in liquid columns of constant density, at a depth within a substance, is represented by the following formula:

    p = ρgh,

where: p is liquid pressure, g is gravity at the surface of the overlaying material, ρ is the density of the liquid, and h is the height of the liquid column or depth within a substance. Another way of saying the same formula is the following: the pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid.

If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths.

Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure.

The pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake exerts only half the average pressure against its dam that a small pond twice as deep does. (The total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon. But for a given equally wide section of each dam, the shallower water will apply only one quarter the force of the water twice as deep.) A person will feel the same pressure whether their head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake.

If four interconnected vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference which vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the neighboring vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, which is why water seeks its own level.

Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid, and the two energy components change linearly with the depth.
Mathematically, it is described by Bernoulli's equation with the velocity head set to zero; the comparison per unit volume in the vessel is

    p/γ + z = const.

Terms have the same meaning as in the section Fluid pressure.

Direction of liquid pressure

An experimentally determined fact about liquid pressure is that it is exerted equally in all directions. If someone is submerged in water, no matter which way that person tilts their head, the person will feel the same amount of water pressure on their ears. Because a liquid can flow, this pressure is not only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a ball is pushed upward by water pressure (buoyancy).

When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure does not have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular force. This is why the velocity of a liquid particle changes only in the component normal to the wall when it collides with the container's wall. Likewise, if the collision site is a hole, water spurting from the hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is

    v = √(2gh),

where h is the depth below the free surface. As predicted by Torricelli's law, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h.

Kinematic pressure

    P = p/ρ₀

is the kinematic pressure, where p is the pressure and ρ₀ is the constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to compute the Navier–Stokes equation without explicitly showing the density ρ₀. The Navier–Stokes equation with kinematic quantities reads

    ∂u/∂t + (u · ∇)u = −∇P + ν∇²u.
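Returning to Torricelli's law from the section above, the efflux speed is simple to tabulate; a Python sketch (the hole depths are arbitrary example values):

    import math

    def efflux_speed(h_metres: float, g: float = 9.80665) -> float:
        """Torricelli's law: v = sqrt(2*g*h), the free-fall speed from depth h."""
        return math.sqrt(2.0 * g * h_metres)

    for depth in (0.1, 0.5, 1.0):    # holes at increasing depth below the surface
        print(depth, efflux_speed(depth))
    # Deeper holes eject water faster, hence farther, as described above.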
Physical sciences
Thermodynamics
https://en.wikipedia.org/wiki/Polygon
Polygon
In geometry, a polygon is a plane figure made up of line segments connected to form a closed polygonal chain. The segments of a closed polygonal chain are called its edges or sides. The points where two edges meet are the polygon's vertices or corners. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A simple polygon is one which does not intersect itself. More precisely, the only allowed intersections among the line segments that make up the polygon are the shared endpoints of consecutive segments in the polygonal chain. A simple polygon is the boundary of a region of the plane that is called a solid polygon. The interior of a solid polygon is its body, also known as a polygonal region or polygonal area. In contexts where one is concerned only with simple and solid polygons, a polygon may refer only to a simple polygon or to a solid polygon. A polygonal chain may cross over itself, creating star polygons and other self-intersecting polygons. Some sources also consider closed polygonal chains in Euclidean space to be a type of polygon (a skew polygon), even when the chain does not lie in a single plane. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes.

Etymology

The word polygon derives from the Greek adjective πολύς (polús) 'much', 'many' and γωνία (gōnía) 'corner' or 'angle'. It has been suggested that γόνυ (gónu) 'knee' may be the origin of -gon.

Classification

Number of sides

Polygons are primarily classified by the number of sides.

Convexity and intersection

Polygons may be characterized by their convexity or type of non-convexity:
Convex: any line drawn through the polygon (and not tangent to an edge or corner) meets its boundary exactly twice. As a consequence, all its interior angles are less than 180°. Equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints. This condition is true for polygons in any geometry, not just Euclidean.
Non-convex: a line may be found which meets its boundary more than twice. Equivalently, there exists a line segment between two boundary points that passes outside the polygon.
Simple: the boundary of the polygon does not cross itself. All convex polygons are simple.
Concave: non-convex and simple. There is at least one interior angle greater than 180°.
Star-shaped: the whole interior is visible from at least one point, without crossing any edge. The polygon must be simple, and may be convex or concave. All convex polygons are star-shaped.
Self-intersecting: the boundary of the polygon crosses itself. The term complex is sometimes used in contrast to simple, but this usage risks confusion with the idea of a complex polygon as one which exists in the complex Hilbert plane consisting of two complex dimensions.
Star polygon: a polygon which self-intersects in a regular way. A polygon cannot be both a star and star-shaped.

Equality and symmetry

Equiangular: all corner angles are equal.
Equilateral: all edges are of the same length.
Regular: both equilateral and equiangular.
Cyclic: all corners lie on a single circle, called the circumcircle.
Tangential: all sides are tangent to an inscribed circle.
Isogonal or vertex-transitive: all corners lie within the same symmetry orbit. The polygon is also cyclic and equiangular.
Isotoxal or edge-transitive: all sides lie within the same symmetry orbit. The polygon is also equilateral and tangential.
The property of regularity may be defined in other ways: a polygon is regular if and only if it is both isogonal and isotoxal, or equivalently it is both cyclic and equilateral. A non-convex regular polygon is called a regular star polygon.

Miscellaneous

Rectilinear: the polygon's sides meet at right angles, i.e. all its interior angles are 90 or 270 degrees.
Monotone with respect to a given line L: every line orthogonal to L intersects the polygon not more than twice.

Properties and formulas

Euclidean geometry is assumed throughout.

Angles

Any polygon has as many corners as it has sides. Each corner has several angles. The two most important ones are:

Interior angle – The sum of the interior angles of a simple n-gon is (n − 2)π radians or (n − 2) × 180 degrees. This is because any simple n-gon (having n sides) can be considered to be made up of (n − 2) triangles, each of which has an angle sum of π radians or 180 degrees. The measure of any interior angle of a convex regular n-gon is (n − 2)π/n radians or (n − 2)180/n degrees. The interior angles of regular star polygons were first studied by Poinsot, in the same paper in which he describes the four regular star polyhedra: for a regular p/q-gon (a p-gon with central density q), each interior angle is (p − 2q)π/p radians or (p − 2q)180/p degrees.

Exterior angle – The exterior angle is the supplementary angle to the interior angle. Tracing around a convex n-gon, the angle "turned" at a corner is the exterior or external angle. Tracing all the way around the polygon makes one full turn, so the sum of the exterior angles must be 360°. This argument can be generalized to concave simple polygons, if external angles that turn in the opposite direction are subtracted from the total turned. Tracing around an n-gon in general, the sum of the exterior angles (the total amount one rotates at the vertices) can be any integer multiple d of 360°, e.g. 720° for a pentagram and 0° for an angular "eight" or antiparallelogram, where d is the density or turning number of the polygon.

Area

In this section, the vertices of the polygon under consideration are taken to be (x₀, y₀), (x₁, y₁), ..., (x_{n−1}, y_{n−1}) in order. For convenience in some formulas, the notation (x_n, y_n) = (x₀, y₀) will also be used.

Simple polygons

If the polygon is non-self-intersecting (that is, simple), the signed area is

    A = (1/2) Σ_{i=0}^{n−1} (x_i y_{i+1} − x_{i+1} y_i)

or, using determinants,

    16 A² = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} det [ Q_{i,j}  Q_{i,j+1} ; Q_{i+1,j}  Q_{i+1,j+1} ],

where Q_{i,j} is the squared distance between (x_i, y_i) and (x_j, y_j). The signed area depends on the ordering of the vertices and on the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive x-axis to the positive y-axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or surveyor's formula.

The area A of a simple polygon can also be computed if the lengths of the sides, a₁, a₂, ..., a_n, and the exterior angles, θ₁, θ₂, ..., θ_n, are known, from a formula described by Lopshits in 1963.

If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior and boundary grid points: the former number plus one-half the latter number, minus 1.

In every polygon with perimeter p and area A, the isoperimetric inequality p² > 4πA holds. For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon.
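The signed-area (shoelace) formula above translates directly into code; a minimal Python sketch:

    def signed_area(vertices):
        """Shoelace formula: (1/2) * sum(x_i*y_{i+1} - x_{i+1}*y_i)."""
        n = len(vertices)
        s = 0.0
        for i in range(n):
            x0, y0 = vertices[i]
            x1, y1 = vertices[(i + 1) % n]   # wraps around: (x_n, y_n) = (x_0, y_0)
            s += x0 * y1 - x1 * y0
        return s / 2.0

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]    # counterclockwise vertex order
    print(signed_area(square))                   # 4.0 (positive orientation)
    print(signed_area(square[::-1]))             # -4.0 (clockwise order)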
The lengths of the sides of a polygon do not in general determine its area. However, if the polygon is simple and cyclic then the sides do determine the area. Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic).

Regular polygons

Many specialized formulas apply to the areas of regular polygons. The area of a regular polygon is given in terms of the radius r of its inscribed circle and its perimeter p by

    A = (1/2) p r.

This radius is also termed its apothem and is often represented as a. The area of a regular n-gon can be expressed in terms of the radius R of its circumscribed circle (the unique circle passing through all vertices of the regular n-gon) as follows:

    A = (1/2) n R² sin(2π/n).

Self-intersecting

The area of a self-intersecting polygon can be defined in two different ways, giving different answers:
Using the formulas for simple polygons, we allow that particular regions within the polygon may have their area multiplied by a factor which we call the density of the region. For example, the central convex pentagon in the center of a pentagram has density 2. The two triangular regions of a cross-quadrilateral (like a figure 8) have opposite-signed densities, and adding their areas together can give a total area of zero for the whole figure.
Considering the enclosed regions as point sets, we can find the area of the enclosed point set. This corresponds to the area of the plane covered by the polygon or to the area of one or more simple polygons having the same outline as the self-intersecting one. In the case of the cross-quadrilateral, it is treated as two simple triangles.

Centroid

Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are

    C_x = (1/(6A)) Σ_{i=0}^{n−1} (x_i + x_{i+1})(x_i y_{i+1} − x_{i+1} y_i),
    C_y = (1/(6A)) Σ_{i=0}^{n−1} (y_i + y_{i+1})(x_i y_{i+1} − x_{i+1} y_i).

In these formulas, the signed value of area A must be used. For triangles (n = 3), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for n > 3. The centroid of the vertex set of a polygon with n vertices has the coordinates

    c = (1/n) Σ_{i=0}^{n−1} (x_i, y_i).

Generalizations

The idea of a polygon has been generalized in various ways. Some of the more important include:
A spherical polygon is a circuit of arcs of great circles (sides) and vertices on the surface of a sphere. It allows the digon, a polygon having only two sides and two corners, which is impossible in a flat plane. Spherical polygons play an important role in cartography (map making) and in Wythoff's construction of the uniform polyhedra.
A skew polygon does not lie in a flat plane, but zigzags in three (or more) dimensions. The Petrie polygons of the regular polytopes are well known examples.
An apeirogon is an infinite sequence of sides and angles, which is not closed but has no ends because it extends indefinitely in both directions.
A skew apeirogon is an infinite sequence of sides and angles that do not lie in a flat plane.
A polygon with holes is an area-connected or multiply-connected planar polygon with one external boundary and one or more interior boundaries (holes).
A complex polygon is a configuration analogous to an ordinary polygon, which exists in the complex plane of two real and two imaginary dimensions.
An abstract polygon is an algebraic partially ordered set representing the various elements (sides, vertices, etc.) and their connectivity. A real geometric polygon is said to be a realization of the associated abstract polygon. Depending on the mapping, all the generalizations described here can be realized.
A polyhedron is a three-dimensional solid bounded by flat polygonal faces, analogous to a polygon in two dimensions. The corresponding shapes in four or higher dimensions are called polytopes. (In other conventions, the words polyhedron and polytope are used in any dimension, with the distinction between the two that a polytope is necessarily bounded.)
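Looking back at the centroid formulas in the Area section above, they can be implemented by reusing the cross terms from the shoelace sum; a Python sketch:

    def polygon_centroid(vertices):
        """Centroid (Cx, Cy) of a solid simple polygon; uses the signed area."""
        n = len(vertices)
        a = cx = cy = 0.0
        for i in range(n):
            x0, y0 = vertices[i]
            x1, y1 = vertices[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5                              # signed area A
        return cx / (6.0 * a), cy / (6.0 * a)

    print(polygon_centroid([(0, 0), (2, 0), (2, 2), (0, 2)]))   # (1.0, 1.0)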
Naming

The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions. Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon. Exceptions exist for side counts that are easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram. To construct the name of a polygon with more than 20 and fewer than 100 edges, a tens prefix is combined with a units prefix. The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity of concatenated prefix numbers in the naming of quasiregular polyhedra, though not all sources use it.

History

Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristophanes, found at Caere and now in the Capitoline Museum. The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century. In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons.

In nature

Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made. Regular hexagons can occur when the cooling of lava forms areas of tightly packed columns of basalt, which may be seen at the Giant's Causeway in Northern Ireland, or at the Devil's Postpile in California. In biology, the surface of the wax honeycomb made by bees is an array of hexagons, and the sides and base of each cell are also polygons.

Computer graphics

In computer graphics, a polygon is a primitive used in modelling and rendering. Polygons are defined in a database, containing arrays of vertices (the coordinates of the geometrical vertices, as well as other attributes of the polygon, such as color, shading and texture), connectivity information, and materials. Any surface is modelled as a tessellation called a polygon mesh. If a square mesh has n + 1 points (vertices) per side, there are n squared squares in the mesh, or 2n squared triangles, since there are two triangles in a square. There are (n + 1)²/(2n²) vertices per triangle; where n is large, this approaches one half. Or, each vertex inside the square mesh connects four edges (lines). The imaging system calls up the structure of polygons needed for the scene to be created from the database.
This is transferred to active memory and, finally, to the display system (screen, TV monitors etc.) so that the scene can be viewed. During this process, the imaging system renders polygons in correct perspective, ready for transmission of the processed data to the display system. Although polygons are two-dimensional, the system computer places them in a visual scene in the correct three-dimensional orientation. In computer graphics and computational geometry, it is often necessary to determine whether a given point lies inside a simple polygon given by a sequence of line segments. This is called the point in polygon test.
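A common way to implement the point in polygon test is ray casting: cast a ray from the point in the +x direction and count how many edges it crosses; an odd count means the point is inside. A minimal Python sketch (degenerate cases, such as the ray passing exactly through a vertex, need extra care):

    def point_in_polygon(px, py, vertices):
        inside = False
        n = len(vertices)
        for i in range(n):
            x0, y0 = vertices[i]
            x1, y1 = vertices[(i + 1) % n]
            if (y0 > py) != (y1 > py):                        # edge spans the ray's y
                x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
                if px < x_cross:                              # crossing lies to the right
                    inside = not inside
        return inside

    triangle = [(0, 0), (4, 0), (0, 4)]
    print(point_in_polygon(1, 1, triangle))   # True
    print(point_in_polygon(3, 3, triangle))   # False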
Mathematics
Geometry and topology
https://en.wikipedia.org/wiki/Primary%20mirror
Primary mirror
A primary mirror (or primary) is the principal light-gathering surface (the objective) of a reflecting telescope.

Description

The primary mirror of a reflecting telescope is a spherical, parabolic, or hyperbolic shaped disk of polished reflective metal (speculum metal up to the mid 19th century), or in later telescopes, glass or other material coated with a reflective layer. One of the first known reflecting telescopes, Newton's reflector of 1668, used a 3.3 cm polished metal primary mirror. The next major change was to use silver on glass rather than metal, in the 19th century, as with the Crossley reflector. This was changed to vacuum-deposited aluminum on glass, used on the 200-inch Hale telescope.

Solid primary mirrors have to sustain their own weight and not deform under gravity, which limits the maximum size for a single-piece primary mirror. Segmented mirror configurations are used to get around the size limitation on single primary mirrors. For example, the Giant Magellan Telescope will have seven 8.4 meter primary mirrors, with the resolving power equivalent to a 24.5 meter optical aperture.

Superlative primary mirrors

The largest optical telescope in the world as of 2009 to use a non-segmented single mirror as its primary mirror is the Subaru telescope of the National Astronomical Observatory of Japan, located at Mauna Kea Observatory on Hawaii since 1997. However, this is not the largest diameter single mirror in a telescope: the U.S./German/Italian Large Binocular Telescope has two 8.4 m mirrors (which can be used together for interferometric mode). Both of these are smaller than the 10 m segmented primary mirrors on the dual Keck telescopes. The Hubble Space Telescope has a 2.4 m primary mirror.

Radio and submillimeter telescopes use much larger dishes or antennae, which do not have to be made as precisely as the mirrors used in optical telescopes. The Arecibo Telescope used a 305 m dish, which was the world's largest single-dish radio telescope fixed to the ground. The Green Bank Telescope has the world's largest steerable single radio dish, 100 m in diameter. There are larger radio arrays, composed of multiple dishes, which have better image resolution but less sensitivity.
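The benefit of a larger primary can be sketched with the standard diffraction limit θ ≈ 1.22 λ/D (a textbook optics estimate, not a figure from this article); in Python:

    import math

    def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
        """Diffraction-limited angular resolution, in arcseconds."""
        theta_rad = 1.22 * wavelength_m / aperture_m
        return math.degrees(theta_rad) * 3600.0

    VISIBLE = 550e-9                 # 550 nm, middle of the visible band
    for d in (0.033, 2.4, 8.2):      # Newton's reflector, Hubble, Subaru (metres)
        print(d, resolution_arcsec(VISIBLE, d))
    # Larger apertures resolve proportionally finer detail (ignoring the atmosphere).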
Technology
Telescope
https://en.wikipedia.org/wiki/Protein
Protein
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.

A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing less than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can work together to achieve a particular function, and they often associate to form stable protein complexes.

Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years, with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.

Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Some proteins have structural or mechanical functions, such as actin and myosin in muscle, and the cytoskeleton's scaffolding proteins that maintain cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.

History and etymology

Discovery and early studies

Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C₄₀₀H₆₂₀N₁₀₀O₁₂₀P₁S₁. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (prōteios), meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation, such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da.

Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Osborne, alongside Lafayette Mendel, established several nutritionally essential amino acids in feeding experiments with laboratory rats. Diets lacking an essential amino acid stunted the rats' growth, consistent with Liebig's law of the minimum. The final essential amino acid to be discovered, threonine, was identified by William Cumming Rose.

The difficulty in purifying proteins impeded work by early protein biochemists. Proteins could be obtained in large quantities from blood, egg whites, and keratin, but individual proteins were unavailable. In the 1950s, the Armour Hot Dog Company purified 1 kg of bovine pancreatic ribonuclease A and made it freely available to scientists. This gesture helped ribonuclease A become a major target for biochemical study for the following decades.

Polypeptides

The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein. Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.

The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.

Structure

With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences.
The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power has supported the sequencing of complex proteins. In 1999, Roger Kornberg sequenced the highly complex structure of RNA polymerase using high-intensity X-rays from synchrotrons. Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has helped researchers to approach atomic-level resolution of protein structures. The Protein Data Bank contains 181,018 X-ray, 19,809 EM and 12,697 NMR protein structures.

Classification

Proteins are primarily classified by sequence and structure, although other classifications are commonly used. Especially for enzymes, the EC number system provides a functional classification scheme. Similarly, gene ontology classifies both genes and proteins by their biological and biochemical function, and by their intracellular location. Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).

Biochemistry

Most proteins consist of linear polymers built from series of up to 20 L-α-amino acids. All proteinogenic amino acids have a common structure where an α-carbon is bonded to an amino group, a carboxyl group, and a variable side chain. Only proline differs from this basic structure, as its side chain is cyclical, bonding to the amino group and limiting protein chain flexibility. The side chains of the standard amino acids have a variety of chemical structures and properties, and it is the combined effect of all amino acids that determines a protein's three-dimensional structure and chemical reactivity. The amino acids in a polypeptide chain are linked by peptide bonds between the amino group of one residue and the carboxyl group of the next. An individual amino acid in a chain is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.

The peptide bond has two resonance forms that confer some double-bond character to the backbone. The alpha carbons are roughly coplanar with the nitrogen and the carbonyl (C=O) group. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. One consequence of the N–C(O) double-bond character is that proteins are somewhat rigid. A polypeptide chain ends with a free amino group, known as the N-terminus or amino terminus, and a free carboxyl group, known as the C-terminus or carboxy terminus. By convention, peptide sequences are written N-terminus to C-terminus, correlating with the order in which proteins are synthesized by ribosomes. The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning.
Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues. Proteins can interact with many types of molecules and ions, including with other proteins, with lipids, with carbohydrates, and with DNA.

Abundance in cells

A typical bacterial cell, e.g. E. coli or Staphylococcus aureus, is estimated to contain about 2 million proteins. Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding proteins are expressed in most cells, and their number depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells. The most abundant protein in nature is thought to be RuBisCO, an enzyme that catalyzes the incorporation of carbon dioxide into organic matter in photosynthesis. Plants can consist of as much as 1% by weight of this enzyme.

Synthesis

Biosynthesis

Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is a set of three-nucleotide units called codons, and each three-nucleotide combination designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon.

Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.

The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base-pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.
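The codon arithmetic above is easy to verify; a Python sketch (the single table entry is just the methionine example from the text, not a full codon table):

    from itertools import product

    BASES = "ACGU"                   # the four RNA nucleotides
    codons = ["".join(c) for c in product(BASES, repeat=3)]
    print(len(codons))               # 64 = 4**3 possible codons
    # 64 codons for 20 standard amino acids implies redundancy in the code.

    CODON_TABLE = {"AUG": "Met"}     # AUG encodes methionine, as stated above
    print(CODON_TABLE.get("AUG"))    # Met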
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryota (283, 311 and 438 residues and 31, 34 and 49 kDa, respectively) due to a larger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.

Chemical synthesis
Short proteins can be synthesized chemically by a family of peptide synthesis methods. These rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as the attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the direction of the biological reaction.

Structure
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure:
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, the β-sheet and turns. Because secondary structures are local, many regions of distinct secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of the protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes.
Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, the physical region of the protein that participates in chemical catalysis. In solution, protein structures vary because of thermal vibration and collisions with other molecules. Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane. Intramolecular hydrogen bonds within proteins that are poorly shielded from water attack, and hence promote their own dehydration, are called dehydrons.

Protein domains
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually have specific functions, such as enzymatic activities (e.g. kinase), or they serve as binding modules.

Sequence motif
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.

Cellular functions
Proteins are the chief actors within the cell, said to carry out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome. The chief characteristic of proteins that allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10^-15 M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes, such as the addition of a single methyl group to a binding partner, can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine. Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers.
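The dissociation constants quoted above can be related to how occupied a binding site is through the standard single-site binding isotherm, θ = [L] / (Kd + [L]). A minimal Python sketch, with illustrative concentrations chosen to echo the two extremes in the text:

```python
# Single-site binding isotherm: fraction of protein with ligand bound,
# theta = [L] / (Kd + [L]). The concentrations below are illustrative only.
def fraction_bound(ligand_conc_m: float, kd_m: float) -> float:
    return ligand_conc_m / (kd_m + ligand_conc_m)

# A sub-femtomolar Kd (as for ribonuclease inhibitor binding angiogenin)
# is effectively saturated even at nanomolar ligand concentrations:
print(fraction_bound(1e-9, 1e-15))  # ~0.999999
# A very weak interaction (Kd on the order of 1 M, as for onconase)
# is essentially unoccupied at the same concentration:
print(fraction_bound(1e-9, 1.0))    # ~1e-9
```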
Protein–protein interactions regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks. As interactions between proteins are reversible and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of the interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.

Enzymes
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as post-translational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous; orotate decarboxylase, for example, confers as much as a 10^17-fold increase in rate over the uncatalysed reaction (78 million years without the enzyme, 18 milliseconds with the enzyme). The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction (three to four residues on average) that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site. Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.

Cell signaling and ligand binding
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell. Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints, and an antibody's binding affinity to its target can be extraordinarily high. Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism.
These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins. Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.

Structural proteins
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural functions; for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size. Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport.

Methods of study
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry. The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism, and can often provide more information about protein behavior in different contexts. In silico studies use computational methods to study proteins.

Protein purification
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography; the advent of genetic engineering has made possible a number of methods to facilitate purification. To perform in vitro analysis, a protein must be purified away from other cellular components.
This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing. For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column, while the untagged components of the lysate pass through unimpeded. A number of tags have been developed to help researchers purify specific proteins from complex mixtures.

Cellular localization
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures are often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy. Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, the plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence allows for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose. Other possibilities exist as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that is conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information.
Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does indicate an increased likelihood. Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of both ultrastructural details and the protein of interest. Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.

Proteomics
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins; mass spectrometry, which allows rapid, high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion); protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell; and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.

Structure determination
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, e.g. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments provide information from which only a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique, used for determining the internal β-sheet/α-helical composition of proteins. Cryo-electron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses; a variant known as electron crystallography can produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins.
Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data for thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein. Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required by X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.

Structure prediction
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to predict their molecular formations computationally, instead of detecting structures by laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes, roughly 33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is thus an important part of protein structure characterisation.

In silico simulation of dynamical processes
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical calculations have explored the electronic states of rhodopsins. Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree method and the hierarchical equations of motion approach, which have been applied to plant cryptochromes and bacterial light-harvesting complexes, respectively.
Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives such as the Folding@home project facilitate molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.

Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.

Digestion
In the absence of catalysts, proteins are slow to hydrolyze. The breakdown of proteins to small peptides and amino acids (proteolysis) is a step in digestion; these breakdown products are then absorbed in the small intestine. The hydrolysis of proteins relies on enzymes called proteases or peptidases. Proteases, which are themselves proteins, come in several types according to the particular peptide bonds that they cleave, as well as their tendency to cleave peptide bonds at the terminus of a protein (exopeptidases) versus peptide bonds in the interior of the protein (endopeptidases). Pepsin is an endopeptidase that acts in the stomach. After the stomach, the pancreas secretes other proteases to complete the hydrolysis; these include trypsin and chymotrypsin. Protein hydrolysis is employed commercially as a means of producing amino acids from bulk sources of protein, such as blood meal, feathers, and keratin. Such materials are treated with hot hydrochloric acid, which effects the hydrolysis of the peptide bonds.

Mechanical properties
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of its underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability compared to synthetic polymers, have made them desirable targets for next-generation materials design. Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε, i.e. E = σ/ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates with biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison, globular proteins such as bovine serum albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli. The Young's modulus of a single protein can be found through molecular dynamics simulation.
Using either atomistic force fields, such as CHARMM or GROMOS, or coarse-grained force fields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data. At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing. Experimentally observed values have been reported for a number of proteins.
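A minimal sketch of the stress/strain arithmetic behind Young's modulus (E = σ/ε) as it would apply to a stretching experiment of the kind described above; the numbers are placeholders, not measured values for any real protein.

```python
# Young's modulus E = sigma / epsilon (axial stress over resulting strain).
# The inputs below are illustrative placeholders, not experimental data.
def youngs_modulus(force_n: float, area_m2: float,
                   extension_m: float, rest_length_m: float) -> float:
    stress = force_n / area_m2            # sigma, in pascals
    strain = extension_m / rest_length_m  # epsilon, dimensionless
    return stress / strain                # E, in pascals

# e.g. a fiber pulled in a simulated or AFM-style stretching experiment:
E = youngs_modulus(force_n=1e-10, area_m2=1e-18,
                   extension_m=5e-9, rest_length_m=100e-9)
print(f"E = {E:.2e} Pa")  # 2.00e+09 Pa for these made-up numbers
```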
Biology and health sciences
Chemistry
https://en.wikipedia.org/wiki/Physical%20chemistry
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria. Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids). Some of the relationships that physical chemistry strives to understand include the effects of:
Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids).
Reaction kinetics on the rate of a reaction.
The identity of ions and the electrical conductivity of materials.
Surface science and electrochemistry of cell membranes.
Interaction of one body with another in terms of quantities of heat and work, called thermodynamics.
Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, called thermochemistry.
Study of colligative properties, which depend on the number of species present in solution.
Number of phases, number of components and degrees of freedom (or variance), which can be correlated with one another with the help of the phase rule.
Reactions of electrochemical cells.
Behaviour of microscopic systems using quantum mechanics and of macroscopic systems using statistical thermodynamics.
Calculation of the energy of electron movement in molecules and metal complexes.

Key concepts
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems. One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are and how electrons are distributed around them.

Disciplines
Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter. Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and the rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes.
However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes, not with what actually does happen, or how fast, away from equilibrium. Which reactions do occur, and how fast, is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must pass through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second key idea is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate. The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry: to the extent an engineer needs to know, everything going on in a mixture of very large numbers of particles (perhaps of the order of the Avogadro constant, 6 × 10^23) can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities.

History
The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations". Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule. The first scientific journal specifically in the field of physical chemistry was the German journal Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909. Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names.
Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th-century development. Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties": practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, and more than 20 others, can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized. Herein lies the practical importance of contemporary physical chemistry. See Group contribution method, Lydersen method, Joback method, Benson group increment theory, and quantitative structure–activity relationship.

Journals
Some journals that deal with physical chemistry include:
Zeitschrift für Physikalische Chemie (1887)
Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997)
Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions, with a history dating back to 1905)
Macromolecular Chemistry and Physics (1947)
Annual Review of Physical Chemistry (1950)
Molecular Physics (1957)
Journal of Physical Organic Chemistry (1988)
Journal of Physical Chemistry B (1997)
ChemPhysChem (2000)
Journal of Physical Chemistry C (2007)
Journal of Physical Chemistry Letters (from 2010, combining letters previously published in the separate journals)
Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914).

Branches and related topics
Chemical thermodynamics
Chemical kinetics
Statistical mechanics
Quantum chemistry
Electrochemistry
Photochemistry
Surface chemistry
Solid-state chemistry
Spectroscopy
Biophysical chemistry
Materials science
Physical organic chemistry
Micromeritics
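The chemical kinetics idea discussed earlier in this article (the higher the transition-state barrier, the slower the reaction) is commonly made quantitative with the Arrhenius equation, k = A·exp(−Ea/RT). A minimal Python sketch with illustrative parameters only:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)). Higher barriers (Ea)
# give exponentially slower reactions; higher temperature speeds them up.
R = 8.314  # gas constant, J/(mol*K)

def rate_constant(pre_exponential: float, ea_j_per_mol: float, temp_k: float) -> float:
    return pre_exponential * math.exp(-ea_j_per_mol / (R * temp_k))

# Illustrative numbers only: raising the barrier by 10 kJ/mol slows the
# reaction roughly 50-fold at room temperature.
k_low = rate_constant(1e13, 50_000, 298.15)
k_high = rate_constant(1e13, 60_000, 298.15)
print(k_low / k_high)  # ~56
```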
Physical sciences
Chemistry
https://en.wikipedia.org/wiki/Perimeter
Perimeter
A perimeter is a closed path that encompasses, surrounds, or outlines either a two-dimensional shape or a one-dimensional length. The perimeter of a circle or an ellipse is called its circumference. Calculating the perimeter has several practical applications. A calculated perimeter is the length of fence required to surround a yard or garden. The perimeter of a wheel/circle (its circumference) describes how far it will roll in one revolution. Similarly, the amount of string wound around a spool is related to the spool's perimeter; if the length of the string were exact, it would equal the perimeter.

Formulas
The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated, as for any path, with $L = \int \mathrm{d}s$, where $L$ is the length of the path and $\mathrm{d}s$ is an infinitesimal line element. Both of these must be replaced by algebraic forms in order to be practically calculated. If the perimeter is given as a closed piecewise smooth plane curve $\gamma : [a,b] \to \mathbb{R}^2$ with $\gamma(t) = (x(t), y(t))$, then its length $L$ can be computed as follows:

$L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\,\mathrm{d}t$

A generalized notion of perimeter, which includes hypersurfaces bounding volumes in $n$-dimensional Euclidean spaces, is described by the theory of Caccioppoli sets.

Polygons
Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes. The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons. The perimeter of a polygon equals the sum of the lengths of its sides (edges). In particular, the perimeter of a rectangle of width $w$ and length $\ell$ equals $2w + 2\ell$. An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides. A regular polygon may be characterized by the number of its sides and by its circumradius, that is to say, the constant distance between its centre and each of its vertices. The length of its sides can be calculated using trigonometry. If $R$ is a regular polygon's radius and $n$ is the number of its sides, then its perimeter is

$P = 2nR\sin\left(\frac{\pi}{n}\right)$

A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. The three splitters of a triangle all intersect each other at the Nagel point of the triangle. A cleaver of a triangle is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths. The three cleavers of a triangle all intersect each other at the triangle's Spieker center.

Circumference of a circle
The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number pi, $\pi$ (the Greek p for perimeter), such that if $P$ is the circle's perimeter and $D$ its diameter, then $P = \pi D$. In terms of the radius $r$ of the circle, this formula becomes $P = 2\pi r$. To calculate a circle's perimeter, knowledge of its radius or diameter and the number $\pi$ suffices. The problem is that $\pi$ is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients).
So, obtaining an accurate approximation of $\pi$ is important in the calculation. The computation of the digits of $\pi$ is relevant to many fields, such as mathematical analysis, algorithmics and computer science.

Perception of perimeter
The perimeter and the area are two main measures of geometric figures. Confusing them is a common error, as is believing that the greater one of them is, the greater the other must be. Indeed, a commonplace observation is that an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn on a $1/k$ scale map, the actual field perimeter can be calculated by multiplying the drawing perimeter by $k$. The real area is $k^2$ times the area of the shape on the map. Nevertheless, there is no relation between the area and the perimeter of an ordinary shape. For example, the perimeter of a rectangle of width 0.001 and length 1000 is slightly above 2000, while the perimeter of a rectangle of width 0.5 and length 2 is 5. Both areas are equal to 1. Proclus (5th century) reported that Greek peasants "fairly" parted fields relying on their perimeters. However, a field's production is proportional to its area, not to its perimeter, so many naive peasants may have gotten fields with long perimeters but small areas (thus, few crops). If one removes a piece from a figure, its area decreases but its perimeter may not. The convex hull of a figure may be visualized as the shape formed by a rubber band stretched around it. In the animated picture, all the figures have the same convex hull: the large first hexagon.

Isoperimetry
The isoperimetric problem is to determine a figure with the largest area amongst those having a given perimeter. The solution is intuitive: it is the circle. In particular, this can be used to explain why drops of fat on a broth surface are circular. This problem may seem simple, but its mathematical proof requires some sophisticated theorems. The isoperimetric problem is sometimes simplified by restricting the type of figures to be used, in particular to finding the quadrilateral, or the triangle, or another particular figure, with the largest area amongst those with the same shape having a given perimeter. The solution to the quadrilateral isoperimetric problem is the square, and the solution to the triangle problem is the equilateral triangle. In general, the polygon with $n$ sides having the largest area and a given perimeter is the regular polygon, which is closer to being a circle than is any irregular polygon with the same number of sides.

Etymology
The word comes from the Greek περίμετρος perimetros, from περί peri "around" and μέτρον metron "measure".
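The polygon-based reasoning attributed to Archimedes above can be sketched in a few lines of Python: the perimeter of a regular n-gon inscribed in a unit circle approaches the circumference $2\pi$ as the number of sides doubles. The side-doubling recurrence used below is a standard identity for the inscribed polygon; the starting values are those of a hexagon.

```python
import math

# Archimedes-style estimate of pi: half the perimeter of a regular n-gon
# inscribed in a unit circle (n * s / 2) approaches pi as n grows.
# Side-doubling recurrence for a unit circle:
#   s_{2n} = sqrt(2 - sqrt(4 - s_n**2)), starting from a hexagon (s_6 = 1).
n, s = 6, 1.0
for _ in range(5):        # hexagon -> 12 -> 24 -> 48 -> 96 -> 192 sides
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2
print(n, n * s / 2)       # a 192-gon gives pi ~ 3.14145...
print(math.pi)            # 3.141592653589793
```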
Mathematics
Measurement
https://en.wikipedia.org/wiki/Phase%20%28matter%29
Phase (matter)
In the physical sciences, a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is a different material, in its own separate phase. More precisely, a phase is a region of space (a thermodynamic system) throughout which all physical properties of a material are essentially uniform. Examples of physical properties include density, index of refraction, magnetization and chemical composition. The term phase is sometimes used as a synonym for state of matter, but there can be several immiscible phases of the same state of matter (as where oil and water separate into distinct phases, both in the liquid state). It is also sometimes used to refer to the equilibrium states shown on a phase diagram, described in terms of state variables such as pressure and temperature and demarcated by phase boundaries. (Phase boundaries relate to changes in the organization of matter, including for example a subtle change within the solid state from one crystal structure to another, as well as state changes such as between solid and liquid.) These two usages are not commensurate with the formal definition given above, and the intended meaning must be determined in part from the context in which the term is used.

Types of phases
Distinct phases may be described as different states of matter such as gas, liquid, solid, plasma or Bose–Einstein condensate. Useful mesophases between solid and liquid form other states of matter. Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility, as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases, and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot. As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate.

Phase equilibrium
Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform, but between the two phases properties differ.
Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase. At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils.

Number of phases
For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams. The phase diagram shown here is for a single-component system. In this simple system, the phases that are possible depend only on pressure and temperature. The markings show points where two or more phases can coexist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium. In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point. As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid. In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa. An unusual feature of the water phase diagram is that the solid–liquid phase line (illustrated by the dotted green line) has a negative slope. For most substances, the slope is positive, as exemplified by the dark green line. This unusual feature of water is related to ice having a lower density than liquid water. Increasing the pressure drives the water into the higher-density phase, which causes melting. Another interesting though not unusual feature of the phase diagram is the point where the solid–liquid phase line meets the liquid–gas phase line. The intersection is referred to as the triple point. At the triple point, all three phases can coexist. Experimentally, phase lines are relatively easy to map due to the interdependence of temperature and pressure that develops when multiple phases form. Gibbs' phase rule suggests that different phases are completely determined by these variables. Consider a test apparatus consisting of a closed and well-insulated cylinder equipped with a piston. By controlling the temperature and the pressure, the system can be brought to any point on the phase diagram.
From a point in the solid stability region (left side of the diagram), increasing the temperature of the system would bring it into the region where a liquid or a gas is the equilibrium phase (depending on the pressure). If the piston is slowly lowered, the system will trace a curve of increasing temperature and pressure within the gas region of the phase diagram. At the point where gas begins to condense to liquid, the direction of the temperature and pressure curve will abruptly change to trace along the phase line until all of the water has condensed.

Interfacial phenomena
Between two phases in equilibrium there is a narrow region where the properties are not those of either phase. Although this region may be very thin, it can have significant and easily observable effects, such as causing a liquid to exhibit surface tension. In mixtures, some components may preferentially move toward the interface. In terms of modeling, describing, or understanding the behavior of a particular system, it may be efficacious to treat the interfacial region as a separate phase.

Crystal phases
A single material may have several distinct solid states capable of forming separate phases. Water is a well-known example of such a material. For example, water ice is ordinarily found in the hexagonal form ice Ih, but can also exist as the cubic ice Ic, the rhombohedral ice II, and many other forms. Polymorphism is the ability of a solid to exist in more than one crystal form. For pure chemical elements, polymorphism is known as allotropy. For example, diamond, graphite, and fullerenes are different allotropes of carbon.

Phase transitions
When a substance undergoes a phase transition (changes from one state of matter to another) it usually either takes up or releases energy. For example, when water evaporates, the increase in kinetic energy as the evaporating molecules escape the attractive forces of the liquid is reflected in a decrease in temperature. The energy required to induce the phase transition is taken from the internal thermal energy of the water, which cools the liquid to a lower temperature; hence evaporation is useful for cooling. See Enthalpy of vaporization. The reverse process, condensation, releases heat. The heat energy, or enthalpy, associated with a solid-to-liquid transition is the enthalpy of fusion, and that associated with a solid-to-gas transition is the enthalpy of sublimation.

Phases out of equilibrium
While phases of matter are traditionally defined for systems in thermal equilibrium, work on quantum many-body localized (MBL) systems has provided a framework for defining phases out of equilibrium. MBL phases never reach thermal equilibrium, and can allow for new forms of order disallowed in equilibrium via a phenomenon known as localization-protected quantum order. The transitions between different MBL phases, and between MBL and thermalizing phases, are novel dynamical phase transitions whose properties are active areas of research.
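Gibbs' phase rule, mentioned in the discussion of phase diagrams above, takes the simple form F = C − P + 2 for a system whose state depends only on temperature and pressure. A minimal Python sketch applying it to the single-component (water) diagram discussed above:

```python
# Gibbs' phase rule: F = C - P + 2, where C is the number of components,
# P the number of coexisting phases, and F the degrees of freedom
# (independently variable intensive quantities such as T and pressure).
def degrees_of_freedom(components: int, phases: int) -> int:
    return components - phases + 2

# Pure water (one component):
print(degrees_of_freedom(1, 1))  # 2 -> single phase: T and p vary freely
print(degrees_of_freedom(1, 2))  # 1 -> along a phase line, p is fixed by T
print(degrees_of_freedom(1, 3))  # 0 -> the triple point is a unique (T, p)
```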
Physical sciences
Phase transitions
https://en.wikipedia.org/wiki/Gasoline
Gasoline
Gasoline (North American English) or petrol (Commonwealth English) is a petrochemical product characterized as a transparent, yellowish, and flammable liquid normally used as a fuel for spark-ignited internal combustion engines. When formulated as a fuel for engines, gasoline is chemically composed of organic compounds derived from the fractional distillation of petroleum and later chemically enhanced with gasoline additives. It is a high-volume, profitable product produced in crude oil refineries. The ability of a particular gasoline blend to resist igniting too early is measured by its octane rating. Gasoline blends with stable octane ratings are produced in several fuel grades for various types of motors. A low-octane fuel may cause engine knocking and reduced efficiency in reciprocating engines. Tetraethyl lead was once widely used to increase the octane rating but is not used in modern automotive gasoline due to the health hazard. Aviation engines, off-road motor vehicles, and racing car motors still use leaded gasolines.

History
Interest in gasoline-like fuels started with the invention of internal combustion engines suitable for use in transportation applications. The so-called Otto engines were developed in Germany during the last quarter of the 19th century. The fuel for these early engines was a relatively volatile hydrocarbon obtained from coal gas, with a low boiling point (for comparison, n-octane boils at about 126 °C) that made it well-suited for early carburetors (evaporators). The development of a "spray nozzle" carburetor enabled the use of less volatile fuels. Further improvements in engine efficiency were attempted at higher compression ratios, but early attempts were blocked by the premature explosion of fuel, known as knocking. In 1891, the Shukhov cracking process became the world's first commercial method to break down heavier hydrocarbons in crude oil to increase the percentage of lighter products compared to simple distillation.

Chemical analysis and production
Commercial gasoline, like other liquid transportation fuels, is a complex mixture of hydrocarbons. The performance specification also varies with season, requiring less volatile blends during summer in order to minimize evaporative losses. Gasoline is produced in oil refineries; roughly 19 to 20 US gallons of gasoline are derived from each 42-gallon barrel of crude oil. Material separated from crude oil via distillation, called virgin or straight-run gasoline, does not meet specifications for modern engines (particularly the octane rating; see below), but can be pooled into the gasoline blend. The bulk of a typical gasoline consists of a homogeneous mixture of hydrocarbons with between 4 and 12 carbon atoms per molecule (commonly referred to as C4–C12). It is a mixture of paraffins (alkanes), olefins (alkenes), naphthenes (cycloalkanes), and aromatics. The use of the term paraffin in place of the standard chemical nomenclature alkane is particular to the oil industry, which relies extensively on jargon. The composition of a gasoline depends upon: the oil refinery that makes the gasoline, as not all refineries have the same set of processing units; the crude oil feed used by the refinery; and the grade of gasoline sought (in particular, the octane rating). The various refinery streams blended to make gasoline have different characteristics.
Some important streams include the following:

Straight-run gasoline, sometimes referred to as naphtha (and also light straight-run naphtha "LSR" and light virgin naphtha "LVN"), is distilled directly from crude oil. Once the leading source of fuel, naphtha's low octane rating required organometallic fuel additives (primarily tetraethyllead) prior to their phaseout from the gasoline pool, which started in 1975 in the United States. Straight-run naphtha is typically low in aromatics (depending on the grade of the crude oil stream) and contains some cycloalkanes (naphthenes) and no olefins (alkenes). Between 0 and 20 percent of this stream is pooled into the finished gasoline, because the quantity of this fraction in the crude is less than fuel demand and the fraction's Research Octane Number (RON) is too low. The chemical properties (namely RON and Reid vapor pressure (RVP)) of the straight-run gasoline can be improved through reforming and isomerization. However, before feeding those units, the naphtha needs to be split into light and heavy naphtha. Straight-run gasoline can also be used as a feedstock for steam crackers to produce olefins.

Reformate, produced from straight-run gasoline in a catalytic reformer, has a high octane rating with high aromatic content and relatively low olefin content. Most of the benzene, toluene, and xylene (the so-called BTX hydrocarbons) are more valuable as chemical feedstocks and are thus removed to some extent. The BTX content is also regulated.

Catalytic cracked gasoline, or catalytic cracked naphtha, produced with a catalytic cracker, has a moderate octane rating, high olefin content, and moderate aromatic content.

Hydrocrackate (heavy, mid, and light), produced with a hydrocracker, has a medium to low octane rating and moderate aromatic levels.

Alkylate is produced in an alkylation unit, using isobutane and C3/C4 olefins as feedstocks. Finished alkylate contains no aromatics or olefins and has a high MON (Motor Octane Number). Alkylate was used during World War II in aviation fuel. Since the late 1980s, it has been sold as a specialty fuel for (handheld) gardening and forestry tools with a combustion engine.

Isomerate is obtained by isomerizing low-octane straight-run gasoline into isoparaffins (branched alkanes, such as isooctane). Isomerate has a medium RON and MON, but no aromatics or olefins.

Butane is usually blended into the gasoline pool, although the quantity of this stream is limited by the RVP specification.

The terms above are the jargon used in the oil industry, and the terminology varies. Currently, many countries set limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. Such regulations have led to an increasing preference for alkane isomers, such as isomerate or alkylate, as their octane rating is higher than that of n-alkanes. In the European Union, the benzene limit is set at one percent by volume for all grades of automotive gasoline. This is usually achieved by avoiding feeding C6 hydrocarbons, in particular cyclohexane, to the reformer unit, where they would be converted to benzene. Therefore, only (desulfurized) heavy virgin naphtha (HVN) is fed to the reformer unit. Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular organosulfur compounds (which are usually removed at the refinery).
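Because the finished fuel is a pool of these streams, a refiner has to estimate blend properties from component properties. The sketch below gives the flavor of the calculation using a simple volume-weighted average; real RON and RVP blending is nonlinear and refiners use empirical blending indices, and the stream values shown are illustrative placeholders, not data from any particular refinery:

```python
# Hedged sketch: volume-weighted linear blending of stream properties.
# Real RON/RVP blending is nonlinear; refiners use empirical blending
# indices. Stream values here are illustrative placeholders only.
streams = {
    #  name          (volume fraction, RON, RVP in psi)
    "reformate":     (0.35, 100.0, 3.0),
    "FCC naphtha":   (0.30,  92.0, 5.5),
    "alkylate":      (0.15,  95.0, 5.0),
    "isomerate":     (0.15,  86.0, 13.0),
    "butane":        (0.05,  93.0, 52.0),
}

# Volume fractions of the pool must sum to one.
assert abs(sum(v for v, _, _ in streams.values()) - 1.0) < 1e-9

ron = sum(v * r for v, r, _ in streams.values())
rvp = sum(v * p for v, _, p in streams.values())
print(f"blend RON ~ {ron:.1f}, blend RVP ~ {rvp:.1f} psi")
```

Note how a small butane fraction barely moves the octane number but dominates the blend's vapor pressure, which is why the butane stream is limited by the RVP specification.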
On average, U.S. petroleum refineries produce about 19 to 20 gallons of gasoline, 11 to 13 gallons of distillate fuel (mostly diesel fuel), and 3 to 4 gallons of jet fuel from each 42-gallon (159-liter) barrel of crude oil. The product ratio depends upon the processing in an oil refinery and the crude oil assay.

Physical properties

Density
The specific gravity of gasoline ranges from 0.71 to 0.77, with higher densities having a greater volume fraction of aromatics. Finished marketable gasoline is traded (in Europe) with a standard reference density of 0.755 kg/L (7.5668 lb/imp gal); its price is escalated or de-escalated according to its actual density. Because of its low density, gasoline floats on water, and therefore water cannot generally be used to extinguish a gasoline fire unless applied in a fine mist.
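The trading reference density and its imperial-gallon equivalent are related by fixed unit conversions, which is easy to check (a minimal sketch using the exact gallon definitions):

```python
# Unit-conversion check on the trading reference density.
LB_PER_KG = 2.20462262
L_PER_IMP_GAL = 4.54609      # exact definition of the imperial gallon
L_PER_US_GAL = 3.785411784   # exact definition of the US gallon

rho = 0.755  # kg/L, the European trading reference for finished gasoline

print(f"{rho * LB_PER_KG * L_PER_IMP_GAL:.4f} lb/imp gal")  # ~7.567 (the quoted 7.5668)
print(f"{rho * LB_PER_KG * L_PER_US_GAL:.4f} lb/US gal")    # ~6.30
```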
Stability
Quality gasoline should be stable for six months if stored properly, but it can degrade over time. Gasoline stored for a year can most likely still be burned in an internal combustion engine without too much trouble, but the effects of long-term storage will become more noticeable with each passing month, until a time comes when the gasoline should be diluted with ever-increasing amounts of freshly made fuel so that the older gasoline may be used up. If left undiluted, improper operation will occur, and this may include engine damage from misfiring or the lack of proper action of the fuel within a fuel injection system and from an onboard computer attempting to compensate (if applicable to the vehicle). Gasoline should ideally be stored in an airtight container (to prevent oxidation or water vapor mixing in with the gas) that can withstand the vapor pressure of the gasoline without venting (to prevent the loss of the more volatile fractions), at a stable, cool temperature (to reduce the excess pressure from liquid expansion and to reduce the rate of any decomposition reactions). When gasoline is not stored correctly, gums and solids may result, which can corrode system components and accumulate on wet surfaces, resulting in a condition called "stale fuel". Gasoline containing ethanol is especially subject to absorbing atmospheric moisture and then forming gums, solids, or two phases (a hydrocarbon phase floating on top of a water-alcohol phase). The presence of these degradation products in the fuel tank, fuel lines, and carburetor or fuel injection components makes it harder to start the engine or causes reduced engine performance. On resumption of regular engine use, the buildup may or may not be eventually cleaned out by the flow of fresh gasoline. The addition of a fuel stabilizer to gasoline can extend the life of fuel that is not or cannot be stored properly, though removal of all fuel from a fuel system is the only real solution to the problem of long-term storage of an engine or a machine or vehicle. Typical fuel stabilizers are proprietary mixtures containing mineral spirits, isopropyl alcohol, 1,2,4-trimethylbenzene, or other additives. Fuel stabilizers are commonly used for small engines, such as lawnmower and tractor engines, especially when their use is sporadic or seasonal (little to no use for one or more seasons of the year). Users have been advised to keep gasoline containers more than half full and properly capped to reduce air exposure, to avoid storage at high temperatures, to run an engine for ten minutes to circulate the stabilizer through all components prior to storage, and to run the engine at intervals to purge stale fuel from the carburetor. Gasoline stability requirements are set by the standard ASTM D4814. This standard describes the various characteristics and requirements of automotive fuels for use over a wide range of operating conditions in ground vehicles equipped with spark-ignition engines.

Combustion energy content
A gasoline-fueled internal combustion engine obtains energy from the combustion of gasoline's various hydrocarbons with oxygen from the ambient air, yielding carbon dioxide and water as exhaust. The combustion of octane, a representative species, proceeds according to the chemical reaction:
2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
By weight, combustion of gasoline releases about or by volume , quoting the lower heating value. Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75 percent more or less than the average. On average, about of gasoline are available from a barrel of crude oil (about 46 percent by volume), varying with the quality of the crude and the grade of the gasoline. The remainder comprises products ranging from tar to naphtha. A high-octane-rated fuel, such as liquefied petroleum gas (LPG), has an overall lower power output at the typical 10:1 compression ratio of an engine design optimized for gasoline fuel. An engine tuned for LPG fuel via higher compression ratios (typically 12:1) improves the power output. This is because higher-octane fuels allow for a higher compression ratio without knocking, resulting in a higher cylinder temperature, which improves efficiency. Also, increased mechanical efficiency is created by a higher compression ratio through the concomitant higher expansion ratio on the power stroke, which is by far the greater effect. The higher expansion ratio extracts more work from the high-pressure gas created by the combustion process. An Atkinson cycle engine uses the timing of the valve events to produce the benefits of a high expansion ratio without the disadvantages, chiefly detonation, of a high compression ratio. A high expansion ratio is also one of the two key reasons for the efficiency of diesel engines, along with the elimination of pumping losses due to throttling of the intake airflow. The lower energy content of LPG by liquid volume in comparison to gasoline is due mainly to its lower density. This lower density is a property of the lower molecular weight of propane (LPG's chief component) compared to gasoline's blend of various hydrocarbon compounds with heavier molecular weights than propane. Conversely, LPG's energy content by weight is higher than gasoline's due to a higher hydrogen-to-carbon ratio. Molecular weights of the species in the representative octane combustion are 114, 32, 44, and 18 for C8H18, O2, CO2, and H2O, respectively; therefore 1 kg of fuel reacts with 3.51 kg of oxygen to produce 3.09 kg of carbon dioxide and 1.42 kg of water.
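The mass ratios in the last sentence follow directly from the balanced equation and the quoted molecular weights; a minimal sketch of the arithmetic:

```python
# Mass balance for 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O, using the
# molecular weights quoted in the text (g/mol).
MW = {"C8H18": 114, "O2": 32, "CO2": 44, "H2O": 18}
COEFF = {"C8H18": 2, "O2": 25, "CO2": 16, "H2O": 18}

fuel_mass = COEFF["C8H18"] * MW["C8H18"]  # 228 g of octane on a 2-mol basis
for species in ("O2", "CO2", "H2O"):
    ratio = COEFF[species] * MW[species] / fuel_mass
    print(f"{species}: {ratio:.2f} kg per kg of fuel")
# Prints O2: 3.51, CO2: 3.09, H2O: 1.42 -- the figures quoted above.
```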
Octane rating
Spark-ignition engines are designed to burn gasoline in a controlled process called deflagration. However, the unburned mixture may autoignite by pressure and heat alone, rather than igniting from the spark plug at exactly the right time, causing a rapid pressure rise that can damage the engine. This is often referred to as engine knocking or end-gas knock. Knocking can be reduced by increasing the gasoline's resistance to autoignition, which is expressed by its octane rating. Octane rating is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are different conventions for expressing octane ratings, so the same physical fuel may have several different octane ratings based on the measure used. One of the best known is the research octane number (RON). The octane rating of typical commercially available gasoline varies by country. In Finland, Sweden, and Norway, 95 RON is the standard for regular unleaded gasoline, and 98 RON is also available as a more expensive option. In the United Kingdom, over 95 percent of gasoline sold has 95 RON and is marketed as Unleaded or Premium Unleaded. Super Unleaded, with 97/98 RON, and branded high-performance fuels (e.g., Shell V-Power, BP Ultimate) with 99 RON make up the balance. Gasoline with 102 RON may rarely be available for racing purposes. In the U.S., octane ratings in unleaded fuels vary between 85 and 87 AKI (91–92 RON) for regular, 89–90 AKI (94–95 RON) for mid-grade (equivalent to European regular), up to 90–94 AKI (95–99 RON) for premium (European premium). As South Africa's largest city, Johannesburg, is located on the Highveld at above sea level, the Automobile Association of South Africa recommends 95-octane gasoline at low altitude and 93-octane for use in Johannesburg because "The higher the altitude the lower the air pressure, and the lower the need for a high octane fuel as there is no real performance gain".
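The U.S. AKI figures above are the anti-knock index, the average of the research and motor octane numbers that U.S. pumps post as (R+M)/2, whereas European pumps post RON directly. A minimal sketch of the conversion; the MON values used are illustrative assumptions, since MON depends on the blend and typically runs 8 to 12 points below RON:

```python
def aki(ron: float, mon: float) -> float:
    """U.S. Anti-Knock Index, the (R+M)/2 figure posted on pumps."""
    return (ron + mon) / 2.0

# Illustrative RON/MON pairs; exact MON values vary by blend.
for ron, mon in [(95.0, 85.0), (98.0, 88.0)]:
    print(f"RON {ron:.0f} / MON {mon:.0f} -> AKI {aki(ron, mon):.1f}")
# A 95 RON European regular corresponds to roughly 90 AKI,
# consistent with the mid-grade figures quoted above.
```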
Octane rating became important as the military sought higher output for aircraft engines in the late 1920s and the 1940s. A higher octane rating allows a higher compression ratio or supercharger boost, and thus higher temperatures and pressures, which translate to higher power output. Some scientists even predicted that a nation with a good supply of high-octane gasoline would have the advantage in air power. In 1943, the Rolls-Royce Merlin aero engine produced using 100 RON fuel from a modest displacement. By the time of Operation Overlord, both the RAF and USAAF were conducting some operations in Europe using 150 RON fuel (100/150 avgas), obtained by adding 2.5 percent aniline to 100-octane avgas. By this time, the Rolls-Royce Merlin 66 was developing using this fuel.

Additives

Antiknock additives

Tetraethyl lead
Gasoline, when used in high-compression internal combustion engines, tends to auto-ignite or "detonate", causing damaging engine knocking (also called "pinging" or "pinking"). To address this problem, tetraethyl lead (TEL) was widely adopted as an additive for gasoline in the 1920s. With growing awareness of the extent of the environmental and health damage caused by lead compounds, however, and the incompatibility of lead with catalytic converters, governments began to mandate reductions in gasoline lead. In the U.S., the Environmental Protection Agency issued regulations to reduce the lead content of leaded gasoline over a series of annual phases, scheduled to begin in 1973 but delayed by court appeals until 1976. By 1995, leaded fuel accounted for only 0.6 percent of total gasoline sales and under () of lead per year. From 1 January 1996, the U.S. Clean Air Act banned the sale of leaded fuel for use in on-road vehicles in the U.S. The use of TEL also necessitated other additives, such as dibromoethane. European countries began replacing lead-containing additives by the end of the 1980s, and by the end of the 1990s, leaded gasoline was banned within the entire European Union, with an exception for Avgas 100LL for general aviation. The UAE started to switch to unleaded gasoline in the early 2000s. Reduction in the average lead content of human blood may be a major cause of falling violent crime rates around the world, including in South Africa. A study found a correlation between leaded gasoline usage and violent crime (see Lead–crime hypothesis); other studies found no correlation. In August 2021, the UN Environment Programme announced that leaded petrol had been eradicated worldwide, with Algeria being the last country to deplete its reserves. UN Secretary-General António Guterres called the eradication of leaded petrol an "international success story". He also added: "Ending the use of leaded petrol will prevent more than one million premature deaths each year from heart disease, strokes and cancer, and it will protect children whose IQs are damaged by exposure to lead". Greenpeace called the announcement "the end of one toxic era". However, leaded gasoline continues to be used in aviation, auto racing, and off-road applications. The use of leaded additives is still permitted worldwide for the formulation of some grades of aviation gasoline such as 100LL, because the required octane rating is difficult to reach without the use of leaded additives. Different additives have replaced lead compounds. The most popular additives include aromatic hydrocarbons, ethers (MTBE and ETBE), and alcohols, most commonly ethanol.

Lead Replacement Petrol
Lead replacement petrol (LRP) was developed for vehicles designed to run on leaded fuels and incompatible with unleaded fuels. Rather than tetraethyllead, it contains other metals such as potassium compounds or methylcyclopentadienyl manganese tricarbonyl (MMT); these are purported to buffer soft exhaust valves and seats so that they do not suffer recession due to the use of unleaded fuel. LRP was marketed during and after the phaseout of leaded motor fuels in the United Kingdom, Australia, South Africa, and some other countries. Consumer confusion led to a widespread mistaken preference for LRP rather than unleaded, and LRP was phased out 8 to 10 years after the introduction of unleaded. Leaded gasoline was withdrawn from sale in Britain after 31 December 1999, seven years after EEC regulations signaled the end of production for cars using leaded gasoline in member states. At this stage, a large percentage of cars from the 1980s and early 1990s which ran on leaded gasoline were still in use, along with cars that could run on unleaded fuel. However, the declining number of such cars on British roads saw many gasoline stations withdrawing LRP from sale by 2003.

MMT
Methylcyclopentadienyl manganese tricarbonyl (MMT) is used in Canada and the U.S. to boost octane rating. Its use in the U.S. has been restricted by regulations, although it is currently allowed. Its use in the European Union is restricted by Article 8a of the Fuel Quality Directive following its testing under the Protocol for the evaluation of effects of metallic fuel-additives on the emissions performance of vehicles.

Fuel stabilizers (antioxidants and metal deactivators)
Gummy, sticky resin deposits result from oxidative degradation of gasoline during long-term storage. These harmful deposits arise from the oxidation of alkenes and other minor components in gasoline (see drying oils). Improvements in refinery techniques have generally reduced the susceptibility of gasolines to these problems. Previously, catalytically or thermally cracked gasolines were most susceptible to oxidation.
The formation of gums is accelerated by copper salts, which can be neutralized by additives called metal deactivators. This degradation can be prevented through the addition of 5–100 ppm of antioxidants, such as phenylenediamines and other amines. Hydrocarbons with a bromine number of 10 or above can be protected with a combination of unhindered or partially hindered phenols and oil-soluble strong amine bases. "Stale" gasoline can be detected by a colorimetric enzymatic test for organic peroxides produced by oxidation of the gasoline. Gasolines are also treated with metal deactivators, which are compounds that sequester (deactivate) metal salts that otherwise accelerate the formation of gummy residues. The metal impurities might arise from the engine itself or as contaminants in the fuel.

Detergents
Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildup, improve combustion, and allow easier starting in cold climates. High levels of detergent can be found in Top Tier Detergent Gasolines. The specification for Top Tier Detergent Gasolines was developed by four automakers: GM, Honda, Toyota, and BMW. According to the specification, the minimal U.S. EPA requirement is not sufficient to keep engines clean. Typical detergents include alkylamines and alkyl phosphates at a level of 50–100 ppm.

Ethanol

European Union
In the EU, 5 percent ethanol can be added within the common gasoline specification (EN 228). Discussions are ongoing to allow 10 percent blending of ethanol, which is already available at Finnish, French, and German gasoline stations. In Finland, most gasoline stations sell 95E10, which is 10 percent ethanol, and 98E5, which is 5 percent ethanol. Most gasoline sold in Sweden has 5–15 percent ethanol added. Three different ethanol blends are sold in the Netherlands—E5, E10 and hE15. The last of these differs from standard ethanol–gasoline blends in that it consists of 15 percent hydrous ethanol (i.e., the ethanol–water azeotrope) instead of the anhydrous ethanol traditionally used for blending with gasoline.

Brazil
The Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) requires gasoline for automobile use to contain 27.5 percent ethanol. Pure hydrated ethanol is also available as a fuel.

Australia
Legislation requires retailers to label fuels containing ethanol on the dispenser, and limits ethanol use to 10 percent of gasoline in Australia. Such gasoline is commonly called E10 by major brands, and it is cheaper than regular unleaded gasoline.

U.S.
The federal Renewable Fuel Standard (RFS) effectively requires refiners and blenders to blend renewable biofuels (mostly ethanol) with gasoline, sufficient to meet a growing annual target of total gallons blended. Although the mandate does not require a specific percentage of ethanol, annual increases in the target combined with declining gasoline consumption have caused the typical ethanol content in gasoline to approach 10 percent. Most fuel pumps display a sticker stating that the fuel may contain up to 10 percent ethanol, an intentionally loose wording that reflects the varying actual percentage. In parts of the U.S., ethanol is sometimes added to gasoline without an indication that it is a component.

India
In October 2007, the Government of India decided to make five percent ethanol blending with gasoline mandatory. Currently, 10 percent ethanol blended product (E10) is being sold in various parts of the country.
Ethanol has been found in at least one study to damage catalytic converters.

Dyes
Though gasoline is a naturally colorless liquid, many gasolines are dyed in various colors to indicate their composition and acceptable uses. In Australia, the lowest grade of gasoline (RON 91) was dyed a light shade of red/orange, but is now the same color as the medium grade (RON 95) and high octane (RON 98), which are dyed yellow. In the U.S., aviation gasoline (avgas) is dyed to identify its octane rating and to distinguish it from kerosene-based jet fuel, which is left colorless. In Canada, gasoline for marine and farm use is dyed red and is not subject to fuel excise tax in most provinces.

Oxygenate blending
Oxygenate blending adds oxygen-bearing compounds such as MTBE, ETBE, TAME, TAEE, ethanol, and biobutanol. The presence of these oxygenates reduces the amount of carbon monoxide and unburned fuel in the exhaust. In many areas throughout the U.S., oxygenate blending is mandated by EPA regulations to reduce smog and other airborne pollutants. For example, in Southern California, fuel must contain two percent oxygen by weight, resulting in a mixture of 5.6 percent ethanol in gasoline. The resulting fuel is often known as reformulated gasoline (RFG) or oxygenated gasoline, or, in the case of California, California reformulated gasoline (CARBOB). The federal requirement that RFG contain oxygen was dropped on 6 May 2006 because the industry had developed VOC-controlled RFG that did not need additional oxygen. MTBE was phased out in the U.S. due to groundwater contamination and the resulting regulations and lawsuits. Ethanol and, to a lesser extent, ethanol-derived ETBE are common substitutes. A common ethanol-gasoline mix of 10 percent ethanol mixed with gasoline is called gasohol or E10, and an ethanol-gasoline mix of 85 percent ethanol mixed with gasoline is called E85. The most extensive use of ethanol takes place in Brazil, where the ethanol is derived from sugarcane. In 2004, over of ethanol was produced in the U.S. for fuel use, mostly from corn and sold as E10. E85 is slowly becoming available in much of the U.S., though many of the relatively few stations vending E85 are not open to the general public. The use of bioethanol and bio-methanol, either directly or indirectly by conversion of ethanol to bio-ETBE, or methanol to bio-MTBE, is encouraged by the European Union Directive on the Promotion of the use of biofuels and other renewable fuels for transport. Since producing bioethanol from fermented sugars and starches involves distillation, though, ordinary people in much of Europe cannot legally ferment and distill their own bioethanol at present (unlike in the U.S., where getting a BATF distillation permit has been easy since the 1973 oil crisis).
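The Southern California figures are mutually consistent: ethanol (C2H5OH) is about 35 percent oxygen by mass, so roughly 5.6 percent ethanol by volume yields about two percent oxygen by weight. A minimal sketch of the check, assuming typical densities for ethanol and gasoline:

```python
# Rough consistency check: 5.6 vol% ethanol -> ~2 wt% oxygen.
# Densities (kg/L) are typical assumed values, not a regulatory spec.
RHO_ETOH, RHO_GAS = 0.789, 0.74
O_FRACTION_ETOH = 16.0 / 46.07  # oxygen mass fraction of ethanol, C2H5OH

def oxygen_wt_percent(ethanol_vol_frac: float) -> float:
    m_etoh = ethanol_vol_frac * RHO_ETOH        # ethanol mass per litre of blend
    m_gas = (1.0 - ethanol_vol_frac) * RHO_GAS  # gasoline mass per litre of blend
    return 100.0 * m_etoh * O_FRACTION_ETOH / (m_etoh + m_gas)

print(f"{oxygen_wt_percent(0.056):.2f} wt% oxygen")  # ~2.07
```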
Safety

Toxicity
The safety data sheet for a 2003 Texan unleaded gasoline shows at least 15 hazardous chemicals occurring in various amounts, including benzene (up to five percent by volume), toluene (up to 35 percent by volume), naphthalene (up to one percent by volume), trimethylbenzene (up to seven percent by volume), methyl tert-butyl ether (MTBE) (up to 18 percent by volume, in some states), and about 10 others. Hydrocarbons in gasoline generally exhibit low acute toxicities, with LD50 values of 700–2,700 mg/kg for simple aromatic compounds. Benzene and many antiknocking additives are carcinogenic. People can be exposed to gasoline in the workplace by swallowing it, breathing in vapors, skin contact, and eye contact. Gasoline is toxic. The National Institute for Occupational Safety and Health (NIOSH) has also designated gasoline as a carcinogen. Physical contact, ingestion, or inhalation can cause health problems. Since ingesting large amounts of gasoline can cause permanent damage to major organs, a call to a local poison control center or an emergency room visit is indicated. Contrary to common misconception, swallowing gasoline does not generally require special emergency treatment, and inducing vomiting does not help and can make things worse. According to poison specialist Brad Dahl, "even two mouthfuls wouldn't be that dangerous as long as it goes down to your stomach and stays there or keeps going". The U.S. CDC's Agency for Toxic Substances and Disease Registry says not to induce vomiting, lavage, or administer activated charcoal.

Inhalation for intoxication
Inhaled (huffed) gasoline vapor is a common intoxicant. Users concentrate and inhale gasoline vapor, in a manner not intended by the manufacturer, to produce euphoria and intoxication. Gasoline inhalation has become epidemic in some poorer communities and indigenous groups in Australia, Canada, New Zealand, and some Pacific Islands. The practice is thought to cause severe organ damage, along with other effects such as intellectual disability and various cancers. In Canada, Native children in the isolated Northern Labrador community of Davis Inlet were the focus of national concern in 1993, when many were found to be sniffing gasoline. The Canadian and provincial Newfoundland and Labrador governments intervened on several occasions, sending many children away for treatment. Despite being moved to the new community of Natuashish in 2002, serious inhalant abuse problems have continued. Similar problems were reported in Sheshatshiu in 2000 and also in Pikangikum First Nation. In 2012, the issue once again made the news media in Canada. Australia has long faced a petrol (gasoline) sniffing problem in isolated and impoverished aboriginal communities. Although some sources argue that sniffing was introduced by U.S. servicemen stationed in the nation's Top End during World War II or through experimentation by 1940s-era Cobourg Peninsula sawmill workers, other sources claim that inhalant abuse (such as glue inhalation) emerged in Australia in the late 1960s. Chronic, heavy petrol sniffing appears to occur among remote, impoverished indigenous communities, where the ready accessibility of petrol has helped to make it a common substance for abuse. In Australia, petrol sniffing now occurs widely throughout remote Aboriginal communities in the Northern Territory, Western Australia, northern parts of South Australia, and Queensland. The number of people sniffing petrol goes up and down over time as young people experiment or sniff occasionally. "Boss", or chronic, sniffers may move in and out of communities; they are often responsible for encouraging young people to take it up. In 2005, the Government of Australia and BP Australia began the usage of Opal fuel in remote areas prone to petrol sniffing. Opal is a non-sniffable fuel (which is much less likely to cause a high) and has made a difference in some indigenous communities.

Flammability
Gasoline is flammable, with a low flash point of . Gasoline has a lower explosive limit of 1.4 percent by volume and an upper explosive limit of 7.6 percent. If the concentration is below 1.4 percent, the air-gasoline mixture is too lean and does not ignite.
If the concentration is above 7.6 percent, the mixture is too rich and also does not ignite. However, gasoline vapor rapidly mixes with air and spreads, making unconstrained gasoline quickly flammable.

Gasoline exhaust
The exhaust gas generated by burning gasoline is harmful to both the environment and human health. After CO is inhaled into the human body, it readily combines with hemoglobin in the blood, with an affinity 300 times that of oxygen. The hemoglobin in the lungs therefore combines with CO instead of oxygen, leaving the body hypoxic and causing headaches, dizziness, vomiting, and other poisoning symptoms; in severe cases, it may lead to death. Hydrocarbons only affect the human body when their concentration is quite high, and their toxicity level depends on the chemical composition. The hydrocarbons produced by incomplete combustion include alkanes, aromatics, and aldehydes. Among them, a concentration of methane and ethane over will cause loss of consciousness or suffocation, a concentration of pentane and hexane over will have an anesthetic effect, and aromatic hydrocarbons will have more serious effects on health, including blood toxicity, neurotoxicity, and cancer. If the concentration of benzene exceeds 40 ppm, it can cause leukemia, and xylene can cause headache, dizziness, nausea, and vomiting. Human exposure to large amounts of aldehydes can cause eye irritation, nausea, and dizziness; in addition to carcinogenic effects, long-term exposure can cause damage to the skin, liver, and kidneys, as well as cataracts. After NOx enters the alveoli, it has a severe stimulating effect on the lung tissue. It can irritate the conjunctiva of the eyes, cause tearing, and cause pink eye. It also has a stimulating effect on the nose, pharynx, throat, and other organs, and can cause acute wheezing, breathing difficulties, red eyes, sore throat, and dizziness. Fine particulates are also dangerous to health.

Environmental impact
The air pollution in many large cities has changed from coal-burning pollution to "motor vehicle pollution". In the U.S., transportation is the largest source of carbon emissions, accounting for 30 percent of the total carbon footprint of the U.S. Combustion of gasoline produces of carbon dioxide, a greenhouse gas. Unburnt gasoline and evaporation from the tank, when in the atmosphere, react in sunlight to produce photochemical smog. Vapor pressure initially rises with some addition of ethanol to gasoline, but the increase is greatest at 10 percent by volume; at higher concentrations of ethanol above 10 percent, the vapor pressure of the blend starts to decrease. At 10 percent ethanol by volume, the rise in vapor pressure may potentially increase the problem of photochemical smog. This rise in vapor pressure could be mitigated by increasing or decreasing the percentage of ethanol in the gasoline mixture. The chief risks of gasoline leaks come not from vehicles, but from gasoline delivery truck accidents and leaks from storage tanks. Because of this risk, most (underground) storage tanks now have extensive measures in place to detect and prevent any such leaks, such as monitoring systems (Veeder-Root, Franklin Fueling). Production of gasoline consumes of water by driven distance. Gasoline use causes a variety of deleterious effects to the human population and to the climate generally.
The harms imposed include a higher rate of premature death and ailments, such as asthma, caused by air pollution; higher healthcare costs for the public generally; decreased crop yields; missed work and school days due to illness; increased flooding and other extreme weather events linked to global climate change; and other social costs. The costs imposed on society and the planet are estimated to be $3.80 per gallon of gasoline, in addition to the price paid at the pump by the user. The damage to health and climate caused by a gasoline-powered vehicle greatly exceeds that caused by electric vehicles. Gasoline can be released into the Earth's environment as an uncombusted liquid fuel or as a vapor by way of leakages occurring during its production, handling, transport, and delivery. Gasoline contains known carcinogens, and gasoline exhaust is a health risk. Gasoline is often used as a recreational inhalant and can be harmful or fatal when used in such a manner. When burned, of gasoline emits about of , a greenhouse gas, contributing to human-caused climate change. Oil products, including gasoline, were responsible for about 32% of emissions worldwide in 2021.

Carbon dioxide
About of carbon dioxide (CO2) are produced from burning gasoline that does not contain ethanol. Most of the retail gasoline now sold in the U.S. contains about 10 percent fuel ethanol (E10) by volume. Burning E10 produces about of CO2 from the fossil fuel content; if the CO2 emissions from ethanol combustion are also considered, then about of CO2 are produced when E10 is combusted. Worldwide, 7 liters of gasoline are burnt for every 100 km driven by cars and vans. In 2021, the International Energy Agency stated, "To ensure fuel economy and CO2 emissions standards are effective, governments must continue regulatory efforts to monitor and reduce the gap between real-world fuel economy and rated performance."
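The carbon dioxide figures can be approximated from first principles: gasoline is roughly 85–88 percent carbon by mass, and each kilogram of carbon burns to 44/12 kilograms of CO2. A rough sketch, in which the density and carbon fraction are typical assumed values rather than figures from the text:

```python
# Rough CO2 estimate for gasoline combustion from carbon content alone.
RHO = 0.74         # kg/L, typical gasoline density (assumed)
C_FRACTION = 0.87  # carbon mass fraction, typical for gasoline (assumed)
CO2_PER_C = 44.0 / 12.0  # kg CO2 per kg carbon (molar-mass ratio)

co2_per_litre = RHO * C_FRACTION * CO2_PER_C
print(f"~{co2_per_litre:.2f} kg CO2 per litre of gasoline")  # ~2.36

# At the worldwide average of 7 L per 100 km quoted above:
print(f"~{co2_per_litre * 7 / 100 * 1000:.0f} g CO2 per km")  # ~165
```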
Contamination of soil and water
Gasoline enters the environment through the soil, groundwater, surface water, and air, so humans may be exposed to it through breathing, eating, and skin contact. Common routes of exposure include using gasoline-powered equipment such as lawnmowers, drinking gasoline-contaminated water near spills or leaks into the soil, working at a gasoline station, and inhaling gasoline vapor while refueling.

Use and pricing
The International Energy Agency said in 2021 that "road fuels should be taxed at a rate that reflects their impact on people's health and the climate".

Europe
Countries in Europe impose substantially higher taxes on fuels such as gasoline when compared to the U.S. The price of gasoline in Europe is typically higher than that in the U.S. due to this difference.

U.S.
From 1998 to 2004, the price of gasoline fluctuated between . After 2004, the price increased until the average gasoline price reached a high of in mid-2008, but receded to approximately by September 2009. The U.S. experienced an upswing in gasoline prices through 2011, and, by 1 March 2012, the national average was . California prices are higher because the California government mandates unique California gasoline formulas and taxes. In the U.S., most consumer goods bear pre-tax prices, but gasoline prices are posted with taxes included. Taxes are added by federal, state, and local governments. , the federal tax was for gasoline and for diesel (excluding red diesel). About nine percent of all gasoline sold in the U.S. in May 2009 was premium grade, according to the Energy Information Administration. Consumer Reports magazine says, "If [your owner's manual] says to use regular fuel, do so—there's no advantage to a higher grade." The Associated Press said premium gas—which has a higher octane rating and costs more per gallon than regular unleaded—should be used only if the manufacturer says it is "required". Cars with turbocharged engines and high compression ratios often specify premium gasoline because higher-octane fuels reduce the incidence of "knock", or fuel pre-detonation. The price of gasoline also varies considerably between the summer and winter months, because summer and winter blends differ considerably in gasoline vapor pressure (Reid Vapor Pressure, RVP), a measure of how easily the fuel evaporates at a given temperature: the higher the gasoline volatility (the higher the RVP), the more easily it evaporates. The conversion between the two fuels occurs twice a year, once in autumn (to the winter blend) and once in spring (to the summer blend). The winter blend has a higher RVP because the fuel must be able to evaporate at a low temperature for the engine to run normally; if the RVP is too low on a cold day, the vehicle will be difficult to start. The summer blend, by contrast, has a lower RVP: it prevents excessive evaporation when the outdoor temperature rises, reduces ozone emissions, and reduces smog levels, and vapor lock is less likely to occur in hot weather.

Gasoline production by country

Comparison with other fuels
Below is a table of the energy density (per volume) and specific energy (per mass) of various transportation fuels as compared with gasoline. The gross and net rows are from the Oak Ridge National Laboratory's Transportation Energy Data Book.
Technology
Energy and fuel
null
23643
https://en.wikipedia.org/wiki/Propane
Propane
Propane is a three-carbon alkane with the molecular formula C3H8. It is a gas at standard temperature and pressure, but is compressible to a transportable liquid. A by-product of natural gas processing and petroleum refining, it is often a constituent of liquefied petroleum gas (LPG), which is commonly used as a fuel in domestic and industrial applications and in low-emissions public transportation; other constituents of LPG may include propylene, butane, butylene, butadiene, and isobutylene. Discovered in 1857 by the French chemist Marcellin Berthelot, it became commercially available in the US by 1911. Propane has lower volumetric energy density than gasoline or coal, but higher gravimetric energy density than either, and it burns more cleanly. Propane gas has become a popular choice for barbecues and portable stoves because its low boiling point of −42 °C makes it vaporise inside pressurised liquid containers (it exists in two phases, vapor above liquid). It retains its ability to vaporise even in cold weather, making it better-suited for outdoor use in cold climates than alternatives with higher boiling points, such as butane. LPG powers buses, forklifts, automobiles, outboard boat motors, and ice resurfacing machines, and is used for heat and cooking in recreational vehicles and campers. Propane (R-290) is also becoming popular as a replacement refrigerant for heat pumps, as it offers greater efficiency than the current refrigerants R-410A and R-32, higher-temperature heat output, and less damage to the atmosphere from escaped gas, at the expense of high flammability.

History
Propane was first synthesized by the French chemist Marcellin Berthelot in 1857 during his research on hydrogenation. Berthelot made propane by heating propylene dibromide (C3H6Br2) with potassium iodide and water. Propane was found dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864. Walter O. Snelling of the U.S. Bureau of Mines highlighted it as a volatile component in gasoline in 1910, which marked the "birth of the propane industry" in the United States. The volatility of these lighter hydrocarbons caused them to be known as "wild" because of the high vapor pressures of unrefined gasoline. On March 31, 1912, The New York Times reported on Snelling's work with liquefied gas, saying "a steel bottle will carry enough gas to light an ordinary home for three weeks". It was during this time that Snelling—in cooperation with Frank P. Peterson, Chester Kerr, and Arthur Kerr—developed ways to liquefy the LP gases during the refining of gasoline. Together, they established American Gasol Co., the first commercial marketer of propane. Snelling had produced relatively pure propane by 1911, and on March 25, 1913, his method of processing and producing LP gases was issued patent #1,056,845. A separate method of producing LP gas through compression was developed by Frank Peterson, and its patent was granted on July 2, 1912. The 1920s saw increased production of LP gases, with the first year of recorded production totaling in 1922. In 1927, annual marketed LP gas production reached , and by 1935, the annual sales of LP gas had reached . Major industry developments in the 1930s included the introduction of railroad tank car transport, gas odorization, and the construction of local bottle-filling plants. The year 1945 marked the first year that annual LP gas sales reached a billion gallons. By 1947, 62% of all U.S. homes had been equipped with either natural gas or propane for cooking.
In 1950, 1,000 propane-fueled buses were ordered by the Chicago Transit Authority, and by 1958, sales in the U.S. had reached annually. In 2004, it was reported to be a growing $8-billion to $10-billion industry, with over of propane being used annually in the U.S. During the COVID-19 pandemic, propane shortages were reported in the United States due to increased demand.

Etymology
The "prop-" root found in "propane" and in the names of other compounds with three-carbon chains was derived from "propionic acid", which in turn was named after the Greek words protos (meaning first) and pion (fat), as it was the "first" member of the series of fatty acids.

Properties and reactions
Propane is a colorless, odorless gas. Ethyl mercaptan is added as an odorant as a safety precaution; its smell is commonly described as that of "rotten eggs". At normal pressure, propane liquefies below its boiling point at −42 °C and solidifies below its melting point at −187.7 °C. Propane crystallizes in the space group P21/n. The low space-filling of 58.5% (at 90 K), due to the poor stacking properties of the molecule, is the reason for the particularly low melting point. Propane undergoes combustion reactions in a similar fashion to other alkanes. In the presence of excess oxygen, propane burns to form water and carbon dioxide:
C3H8 + 5 O2 -> 3 CO2 + 4 H2O + heat
When insufficient oxygen is present for complete combustion, carbon monoxide, soot (carbon), or both are formed as well:
C3H8 + 9/2 O2 -> 2 CO2 + CO + 4 H2O + heat
C3H8 + 2 O2 -> 3 C + 4 H2O + heat
The complete combustion of propane produces about 50 MJ/kg of heat. Propane combustion is much cleaner than that of coal or unleaded gasoline. Propane's per-BTU production of CO2 is almost as low as that of natural gas. Propane burns hotter than home heating oil or diesel fuel because of its very high hydrogen content. The presence of C–C bonds, plus the multiple bonds of propylene and butylene, produces organic exhausts besides carbon dioxide and water vapor during typical combustion. These bonds also cause propane to burn with a visible flame.

Energy content
The enthalpy of combustion of propane gas, where all products return to standard state, for example where water returns to its liquid state at standard temperature (known as the higher heating value), is (2,219.2 ± 0.5) kJ/mol, or (50.33 ± 0.01) MJ/kg. The enthalpy of combustion of propane gas where the products do not return to standard state, for example where the hot gases including water vapor exit a chimney (known as the lower heating value), is −2,043.455 kJ/mol. The lower heating value is the amount of heat available from burning the substance where the combustion products are vented to the atmosphere; for example, the heat from a fireplace when the flue is open.

Density
The density of propane gas at 25 °C (77 °F) is 1.808 kg/m3, about 1.5 times the density of air at the same temperature. The density of liquid propane at 25 °C (77 °F) is 0.493 g/cm3, which is equivalent to 4.11 pounds per U.S. liquid gallon or 493 g/L. Propane expands at 1.5% per 10 °F; thus, liquid propane has a density of approximately 4.2 pounds per gallon (504 g/L) at 60 °F (15.6 °C). Because the density of propane changes with temperature, this must be taken into account whenever the application involves safety or custody-transfer operations.
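The per-kilogram heating value follows from the molar figure and propane's molar mass (44.10 g/mol), and the two liquid densities quoted are consistent with the stated 1.5 percent per 10 °F expansion rate. A quick sketch of both checks:

```python
# Consistency checks on the figures quoted above.
MOLAR_MASS = 44.10  # g/mol for C3H8

# Higher heating value: kJ/mol -> MJ/kg
hhv_mj_per_kg = 2219.2 / MOLAR_MASS
print(f"HHV ~ {hhv_mj_per_kg:.2f} MJ/kg")  # ~50.32, matching the quoted 50.33

# Liquid density: 4.11 lb/US gal at 77 F, with volume changing ~1.5% per 10 F.
# Cooling from 77 F to 60 F shrinks the liquid, raising its density.
rho_77f = 4.11  # lb/US gal
rho_60f = rho_77f * (1 + 0.015 * (77 - 60) / 10)
print(f"density at 60 F ~ {rho_60f:.2f} lb/US gal")  # ~4.21, i.e. the quoted ~4.2
```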
Uses

Portable stoves
Propane is a popular choice for barbecues and portable stoves because its low boiling point makes it vaporize as soon as it is released from its pressurized container. Therefore, no carburetor or other vaporizing device is required; a simple metering nozzle suffices.

Refrigerant
Blends of pure, dry propane (R-290) and isobutane (R-600a), sometimes marketed as "isopropane", can be used as the circulating refrigerant in suitably constructed compressor-based refrigeration. Compared to fluorocarbons, propane has a negligible ozone depletion potential and very low global warming potential (having a GWP value of 0.072, 13.9 times lower than the GWP of carbon dioxide) and can serve as a functional replacement for R-12, R-22, R-134a, and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. Because its global warming effect is far less than that of current refrigerants, propane was chosen as one of five replacement refrigerants approved by the EPA in 2015, for use in systems specially designed to handle its flammability. Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion. Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. Propane is also instrumental in providing off-the-grid refrigeration, as the energy source for a gas absorption refrigerator, and is commonly used for camping and recreational vehicles. It has also been proposed to use propane as a refrigerant in heat pumps.

Domestic and industrial fuel
Since it can be transported easily, propane is a popular fuel for home heating and backup electrical generation in sparsely populated areas that do not have natural gas pipelines. In June 2023, Stanford researchers found that propane combustion emitted detectable and repeatable levels of benzene that in some homes raised indoor benzene concentrations above well-established health benchmarks. The research also shows that gas and propane fuels appear to be the dominant source of benzene produced by cooking. In rural areas of North America, as well as northern Australia, propane is used to heat livestock facilities, in grain dryers, and in other heat-producing appliances. When used for heating or grain drying, it is usually stored in a large, permanently placed cylinder which is refilled by a propane-delivery truck. , 6.2 million American households use propane as their primary heating fuel. In North America, local delivery trucks with an average cylinder size of , fill up large cylinders that are permanently installed on the property, or other service trucks exchange empty cylinders of propane with filled cylinders. Large tractor-trailer trucks, with an average cylinder size of , transport propane from the pipeline or refinery to the local bulk plant. The bobtail tank truck is not unique to the North American market, though the practice is not as common elsewhere, and the vehicles are generally called tankers. In many countries, propane is delivered to end-users via small or medium-sized individual cylinders, while empty cylinders are removed for refilling at a central location. There are also community propane systems, with a central cylinder feeding individual homes.

Motor fuel
In the U.S., over 190,000 on-road vehicles use propane, and over 450,000 forklifts use it for power.
It is the third most popular vehicle fuel in the world, behind gasoline and diesel fuel. In other parts of the world, propane used in vehicles is known as autogas. In 2007, approximately 13 million vehicles worldwide used autogas. The advantage of propane in cars is its liquid state at moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and prices typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling and in combustion), results in less engine wear from carbon deposits without diluting engine oil (often extending oil-change intervals), and until recently was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States, the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels. Propane is also used as fuel for small engines, especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, there have been lawn-care products such as string trimmers, lawn mowers, and leaf blowers intended for outdoor use but fueled by propane in order to reduce air pollution. Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger to mix with diesel fuel droplets. Propane's very high hydrogen content helps the diesel fuel to burn hotter and therefore more completely, providing more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. The boost also reduces cost, because propane is much cheaper than diesel fuel. The longer distance a cross-country trucker can travel on a full load of combined diesel and propane fuel means they can comply with federal hours-of-work rules with two fewer fuel stops on a cross-country trip. Truckers, tractor pulling competitions, and farmers have been using propane boost systems for over forty years in North America.

Other uses
Propane is the primary flammable gas in blowtorches for soldering. Propane is used in oxy-fuel welding and cutting; it does not burn as hot as acetylene in its inner cone, and so it is rarely used for welding. Propane, however, has a very high number of BTUs per cubic foot in its outer cone, and so with the right torch (injector style) it can make a faster and cleaner cut than acetylene, and is much more useful for heating and bending than acetylene. Propane is used as a feedstock for the production of base petrochemicals in steam cracking. Propane is the primary fuel for hot-air balloons. It is used in semiconductor manufacture to deposit silicon carbide. Propane is commonly used in theme parks and in movie production as an inexpensive, high-energy fuel for explosions and other special effects. Propane is used as a propellant, relying on the expansion of the gas to fire the projectile; it does not ignite the gas. The use of a liquefied gas gives more shots per cylinder compared to a compressed gas. Propane is also used as a cooking fuel.
Propane is used as a propellant for many household aerosol sprays, including shaving creams and air fresheners. Propane is a promising feedstock for the production of propylene, and liquefied propane is used in the extraction of animal fats and vegetable oils.

Purity
The North American standard grade of automotive-use propane is rated HD-5 (Heavy Duty 5%). HD-5 grade has a maximum of 5 percent butane, but propane sold in Europe has a maximum allowable butane content of 30 percent, meaning it is not the same fuel as HD-5. The LPG used as auto fuel and cooking gas in Asia and Australia also has very high butane content. Propylene (also called propene) can be a contaminant of commercial propane, and propane containing too much propene is not suited for most vehicle fuels. HD-5 is a specification that establishes a maximum concentration of 5% propene in propane. Propane and other LP gas specifications are established in ASTM D-1835. All propane fuels include an odorant, almost always ethanethiol, so that the gas can be smelled easily in case of a leak. Propane as HD-5 was originally intended for use as vehicle fuel; HD-5 is currently being used in all propane applications. Typically in the United States and Canada, LPG is primarily propane (at least 90%), while the rest is mostly ethane, propylene, butane, and odorants including ethyl mercaptan. This is the HD-5 standard (a maximum allowable propylene content of 5 percent, and no more than 5% butanes and ethane), defined by the American Society for Testing and Materials in its Standard 1835 for internal combustion engines. Not all products labeled "LPG" conform to this standard, however. In Mexico, for example, gas labeled "LPG" may consist of 60% propane and 40% butane. "The exact proportion of this combination varies by country, depending on international prices, on the availability of components and, especially, on the climatic conditions that favor LPG with higher butane content in warmer regions and propane in cold areas".
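The composition limits described above can be written down directly as a check. This is a simplified sketch covering only the composition criteria named in the text; the full ASTM D1835 specification also covers properties such as vapor pressure and residue:

```python
# Simplified HD-5 composition check (percent of the mixture). The real
# ASTM D1835 spec also covers vapor pressure, residue, corrosion, etc.
def meets_hd5(propane: float, propene: float, butanes_and_ethane: float) -> bool:
    return (
        propane >= 90.0                 # primarily propane (at least 90%)
        and propene <= 5.0              # no more than 5% propylene
        and butanes_and_ethane <= 5.0   # no more than 5% butanes and ethane
    )

print(meets_hd5(propane=93.0, propene=2.0, butanes_and_ethane=3.0))   # True
print(meets_hd5(propane=60.0, propene=1.0, butanes_and_ethane=39.0))  # False, like the Mexican LPG example
```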
Comparison with natural gas
Propane is bought and stored in liquid form (LPG) and can easily be stored in a relatively small space. By comparison, compressed natural gas (CNG) cannot be liquefied by compression at normal temperatures, as these are well above its critical temperature; as a gas, very high pressure is required to store useful quantities. This poses the hazard that, in an accident, just as with any compressed gas cylinder (such as a CO2 cylinder used for a soda concession), a CNG cylinder may burst with great force or leak rapidly enough to become a self-propelled missile. Therefore, CNG is much less efficient to store than propane, due to the large cylinder volume required. An alternative means of storing natural gas is as a cryogenic liquid in an insulated container, as liquefied natural gas (LNG). This form of storage is at low pressure and is around 3.5 times as efficient as storing it as CNG. Unlike propane, if a spill occurs, CNG will evaporate and dissipate, because it is lighter than air. Propane is much more commonly used to fuel vehicles than natural gas because the equipment costs less. Propane requires just of pressure to keep it liquid at .

Hazards
Propane is a simple asphyxiant. Unlike natural gas, it is denser than air, so it may accumulate in low spaces and near the floor. When abused as an inhalant, it may cause hypoxia (lack of oxygen), pneumonia, cardiac failure, or cardiac arrest. Propane has low toxicity, since it is not readily absorbed and is not biologically active. Commonly stored under pressure at room temperature, propane and its mixtures will flash-evaporate at atmospheric pressure and cool well below the freezing point of water. The cold gas, which appears white due to moisture condensing from the air, may cause frostbite. Because propane is denser than air, if a leak in a propane fuel system occurs, the vaporized gas will tend to sink into any enclosed area and thus poses a risk of explosion and fire. The typical scenario is a leaking cylinder stored in a basement; the propane leak drifts across the floor to the pilot light on the furnace or water heater and results in an explosion or fire. This property makes propane generally unsuitable as a fuel for boats. In 2007, a heavily investigated vapor-related explosion occurred in Ghent, West Virginia, U.S., killing four people, injuring several others, and completely destroying the Little General convenience store on Flat Top Road. Another hazard associated with propane storage and transport is known as a BLEVE, or boiling liquid expanding vapor explosion. The Kingman Explosion involved a railroad tank car in Kingman, Arizona, U.S., in 1973 during a propane transfer; the fire and subsequent explosions resulted in twelve fatalities and numerous injuries.

Production
Propane is produced as a by-product of two other processes: natural gas processing and petroleum refining. The processing of natural gas involves removal of butane, propane, and large amounts of ethane from the raw gas, to prevent condensation of these volatiles in natural gas pipelines. Additionally, oil refineries produce some propane as a by-product of cracking petroleum into gasoline or heating oil. The supply of propane cannot easily be adjusted to meet increased demand, because of the by-product nature of propane production. About 90% of U.S. propane is domestically produced. The United States imports about 10% of the propane consumed each year, with about 70% of that coming from Canada via pipeline and rail; the remaining 30% of imported propane comes to the United States from other sources via ocean transport. After it is separated from the crude oil, North American propane is stored in huge salt caverns, for example at Fort Saskatchewan, Alberta; Mont Belvieu, Texas; and Conway, Kansas. These salt caverns can store of propane.

Retail cost

United States
, the retail cost of propane was approximately $2.37 per gallon, or roughly $25.95 per 1 million BTUs. This means that filling a 500-gallon propane tank to 80 percent capacity (400 gallons), which is what households that use propane as their main source of energy usually require, cost $948, a 7.5% increase on the 2012–2013 winter season average US price. However, propane costs per gallon change significantly from one state to another: the Energy Information Administration (EIA) quotes a $2.995 per gallon average on the East Coast for October 2013, while the figure for the Midwest was $1.860 for the same period. , the propane retail cost was approximately $1.97 per gallon, which meant that filling a 500-gallon propane tank to 80% capacity cost $788, a 16.9% decrease, or $160 less, from November 2013. Similar regional differences in prices are present, with the December 2015 EIA figure for the East Coast at $2.67 per gallon and the Midwest at $1.43 per gallon. , the average US propane retail cost was approximately $2.48 per gallon. The wholesale price of propane in the U.S. always drops in the summer, as most homes do not require it for home heating.
The wholesale price of propane in the summer of 2018 was between 86 and 96 cents per U.S. gallon, based on a truckload or railway car load. The price for home heating was exactly double the wholesale price: at 95 cents per gallon wholesale, the home-delivered price was $1.90 per gallon when 500 gallons were ordered at a time. Prices in the Midwest are always lower than in California. Prices for home delivery always rise near the end of August or the first few days of September, when people start ordering their home tanks to be filled.
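The per-gallon and per-BTU figures above are easy to cross-check. A minimal sketch in Python; the heating value of roughly 91,500 BTU per gallon of propane is an assumed, commonly cited figure and not part of the quoted price data:

```python
# Cross-check of the retail-cost arithmetic quoted above.
BTU_PER_GALLON = 91_500  # assumed typical heating value of propane

def fill_cost(price_per_gallon: float, tank_gallons: float = 500,
              fill_fraction: float = 0.8) -> float:
    """Cost to fill a tank to the 80% fill level mentioned in the text."""
    return price_per_gallon * tank_gallons * fill_fraction

def cost_per_mmbtu(price_per_gallon: float) -> float:
    """Convert a per-gallon price to a price per 1 million BTU."""
    return price_per_gallon / (BTU_PER_GALLON / 1_000_000)

print(fill_cost(2.37))                 # 948.0 -> matches the $948 figure
print(fill_cost(1.97))                 # 788.0 -> matches the $788 figure
print(round(cost_per_mmbtu(2.37), 2))  # ~25.9 -> close to the quoted $25.95/MMBtu
```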
https://en.wikipedia.org/wiki/Precambrian
Precambrian
The Precambrian (or Pre-Cambrian, sometimes abbreviated pC, or Cryptozoic) is the earliest part of Earth's history, set before the current Phanerozoic Eon. The Precambrian is so named because it preceded the Cambrian, the first period of the Phanerozoic Eon, which is named after Cambria, the Latinized name for Wales, where rocks from this age were first studied. The Precambrian accounts for 88% of the Earth's geologic time. The Precambrian is an informal unit of geologic time, subdivided into three eons (Hadean, Archean, Proterozoic) of the geologic time scale. It spans from the formation of Earth about 4.6 billion years ago (Ga) to the beginning of the Cambrian Period, about million years ago (Ma), when hard-shelled creatures first appeared in abundance. Overview Relatively little is known about the Precambrian, despite it making up roughly seven-eighths of the Earth's history, and what is known has largely been discovered from the 1960s onwards. The Precambrian fossil record is poorer than that of the succeeding Phanerozoic, and fossils from the Precambrian (e.g. stromatolites) are of limited biostratigraphic use. This is because many Precambrian rocks have been heavily metamorphosed, obscuring their origins, while others have been destroyed by erosion, or remain deeply buried beneath Phanerozoic strata. It is thought that the Earth coalesced from material in orbit around the Sun at roughly 4,543 Ma, and may have been struck by another planet called Theia shortly after it formed, splitting off material that formed the Moon (see Giant-impact hypothesis). A stable crust was apparently in place by 4,433 Ma, since zircon crystals from Western Australia have been dated at 4,404 ± 8 Ma. The term "Precambrian" is used by geologists and paleontologists for general discussions not requiring a more specific eon name. However, both the United States Geological Survey and the International Commission on Stratigraphy regard the term as informal. Because the span of time falling under the Precambrian consists of three eons (the Hadean, the Archean, and the Proterozoic), it is sometimes described as a supereon, but this is also an informal term, not defined by the ICS in its chronostratigraphic guide. Eozoic (from "earliest") was a synonym for pre-Cambrian, or more specifically Archean. Life forms A specific date for the origin of life has not been determined. Carbon found in 3.8 billion-year-old rocks (Archean Eon) from islands off western Greenland may be of organic origin. Well-preserved microscopic fossils of bacteria older than 3.46 billion years have been found in Western Australia. Probable fossils 100 million years older have been found in the same area. However, there is evidence that life could have evolved over 4.280 billion years ago. There is a fairly solid record of bacterial life throughout the remainder (Proterozoic Eon) of the Precambrian. Complex multicellular organisms may have appeared as early as 2100 Ma. However, the interpretation of ancient fossils is problematic, and "... some definitions of multicellularity encompass everything from simple bacterial colonies to badgers." Other possible early complex multicellular organisms include a possible 2450 Ma red alga from the Kola Peninsula, 1650 Ma carbonaceous biosignatures in north China, the 1600 Ma Rafatazmia, and a possible 1047 Ma Bangiomorpha red alga from the Canadian Arctic. The earliest fossils widely accepted as complex multicellular organisms date from the Ediacaran Period.
A very diverse collection of soft-bodied forms has been found in a variety of locations worldwide, dating to between 635 and 542 Ma. These are referred to as Ediacaran or Vendian biota. Hard-shelled creatures appeared toward the end of that time span, marking the beginning of the Phanerozoic Eon. By the middle of the following Cambrian Period, a very diverse fauna is recorded in the Burgess Shale, including some which may represent stem groups of modern taxa. The increase in diversity of lifeforms during the early Cambrian is called the Cambrian explosion of life. While land seems to have been devoid of plants and animals, cyanobacteria and other microbes formed prokaryotic mats that covered terrestrial areas. Tracks from an animal with leg-like appendages have been found in what was mud 551 million years ago. Emergence of life The RNA world hypothesis asserts that RNA evolved before coded proteins and DNA genomes. During the Hadean Eon (4,567–4,031 Ma), abundant geothermal microenvironments were present that may have had the potential to support the synthesis and replication of RNA, and thus possibly the evolution of a primitive life form. It has been shown that porous rock systems comprising heated air-water interfaces could allow ribozyme-catalyzed RNA replication of sense and antisense strands, followed by strand dissociation, thus enabling combined synthesis, release and folding of active ribozymes. This primitive RNA replicative system may also have been able to undergo template strand switching during replication (genetic recombination), as is known to occur during the RNA replication of extant coronaviruses. Planetary environment and the oxygen catastrophe Evidence of the details of plate motions and other tectonic activity in the Precambrian is difficult to interpret. It is generally believed that small proto-continents existed before 4280 Ma, and that most of the Earth's landmasses collected into a single supercontinent around 1130 Ma. The supercontinent, known as Rodinia, broke up around 750 Ma. A number of glacial periods have been identified going as far back as the Huronian epoch, roughly 2400–2100 Ma. One of the best studied is the Sturtian-Varangian glaciation, around 850–635 Ma, which may have brought glacial conditions all the way to the equator, resulting in a "Snowball Earth". The atmosphere of the early Earth is not well understood. Most geologists believe it was composed primarily of nitrogen, carbon dioxide, and other relatively inert gases, and was lacking in free oxygen. There is, however, some evidence that an oxygen-rich atmosphere existed as early as the early Archean. At present, it is still believed that molecular oxygen was not a significant fraction of Earth's atmosphere until after photosynthetic life forms evolved and began to produce it in large quantities as a byproduct of their metabolism. This radical shift from a chemically inert to an oxidizing atmosphere caused an ecological crisis, sometimes called the oxygen catastrophe. At first, oxygen would have quickly combined with other elements in Earth's crust, primarily iron, removing it from the atmosphere. After the supply of oxidizable surfaces ran out, oxygen would have begun to accumulate in the atmosphere, and the modern high-oxygen atmosphere would have developed. Evidence for this lies in older rocks that contain massive banded iron formations that were laid down as iron oxides.
Subdivisions A terminology has evolved covering the early years of the Earth's existence, as radiometric dating has allowed absolute dates to be assigned to specific formations and features. The Precambrian is divided into three eons: the Hadean (– Ma), Archean (- Ma) and Proterozoic (- Ma). See Timetable of the Precambrian. Proterozoic: this eon refers to the time from the lower Cambrian boundary, Ma, back through Ma. As originally used, it was a synonym for "Precambrian" and hence included everything prior to the Cambrian boundary. The Proterozoic Eon is divided into three eras: the Neoproterozoic, Mesoproterozoic and Paleoproterozoic. Neoproterozoic: The youngest geologic era of the Proterozoic Eon, from the Cambrian Period lower boundary ( Ma) back to Ma. The Neoproterozoic corresponds to Precambrian Z rocks of older North American stratigraphy. Ediacaran: The youngest geologic period within the Neoproterozoic Era. The "2012 Geologic Time Scale" dates it from to Ma. In this period the Ediacaran biota appeared. Cryogenian: The middle period in the Neoproterozoic Era: - Ma. Tonian: the earliest period of the Neoproterozoic Era: - Ma. Mesoproterozoic: the middle era of the Proterozoic Eon, - Ma. Corresponds to "Precambrian Y" rocks of older North American stratigraphy. Paleoproterozoic: oldest era of the Proterozoic Eon, - Ma. Corresponds to "Precambrian X" rocks of older North American stratigraphy. Archean Eon: - Ma. Hadean Eon: – Ma. This term was intended originally to cover the time before any preserved rocks were deposited, although some zircon crystals from about 4400 Ma demonstrate the existence of crust in the Hadean Eon. Other records from Hadean time come from the Moon and meteorites. It has been proposed that the Precambrian should be divided into eons and eras that reflect stages of planetary evolution, rather than the current scheme based upon numerical ages. Such a system could rely on events in the stratigraphic record and be demarcated by GSSPs. The Precambrian could be divided into five "natural" eons, characterized as follows: Accretion and differentiation: a period of planetary formation until giant Moon-forming impact event. Hadean: dominated by heavy bombardment from about 4.51 Ga (possibly including a cool early Earth period) to the end of the Late Heavy Bombardment period. Archean: a period defined by the first crustal formations (the Isua greenstone belt) until the deposition of banded iron formations due to increasing atmospheric oxygen content. Transition: a period of continued banded iron formation until the first continental red beds. Proterozoic: a period of modern plate tectonics until the first animals. Precambrian supercontinents The movement of Earth's plates has caused the formation and break-up of continents over time, including occasional formation of a supercontinent containing most or all of the landmass. The earliest known supercontinent was Vaalbara. It formed from proto-continents and was a supercontinent 3.636 billion years ago. Vaalbara broke up c. 2.845–2.803 Ga ago. The supercontinent Kenorland was formed c. 2.72 Ga ago and then broke sometime after 2.45–2.1 Ga into the proto-continent cratons called Laurentia, Baltica, Yilgarn craton and Kalahari. The supercontinent Columbia, or Nuna, formed 2.1–1.8 billion years ago and broke up about 1.3–1.2 billion years ago. 
The supercontinent Rodinia is thought to have formed about 1300–900 Ma, to have included most or all of Earth's continents, and to have broken up into eight continents around 750–600 million years ago.
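To make the numeric subdivision scheme described above concrete, the sketch below maps an age in Ma to its Precambrian eon and, within the Proterozoic, its era. The boundary ages are approximate values from the current ICS time scale, supplied here as labeled assumptions because the original figures were lost from the text:

```python
# Approximate ICS boundary ages in Ma; assumed values for illustration only.
PRECAMBRIAN_EONS = [
    ("Hadean", 4567, 4031),
    ("Archean", 4031, 2500),
    ("Proterozoic", 2500, 538.8),
]
PROTEROZOIC_ERAS = [
    ("Paleoproterozoic", 2500, 1600),
    ("Mesoproterozoic", 1600, 1000),
    ("Neoproterozoic", 1000, 538.8),
]

def classify(age_ma: float) -> str:
    """Return the Precambrian eon (and era, if Proterozoic) for an age in Ma."""
    for eon, start, end in PRECAMBRIAN_EONS:
        if end <= age_ma <= start:
            if eon != "Proterozoic":
                return eon
            for era, e_start, e_end in PROTEROZOIC_ERAS:
                if e_end <= age_ma <= e_start:
                    return f"{eon} ({era})"
    return "Phanerozoic (not Precambrian)"

print(classify(3000))  # Archean
print(classify(700))   # Proterozoic (Neoproterozoic)
```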
https://en.wikipedia.org/wiki/Polymerase%20chain%20reaction
Polymerase chain reaction
The polymerase chain reaction (PCR) is a method widely used to make millions to billions of copies of a specific DNA sample rapidly, allowing scientists to amplify a very small sample of DNA (or a part of it) sufficiently to enable detailed study. PCR was invented in 1983 by American biochemist Kary Mullis at Cetus Corporation. Mullis and biochemist Michael Smith, who had developed other essential ways of manipulating DNA, were jointly awarded the Nobel Prize in Chemistry in 1993. PCR is fundamental to many of the procedures used in genetic testing and research, including analysis of ancient samples of DNA and identification of infectious agents. Using PCR, copies of very small amounts of DNA sequences are exponentially amplified in a series of cycles of temperature changes. PCR is now a common and often indispensable technique used in medical and laboratory research for a broad variety of applications, including biomedical research and forensic science. The majority of PCR methods rely on thermal cycling. Thermal cycling exposes reagents to repeated cycles of heating and cooling to permit different temperature-dependent reactions: specifically, DNA melting and enzyme-driven DNA replication. PCR employs two main reagents: primers (short single-stranded DNA fragments known as oligonucleotides, with sequences complementary to the target DNA region) and a thermostable DNA polymerase. In the first step of PCR, the two strands of the DNA double helix are physically separated at a high temperature in a process called nucleic acid denaturation. In the second step, the temperature is lowered and the primers bind to the complementary sequences of DNA. The two DNA strands then become templates for DNA polymerase to enzymatically assemble a new DNA strand from free nucleotides, the building blocks of DNA. As PCR progresses, the DNA generated is itself used as a template for replication, setting in motion a chain reaction in which the original DNA template is exponentially amplified. Almost all PCR applications employ a heat-stable DNA polymerase, such as Taq polymerase, an enzyme originally isolated from the thermophilic bacterium Thermus aquaticus. If the polymerase used were heat-susceptible, it would denature under the high temperatures of the denaturation step. Before the use of Taq polymerase, DNA polymerase had to be manually added every cycle, which was a tedious and costly process. Applications of the technique include DNA cloning for sequencing, gene cloning and manipulation, gene mutagenesis; construction of DNA-based phylogenies, or functional analysis of genes; diagnosis and monitoring of genetic disorders; amplification of ancient DNA; analysis of genetic fingerprints for DNA profiling (for example, in forensic science and parentage testing); and detection of pathogens in nucleic acid tests for the diagnosis of infectious diseases. Principles PCR amplifies a specific region of a DNA strand (the DNA target). Most PCR methods amplify DNA fragments of between 0.1 and 10 kilobase pairs (kbp) in length, although some techniques allow for amplification of fragments up to 40 kbp. The amount of amplified product is determined by the available substrates in the reaction, which become limiting as the reaction progresses.
A basic PCR set-up requires several components and reagents, including: a DNA template that contains the DNA target region to amplify a DNA polymerase; an enzyme that polymerizes new DNA strands; heat-resistant Taq polymerase is especially common, as it is more likely to remain intact during the high-temperature DNA denaturation process two DNA primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strands of the DNA target (DNA polymerase can only bind to and elongate from a double-stranded region of DNA; without primers, there is no double-stranded initiation site at which the polymerase can bind); specific primers that are complementary to the DNA target region are selected beforehand, and are often custom-made in a laboratory or purchased from commercial biochemical suppliers deoxynucleoside triphosphates, or dNTPs (sometimes called "deoxynucleotide triphosphates"; nucleotides containing triphosphate groups), the building blocks from which the DNA polymerase synthesizes a new DNA strand a buffer solution providing a suitable chemical environment for optimum activity and stability of the DNA polymerase bivalent cations, typically magnesium (Mg) or manganese (Mn) ions; Mg2+ is the most common, but Mn2+ can be used for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis; and monovalent cations, typically potassium (K) ions The reaction is commonly carried out in a volume of 10–200 μL in small reaction tubes (0.2–0.5 mL volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of a Peltier device, which permits both heating and cooling of the block holding the PCR tubes simply by reversing the device's electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibrium. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermal cyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube. Procedure Typically, PCR consists of a series of 20–40 repeated temperature changes, called thermal cycles, with each cycle commonly consisting of two or three discrete temperature steps (see figure below). The cycling is often preceded by a single temperature step at a very high temperature (>), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters, including the enzyme used for DNA synthesis, the concentration of bivalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers. The individual steps common to most PCR methods are as follows: Initialization: This step is only required for DNA polymerases that require heat activation by hot-start PCR. It consists of heating the reaction chamber to a temperature of , or if extremely thermostable polymerases are used, which is then held for 1–10 minutes. Denaturation: This step is the first regular cycling event and consists of heating the reaction chamber to for 20–30 seconds. This causes DNA melting, or denaturation, of the double-stranded DNA template by breaking the hydrogen bonds between complementary bases, yielding two single-stranded DNA molecules. 
Annealing: In the next step, the reaction temperature is lowered to for 20–40 seconds, allowing annealing of the primers to each of the single-stranded DNA templates. Two different primers are typically included in the reaction mixture: one for each of the two single-stranded complements containing the target region. The primers are single-stranded sequences themselves, but are much shorter than the length of the target region, complementing only very short sequences at the 3' end of each strand. It is critical to determine a proper temperature for the annealing step because efficiency and specificity are strongly affected by the annealing temperature. This temperature must be low enough to allow for hybridization of the primer to the strand, but high enough for the hybridization to be specific, i.e., the primer should bind only to a perfectly complementary part of the strand, and nowhere else. If the temperature is too low, the primer may bind imperfectly. If it is too high, the primer may not bind at all. A typical annealing temperature is about 3–5 °C below the Tm of the primers used. Stable hydrogen bonds between complementary bases are formed only when the primer sequence very closely matches the template sequence. During this step, the polymerase binds to the primer-template hybrid and begins DNA formation. Extension/elongation: The temperature at this step depends on the DNA polymerase used; the optimum activity temperature for the thermostable DNA polymerase of Taq polymerase is approximately , though a temperature of is commonly used with this enzyme. In this step, the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding free dNTPs from the reaction mixture that are complementary to the template in the 5'-to-3' direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxy group at the end of the nascent (elongating) DNA strand. The precise time required for elongation depends both on the DNA polymerase used and on the length of the DNA target region to amplify. As a rule of thumb, at their optimal temperature, most DNA polymerases polymerize a thousand bases per minute. Under optimal conditions (i.e., if there are no limitations due to limiting substrates or reagents), at each extension/elongation step the number of DNA target sequences is doubled. With each successive cycle, the original template strands plus all newly generated strands become template strands for the next round of elongation, leading to exponential (geometric) amplification of the specific DNA target region. The processes of denaturation, annealing and elongation constitute a single cycle. Multiple cycles are required to amplify the DNA target to millions of copies. The formula used to calculate the number of DNA copies formed after a given number of cycles is 2^n, where n is the number of cycles. Thus, a reaction set for 30 cycles results in 2^30, or 1,073,741,824, copies of the original double-stranded DNA target region. Final elongation: This single step is optional, but is performed at a temperature of (the temperature range required for optimal activity of most polymerases used in PCR) for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully elongated. Final hold: The final step cools the reaction chamber to for an indefinite time, and may be employed for short-term storage of the PCR products.
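A compact way to summarize the cycling logic just described is to compute a primer's rough melting temperature, derive an annealing temperature from it, and track the theoretical copy number per cycle. A minimal sketch in Python; the Wallace rule for Tm and the 100%-efficiency doubling model are textbook approximations, and the primer sequence is hypothetical:

```python
def wallace_tm(primer: str) -> int:
    """Rough melting temperature (Wallace rule): 2 °C per A/T, 4 °C per G/C.
    A reasonable approximation only for short oligos (~14-20 nt)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def annealing_temp(primer: str, offset: float = 5.0) -> float:
    """Typical annealing temperature: a few degrees below the primer Tm."""
    return wallace_tm(primer) - offset

def copies_after(cycles: int, start_copies: int = 1, efficiency: float = 1.0) -> float:
    """Theoretical copy number: N = N0 * (1 + E)^n, i.e. the 2^n rule when E = 1."""
    return start_copies * (1 + efficiency) ** cycles

primer = "AGCGGATAACAATTTCACAC"  # hypothetical 20-mer
print(wallace_tm(primer))        # 56 °C by the Wallace rule
print(annealing_temp(primer))    # 51.0 °C
print(copies_after(30))          # 1073741824.0 -> the 2^30 figure in the text
```

Lowering `efficiency` below 1.0 in `copies_after` also illustrates why real reactions fall short of the ideal 2^n yield as reagents are consumed.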
To check whether the PCR successfully generated the anticipated DNA target region (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis may be employed for size separation of the PCR products. The size of the PCR products is determined by comparison with a DNA ladder, a molecular-weight marker containing DNA fragments of known sizes that runs on the gel alongside the PCR products. Stages As with other chemical reactions, the reaction rate and efficiency of PCR are affected by limiting factors. Thus, the entire PCR process can further be divided into three stages based on reaction progress: Exponential amplification: At every cycle, the amount of product is doubled (assuming 100% reaction efficiency). After 30 cycles, a single copy of DNA can be increased up to 1,000,000,000 (one billion) copies. In a sense, then, the replication of a discrete strand of DNA is being manipulated in a tube under controlled conditions. The reaction is very sensitive: only minute quantities of DNA need be present. Leveling off stage: The reaction slows as the DNA polymerase loses activity and as consumption of reagents, such as dNTPs and primers, makes them more limiting. Plateau: No more product accumulates, due to exhaustion of reagents and enzyme. Optimization In practice, PCR can fail for various reasons, such as sensitivity or contamination. Contamination with extraneous DNA can lead to spurious products and is addressed with lab protocols and procedures that separate pre-PCR mixtures from potential DNA contaminants. For instance, if DNA from a crime scene is analyzed, a single DNA molecule from lab personnel could be amplified and misguide the investigation. Hence the PCR setup area is separated from areas used for the analysis or purification of PCR products, disposable plasticware is used, and work surfaces are thoroughly cleaned between reaction setups. Specificity can be adjusted by experimental conditions so that no spurious products are generated. Primer-design techniques are important in improving PCR product yield and in avoiding the formation of unspecific products. The use of alternate buffer components or polymerase enzymes can help with amplification of long or otherwise problematic regions of DNA. For instance, Q5 polymerase is said to be ≈280 times less error-prone than Taq polymerase. Adjusting the running parameters (e.g., temperature and duration of cycles) or adding reagents, such as formamide, may increase the specificity and yield of PCR. Computer simulations of theoretical PCR results (electronic PCR) may be performed to assist in primer design. Applications Selective DNA isolation PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a specific region of DNA. This use of PCR augments many methods, such as generating hybridization probes for Southern or northern hybridization and DNA cloning, which require larger amounts of DNA representing a specific DNA region. PCR supplies these techniques with high amounts of pure DNA, enabling analysis of DNA samples even from very small amounts of starting material. Other applications of PCR include DNA sequencing to determine unknown PCR-amplified sequences, in which one of the amplification primers may be used in Sanger sequencing, and isolation of a DNA sequence to expedite recombinant DNA technologies involving the insertion of a DNA sequence into a plasmid, phage, or cosmid (depending on size) or the genetic material of another organism.
Bacterial colonies (such as E. coli) can be rapidly screened by PCR for correct DNA vector constructs. PCR may also be used for genetic fingerprinting, a forensic technique used to identify a person or organism by comparing experimental DNAs through different PCR-based methods. Some PCR fingerprint methods have high discriminative power and can be used to identify genetic relationships between individuals, such as parent-child or between siblings, and are used in paternity testing. This technique may also be used to determine evolutionary relationships among organisms when certain molecular clocks are used (i.e. the 16S rRNA and recA genes of microorganisms). Amplification and quantification of DNA Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze extremely small amounts of sample. This is often critical for forensic analysis, when only a trace amount of DNA is available as evidence. PCR may also be used in the analysis of ancient DNA that is tens of thousands of years old. These PCR-based techniques have been successfully used on animals, such as a forty-thousand-year-old mammoth, and also on human DNA, in applications ranging from the analysis of Egyptian mummies to the identification of a Russian tsar and the body of English king Richard III. Quantitative PCR or real-time PCR (qPCR, not to be confused with RT-PCR) methods allow the estimation of the amount of a given sequence present in a sample, a technique often applied to quantitatively determine levels of gene expression. Quantitative PCR is an established tool for DNA quantification that measures the accumulation of DNA product after each round of PCR amplification. qPCR allows the quantification and detection of a specific DNA sequence in real time, since it measures concentration while the synthesis process is taking place. There are two methods for simultaneous detection and quantification. The first method consists of using fluorescent dyes that are retained nonspecifically between the double strands. The second method involves fluorescently labeled probes that are specific for particular sequences. Detection of DNA using these methods occurs only after hybridization of the probes with their complementary DNA takes place. An interesting technique combination is real-time PCR and reverse transcription. This sophisticated technique, called RT-qPCR, allows for the quantification of a small quantity of RNA. Through this combined technique, mRNA is converted to cDNA, which is further quantified using qPCR. This technique lowers the possibility of error at the end point of PCR, increasing chances for detection of genes associated with genetic diseases such as cancer. Laboratories use RT-qPCR for the purpose of sensitively measuring gene regulation. The mathematical foundations for the reliable quantification of PCR and RT-qPCR facilitate the implementation of accurate fitting procedures of experimental data in research, medical, diagnostic and infectious disease applications. Medical and diagnostic applications Prospective parents can be tested for being genetic carriers, or their children might be tested for actually being affected by a disease. DNA samples for prenatal testing can be obtained by amniocentesis, chorionic villus sampling, or even by the analysis of rare fetal cells circulating in the mother's bloodstream. PCR analysis is also essential to preimplantation genetic diagnosis, where individual cells of a developing embryo are tested for mutations.
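As context for the quantitative PCR methods described above: quantification rests on inverting exponential growth. If a target crosses the detection threshold at cycle Ct with amplification efficiency E, relative abundance between two samples follows from the Ct difference. A minimal sketch, assuming ideal efficiency (E = 1) by default; the function name and example values are illustrative, not from any particular library:

```python
def fold_difference(ct_a: float, ct_b: float, efficiency: float = 1.0) -> float:
    """Relative amount of target in sample A vs sample B from their Ct values.
    Each extra cycle at efficiency E multiplies product by (1 + E), so
    fold = (1 + E) ** (ct_b - ct_a); a lower Ct means more starting template."""
    return (1 + efficiency) ** (ct_b - ct_a)

# Sample A crosses the threshold 3 cycles earlier than sample B:
print(fold_difference(22.0, 25.0))  # 8.0 -> roughly 8x more target in A (2^3)
```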
PCR can also be used as part of a sensitive test for tissue typing, vital to organ transplantation. There is even a proposal to replace the traditional antibody-based tests for blood type with PCR-based tests. Many forms of cancer involve alterations to oncogenes. By using PCR-based tests to study these mutations, therapy regimens can sometimes be individually customized to a patient. PCR permits early diagnosis of malignant diseases such as leukemia and lymphomas, which is currently the most highly developed application of PCR in cancer research and is already being used routinely. PCR assays can be performed directly on genomic DNA samples to detect translocation-specific malignant cells at a sensitivity that is at least 10,000-fold higher than that of other methods. PCR is very useful in the medical field, since it allows for the isolation and amplification of tumor suppressors. Quantitative PCR, for example, can be used to quantify and analyze single cells, as well as recognize DNA, mRNA and protein conformations and combinations. Infectious disease applications PCR allows for rapid and highly specific diagnosis of infectious diseases, including those caused by bacteria or viruses. PCR also permits identification of non-cultivatable or slow-growing microorganisms such as mycobacteria, anaerobic bacteria, or viruses from tissue culture assays and animal models. The basis for PCR diagnostic applications in microbiology is the detection of infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific genes. Characterization and detection of infectious disease organisms have been revolutionized by PCR in the following ways: The human immunodeficiency virus (HIV) is a difficult target to find and eradicate. The earliest tests for infection relied on the presence of antibodies to the virus circulating in the bloodstream. However, antibodies do not appear until many weeks after infection, maternal antibodies mask the infection of a newborn, and therapeutic agents to fight the infection do not affect the antibodies. PCR tests have been developed that can detect as little as one viral genome among the DNA of over 50,000 host cells. Infections can be detected earlier, donated blood can be screened directly for the virus, newborns can be immediately tested for infection, and the effects of antiviral treatments can be quantified. Some disease organisms, such as that of tuberculosis, are difficult to sample from patients and slow to grow in the laboratory. PCR-based tests have allowed detection of small numbers of disease organisms (both live and dead) in convenient samples. Detailed genetic analysis can also be used to detect antibiotic resistance, allowing immediate and effective therapy. The effects of therapy can also be immediately evaluated. The spread of a disease organism through populations of domestic or wild animals can be monitored by PCR testing. In many cases, the appearance of new virulent sub-types can be detected and monitored. The sub-types of an organism that were responsible for earlier epidemics can also be determined by PCR analysis. Viral DNA can be detected by PCR. The primers used must be specific to the targeted sequences in the DNA of a virus, and PCR can be used for diagnostic analyses or DNA sequencing of the viral genome. The high sensitivity of PCR permits virus detection soon after infection and even before the onset of disease. Such early detection may give physicians a significant lead time in treatment.
The amount of virus ("viral load") in a patient can also be quantified by PCR-based DNA quantitation techniques (see below). A variant of PCR (RT-PCR) is used for detecting viral RNA rather than DNA: in this test the enzyme reverse transcriptase is used to generate a DNA sequence which matches the viral RNA; this DNA is then amplified as per the usual PCR method. RT-PCR is widely used to detect the SARS-CoV-2 viral genome. Diseases such as pertussis (whooping cough) are caused by the bacterium Bordetella pertussis. This bacterium causes a serious acute respiratory infection that affects various animals and humans and has led to the deaths of many young children. The pertussis toxin is a protein exotoxin that binds to cell receptors by two dimers and reacts with different cell types, such as the T lymphocytes that play a role in cell immunity. PCR is an important testing tool that can detect sequences within the gene for the pertussis toxin. Because PCR has a high sensitivity for the toxin and a rapid turnaround time, it is very efficient for diagnosing pertussis when compared to culture. Forensic applications The development of PCR-based genetic (or DNA) fingerprinting protocols has seen widespread application in forensics: In its most discriminating form, genetic fingerprinting can uniquely discriminate any one person from the entire population of the world. Minute samples of DNA can be isolated from a crime scene and compared to that from suspects, or from a DNA database of earlier evidence or convicts. Simpler versions of these tests are often used to rapidly rule out suspects during a criminal investigation. Evidence from decades-old crimes can be tested, confirming or exonerating the people originally convicted. Forensic DNA typing has been an effective way of identifying or exonerating criminal suspects due to analysis of evidence discovered at a crime scene. The human genome has many repetitive regions that can be found within gene sequences or in non-coding regions of the genome. Specifically, up to 40% of human DNA is repetitive. There are two distinct categories for these repetitive, non-coding regions in the genome. The first category is called variable number tandem repeats (VNTR), which are 10–100 base pairs long, and the second category is called short tandem repeats (STR), which consist of repeated 2–10 base pair sections. PCR is used to amplify several well-known VNTRs and STRs using primers that flank each of the repetitive regions. The sizes of the fragments obtained from any individual for each of the STRs will indicate which alleles are present. By analyzing several STRs for an individual, a set of alleles will be found for each person that is statistically likely to be unique. Researchers have identified the complete sequence of the human genome. This sequence can be easily accessed through the NCBI website and is used in many real-life applications. For example, the FBI has compiled a set of DNA marker sites used for identification, and these are called the Combined DNA Index System (CODIS) DNA database. Using this database enables statistical analysis to be used to determine the probability that a DNA sample will match. PCR is a very powerful and significant analytical tool to use for forensic DNA typing because researchers only need a very small amount of the target DNA to be used for analysis. For example, a single human hair with attached hair follicle has enough DNA to conduct the analysis.
Similarly, a few sperm, skin samples from under the fingernails, or a small amount of blood can provide enough DNA for conclusive analysis. Less discriminating forms of DNA fingerprinting can help in DNA paternity testing, where an individual is matched with their close relatives. DNA from unidentified human remains can be tested and compared with that from possible parents, siblings, or children. Similar testing can be used to confirm the biological parents of an adopted (or kidnapped) child. The actual biological father of a newborn can also be confirmed (or ruled out). The PCR AMGX/AMGY design facilitates the amplification of DNA sequences from a very minuscule amount of genome, and it can also be used for real-time sex determination from forensic bone samples. This provides a powerful and effective way to determine sex in forensic cases and ancient specimens. Research applications PCR has been applied to many areas of research in molecular genetics: PCR allows rapid production of short pieces of DNA, even when no more than the sequence of the two primers is known. This ability of PCR augments many methods, such as generating hybridization probes for Southern or northern blot hybridization. PCR supplies these techniques with large amounts of pure DNA, sometimes as a single strand, enabling analysis even from very small amounts of starting material. The task of DNA sequencing can also be assisted by PCR. Known segments of DNA can easily be produced from a patient with a genetic disease mutation. Modifications to the amplification technique can extract segments from a completely unknown genome, or can generate just a single strand of an area of interest. PCR has numerous applications to the more traditional process of DNA cloning. It can extract segments for insertion into a vector from a larger genome, which may be only available in small quantities. Using a single set of 'vector primers', it can also analyze or extract fragments that have already been inserted into vectors. Some alterations to the PCR protocol can generate mutations (general or site-directed) of an inserted fragment. Sequence-tagged sites is a process in which PCR is used as an indicator that a particular segment of a genome is present in a particular clone. The Human Genome Project found this application vital to mapping the cosmid clones they were sequencing, and to coordinating the results from different laboratories. An application of PCR is the phylogenic analysis of DNA from ancient sources, such as that found in the recovered bones of Neanderthals, from frozen tissues of mammoths, or from the brains of Egyptian mummies. In some cases the highly degraded DNA from these sources might be reassembled during the early stages of amplification. A common application of PCR is the study of patterns of gene expression. Tissues (or even individual cells) can be analyzed at different stages to see which genes have become active, or which have been switched off. This application can also use quantitative PCR to quantitate the actual levels of expression. The ability of PCR to simultaneously amplify several loci from individual sperm has greatly enhanced the more traditional task of genetic mapping by studying chromosomal crossovers after meiosis. Rare crossover events between very close loci have been directly observed by analyzing thousands of individual sperm cells.
Similarly, unusual deletions, insertions, translocations, or inversions can be analyzed, all without having to wait (or pay) for the long and laborious processes of fertilization, embryogenesis, etc. Site-directed mutagenesis: PCR can be used to create mutant genes with mutations chosen by scientists at will. These mutations can be chosen in order to understand how proteins accomplish their functions, and to change or improve protein function. Advantages PCR has a number of advantages. It is fairly simple to understand and to use, and produces results rapidly. The technique is highly sensitive, with the potential to produce millions to billions of copies of a specific product for sequencing, cloning, and analysis. qRT-PCR shares the same advantages as PCR, with the added advantage of quantification of the synthesized product. Therefore, it is used to analyze alterations of gene expression levels in tumors, microbes, or other disease states. PCR is a very powerful and practical research tool. The sequences behind the unknown etiologies of many diseases are being worked out with PCR. The technique can help identify the sequence of previously unknown viruses related to those already known and thus give us a better understanding of the disease itself. If the procedure can be further simplified and sensitive non-radiometric detection systems can be developed, PCR will assume a prominent place in the clinical laboratory for years to come. Limitations One major limitation of PCR is that prior information about the target sequence is necessary in order to generate the primers that will allow its selective amplification. This means that, typically, PCR users must know the precise sequence(s) upstream of the target region on each of the two single-stranded templates in order to ensure that the DNA polymerase properly binds to the primer-template hybrids and subsequently generates the entire target region during DNA synthesis. Like all enzymes, DNA polymerases are also prone to error, which in turn causes mutations in the PCR fragments that are generated. Another limitation of PCR is that even the smallest amount of contaminating DNA can be amplified, resulting in misleading or ambiguous results. To minimize the chance of contamination, investigators should reserve separate rooms for reagent preparation, the PCR, and analysis of product. Reagents should be dispensed into single-use aliquots. Pipettors with disposable plungers and extra-long pipette tips should be routinely used. It is moreover recommended to ensure that the lab set-up follows a unidirectional workflow. No materials or reagents used in the PCR and analysis rooms should ever be taken into the PCR preparation room without thorough decontamination. Environmental samples that contain humic acids may inhibit PCR amplification and lead to inaccurate results. Variations Allele-specific PCR or the amplification refractory mutation system (ARMS): a diagnostic or cloning technique based on single-nucleotide variations (SNVs, not to be confused with SNPs), i.e. single-base differences in a patient. Any mutation involving a single base change can be detected by this system. It requires prior knowledge of a DNA sequence, including differences between alleles, and uses primers whose 3' ends encompass the SNV (a base-pair buffer around the SNV is usually incorporated).
PCR amplification under stringent conditions is much less efficient in the presence of a mismatch between template and primer, so successful amplification with an SNP-specific primer signals presence of the specific SNP or small deletions in a sequence. See SNP genotyping for more information. Assembly PCR or Polymerase Cycling Assembly (PCA): artificial synthesis of long DNA sequences by performing PCR on a pool of long oligonucleotides with short overlapping segments. The oligonucleotides alternate between sense and antisense directions, and the overlapping segments determine the order of the PCR fragments, thereby selectively producing the final long DNA product. Asymmetric PCR: preferentially amplifies one DNA strand in a double-stranded DNA template. It is used in sequencing and hybridization probing where amplification of only one of the two complementary strands is required. PCR is carried out as usual, but with a great excess of the primer for the strand targeted for amplification. Because of the slow (arithmetic) amplification later in the reaction after the limiting primer has been used up, extra cycles of PCR are required. A recent modification on this process, known as Linear-After-The-Exponential-PCR (LATE-PCR), uses a limiting primer with a higher melting temperature (Tm) than the excess primer to maintain reaction efficiency as the limiting primer concentration decreases mid-reaction. Convective PCR: a pseudo-isothermal way of performing PCR. Instead of repeatedly heating and cooling the PCR mixture, the solution is subjected to a thermal gradient. The resulting thermal instability driven convective flow automatically shuffles the PCR reagents from the hot and cold regions repeatedly enabling PCR. Parameters such as thermal boundary conditions and geometry of the PCR enclosure can be optimized to yield robust and rapid PCR by harnessing the emergence of chaotic flow fields. Such convective flow PCR setup significantly reduces device power requirement and operation time. Dial-out PCR: a highly parallel method for retrieving accurate DNA molecules for gene synthesis. A complex library of DNA molecules is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by PCR. Digital PCR (dPCR): used to measure the quantity of a target DNA sequence in a DNA sample. The DNA sample is highly diluted so that after running many PCRs in parallel, some of them do not receive a single molecule of the target DNA. The target DNA concentration is calculated using the proportion of negative outcomes. Hence the name 'digital PCR'. Helicase-dependent amplification: similar to traditional PCR, but uses a constant temperature rather than cycling through denaturation and annealing/extension cycles. DNA helicase, an enzyme that unwinds DNA, is used in place of thermal denaturation. Hot start PCR: a technique that reduces non-specific amplification during the initial set up stages of the PCR. It may be performed manually by heating the reaction components to the denaturation temperature (e.g., 95 °C) before adding the polymerase. Specialized enzyme systems have been developed that inhibit the polymerase's activity at ambient temperature, either by the binding of an antibody or by the presence of covalently bound inhibitors that dissociate only after a high-temperature activation step. 
Hot-start/cold-finish PCR is achieved with new hybrid polymerases that are inactive at ambient temperature and are instantly activated at elongation temperature. In silico PCR (digital PCR, virtual PCR, electronic PCR, e-PCR) refers to computational tools used to calculate theoretical polymerase chain reaction results using a given set of primers (probes) to amplify DNA sequences from a sequenced genome or transcriptome. In silico PCR was proposed as an educational tool for molecular biology. Intersequence-specific PCR (ISSR): a PCR method for DNA fingerprinting that amplifies regions between simple sequence repeats to produce a unique fingerprint of amplified fragment lengths. Inverse PCR: commonly used to identify the flanking sequences around genomic inserts. It involves a series of DNA digestions and self-ligation, resulting in known sequences at either end of the unknown sequence. Ligation-mediated PCR: uses small DNA linkers ligated to the DNA of interest and multiple primers annealing to the DNA linkers; it has been used for DNA sequencing, genome walking, and DNA footprinting. Methylation-specific PCR (MSP): developed by Stephen Baylin and James G. Herman at the Johns Hopkins School of Medicine, and used to detect methylation of CpG islands in genomic DNA. DNA is first treated with sodium bisulfite, which converts unmethylated cytosine bases to uracil, which is recognized by PCR primers as thymine. Two PCRs are then carried out on the modified DNA, using primer sets identical except at any CpG islands within the primer sequences. At these points, one primer set recognizes DNA with cytosines to amplify methylated DNA, and one set recognizes DNA with uracil or thymine to amplify unmethylated DNA. MSP using qPCR can also be performed to obtain quantitative rather than qualitative information about methylation. Miniprimer PCR: uses a thermostable polymerase (S-Tbr) that can extend from short primers ("smalligos") as short as 9 or 10 nucleotides. This method permits PCR targeting of smaller primer binding regions, and is used to amplify conserved DNA sequences, such as the 16S (or eukaryotic 18S) rRNA gene. Multiplex ligation-dependent probe amplification (MLPA): permits amplifying multiple targets with a single primer pair, thus avoiding the resolution limitations of multiplex PCR (see below). Multiplex PCR: consists of multiple primer sets within a single PCR mixture to produce amplicons of varying sizes that are specific to different DNA sequences. By targeting multiple genes at once, additional information may be gained from a single test run that would otherwise require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes must differ enough (that is, their base-pair lengths should be distinct enough) to form separate bands when visualized by gel electrophoresis. Nanoparticle-assisted PCR (nanoPCR): some nanoparticles (NPs) can enhance the efficiency of PCR (thus being called nanoPCR), and some can even outperform the original PCR enhancers. It was reported that quantum dots (QDs) can improve PCR specificity and efficiency. Single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) are efficient in enhancing the amplification of long PCR. Carbon nanopowder (CNP) can improve the efficiency of repeated PCR and long PCR, while zinc oxide, titanium dioxide and Ag NPs were found to increase the PCR yield.
Previous data indicated that non-metallic NPs retained acceptable amplification fidelity. Given that many NPs are capable of enhancing PCR efficiency, there is likely to be great potential for nanoPCR technology improvements and product development. Nested PCR: increases the specificity of DNA amplification by reducing background due to non-specific amplification of DNA. Two sets of primers are used in two successive PCRs. In the first reaction, one pair of primers is used to generate DNA products, which, besides the intended target, may still consist of non-specifically amplified DNA fragments. The product(s) are then used in a second PCR with a set of primers whose binding sites are completely or partially different from, and located 3' of, each of the primers used in the first reaction. Nested PCR is often more successful in specifically amplifying long DNA fragments than conventional PCR, but it requires more detailed knowledge of the target sequences. Overlap-extension PCR or splicing by overlap extension (SOEing): a genetic engineering technique that is used to splice together two or more DNA fragments that contain complementary sequences. It is used to join DNA pieces containing genes, regulatory sequences, or mutations; the technique enables creation of specific and long DNA constructs. It can also introduce deletions, insertions or point mutations into a DNA sequence. PAN-AC: uses isothermal conditions for amplification, and may be used in living cells. PAN-PCR: a computational method for designing bacterium typing assays based on whole-genome sequence data. Quantitative PCR (qPCR): used to measure the quantity of a target sequence (commonly in real time). It quantitatively measures starting amounts of DNA, cDNA, or RNA. Quantitative PCR is commonly used to determine whether a DNA sequence is present in a sample and the number of its copies in the sample. Quantitative PCR has a very high degree of precision. Quantitative PCR methods use fluorescent dyes, such as SYBR Green or EvaGreen, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time. It is also sometimes abbreviated RT-PCR (real-time PCR), but this abbreviation should be used only for reverse transcription PCR; qPCR is the appropriate contraction for quantitative PCR (real-time PCR). Reverse complement PCR (RC-PCR): allows functional domains or sequences of choice to be appended independently to either end of the generated amplicon in a single closed-tube reaction. This method generates target-specific primers within the reaction by the interaction of universal primers (which contain the desired sequences or domains to be appended) and RC probes. Reverse transcription PCR (RT-PCR): for amplifying DNA from RNA. Reverse transcriptase reverse-transcribes RNA into cDNA, which is then amplified by PCR. RT-PCR is widely used in expression profiling, to determine the expression of a gene or to identify the sequence of an RNA transcript, including transcription start and termination sites. If the genomic DNA sequence of a gene is known, RT-PCR can be used to map the location of exons and introns in the gene. The 5' end of a gene (corresponding to the transcription start site) is typically identified by RACE-PCR (rapid amplification of cDNA ends). RNase H-dependent PCR (rhPCR): a modification of PCR that utilizes primers with a 3' extension block that can be removed by a thermostable RNase HII enzyme.
This system reduces primer-dimers and allows for multiplexed reactions to be performed with higher numbers of primers. Single specific primer-PCR (SSP-PCR): allows the amplification of double-stranded DNA even when the sequence information is available at one end only. This method permits amplification of genes for which only a partial sequence information is available, and allows unidirectional genome walking from known into unknown regions of the chromosome. Solid Phase PCR: encompasses multiple meanings, including Polony Amplification (where PCR colonies are derived in a gel matrix, for example), Bridge PCR (primers are covalently linked to a solid-support surface), conventional Solid Phase PCR (where Asymmetric PCR is applied in the presence of solid support bearing primer with sequence matching one of the aqueous primers) and Enhanced Solid Phase PCR (where conventional Solid Phase PCR can be improved by employing high Tm and nested solid support primer with optional application of a thermal 'step' to favour solid support priming). Suicide PCR: typically used in paleogenetics or other studies where avoiding false positives and ensuring the specificity of the amplified fragment is the highest priority. It was originally described in a study to verify the presence of the microbe Yersinia pestis in dental samples obtained from 14th Century graves of people supposedly killed by the plague during the medieval Black Death epidemic. The method prescribes the use of any primer combination only once in a PCR (hence the term "suicide"), which should never have been used in any positive control PCR reaction, and the primers should always target a genomic region never amplified before in the lab using this or any other set of primers. This ensures that no contaminating DNA from previous PCR reactions is present in the lab, which could otherwise generate false positives. Thermal asymmetric interlaced PCR (TAIL-PCR): for isolation of an unknown sequence flanking a known sequence. Within the known sequence, TAIL-PCR uses a nested pair of primers with differing annealing temperatures; a degenerate primer is used to amplify in the other direction from the unknown sequence. Touchdown PCR (Step-down PCR): a variant of PCR that aims to reduce nonspecific background by gradually lowering the annealing temperature as PCR cycling progresses. The annealing temperature at the initial cycles is usually a few degrees (3–5 °C) above the Tm of the primers used, while at the later cycles, it is a few degrees (3–5 °C) below the primer Tm. The higher temperatures give greater specificity for primer binding, and the lower temperatures permit more efficient amplification from the specific products formed during the initial cycles. Universal Fast Walking: for genome walking and genetic fingerprinting using a more specific 'two-sided' PCR than conventional 'one-sided' approaches (using only one gene-specific primer and one general primer—which can lead to artefactual 'noise') by virtue of a mechanism involving lariat structure formation. Streamlined derivatives of UFW are LaNe RAGE (lariat-dependent nested PCR for rapid amplification of genomic DNA ends), 5'RACE LaNe and 3'RACE LaNe. History The heat-resistant enzymes that are a key component in polymerase chain reaction were discovered in the 1960s as a product of a microbial life form that lived in the superheated waters of Yellowstone's Mushroom Spring. A 1971 paper in the Journal of Molecular Biology by Kjell Kleppe and co-workers in the laboratory of H. 
Gobind Khorana first described a method of using an enzymatic assay to replicate a short DNA template with primers in vitro. However, this early manifestation of the basic PCR principle did not receive much attention at the time and the invention of the polymerase chain reaction in 1983 is generally credited to Kary Mullis. When Mullis developed the PCR in 1983, he was working in Emeryville, California for Cetus Corporation, one of the first biotechnology companies, where he was responsible for synthesizing short chains of DNA. Mullis has written that he conceived the idea for PCR while cruising along the Pacific Coast Highway one night in his car. He was playing in his mind with a new way of analyzing changes (mutations) in DNA when he realized that he had instead invented a method of amplifying any DNA region through repeated cycles of duplication driven by DNA polymerase. In Scientific American, Mullis summarized the procedure: "Beginning with a single molecule of the genetic material DNA, the PCR can generate 100 billion similar molecules in an afternoon. The reaction is easy to execute. It requires no more than a test tube, a few simple reagents, and a source of heat." DNA fingerprinting was first used for paternity testing in 1988. Mullis has credited his use of LSD as integral to his development of PCR: "Would I have invented PCR if I hadn't taken LSD? I seriously doubt it. I could sit on a DNA molecule and watch the polymers go by. I learnt that partly on psychedelic drugs." Mullis and biochemist Michael Smith, who had developed other essential ways of manipulating DNA, were jointly awarded the Nobel Prize in Chemistry in 1993, seven years after Mullis and his colleagues at Cetus first put his proposal to practice. Mullis's 1985 paper with R. K. Saiki and H. A. Erlich, "Enzymatic Amplification of β-globin Genomic Sequences and Restriction Site Analysis for Diagnosis of Sickle Cell Anemia"—the polymerase chain reaction invention (PCR)—was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society in 2017. At the core of the PCR method is the use of a suitable DNA polymerase able to withstand the high temperatures of > required for separation of the two DNA strands in the DNA double helix after each replication cycle. The DNA polymerases initially employed for in vitro experiments presaging PCR were unable to withstand these high temperatures. So the early procedures for DNA replication were very inefficient and time-consuming, and required large amounts of DNA polymerase and continuous handling throughout the process. The discovery in 1976 of Taq polymerase—a DNA polymerase purified from the thermophilic bacterium, Thermus aquaticus, which naturally lives in hot () environments such as hot springs—paved the way for dramatic improvements of the PCR method. The DNA polymerase isolated from T. aquaticus is stable at high temperatures remaining active even after DNA denaturation, thus obviating the need to add new DNA polymerase after each cycle. This allowed an automated thermocycler-based process for DNA amplification. Patent disputes The PCR technique was patented by Kary Mullis and assigned to Cetus Corporation, where Mullis worked when he invented the technique in 1983. The Taq polymerase enzyme was also covered by patents. There have been several high-profile lawsuits related to the technique, including an unsuccessful lawsuit brought by DuPont. 
The Swiss pharmaceutical company Hoffmann-La Roche purchased the rights to the patents in 1992. The last of the commercial PCR patents expired in 2017. A related patent battle over the Taq polymerase enzyme is still ongoing in several jurisdictions around the world between Roche and Promega. The legal arguments have extended beyond the lives of the original PCR and Taq polymerase patents, which expired on 28 March 2005.
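The touchdown schedule mentioned above is, at bottom, a simple per-cycle temperature ramp. The following minimal Python sketch generates such a schedule; the primer Tm, step size, and cycle count are illustrative assumptions, not values from any published protocol.

```python
def touchdown_schedule(tm: float, start_offset: float = 5.0,
                       end_offset: float = 5.0, step: float = 0.5,
                       cycles: int = 30) -> list[float]:
    """Annealing temperature per cycle: start a few degrees above the
    primer Tm, step down each cycle, then hold a few degrees below Tm."""
    temps = []
    temp = tm + start_offset
    for _ in range(cycles):
        temps.append(max(temp, tm - end_offset))
        temp -= step
    return temps

# For primers with an assumed Tm of 60 °C:
print(touchdown_schedule(60.0)[:5])  # [65.0, 64.5, 64.0, 63.5, 63.0]
```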
Technology
Biotechnology
null
23652
https://en.wikipedia.org/wiki/Purine
Purine
Purine is a heterocyclic aromatic organic compound that consists of two rings (pyrimidine and imidazole) fused together. It is water-soluble. Purine also gives its name to the wider class of molecules, purines, which include substituted purines and their tautomers. They are the most widely occurring nitrogen-containing heterocycles in nature. Dietary sources Purines are found in high concentration in meat and meat products, especially internal organs such as liver and kidney. In general, plant-based diets are low in purines. High-purine plants and algae include some legumes (lentils, soybeans, and black-eyed peas) and spirulina. Examples of high-purine sources include: sweetbreads, anchovies, sardines, liver, beef kidneys, brains, meat extracts (e.g., Oxo, Bovril), herring, mackerel, scallops, game meats, yeast (beer, yeast extract, nutritional yeast) and gravy. A moderate amount of purine is also contained in red meat, beef, pork, poultry, fish and seafood, asparagus, cauliflower, spinach, mushrooms, green peas, lentils, dried peas, beans, oatmeal, wheat bran, wheat germ, and haws. Biochemistry Purines and pyrimidines make up the two groups of nitrogenous bases, which include the nucleotide bases of DNA and RNA. The purine bases are guanine (G) and adenine (A), which form the corresponding nucleosides: deoxyribonucleosides (deoxyguanosine and deoxyadenosine) with a deoxyribose moiety, and ribonucleosides (guanosine and adenosine) with a ribose moiety. These nucleosides with phosphoric acid form the corresponding nucleotides (deoxyguanylate, deoxyadenylate and guanylate, adenylate), which are the building blocks of DNA and RNA, respectively. Purine bases also play an essential role in many metabolic and signalling processes within the compounds guanosine monophosphate (GMP) and adenosine monophosphate (AMP). In order to perform these essential cellular processes, both purines and pyrimidines are needed by the cell, and in similar quantities. Both purines and pyrimidines are self-inhibiting and mutually activating: as purines are formed, they inhibit the enzymes required for more purine formation while activating the enzymes needed for pyrimidine formation, and pyrimidines simultaneously inhibit their own synthesis and activate purine synthesis in the same way. Because of this, there is nearly an equal amount of both substances in the cell at all times. Properties Purine is both a very weak acid (pKa 8.93) and an even weaker base (pKa 2.39). If dissolved in pure water, the pH is halfway between these two pKa values, that is, about (8.93 + 2.39)/2 ≈ 5.7. Purine is aromatic, having four tautomers, each with a hydrogen bonded to a different one of the four nitrogen atoms. These are identified as 1-H, 3-H, 7-H, and 9-H (see image of numbered ring). The common crystalline form favours the 7-H tautomer, while in polar solvents both the 9-H and 7-H tautomers predominate. Substituents to the rings and interactions with other molecules can shift the equilibrium of these tautomers. Notable purines There are many naturally occurring purines. They include the nucleotide bases adenine and guanine. In DNA, these bases form hydrogen bonds with their complementary pyrimidines, thymine and cytosine, respectively. This is called complementary base pairing. In RNA, the complement of adenine is uracil instead of thymine. Other notable purines are hypoxanthine, xanthine, theophylline, theobromine, caffeine, uric acid and isoguanine.
Functions Aside from the crucial roles of purines (adenine and guanine) in DNA and RNA, purines are also significant components in a number of other important biomolecules, such as ATP, GTP, cyclic AMP, NADH, and coenzyme A. Purine (1) itself has not been found in nature, but it can be produced by organic synthesis. Purines may also function directly as neurotransmitters, acting upon purinergic receptors. Adenosine activates adenosine receptors. History The word purine (pure urine) was coined by the German chemist Emil Fischer in 1884. He synthesized it for the first time in 1898. The starting material for the reaction sequence was uric acid (8), which had been isolated from kidney stones by Carl Wilhelm Scheele in 1776. Uric acid was reacted with PCl5 to give 2,6,8-trichloropurine, which was converted with HI and PH4I to give 2,6-diiodopurine. The product was reduced to purine using zinc dust. Metabolism Many organisms have metabolic pathways to synthesize and break down purines. Purines are biologically synthesized as nucleosides (bases attached to ribose). Accumulation of modified purine nucleotides is deleterious to various cellular processes, especially those involving DNA and RNA. To be viable, organisms possess a number of deoxypurine phosphohydrolases, which hydrolyze these purine derivatives, removing them from the active NTP and dNTP pools. Deamination of purine bases can result in accumulation of such nucleotides as ITP, dITP, XTP and dXTP. Defects in enzymes that control purine production and breakdown can severely alter a cell's DNA sequences, which may explain why people who carry certain genetic variants of purine metabolic enzymes have a higher risk for some types of cancer. Purine biosynthesis in the three domains of life Organisms in all three domains of life, eukaryotes, bacteria and archaea, are able to carry out de novo biosynthesis of purines. This ability reflects the essentiality of purines for life. The biochemical pathway of synthesis is very similar in eukaryotes and bacterial species, but is more variable among archaeal species. A nearly complete, or complete, set of genes required for purine biosynthesis was determined to be present in 58 of the 65 archaeal species studied. However, also identified were seven archaeal species with entirely, or nearly entirely, absent purine-encoding genes. Apparently, the archaeal species unable to synthesize purines are able to acquire exogenous purines for growth, and are thus analogous to purine mutants of eukaryotes, e.g. purine mutants of the Ascomycete fungus Neurospora crassa, that also require exogenous purines for growth. Relationship with gout Higher levels of meat and seafood consumption are associated with an increased risk of gout, whereas a higher level of consumption of dairy products is associated with a decreased risk. Moderate intake of purine-rich vegetables or protein is not associated with an increased risk of gout. Similar results have been found with the risk of hyperuricemia. Laboratory synthesis In addition to in vivo synthesis of purines in purine metabolism, purine can also be synthesized artificially. Purine is obtained in good yield when formamide is heated in an open vessel at 170 °C for 28 hours. This remarkable reaction and others like it have been discussed in the context of the origin of life. Patented on August 20, 1968, the currently recognized method of industrial-scale production of adenine is a modified form of the formamide method.
This method heats formamide at 120 °C in a sealed flask for 5 hours to form adenine. The yield is greatly increased by using phosphorus oxychloride (phosphoryl chloride) or phosphorus pentachloride as an acid catalyst, under sunlight or ultraviolet light. After the 5 hours have passed and the formamide-phosphorus oxychloride-adenine solution has cooled down, water is added to the flask containing the formamide and now-formed adenine. The water-formamide-adenine solution is then poured through a filtering column of activated charcoal. The water and formamide molecules, being small molecules, pass through the charcoal and into the waste flask; the large adenine molecules, however, attach or “adsorb” to the charcoal due to the van der Waals forces that act between the adenine and the carbon in the charcoal. Because charcoal has a large surface area, it is able to capture the majority of molecules above a certain size (greater than water and formamide) as they pass through it. To extract the adenine from the charcoal-adsorbed adenine, ammonia gas dissolved in water (aqua ammonia) is poured onto the activated charcoal-adenine structure to liberate the adenine into the ammonia-water solution. The solution containing water, ammonia, and adenine is then left to air dry; the adenine loses solubility as the ammonia gas that previously made the solution basic, and capable of dissolving adenine, escapes, causing it to crystallize into a pure white powder that can be stored. Oró and Kamat (1961) and Orgel and co-workers (1966, 1967) have shown that four molecules of HCN tetramerize to form diaminomaleonitrile (12), which can be converted into almost all naturally occurring purines. For example, five molecules of HCN condense in an exothermic reaction to make adenine, especially in the presence of ammonia. The Traube purine synthesis (1900) is a classic reaction (named after Wilhelm Traube) between an amine-substituted pyrimidine and formic acid. Prebiotic synthesis of purine ribonucleosides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. Nam et al. (2018) demonstrated the direct condensation of purine and pyrimidine nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing purine ribonucleosides was presented by Becker et al. in 2016.
Physical sciences
Alkaloids
Chemistry
23653
https://en.wikipedia.org/wiki/Pyrimidine
Pyrimidine
Pyrimidine is an aromatic, heterocyclic, organic compound similar to pyridine. One of the three diazines (six-membered heterocyclics with two nitrogen atoms in the ring), it has nitrogen atoms at positions 1 and 3 in the ring. The other diazines are pyrazine (nitrogen atoms at the 1 and 4 positions) and pyridazine (nitrogen atoms at the 1 and 2 positions). In nucleic acids, three types of nucleobases are pyrimidine derivatives: cytosine (C), thymine (T), and uracil (U). Occurrence and history The pyrimidine ring system has wide occurrence in nature as substituted and ring-fused compounds and derivatives, including the nucleobases cytosine, thymine and uracil, thiamine (vitamin B1) and alloxan. It is also found in many synthetic compounds such as barbiturates and the HIV drug zidovudine. Although pyrimidine derivatives such as alloxan were known in the early 19th century, a laboratory synthesis of a pyrimidine was not carried out until 1879, when Grimaux reported the preparation of barbituric acid from urea and malonic acid in the presence of phosphorus oxychloride. The systematic study of pyrimidines began in 1884 with Pinner, who synthesized derivatives by condensing ethyl acetoacetate with amidines. Pinner first proposed the name “pyrimidin” in 1885. The parent compound was first prepared by Gabriel and Colman in 1900, by conversion of barbituric acid to 2,4,6-trichloropyrimidine followed by reduction using zinc dust in hot water. Nomenclature The nomenclature of pyrimidines is straightforward. However, as with other heterocycles, tautomeric hydroxyl groups yield complications, since they exist primarily in the cyclic amide form. For example, 2-hydroxypyrimidine is more properly named 2-pyrimidone. A partial list of trivial names of various pyrimidines exists. Physical properties Physical properties are shown in the data box. A more extensive discussion, including spectra, can be found in Brown et al. Chemical properties Per the classification by Albert, six-membered heterocycles can be described as π-deficient. Substitution by electronegative groups or additional nitrogen atoms in the ring significantly increases the π-deficiency. These effects also decrease the basicity. As in pyridine, the π-electron density in pyrimidine is decreased, and to an even greater extent. Therefore, electrophilic aromatic substitution is more difficult while nucleophilic aromatic substitution is facilitated. An example of the latter reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine and its reverse. Electron lone pair availability (basicity) is decreased compared to pyridine. Compared to pyridine, N-alkylation and N-oxidation are more difficult. The pKa value for protonated pyrimidine is 1.23 compared to 5.30 for pyridine. Protonation and other electrophilic additions will occur at only one nitrogen due to further deactivation by the second nitrogen. The 2-, 4-, and 6-positions on the pyrimidine ring are electron deficient, analogous to those in pyridine and nitro- and dinitrobenzene. The 5-position is less electron deficient and substituents there are quite stable. However, electrophilic substitution is relatively facile at the 5-position, including nitration and halogenation. Reduction in resonance stabilization of pyrimidines may lead to addition and ring cleavage reactions rather than substitutions. One such manifestation is observed in the Dimroth rearrangement. Pyrimidine is also found in meteorites, but scientists still do not know its origin.
Pyrimidine also photolytically decomposes into uracil under ultraviolet light. Synthesis Pyrimidine biosynthesis creates derivatives, such as orotate, thymine, cytosine, and uracil, de novo from carbamoyl phosphate and aspartate. As is often the case with parent heterocyclic ring systems, the synthesis of pyrimidine itself is not that common and is usually performed by removing functional groups from derivatives. Primary syntheses in quantity involving formamide have been reported. As a class, pyrimidines are typically synthesized by the principal synthesis involving cyclization of β-dicarbonyl compounds with N–C–N compounds. Reaction of the former with amidines to give 2-substituted pyrimidines, with urea to give 2-pyrimidinones, and with guanidines to give 2-aminopyrimidines is typical. Pyrimidines can be prepared via the Biginelli reaction and other multicomponent reactions. Many other methods rely on condensation of carbonyls with diamines, for instance the synthesis of 2-thio-6-methyluracil from thiourea and ethyl acetoacetate, or the synthesis of 4-methylpyrimidine from 4,4-dimethoxy-2-butanone and formamide. A novel method is the reaction of N-vinyl and N-aryl amides with carbonitriles under electrophilic activation of the amide with 2-chloropyridine and trifluoromethanesulfonic anhydride. Reactions Because of the decreased basicity compared to pyridine, electrophilic substitution of pyrimidine is less facile. Protonation or alkylation typically takes place at only one of the ring nitrogen atoms. Mono-N-oxidation occurs by reaction with peracids. Electrophilic C-substitution of pyrimidine occurs at the 5-position, the least electron-deficient. Nitration, nitrosation, azo coupling, halogenation, sulfonation, formylation, hydroxymethylation, and aminomethylation have been observed with substituted pyrimidines. Nucleophilic C-substitution should be facilitated at the 2-, 4-, and 6-positions, but there are only a few examples. Amination and hydroxylation have been observed for substituted pyrimidines. Reactions with Grignard or alkyllithium reagents yield 4-alkyl- or 4-aryl pyrimidines after aromatization. Free radical attack has been observed for pyrimidine, and photochemical reactions have been observed for substituted pyrimidines. Pyrimidine can be hydrogenated to give tetrahydropyrimidine. Derivatives Nucleotides Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. In DNA and RNA, these bases form hydrogen bonds with their complementary purines. Thus, in DNA, the purines adenine (A) and guanine (G) pair up with the pyrimidines thymine (T) and cytosine (C), respectively. In RNA, the complement of adenine (A) is uracil (U) instead of thymine (T), so the pairs that form are adenine:uracil and guanine:cytosine. Very rarely, thymine can appear in RNA, or uracil in DNA. Apart from the three major pyrimidine bases, some minor pyrimidine bases can also occur in nucleic acids. These minor pyrimidines are usually methylated versions of major ones and are postulated to have regulatory functions. These hydrogen bonding modes are for classical Watson–Crick base pairing. Other hydrogen bonding modes ("wobble pairings") are available in both DNA and RNA, although the additional 2′-hydroxyl group of RNA expands the configurations through which RNA can form hydrogen bonds.
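The complementarity rules just described are simple enough to state as a lookup table. The sketch below is a hypothetical helper (not from the text) illustrating Watson–Crick pairing for DNA and RNA strands.

```python
# Watson-Crick complements: purines pair with pyrimidines.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(seq: str, rna: bool = False) -> str:
    """Return the complementary strand, base by base."""
    table = RNA_COMPLEMENT if rna else DNA_COMPLEMENT
    return "".join(table[base] for base in seq)

print(complement("GATTACA"))            # CTAATGT
print(complement("GAUUACA", rna=True))  # CUAAUGU
```

Wobble and other non-canonical pairings mentioned above are deliberately outside the scope of such a table.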
Theoretical aspects In March 2015, NASA Ames scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, had been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar dust and gas clouds. Prebiotic synthesis of pyrimidine nucleotides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules such as RNA must have emerged from relatively small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of pyrimidine and purine nucleotides, both of which are necessary for reliable information transfer, and thus natural selection and Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles. Purine nucleosides can be synthesized by a similar pathway. 5′-Mono- and diphosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the pyrimidine and purine bases. Thus a reaction network towards the pyrimidine and purine RNA building blocks can be established, starting from simple atmospheric or volcanic molecules.
Physical sciences
Alkaloids
Chemistry
23659
https://en.wikipedia.org/wiki/Plug-in%20%28computing%29
Plug-in (computing)
In computing, a plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that extends the functionality of an existing software system without requiring the system to be re-built. Support for plug-ins is one way a system can be made customizable. Applications support plug-ins for a variety of reasons, including: Enabling third-party developers to extend an application Supporting the easy addition of new features Reducing the size of an application by not loading unused features Separating source code from an application because of incompatible software licenses Examples Examples of plug-in use for various categories of applications: Digital audio workstations and audio editing software use audio plug-ins to generate, process or analyze sound. Ardour, Audacity, Cubase, FL Studio, Logic Pro X and Pro Tools are examples of such systems. Email clients use plug-ins to decrypt and encrypt email. Pretty Good Privacy is an example of such a plug-in. Video game console emulators often use plug-ins to modularize the separate subsystems of the devices they seek to emulate. For example, the PCSX2 emulator makes use of video, audio, optical, etc. plug-ins for those respective components of the PlayStation 2. Graphics software uses plug-ins to support file formats and process images; Photoshop plug-ins are an example. Broadcasting and live-streaming software, such as the open-source OBS Studio, uses plug-ins for user-specific needs. Media players use plug-ins to support file formats and apply filters. foobar2000, GStreamer, Quintessential, VST, Winamp, and XMMS are examples of such media players. Packet sniffers use plug-ins to decode packet formats. OmniPeek is an example of such a packet sniffer. Remote sensing applications use plug-ins to process data from different sensor types; e.g., Opticks. Text editors and integrated development environments use plug-ins to support programming languages or enhance the development process; e.g., Visual Studio, RAD Studio, Eclipse, IntelliJ IDEA, jEdit and MonoDevelop support plug-ins. Visual Studio itself can be plugged into other applications via Visual Studio Tools for Office and Visual Studio Tools for Applications. Web browsers have historically used executables as plug-ins, though they are now mostly deprecated. Examples include the Adobe Flash Player, a Java virtual machine (for Java applets), QuickTime, Microsoft Silverlight and the Unity Web Player. (Browser extensions, which are a separate type of installable module, are still widely in use.) Mechanism The host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application and a protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application. Programmers typically implement plug-ins as shared libraries, which get dynamically loaded at run time. HyperCard supported a similar facility, but more commonly included the plug-in code in the HyperCard documents (called stacks) themselves. Thus the HyperCard stack became a self-contained application in its own right, distributable as a single entity that end-users could run without the need for additional installation steps.
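As a minimal illustration of the registration mechanism just described, the following Python sketch shows a host exposing a registry and a small service table; all names (PLUGINS, register, HOST_SERVICES, greet) are illustrative, not an actual API.

```python
# Host side: a registry that plug-ins add themselves to, plus host services.
PLUGINS = {}
HOST_SERVICES = {"log": print}  # services the host offers to plug-ins

def register(name):
    """Decorator a plug-in uses to make itself known to the host."""
    def wrap(func):
        PLUGINS[name] = func
        return func
    return wrap

# Plug-in side: code like this would live in a dynamically loaded module.
@register("greet")
def greet(services):
    services["log"]("hello from a plug-in")

# Host side again: invoke whatever registered itself.
for name, plugin in PLUGINS.items():
    plugin(HOST_SERVICES)
```

In a real system the plug-in body would arrive via a shared library or, as noted next, a directory of script files discovered at start-up.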
Programs may also implement plug-ins by loading a directory of simple script files written in a scripting language like Python or Lua. Helper application In the context of a web browser, a helper application is a separate program, like IrfanView or Adobe Reader, that extends the functionality of a browser. A helper application extends the functionality of an application, but unlike a typical plug-in, which is loaded into the host application's address space, a helper application is a separate application. With a separate address space, the extension cannot crash the host application, as is possible when they share an address space. History In the mid-1970s, the EDT text editor ran on the Unisys VS/9 operating system for the UNIVAC Series 90 mainframe computer. It allowed a program to be run from the editor that could access the in-memory edit buffer. The plug-in executable could call the editor to inspect and change the text. The University of Waterloo Fortran compiler used this to allow interactive compilation of Fortran programs. Early personal computer software with plug-in capability included HyperCard and QuarkXPress on the Apple Macintosh, both released in 1987. In 1988, Silicon Beach Software included plug-in capability in Digital Darkroom and SuperPaint.
Technology
Computer software
null
23664
https://en.wikipedia.org/wiki/Papyrus
Papyrus
Papyrus is a material similar to thick paper that was used in ancient times as a writing surface. It was made from the pith of the papyrus plant, Cyperus papyrus, a wetland sedge. Papyrus (plural: papyri or papyruses) can also refer to a document written on sheets of such material, joined side by side and rolled up into a scroll, an early form of a book. Papyrus was first known to have been used in Egypt (at least as far back as the First Dynasty), as the papyrus plant was once abundant across the Nile Delta. It was also used throughout the Mediterranean region. Apart from writing material, ancient Egyptians employed papyrus in the construction of other artifacts, such as reed boats, mats, rope, sandals, and baskets. History Papyrus was first manufactured in Egypt as far back as the third millennium BCE. The earliest archaeological evidence of papyrus was excavated in 2012 and 2013 at Wadi al-Jarf, an ancient Egyptian harbor located on the Red Sea coast. These documents, the Diary of Merer, date to around 2550 BCE (end of the reign of Khufu). The papyrus rolls describe the last years of building the Great Pyramid of Giza. For multiple millennia, papyrus was commonly rolled into scrolls as a form of storage. However, at some point late in its history, papyrus began being collected together in the form of codices akin to the modern book. This may have been in imitation of the book form of codices created with parchment. Early Christian writers soon adopted the codex form, and in the Greco-Roman world, it became common to cut sheets from papyrus rolls to form codices. Codices were an improvement on the papyrus scroll, as the papyrus was not pliable enough to fold without cracking, and a long roll, or scroll, was required to create large-volume texts. Papyrus had the advantage of being relatively cheap and easy to produce, but it was fragile and susceptible to both moisture and excessive dryness. Unless the papyrus was of perfect quality, the writing surface was irregular, and the range of media that could be used was also limited. Papyrus was gradually overtaken in Europe by a rival writing surface that rose in prominence, parchment, which was made from animal skins. By the beginning of the fourth century A.D., the most important books began to be manufactured in parchment, and works worth preserving were transferred from papyrus to parchment. Parchment had significant advantages over papyrus, including higher durability in moist climates and being more conducive to writing on both sides of the surface. The main advantage of papyrus had been its cheaper raw material: the papyrus plant is easy to cultivate in a suitable climate and produces more writing material than animal hides (the most expensive books, made from foetal vellum, could take dozens of bovine fetuses to produce). However, as trade networks declined, the availability of papyrus outside the range of the papyrus plant became limited, and it thus lost its cost advantage. Papyrus' last appearance in the Merovingian chancery was with a document from 692 A.D., though it was known in Gaul until the middle of the following century. The latest certain dates for the use of papyrus in Europe are 1057 for a papal decree (typically conservative, all papal bulls were on papyrus until 1022), under Pope Victor II, and 1087 for an Arabic document. Its use in Egypt continued until it was replaced by less expensive paper introduced by the Islamic world, which originally learned of it from the Chinese.
By the 12th century, parchment and paper were in use in the Byzantine Empire, but papyrus was still an option. Until the middle of the 19th century, only some isolated documents written on papyrus were known, and museums simply showed them as curiosities. They did not contain literary works. The first modern discovery of papyrus rolls was made at Herculaneum in 1752. Until then, the only papyri known had been a few surviving from medieval times. Scholarly investigations began with the Dutch historian Caspar Jacob Christiaan Reuvens (1793–1835). He wrote about the content of the Leyden papyrus, published in 1830. The first publication has been credited to the British scholar Charles Wycliffe Goodwin (1817–1878), who published for the Cambridge Antiquarian Society one of the Papyri Graecae Magicae V, translated into English with commentary in 1853. Varying quality Papyrus was made in several qualities and prices. Pliny the Elder and Isidore of Seville described six variations of papyrus that were sold in the Roman market of the day. These were graded by quality based on how fine, firm, white, and smooth the writing surface was. Grades ranged from the superfine Augustan, which was produced in sheets of 13 digits (10 inches) wide, to the least expensive and most coarse, measuring six digits (four inches) wide. Materials deemed unusable for writing or less than six digits were considered commercial quality and were pasted edge to edge to be used only for wrapping. Etymology The English word "papyrus" derives, via Latin, from Greek πάπυρος (papyros), a loanword of unknown (perhaps Pre-Greek) origin. Greek has a second word for it, βύβλος (byblos), said to derive from the name of the Phoenician city of Byblos. The Greek writer Theophrastus, who flourished during the 4th century BCE, uses papyros when referring to the plant used as a foodstuff and byblos for the same plant when used for nonfood products, such as cordage, basketry, or writing surfaces. The more specific term βίβλος biblos, which finds its way into English in such words as 'bibliography', 'bibliophile', and 'bible', refers to the inner bark of the papyrus plant. Papyrus is also the etymon of 'paper', a similar substance. In the Egyptian language, papyrus was called wadj (w3ḏ), tjufy (ṯwfy), or djet (ḏt). Documents written on papyrus The word for the material papyrus is also used to designate documents written on sheets of it, often rolled up into scrolls. The plural for such documents is papyri. Historical papyri are given identifying names – generally the name of the discoverer, first owner, or institution where they are kept – and numbered, such as "Papyrus Harris I". Often an abbreviated form is used, such as "pHarris I". These documents provide important information on ancient writings; they give us the only extant copy of Menander, the Egyptian Book of the Dead, Egyptian treatises on medicine (the Ebers Papyrus) and on surgery (the Edwin Smith papyrus), Egyptian mathematical treatises (the Rhind papyrus), and Egyptian folk tales (the Westcar Papyrus). When, in the 18th century, a library of ancient papyri was found in Herculaneum, ripples of expectation spread among the learned men of the time. However, since these papyri were badly charred, their unscrolling and deciphering are still going on today. Manufacture and use Papyrus was made from the stem of the papyrus plant, Cyperus papyrus. The outer rind was first removed, and the sticky fibrous inner pith was cut lengthwise into thin strips.
The strips were then placed side by side on a hard surface with their edges slightly overlapping, and then another layer of strips was laid on top at right angles. The strips may have been soaked in water long enough for decomposition to begin, perhaps increasing adhesion, but this is not certain. The two layers possibly were glued together. While still moist, the two layers were hammered together, mashing the layers into a single sheet. The sheet was then dried under pressure. After drying, the sheet was polished with a rounded object, possibly a stone, seashell, or round hardwood. Sheets, or kollemata, could be cut to fit the obligatory size or glued together to create a longer roll. The point where the kollemata are joined with glue is called the kollesis. A wooden stick would be attached to the last sheet in a roll, making it easier to handle. To form the long strip scrolls required, several such sheets were united and placed so all the horizontal fibres parallel with the roll's length were on one side and all the vertical fibres on the other. Normally, texts were first written on the recto, the lines following the fibres, parallel to the long edges of the scroll. Secondarily, papyrus was often reused, writing across the fibres on the verso. One source used for determining the method by which papyrus was created in antiquity is the examination of tombs in the ancient Egyptian city of Thebes, which housed a necropolis containing many murals displaying the process of papyrus-making. The Roman commander Pliny the Elder also describes the methods of preparing papyrus in his Naturalis Historia. In a dry climate, like that of Egypt, papyrus is stable, formed as it is of highly rot-resistant cellulose, but storage in humid conditions can result in molds attacking and destroying the material. Library papyrus rolls were stored in wooden boxes and chests made in the form of statues. Papyrus scrolls were organized according to subject or author and identified with clay labels that specified their contents without having to unroll the scroll. In European conditions, papyrus seems to have lasted only a matter of decades; a 200-year-old papyrus was considered extraordinary. Imported papyrus, once commonplace in Greece and Italy, has since deteriorated beyond repair, but papyri are still being found in Egypt; extraordinary examples include the Elephantine papyri and the famous finds at Oxyrhynchus and Nag Hammadi. The Villa of the Papyri at Herculaneum, containing the library of Lucius Calpurnius Piso Caesoninus, Julius Caesar's father-in-law, was preserved by the eruption of Mount Vesuvius but has only been partially excavated. Sporadic attempts to revive the manufacture of papyrus have been made since the mid-18th century. Scottish explorer James Bruce experimented in the late 18th century with papyrus plants from Sudan, for papyrus had become extinct in Egypt. Also in the 18th century, Sicilian Saverio Landolina manufactured papyrus at Syracuse, where papyrus plants had continued to grow in the wild. During the 1920s, when Egyptologist Battiscombe Gunn lived in Maadi, outside Cairo, he experimented with the manufacture of papyrus, growing the plant in his garden. He beat the sliced papyrus stalks between two layers of linen and produced successful examples of papyrus, one of which was exhibited in the Egyptian Museum in Cairo.
The modern technique of papyrus production used in Egypt for the tourist trade was developed in 1962 by the Egyptian engineer Hassan Ragab using plants that had been reintroduced into Egypt in 1872 from France. Both Sicily and Egypt have centres of limited papyrus production. Papyrus is still used by communities living in the vicinity of swamps, to the extent that rural householders derive up to 75% of their income from swamp goods. Particularly in East and Central Africa, people harvest papyrus, which is used to manufacture items that are sold or used locally. Examples include baskets, hats, fish traps, trays or winnowing mats, and floor mats. Papyrus is also used to make roofs, ceilings, rope, and fences. Although alternatives, such as eucalyptus, are increasingly available, papyrus is still used as fuel. Collections of papyrus Amherst Papyri: this is a collection of William Tyssen-Amherst, 1st Baron Amherst of Hackney. It includes biblical manuscripts, early church fragments, and classical documents from the Ptolemaic, Roman, and Byzantine eras. The collection was edited by Bernard Grenfell and Arthur Hunt in 1900–1901. It is housed at the Morgan Library & Museum (New York). Archduke Rainer Collection, also known as the Vienna Papyrus Collection: one of the world's largest collections of papyri (about 180,000 objects), in the Austrian National Library of Vienna. Berlin Papyri: housed in the Egyptian Museum and Papyrus Collection. Berliner Griechische Urkunden (BGU): a publishing project ongoing since 1895. Bodmer Papyri: this collection was purchased by Martin Bodmer in 1955–1956. Currently, it is housed in the Bibliotheca Bodmeriana in Cologny. It includes Greek and Coptic documents, classical texts, biblical books, and writings of the early churches. Chester Beatty Papyri: a collection of 11 codices acquired by Alfred Chester Beatty in 1930–1931 and 1935. It is housed at the Chester Beatty Library. The collection was edited by Frederic G. Kenyon. Colt Papyri: housed at the Morgan Library & Museum (New York). Former private collection of Grigol Tsereteli: a collection of up to one hundred Greek papyri, currently housed at the Georgian National Centre of Manuscripts. The Herculaneum papyri: these papyri were found in Herculaneum in the eighteenth century, carbonized by the eruption of Mount Vesuvius. After some tinkering, a method was found to unroll and to read them. Most of them are housed at the Naples National Archaeological Museum. The Heroninos Archive: a collection of around a thousand papyrus documents, dealing with the management of a large Roman estate, dating to the third century CE, found at the very end of the 19th century at Kasr El Harit, the site of ancient Theadelphia, in the Faiyum area of Egypt by Bernard Pyne Grenfell and Arthur Surridge Hunt. It is spread over many collections throughout the world. The Houghton's papyri: the collection at Houghton Library, Harvard University was acquired between 1901 and 1909 thanks to a donation from the Egypt Exploration Fund. Martin Schøyen Collection: biblical manuscripts in Greek and Coptic, Dead Sea Scrolls, and classical documents. Michigan Papyrus Collection: this collection contains more than 10,000 papyrus fragments. It is housed at the University of Michigan. Oxyrhynchus Papyri: these numerous papyri fragments were discovered by Grenfell and Hunt in and around Oxyrhynchus. The publication of these papyri is still in progress.
A large part of the Oxyrhynchus papyri are housed at the Ashmolean Museum in Oxford, others in the British Museum in London, in the Egyptian Museum in Cairo, and many other places. Princeton Papyri: housed at Princeton University. Papiri della Società Italiana (PSI): a series, still in progress, published by the Società per la ricerca dei Papiri greci e latini in Egitto and from 1927 onwards by the succeeding Istituto Papirologico "G. Vitelli" in Florence. These papyri are housed at the institute itself and in the Biblioteca Laurenziana. Rylands Papyri: this collection contains more than 700 papyri, with 31 ostraca and 54 codices. It is housed at the John Rylands University Library. Tebtunis Papyri: housed by the Bancroft Library at the University of California, Berkeley, this is a collection of more than 30,000 fragments dating from the 3rd century BCE through the 3rd century CE, found in the winter of 1899–1900 at the site of ancient Tebtunis, Egypt, by an expedition team led by the British papyrologists Bernard P. Grenfell and Arthur S. Hunt. Washington University Papyri Collection: includes 445 manuscript fragments, dating from the first century BCE to the eighth century CE. Housed at the Washington University Libraries. Yale Papyrus Collection: housed by the Beinecke Library, it contains over six thousand inventoried items. It is cataloged, digitally scanned, and accessible online. Individual papyri Brooklyn Papyrus: this papyrus focuses mainly on snakebites and their remedies. It speaks of remedial methods for poisons obtained from snakes, scorpions, and tarantulas. The Brooklyn Papyrus currently resides in the Brooklyn Museum. Saite Oracle Papyrus: this papyrus, located at the Brooklyn Museum, records the petition of a man named Pemou, on behalf of his father, Harsiese, to ask their god for permission to change temples. Strasbourg papyrus Will of Naunakhte: found at Deir el-Medina and dating to the 20th dynasty, it is notable because it is a legal document for a non-noble woman.
Technology
Material and chemical
null
23665
https://en.wikipedia.org/wiki/Pixel
Pixel
In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position. Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels). Etymology The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el' include the words voxel and texel. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies. By 1938, "pix" was being used in reference to still pictures by photojournalists. The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of scanned images from space probes to the Moon and Mars. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time". The concept of a "picture element" dates to the earliest days of television, for example as "Bildpunkt" (the German word for pixel, literally 'picture point') in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927, though it had been used earlier in various U.S. patents filed as early as 1911. Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel. For example, IBM used it in their Technical Reference for the original PC. Pixilation, spelled with a second i, is an unrelated filmmaking technique that dates to the beginnings of cinema, in which live actors are posed frame by frame and photographed to create stop-motion animation. An archaic British word meaning "possession by spirits (pixies)", the term has been used to describe the animation process since the early 1950s; various animators, including Norman McLaren and Grant Munro, are credited with popularizing it. Technical A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive.
For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. The measures "dots per inch" (dpi) and "pixels per inch" (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display) and therefore has a total number of 640 × 480 = 307,200 pixels, or 0.3 megapixels. The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques. Sampling patterns For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another. For example: LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens. The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid. A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy. Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space. The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit. 
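As a minimal illustration of the regular-grid arrangement just described, the sketch below stores a grayscale image row-major and applies one operation uniformly and independently to every pixel; the image dimensions are arbitrary assumptions.

```python
# A grayscale image stored row-major: one 8-bit sample per pixel.
width, height = 640, 480
pixels = bytearray(width * height)

def get(x: int, y: int) -> int:
    """Row-major addressing: pixel (x, y) lives at index y*width + x."""
    return pixels[y * width + x]

def invert() -> None:
    """The same operation applied to each pixel independently."""
    for i in range(len(pixels)):
        pixels[i] = 255 - pixels[i]

invert()
print(get(0, 0))  # 255: an initially black image is now white
```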
Pixels on computer monitors are normally "square" (that is, have equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, have unequal horizontal and vertical sampling pitch – oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard. Resolution of computer monitors Computer monitors (and TV sets) generally have a fixed native resolution. What it is depends on the monitor and its size. See below for historical exceptions. Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. Flat-panel monitors (and TV sets), e.g. OLED or LCD monitors, or E-ink, also use pixels to display an image and have a native resolution, which should (ideally) be matched to the video card resolution. Each pixel is made up of triads, with the number of these triads determining the native resolution. On older, historically available CRT monitors the resolution was often adjustable (though still lower than what modern monitors achieve), while on some such monitors (or TV sets) the beam sweep rate was fixed, resulting in a fixed native resolution. Most CRT monitors do not have a fixed beam sweep rate, meaning they do not have a native resolution at all – instead they have a set of resolutions that are equally well supported. To produce the sharpest images possible on a flat panel, e.g. OLED or LCD, the user must ensure the display resolution of the computer matches the native resolution of the monitor. Resolution of telescopes The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The scale s measured in radians is the ratio of the pixel spacing p and the focal length f of the preceding optics, s = p/f. (The focal length is the product of the focal ratio and the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters while pixel sizes are given in micrometers, which yields another factor of 1,000, the formula is often quoted as s = 206 p/f, with p in micrometers and f in millimeters. Bits per pixel The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors: 1 bpp, 2^1 = 2 colors (monochrome) 2 bpp, 2^2 = 4 colors 3 bpp, 2^3 = 8 colors 4 bpp, 2^4 = 16 colors 8 bpp, 2^8 = 256 colors 16 bpp, 2^16 = 65,536 colors ("Highcolor") 24 bpp, 2^24 = 16,777,216 colors ("Truecolor") For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue each, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component.
On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image). Subpixels Many display and image-acquisition systems are not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance. In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels, mostly RGB colors. For example, LCDs typically divide each pixel vertically into three subpixels. When the square pixel is divided into three subpixels, each subpixel is necessarily rectangular. In display industry terminology, subpixels are often referred to as pixels, as they are the basic addressable elements from a hardware point of view; hence the term pixel circuits rather than subpixel circuits is used. Most digital camera image sensors use single-color sensor regions, for example using the Bayer filter pattern, and in the camera industry these are known as pixels just like in the display industry, not subpixels. For systems with subpixels, two different approaches can be taken: The subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases. This latter approach, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately, producing an increase in the apparent resolution of color displays. CRT displays use red-green-blue phosphor areas, dictated by a mesh grid called the shadow mask; aligning these with the displayed pixel raster would require a difficult calibration step, and so CRTs do not use subpixel rendering. The concept of subpixels is related to samples. Logical pixel In graphic design, web design, and user interfaces, a "pixel" may refer to a fixed length rather than a true pixel on the screen to accommodate different pixel densities. A typical definition, such as in CSS, is that a "physical" pixel is 1/96 of an inch. Doing so makes sure a given element will display as the same size no matter what screen resolution views it. There may, however, be some further adjustments between a "physical" pixel and an on-screen logical pixel. As screens are viewed at different distances (consider a phone, a computer display, and a TV), the desired length (a "reference pixel") is scaled relative to a reference viewing distance (28 inches in CSS). In addition, as true screen pixel densities are rarely multiples of 96 dpi, some rounding is often applied so that a logical pixel is an integer amount of actual pixels. Doing so avoids render artifacts. The final "pixel" obtained after these two steps becomes the "anchor" on which all other absolute measurements (e.g. the "centimeter") are based. Worked example, with a 2160p TV placed some distance from the viewer: calculate the scaled pixel size from the viewing distance; calculate the DPI of the TV from its size and resolution; then divide to obtain the real-pixel count per logical pixel. A browser will then choose to use the 1.721× pixel size, or round to a 2× ratio.
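The three steps of the worked example can be written out directly. In this hedged sketch, the CSS constants are the spec's 1/96 inch and 28-inch reference distance, while the concrete TV diagonal, resolution, and viewing distance are invented for illustration (the inputs of the original example are not preserved in the text).

```python
import math

REF_PIXEL_IN = 1 / 96     # CSS "physical" pixel: 1/96 of an inch
REF_DISTANCE_IN = 28      # CSS reference viewing distance, in inches

def device_pixels_per_logical(diag_in, res_w, res_h, view_in):
    scaled_px_in = REF_PIXEL_IN * (view_in / REF_DISTANCE_IN)  # step 1
    dpi = math.hypot(res_w, res_h) / diag_in                   # step 2
    return dpi * scaled_px_in                                  # step 3

# Assumed: a 55-inch 3840x2160 TV viewed from 60 inches away.
ratio = device_pixels_per_logical(55, 3840, 2160, 60)
print(round(ratio, 3))  # ~1.788; a browser might round this to a 2x ratio
```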
Megapixel A megapixel (MP) is a million pixels; the term is used not only for the number of pixels in an image but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera that makes a 2048 × 1536 pixel image (3,145,728 finished image pixels) typically uses a few extra rows and columns of sensor elements and is commonly said to have "3.2 megapixels" or "3.4 megapixels", depending on whether the number reported is the "effective" or the "total" pixel count. The number of pixels is sometimes quoted as the "resolution" of a photo. This measure of resolution can be calculated by multiplying the width and height of a sensor in pixels. Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they only record one channel (only red or green or blue) of the final color image. Thus, two of the three color channels for each sensor must be interpolated, and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement). DxO Labs invented the Perceptual MegaPixel (P-MPix) to measure the sharpness that a camera produces when paired to a particular lens – as opposed to the MP a manufacturer states for a camera product, which is based only on the camera's sensor. The new P-MPix claims to be a more accurate and relevant value for photographers to consider when weighing up camera sharpness. As of mid-2013, the Sigma 35 mm f/1.4 DG HSM lens mounted on a Nikon D800 had the highest measured P-MPix. However, with a value of 23 MP, it still falls short of the D800's 36.3 MP sensor by more than one-third. In August 2019, Xiaomi released the Redmi Note 8 Pro as the world's first smartphone with a 64 MP camera. On December 12, 2019, Samsung released the Samsung A71, which also has a 64 MP camera. In late 2019, Xiaomi announced the first camera phone with a 108 MP, 1/1.33-inch sensor, larger than the 1/2.3-inch sensors of most bridge cameras. One new method to add megapixels has been introduced in a Micro Four Thirds System camera, which uses only a 16 MP sensor but can produce a 64 MP RAW (40 MP JPEG) image by making two exposures, shifting the sensor by a half pixel between them. Using a tripod to take level multi-shots, the multiple 16 MP images are then combined into a unified 64 MP image.
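A quick sketch of the pixel-count arithmetic used above; the numbers simply restate the examples already given in the text.

```python
# Megapixels from a resolution: 2048 x 1536 finished image pixels.
w, h = 2048, 1536
total = w * h
print(total)                  # 3145728
print(round(total / 1e6, 1))  # 3.1 -> marketed as "3.2 MP" effective

# Colors representable at a given bit depth: 2 ** bpp.
for bpp in (1, 4, 8, 16, 24):
    print(bpp, 2 ** bpp)      # e.g. 24 bpp -> 16777216 ("Truecolor")
```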
Technology
Computer science
null
23666
https://en.wikipedia.org/wiki/Prime%20number
Prime number
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and √n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of October 2024, the largest known prime number is a Mersenne prime with 41,024,320 decimal digits. There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says roughly that the probability of a randomly chosen large number being prime is inversely proportional to its number of digits, that is, to its logarithm. Several historical questions regarding prime numbers are still unsolved. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes that differ by two. Such questions spurred the development of various branches of number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several routines in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals. Definition and examples A natural number (1, 2, 3, 4, 5, 6, etc.) is called a prime number (or a prime) if it is greater than 1 and cannot be written as the product of two smaller natural numbers. The numbers greater than 1 that are not prime are called composite numbers. In other words, n is prime if n items cannot be divided up into smaller equal-size groups of more than one item, or if it is not possible to arrange n dots into a rectangular grid that is more than one dot wide and more than one dot high. For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers, as there are no other numbers that divide them evenly (without a remainder). 1 is not prime, as it is specifically excluded in the definition. 4 = 2 × 2 and 6 = 2 × 3 are both composite. The divisors of a natural number n are the natural numbers that divide n evenly. Every natural number has both 1 and itself as a divisor. If it has any other divisor, it cannot be prime. This leads to an equivalent definition of prime numbers: they are the numbers with exactly two positive divisors.
Those two are 1 and the number itself. As 1 has only one divisor, itself, it is not prime by this definition. Yet another way to express the same thing is that a number n is prime if it is greater than one and if none of the numbers 2, 3, …, n − 1 divides n evenly. The first 25 prime numbers (all the prime numbers less than 100) are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97. No even number greater than 2 is prime because any such number can be expressed as the product 2 × n/2. Therefore, every prime number other than 2 is an odd number, and is called an odd prime. Similarly, when written in the usual decimal system, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5. The set of all primes is sometimes denoted by P (a boldface capital P) or by ℙ (a blackboard bold capital P).
History
The Rhind Mathematical Papyrus, from around 1550 BC, has Egyptian fraction expansions of different forms for prime and composite numbers. However, the earliest surviving records of the study of prime numbers come from the ancient Greek mathematicians, who called them prōtos arithmós (πρῶτος ἀριθμός). Euclid's Elements (c. 300 BC) proves the infinitude of primes and the fundamental theorem of arithmetic, and shows how to construct a perfect number from a Mersenne prime. Another Greek invention, the Sieve of Eratosthenes, is still used to construct lists of primes. Around 1000 AD, the Islamic mathematician Ibn al-Haytham (Alhazen) found Wilson's theorem, characterizing the prime numbers as the numbers n that evenly divide (n − 1)! + 1. He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it. Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by considering only the prime divisors up to the square root of the upper limit. Fibonacci took the innovations from Islamic mathematics to Europe. His book Liber Abaci (1202) was the first to describe trial division for testing primality, again using divisors only up to the square root. In 1640 Pierre de Fermat stated (without proof) Fermat's little theorem (later proved by Leibniz and Euler). Fermat also investigated the primality of the Fermat numbers 2^(2^n) + 1, and Marin Mersenne studied the Mersenne primes, prime numbers of the form 2^p − 1 with p itself a prime. Christian Goldbach formulated Goldbach's conjecture, that every even number is the sum of two primes, in a 1742 letter to Euler. Euler proved Alhazen's conjecture (now the Euclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes. He introduced methods from mathematical analysis to this area in his proofs of the infinitude of the primes and the divergence of the sum of the reciprocals of the primes, 1/2 + 1/3 + 1/5 + 1/7 + ⋯. At the start of the 19th century, Legendre and Gauss conjectured that as x tends to infinity, the number of primes up to x is asymptotic to x/log x, where log x is the natural logarithm of x. A weaker consequence of this high density of primes was Bertrand's postulate, that for every n > 1 there is a prime between n and 2n, proved in 1852 by Pafnuty Chebyshev. Ideas of Bernhard Riemann in his 1859 paper on the zeta-function sketched an outline for proving the conjecture of Legendre and Gauss.
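The Legendre–Gauss conjecture is easy to probe numerically. A minimal sketch (the helper below is an assumption for illustration, not from the article) counts primes with a simple sieve and compares the count with x/log x:

    import math

    def count_primes(limit):
        """pi(limit): the number of primes up to limit, via a simple sieve."""
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0] = is_prime[1] = 0
        for p in range(2, math.isqrt(limit) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
        return sum(is_prime)

    for x in (10**3, 10**4, 10**5, 10**6):
        approx = x / math.log(x)
        print(x, count_primes(x), round(approx), round(count_primes(x) / approx, 3))
    # The ratio in the last column slowly approaches 1 as x grows.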
Although the closely related Riemann hypothesis remains unproven, Riemann's outline was completed in 1896 by Hadamard and de la Vallée Poussin, and the result is now known as the prime number theorem. Another important 19th century result was Dirichlet's theorem on arithmetic progressions, that certain arithmetic progressions contain infinitely many primes. Many mathematicians have worked on primality tests for numbers larger than those where trial division is practicably applicable. Methods that are restricted to specific number forms include Pépin's test for Fermat numbers (1877), Proth's theorem (c. 1878), the Lucas–Lehmer primality test (originated 1856), and the generalized Lucas primality test. Since 1951 all the largest known primes have been found using these tests on computers. The search for ever larger primes has generated interest outside mathematical circles, through the Great Internet Mersenne Prime Search and other distributed computing projects. The idea that prime numbers had few applications outside of pure mathematics was shattered in the 1970s when public-key cryptography and the RSA cryptosystem were invented, using prime numbers as their basis. The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004) that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size.
Primality of one
Most early Greeks did not even consider 1 to be a number, so they could not consider its primality. A few scholars in the Greek and later Roman tradition, including Nicomachus, Iamblichus, Boethius, and Cassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider 2 to be prime either. However, Euclid and a majority of the other Greek mathematicians considered 2 as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing 1 as not being a number. By the Middle Ages and Renaissance, mathematicians began treating 1 as a number, and by the 17th century some of them included it as the first prime number. In the mid-18th century, Christian Goldbach listed 1 as prime in his correspondence with Leonhard Euler; however, Euler himself did not consider 1 to be prime. Many 19th century mathematicians still considered 1 to be prime, and Derrick Norman Lehmer included 1 in his list of primes less than ten million published in 1914. Lists of primes that included 1 continued to be published as recently as the mid-20th century. However, by the early 20th century, mathematicians had started to agree that 1 should not be classified as a prime number. If 1 were to be considered a prime, many statements involving primes would need to be awkwardly reworded. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with any number of copies of 1. Similarly, the sieve of Eratosthenes would not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1.
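The sieve argument against treating 1 as prime can be made concrete. A small sketch (this particular toy implementation is an illustration, not a quoted algorithm): a sieve that strikes out the proper multiples of each surviving number behaves correctly when it starts at 2, but collapses when 1 is admitted as a candidate:

    def sieve(limit, treat_one_as_prime=False):
        start = 1 if treat_one_as_prime else 2
        candidates = set(range(start, limit + 1))
        for p in range(start, limit + 1):
            if p in candidates:
                # Strike out the proper multiples of each surviving number.
                for multiple in range(2 * p, limit + 1, p):
                    candidates.discard(multiple)
        return sorted(candidates)

    print(sieve(30))                           # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    print(sieve(30, treat_one_as_prime=True))  # [1] -- every multiple of 1 is eliminated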
Some other more technical properties of prime numbers also do not hold for the number 1: for instance, the formulas for Euler's totient function or for the sum of divisors function are different for prime numbers than they are for 1. By the early 20th century, mathematicians began to agree that 1 should not be listed as prime, but rather in its own special category as a "unit".
Elementary properties
Unique factorization
Writing a number as a product of prime numbers is called a prime factorization of the number. For example: 12 = 2 · 2 · 3 = 2² · 3. The terms in the product are called prime factors. The same prime factor may occur more than once; this example has two copies of the prime factor 2. When a prime occurs multiple times, exponentiation can be used to group together multiple copies of the same prime number: for example, in the second way of writing the product above, 2² denotes the square or second power of 2. The central importance of prime numbers to number theory and mathematics in general stems from the fundamental theorem of arithmetic. This theorem states that every integer larger than 1 can be written as a product of one or more primes. More strongly, this product is unique in the sense that any two prime factorizations of the same number will have the same numbers of copies of the same primes, although their ordering may differ. So, although there are many different ways of finding a factorization using an integer factorization algorithm, they all must produce the same result. Primes can thus be considered the "basic building blocks" of the natural numbers. Some proofs of the uniqueness of prime factorizations are based on Euclid's lemma: If p is a prime number and p divides a product ab of integers a and b, then p divides a or p divides b (or both). Conversely, if a number n has the property that when it divides a product it always divides at least one factor of the product, then n must be prime.
Infinitude
There are infinitely many prime numbers. Another way of saying this is that the sequence 2, 3, 5, 7, 11, 13, … of prime numbers never ends. This statement is referred to as Euclid's theorem in honor of the ancient Greek mathematician Euclid, since the first known proof for this statement is attributed to him. Many more proofs of the infinitude of primes are known, including an analytical proof by Euler, Goldbach's proof based on Fermat numbers, Furstenberg's proof using general topology, and Kummer's elegant proof. Euclid's proof shows that every finite list of primes is incomplete. The key idea is to multiply together the primes in any given list and add 1. If the list consists of the primes p1, p2, …, pn, this gives the number N = 1 + p1 · p2 ⋯ pn. By the fundamental theorem, N has a prime factorization with one or more prime factors. N is evenly divisible by each of these factors, but N has a remainder of one when divided by any of the prime numbers in the given list, so none of the prime factors of N can be in the given list. Because there is no finite list of all the primes, there must be infinitely many primes. The numbers formed by adding one to the products of the smallest primes are called Euclid numbers. The first five of them are prime, but the sixth, 1 + (2 · 3 · 5 · 7 · 11 · 13) = 30031 = 59 · 509, is a composite number.
Formulas for primes
There is no known efficient formula for primes. For example, there is no non-constant polynomial, even in several variables, that takes only prime values. However, there are numerous expressions that do encode all primes, or only primes. One possible formula is based on Wilson's theorem and generates the number 2 many times and all other primes exactly once.
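One standard rendering of such a Wilson-based formula follows (a sketch: this particular formulation is supplied for illustration, not quoted from the article). By Wilson's theorem, n! is congruent to −1 modulo n + 1 exactly when n + 1 is prime, and to 0 when n + 1 is composite and larger than 4, which gives a closed expression that emits each odd prime once and the value 2 otherwise:

    from math import factorial

    # f(n) equals n + 1 when n + 1 is prime, and 2 otherwise (for n >= 1),
    # since 2 * n! mod (n + 1) is n - 1 in the prime case and 0 in the composite case.
    def f(n):
        return 2 + (2 * factorial(n)) % (n + 1)

    print([f(n) for n in range(1, 16)])
    # [2, 3, 2, 5, 2, 7, 2, 2, 2, 11, 2, 13, 2, 2, 2]

As the text notes, such formulas are correct but useless in practice: computing n! here is far slower than testing n + 1 for primality directly.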
There is also a set of Diophantine equations in nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. This can be used to obtain a single formula with the property that all its positive values are prime. Other examples of prime-generating formulas come from Mills' theorem and a theorem of Wright. These assert that there are real constants A > 1 and μ such that ⌊A^(3^n)⌋ and ⌊2^(2^(⋯^(2^μ)))⌋ are prime for any natural number n in the first formula, and any number of exponents in the second formula. Here ⌊ ⌋ represents the floor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of A or μ.
Open questions
Many conjectures revolving about primes have been posed. Often having an elementary formulation, many of these conjectures have withstood proof for decades: all four of Landau's problems from 1912 are still unsolved. One of them is Goldbach's conjecture, which asserts that every even integer greater than 2 can be written as a sum of two primes. This conjecture has been verified for all numbers up to 4 × 10^18. Weaker statements than this have been proven; for example, Vinogradov's theorem says that every sufficiently large odd integer can be written as a sum of three primes. Chen's theorem says that every sufficiently large even number can be expressed as the sum of a prime and a semiprime (the product of two primes). Also, any even integer greater than 10 can be written as the sum of six primes. The branch of number theory studying such questions is called additive number theory. Another type of problem concerns prime gaps, the differences between consecutive primes. The existence of arbitrarily large prime gaps can be seen by noting that the sequence n! + 2, n! + 3, …, n! + n consists of n − 1 composite numbers, for any natural number n. However, large prime gaps occur much earlier than this argument shows. For example, the first prime gap of length 8 is between the primes 89 and 97, much smaller than 8! = 40320. It is conjectured that there are infinitely many twin primes, pairs of primes with difference 2; this is the twin prime conjecture. Polignac's conjecture states more generally that for every positive integer k there are infinitely many pairs of consecutive primes that differ by 2k. Andrica's conjecture, Brocard's conjecture, Legendre's conjecture, and Oppermann's conjecture all suggest that the largest gaps between primes from 1 to n should be at most approximately √n, a result that is known to follow from the Riemann hypothesis, while the much stronger Cramér conjecture sets the largest gap size at O((log n)²). Prime gaps can be generalized to prime k-tuples, patterns in the differences among more than two prime numbers. Their infinitude and density are the subject of the first Hardy–Littlewood conjecture, which can be motivated by the heuristic that the prime numbers behave similarly to a random sequence of numbers with density given by the prime number theorem.
Analytic properties
Analytic number theory studies number theory through the lens of continuous functions, limits, infinite series, and the related mathematics of the infinite and infinitesimal. This area of study began with Leonhard Euler and his first major result, the solution to the Basel problem. The problem asked for the value of the infinite sum 1 + 1/4 + 1/9 + 1/16 + ⋯, which today can be recognized as the value ζ(2) of the Riemann zeta function.
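The slow convergence of this sum is easy to observe numerically; a one-line sketch (an illustration, not from the article):

    import math

    # Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2 / 6 = zeta(2).
    partial = sum(1 / k**2 for k in range(1, 1_000_001))
    print(partial, math.pi**2 / 6)  # 1.6449330... vs 1.6449340...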
This function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, the Riemann hypothesis. Euler showed that ζ(2) = π²/6. The reciprocal of this number, 6/π², is the limiting probability that two random numbers selected uniformly from a large range are relatively prime (have no factors in common). The distribution of primes in the large, such as the question how many primes are smaller than a given, large threshold, is described by the prime number theorem, but no efficient formula for the n-th prime is known. Dirichlet's theorem on arithmetic progressions, in its basic form, asserts that linear polynomials a + bn with relatively prime integers a and b take infinitely many prime values. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the same b have approximately the same proportions of primes. Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often.
Analytical proof of Euclid's theorem
Euler's proof that there are infinitely many primes considers the sums of reciprocals of primes, 1/2 + 1/3 + 1/5 + 1/7 + ⋯ + 1/p. Euler showed that, for any arbitrary real number x, there exists a prime p for which this sum is bigger than x. This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past every x. The growth rate of this sum is described more precisely by Mertens' second theorem. For comparison, the sum 1/1² + 1/2² + ⋯ + 1/n² does not grow to infinity as n goes to infinity (see the Basel problem). In this sense, prime numbers occur more often than squares of natural numbers, although both sets are infinite. Brun's theorem states that the sum of the reciprocals of the twin primes is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve the twin prime conjecture, that there exist infinitely many twin primes.
Number of primes below a given bound
The prime-counting function π(n) is defined as the number of primes not greater than n. For example, π(11) = 5, since there are five primes less than or equal to 11. Methods such as the Meissel–Lehmer algorithm can compute exact values of π(n) faster than it would be possible to list each prime up to n. The prime number theorem states that π(n) is asymptotic to n/log n, which is denoted as π(n) ~ n/log n, and means that the ratio of π(n) to the right-hand fraction approaches 1 as n grows to infinity. This implies that the likelihood that a randomly chosen number less than n is prime is (approximately) inversely proportional to the number of digits in n. It also implies that the nth prime number is proportional to n log n and therefore that the average size of a prime gap is proportional to log n. A more accurate estimate for π(n) is given by the offset logarithmic integral Li(n) = ∫_2^n dt/log t.
Arithmetic progressions
An arithmetic progression is a finite or infinite sequence of numbers such that consecutive numbers in the sequence all have the same difference. This difference is called the modulus of the progression. For example, 3, 12, 21, 30, 39, … is an infinite arithmetic progression with modulus 9. In an arithmetic progression, all the numbers have the same remainder when divided by the modulus; in this example, the remainder is 3. Because both the modulus 9 and the remainder 3 are multiples of 3, so is every element in the sequence.
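The contrast between a progression whose remainder shares a factor with the modulus and one whose remainder is coprime to it can be seen directly in a small sketch (the helper functions are assumptions for illustration):

    from math import isqrt

    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

    def primes_in_progression(remainder, modulus, limit):
        return [n for n in range(remainder, limit, modulus) if is_prime(n)]

    # gcd(3, 9) = 3 > 1: the progression 3, 12, 21, ... contains only the prime 3.
    print(primes_in_progression(3, 9, 200))  # [3]
    # gcd(2, 9) = 1: Dirichlet's theorem guarantees infinitely many primes here.
    print(primes_in_progression(2, 9, 200))  # [2, 11, 29, 47, 83, 101, 137, 173, 191]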
Therefore, this progression contains only one prime number, 3 itself. In general, the infinite progression a, a + q, a + 2q, … can have more than one prime only when its remainder a and modulus q are relatively prime. If they are relatively prime, Dirichlet's theorem on arithmetic progressions asserts that the progression contains infinitely many primes. The Green–Tao theorem shows that there are arbitrarily long finite arithmetic progressions consisting only of primes.
Prime values of quadratic polynomials
Euler noted that the function n² + n + 41 yields prime numbers for 0 ≤ n ≤ 39, although composite numbers appear among its later values. The search for an explanation for this phenomenon led to the deep algebraic number theory of Heegner numbers and the class number problem. The Hardy–Littlewood conjecture F predicts the density of primes among the values of quadratic polynomials with integer coefficients in terms of the logarithmic integral and the polynomial coefficients. No quadratic polynomial has been proven to take infinitely many prime values. The Ulam spiral arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others.
Zeta function and the Riemann hypothesis
One of the most famous unsolved questions in mathematics, dating from 1859, and one of the Millennium Prize Problems, is the Riemann hypothesis, which asks where the zeros of the Riemann zeta function ζ(s) are located. This function is an analytic function on the complex numbers. For complex numbers s with real part greater than one it equals both an infinite sum over all integers and an infinite product over the prime numbers, ζ(s) = 1/1^s + 1/2^s + 1/3^s + ⋯ = Π_p 1/(1 − p^(−s)). This equality between a sum and a product, discovered by Euler, is called an Euler product. The Euler product can be derived from the fundamental theorem of arithmetic, and shows the close connection between the zeta function and the prime numbers. It leads to another proof that there are infinitely many primes: if there were only finitely many, then the sum-product equality would also be valid at s = 1, but the sum would diverge (it is the harmonic series 1 + 1/2 + 1/3 + ⋯) while the product would be finite, a contradiction. The Riemann hypothesis states that the zeros of the zeta-function are all either negative even numbers, or complex numbers with real part equal to 1/2. The original proof of the prime number theorem was based on a weak form of this hypothesis, that there are no zeros with real part equal to 1, although other more elementary proofs have been found. The prime-counting function can be expressed by Riemann's explicit formula as a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term. In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and the asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of x for intervals near a number x).
Abstract algebra
Modular arithmetic and finite fields
Modular arithmetic modifies usual arithmetic by only using the numbers 0, 1, 2, …, n − 1, for a natural number n called the modulus.
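Modular division, discussed next, can be probed with Python's built-in modular inverse, pow(a, -1, n) (available since Python 3.8); a brief sketch:

    # An inverse of 3 exists modulo the prime 7, but not modulo the composite 6
    # (3 shares the factor 3 with 6).
    print(pow(3, -1, 7))   # 5, since 3 * 5 = 15 = 2 * 7 + 1
    # Hence 2/3 = 2 * 5 = 10 = 3 (mod 7).
    try:
        pow(3, -1, 6)
    except ValueError as error:
        print(error)       # base is not invertible for the given modulus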
Any other natural number can be mapped into this system by replacing it by its remainder after division by n. Modular sums, differences and products are calculated by performing the same replacement by the remainder on the result of the usual sum, difference, or product of integers. Equality of integers corresponds to congruence in modular arithmetic: x and y are congruent (written x ≡ y mod n) when they have the same remainder after division by n. However, in this system of numbers, division by all nonzero numbers is possible if and only if the modulus is prime. For instance, with the prime number 7 as modulus, division by 3 is possible: 2/3 ≡ 3 (mod 7), because clearing denominators by multiplying both sides by 3 gives the valid formula 2 ≡ 9 (mod 7). However, with the composite modulus 6, division by 3 is impossible. There is no valid solution to 2/3 ≡ x (mod 6): clearing denominators by multiplying by 3 causes the left-hand side to become 2 while the right-hand side becomes either 0 or 3. In the terminology of abstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms a field or, more specifically, a finite field, while other moduli only give a ring but not a field. Several theorems about primes can be formulated using modular arithmetic. For instance, Fermat's little theorem states that if a ≢ 0 (mod p), then a^(p−1) ≡ 1 (mod p). Summing this over all choices of a gives the equation 1^(p−1) + 2^(p−1) + ⋯ + (p−1)^(p−1) ≡ −1 (mod p), valid whenever p is prime. Giuga's conjecture says that this equation is also a sufficient condition for p to be prime. Wilson's theorem says that an integer p > 1 is prime if and only if the factorial (p − 1)! is congruent to −1 mod p. For a composite number n = r · s this cannot hold, since one of its factors divides both n and (n − 1)!, and so (n − 1)! ≡ −1 (mod n) is impossible.
p-adic numbers
The p-adic order ν_p(n) of an integer n is the number of copies of p in the prime factorization of n. The same concept can be extended from integers to rational numbers by defining the p-adic order of a fraction m/n to be ν_p(m) − ν_p(n). The p-adic absolute value |q|_p of any rational number q is then defined as |q|_p = p^(−ν_p(q)). Multiplying an integer by its p-adic absolute value cancels out the factors of p in its factorization, leaving only the other primes. Just as the distance between two real numbers can be measured by the absolute value of their distance, the distance between two rational numbers can be measured by their p-adic distance, the p-adic absolute value of their difference. For this definition of distance, two numbers are close together (they have a small distance) when their difference is divisible by a high power of p. In the same way that the real numbers can be formed from the rational numbers and their distances, by adding extra limiting values to form a complete field, the rational numbers with the p-adic distance can be extended to a different complete field, the p-adic numbers. This picture of an order, absolute value, and complete field derived from them can be generalized to algebraic number fields and their valuations (certain mappings from the multiplicative group of the field to a totally ordered additive group, also called orders), absolute values (certain multiplicative mappings from the field to the real numbers, also called norms), and places (extensions to complete fields in which the given field is a dense set, also called completions). The extension from the rational numbers to the real numbers, for instance, is a place in which the distance between numbers is the usual absolute value of their difference.
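The p-adic order and absolute value are straightforward to compute; a minimal sketch (function names are assumptions for illustration):

    from fractions import Fraction

    def padic_order(q, p):
        """nu_p(q): copies of the prime p in the factorization of the rational q
        (negative when p divides the denominator)."""
        q = Fraction(q)
        if q == 0:
            raise ValueError("the order of 0 is undefined (or +infinity)")
        order, num, den = 0, q.numerator, q.denominator
        while num % p == 0:
            num //= p
            order += 1
        while den % p == 0:
            den //= p
            order -= 1
        return order

    def padic_abs(q, p):
        return Fraction(1, p) ** padic_order(q, p)

    print(padic_order(48, 2), padic_abs(48, 2))  # 4, 1/16 (48 = 2^4 * 3)
    print(padic_order(Fraction(5, 8), 2))        # -3
    print(padic_abs(96 - 32, 2))                 # 1/64: 96 and 32 are 2-adically close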
The corresponding mapping to an additive group would be the logarithm of the absolute value, although this does not meet all the requirements of a valuation. According to Ostrowski's theorem, up to a natural notion of equivalence, the real numbers and p-adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers. The local–global principle allows certain problems over the rational numbers to be solved by piecing together solutions from each of their places, again underlining the importance of primes to number theory.
Prime elements of a ring
A commutative ring is an algebraic structure where addition, subtraction and multiplication are defined. The integers are a ring, and the prime numbers in the integers have been generalized to rings in two different ways, prime elements and irreducible elements. An element p of a ring is called prime if it is nonzero, has no multiplicative inverse (that is, it is not a unit), and satisfies the following requirement: whenever p divides the product xy of two elements of the ring, it also divides at least one of x or y. An element is irreducible if it is neither a unit nor the product of two other non-unit elements. In the ring of integers, the prime and irreducible elements form the same set, the prime numbers and their negations. In an arbitrary ring, all prime elements are irreducible. The converse does not hold in general, but does hold for unique factorization domains. The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. An example of such a domain is the Gaussian integers ℤ[i], the ring of complex numbers of the form a + bi where i denotes the imaginary unit and a and b are arbitrary integers. Its prime elements are known as Gaussian primes. Not every number that is prime among the integers remains prime in the Gaussian integers; for instance, the number 2 can be written as a product of the two Gaussian primes 1 + i and 1 − i. Rational primes (the prime elements in the integers) congruent to 3 mod 4 are Gaussian primes, but rational primes congruent to 1 mod 4 are not. This is a consequence of Fermat's theorem on sums of two squares, which states that an odd prime p is expressible as the sum of two squares, p = x² + y², and therefore factorable as p = (x + iy)(x − iy), exactly when p is 1 mod 4.
Prime ideals
Not every ring is a unique factorization domain. For instance, in the ring of numbers a + b√−5 (for integers a and b) the number 21 has two factorizations 21 = 3 · 7 = (1 + 2√−5)(1 − 2√−5), where neither of the four factors can be reduced any further, so it does not have a unique factorization. In order to extend unique factorization to a larger class of rings, the notion of a number can be replaced with that of an ideal, a subset of the elements of a ring that contains all sums of pairs of its elements, and all products of its elements with ring elements. Prime ideals, which generalize prime elements in the sense that the principal ideal generated by a prime element is a prime ideal, are an important tool and object of study in commutative algebra, algebraic number theory and algebraic geometry. The prime ideals of the ring of integers are the ideals (0), (2), (3), (5), (7), (11), … The fundamental theorem of arithmetic generalizes to the Lasker–Noether theorem, which expresses every ideal in a Noetherian commutative ring as an intersection of primary ideals, which are the appropriate generalizations of prime powers. The spectrum of a ring is a geometric space whose points are the prime ideals of the ring.
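Fermat's two-squares criterion from the previous section is easy to verify by search; a small sketch (an illustration, not from the article):

    from math import isqrt

    def two_squares(p):
        """Return (x, y) with p = x^2 + y^2, or None if no such pair exists."""
        for x in range(isqrt(p) + 1):
            y = isqrt(p - x * x)
            if x * x + y * y == p:
                return (x, y)
        return None

    for p in (2, 3, 5, 7, 13, 17, 19, 29):
        print(p, p % 4, two_squares(p))
    # Odd primes with p % 4 == 1 split as (x + yi)(x - yi) over the Gaussian
    # integers; those with p % 4 == 3 return None and stay prime there.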
Arithmetic geometry also benefits from this notion, and many concepts exist in both geometry and number theory. For example, factorization or ramification of prime ideals when lifted to an extension field, a basic problem of algebraic number theory, bears some resemblance with ramification in geometry. These concepts can even assist with number-theoretic questions solely concerned with integers. For example, prime ideals in the ring of integers of quadratic number fields can be used in proving quadratic reciprocity, a statement that concerns the existence of square roots modulo integer prime numbers. Early attempts to prove Fermat's Last Theorem led to Kummer's introduction of regular primes, integer prime numbers connected with the failure of unique factorization in the cyclotomic integers. The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed by Chebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case.
Group theory
In the theory of finite groups the Sylow theorems imply that, if a power p^n of a prime number p divides the order of a group, then the group has a subgroup of order p^n. By Lagrange's theorem, any group of prime order is a cyclic group, and by Burnside's theorem any group whose order is divisible by only two primes is solvable.
Computational methods
For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. Most primality tests only tell whether their argument is prime or not. Routines that also provide a prime factor of composite arguments (or all of its prime factors) are called factorization algorithms. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators.
Trial division
The most basic method of checking the primality of a given integer n is called trial division. This method divides n by each integer from 2 up to the square root of n. Any such integer dividing n evenly establishes n as composite; otherwise it is prime. Integers larger than the square root do not need to be checked because, whenever n = a · b, one of the two factors a and b is less than or equal to the square root of n. Another optimization is to check only primes as factors in this range. For instance, to check whether 37 is prime, this method divides it by the primes in the range from 2 to √37, which are 2, 3, and 5. Each division produces a nonzero remainder, so 37 is indeed prime.
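A minimal sketch of trial division (the function name is supplied for illustration):

    from math import isqrt

    def trial_division_is_prime(n):
        """Trial division: test divisors from 2 up to the square root of n."""
        if n < 2:
            return False
        for d in range(2, isqrt(n) + 1):
            if n % d == 0:
                return False  # d evenly divides n, so n is composite
        return True

    print(trial_division_is_prime(37))  # True: 37 % 2, 37 % 3, 37 % 5 are all nonzero
    print(trial_division_is_prime(91))  # False: 91 = 7 * 13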
Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performs grows exponentially as a function of the number of digits of these integers. However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter.
Sieves
Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed. The oldest known method for generating a list of primes is called the sieve of Eratosthenes. Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. In advanced mathematics, sieve theory applies similar methods to other problems.
Primality testing versus primality proving
Some of the fastest modern tests for whether an arbitrary given number n is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer. For instance the Solovay–Strassen primality test on a given number p chooses a number a randomly from 2 through p − 2 and uses modular exponentiation to check whether a^((p−1)/2) ± 1 is divisible by p. If so, it answers yes and otherwise it answers no. If p really is prime, it will always answer yes, but if p is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. If this test is repeated n times on the same number, the probability that a composite number could pass the test every time is at most 1/2^n. Because this decreases exponentially with the number of tests, it provides high confidence (although not certainty) that a number that passes the repeated test is prime. On the other hand, if the test ever fails, then the number is certainly composite. A composite number that passes such a test is called a pseudoprime. In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. For instance, this is true of trial division. The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test, and randomized Las Vegas algorithms where the random choices made by the algorithm do not affect its final answer, such as some variations of elliptic curve primality proving. When the elliptic curve method concludes that a number is prime, it provides a primality certificate that can be verified quickly. The elliptic curve primality test is the fastest in practice of the guaranteed-correct primality tests, but its runtime analysis is based on heuristic arguments rather than rigorous proofs. The AKS primality test has mathematically proven time complexity, but is slower than elliptic curve primality proving in practice. These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; when doing this, a faster probabilistic test can quickly eliminate most composite numbers before a guaranteed-correct algorithm is used to verify that the remaining numbers are prime. The following table lists some of these tests. Their running time is given in terms of n, the number to be tested and, for probabilistic algorithms, the number k of tests performed.
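The Miller–Rabin test mentioned at the start of the article works on the same repeated-random-rounds principle as the Solovay–Strassen test described above. A minimal sketch of Miller–Rabin (implementation details are ours; each round a composite number escapes detection with probability at most 1/4):

    import random

    def miller_rabin(n, rounds=20):
        """Probabilistic primality test: True means 'probably prime'."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a witnesses that n is composite
        return True

    print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
    print(miller_rabin(2**67 - 1))  # False: 193707721 * 761838257287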
Moreover, ε is an arbitrarily small positive number, and log is the logarithm to an unspecified base. The big O notation means that each time bound should be multiplied by a constant factor to convert it from dimensionless units to units of time; this factor depends on implementation details such as the type of computer used to run the algorithm, but not on the input parameters n and k.
Special-purpose algorithms and the largest known prime
In addition to the aforementioned tests that apply to any natural number, some numbers of a special form can be tested for primality more quickly. For example, the Lucas–Lehmer primality test can determine whether a Mersenne number (one less than a power of two) is prime, deterministically, in the same time as a single iteration of the Miller–Rabin test. This is why since 1992 the largest known prime has always been a Mersenne prime. It is conjectured that there are infinitely many Mersenne primes. The following table gives the largest known primes of various types. Some of these primes have been found using distributed computing. In 2009, the Great Internet Mersenne Prime Search project was awarded a US$100,000 prize for first discovering a prime with at least 10 million digits. The Electronic Frontier Foundation also offers $150,000 and $250,000 for primes with at least 100 million digits and 1 billion digits, respectively.
Integer factorization
Given a composite integer n, the task of providing one (or all) prime factors is referred to as factorization of n. It is significantly more difficult than primality testing, and although many factorization algorithms are known, they are slower than the fastest primality testing methods. Trial division and Pollard's rho algorithm can be used to find very small factors of n, and elliptic curve factorization can be effective when n has factors of moderate size. Methods suitable for arbitrary large numbers that do not depend on the size of its factors include the quadratic sieve and general number field sieve. As with primality testing, there are also factorization algorithms that require their input to have a special form, including the special number field sieve. The largest number known to have been factored by a general-purpose algorithm is RSA-240, which has 240 decimal digits (795 bits) and is the product of two large primes. Shor's algorithm can factor any integer in a polynomial number of steps on a quantum computer. However, current technology can only run this algorithm for very small numbers. The largest number that has been factored by a quantum computer running Shor's algorithm is 21.
Other computational applications
Several public-key cryptography algorithms, such as RSA and the Diffie–Hellman key exchange, are based on large prime numbers (2048-bit primes are common). RSA relies on the assumption that it is much easier (that is, more efficient) to perform the multiplication of two (large) numbers x and y than to calculate x and y (assumed coprime) if only the product xy is known. The Diffie–Hellman key exchange relies on the fact that there are efficient algorithms for modular exponentiation (computing a^b mod c), while the reverse operation (the discrete logarithm) is thought to be a hard problem. Prime numbers are frequently used for hash tables. For instance the original method of Carter and Wegman for universal hashing was based on computing hash functions by choosing random linear functions modulo large prime numbers.
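A minimal sketch of such a prime-modulus hash family (parameter names and the particular prime are our choices): draw random a and b and map a key x to ((a·x + b) mod p) mod m, where p is a prime larger than any key; the primality of p is what bounds the collision probability of any two distinct keys by roughly 1/m.

    import random

    def make_universal_hash(num_buckets, p=2**61 - 1):
        """Random linear hash function modulo a (Mersenne) prime p.
        Assumes integer keys in the range 0 <= x < p."""
        a = random.randrange(1, p)
        b = random.randrange(0, p)
        return lambda x: ((a * x + b) % p) % num_buckets

    h = make_universal_hash(10)
    print([h(x) for x in range(12)])  # each run draws a fresh random function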
Carter and Wegman generalized this method to k-independent hashing by using higher-degree polynomials, again modulo large primes. As well as in the hash function, prime numbers are used for the hash table size in quadratic probing based hash tables to ensure that the probe sequence covers the whole table. Some checksum methods are based on the mathematics of prime numbers. For instance the checksums used in International Standard Book Numbers are defined by taking the remainder of the number modulo 11, a prime number. Because 11 is prime this method can detect both single-digit errors and transpositions of adjacent digits. Another checksum method, Adler-32, uses arithmetic modulo 65521, the largest prime number less than 2^16. Prime numbers are also used in pseudorandom number generators including linear congruential generators and the Mersenne Twister.
Other applications
Prime numbers are of central importance to number theory but also have many applications to other areas within mathematics, including abstract algebra and elementary geometry. For example, it is possible to place prime numbers of points in a two-dimensional grid so that no three are in a line, or so that every triangle formed by three of the points has large area. Another example is Eisenstein's criterion, a test for whether a polynomial is irreducible based on divisibility of its coefficients by a prime number and its square. The concept of a prime number is so important that it has been generalized in different ways in various branches of mathematics. Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. For example, the prime field of a given field is its smallest subfield that contains both 0 and 1. It is either the field of rational numbers or a finite field with a prime number of elements, whence the name. Often a second, additional meaning is intended by using the word prime, namely that any object can be, essentially uniquely, decomposed into its prime components. For example, in knot theory, a prime knot is a knot that is indecomposable in the sense that it cannot be written as the connected sum of two nontrivial knots. Any knot can be uniquely expressed as a connected sum of prime knots. The prime decomposition of 3-manifolds is another example of this type. Beyond mathematics and computing, prime numbers have potential connections to quantum mechanics, and have been used metaphorically in the arts and literature. They have also been used in evolutionary biology to explain the life cycles of cicadas.
Constructible polygons and polygon partitions
Fermat primes are primes of the form 2^(2^k) + 1 with k a nonnegative integer. They are named after Pierre de Fermat, who conjectured that all such numbers are prime. The first five of these numbers – 3, 5, 17, 257, and 65,537 – are prime, but 2^(2^5) + 1 = 4,294,967,297 = 641 × 6,700,417 is composite, and so are all other Fermat numbers that have been verified as of 2017. A regular n-gon is constructible using straightedge and compass if and only if the odd prime factors of n (if any) are distinct Fermat primes. Likewise, a regular n-gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of n are any number of copies of 2 or 3 together with a (possibly empty) set of distinct Pierpont primes, primes of the form 2^a 3^b + 1. It is possible to partition any convex polygon into n smaller convex polygons of equal area and equal perimeter, when n is a power of a prime number, but this is not known for other values of n.
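The Gauss–Wantzel constructibility condition can be tested mechanically; a sketch (an illustration assuming, as is conjectured but not proven, that no Fermat primes beyond the five known ones exist):

    FERMAT_PRIMES = (3, 5, 17, 257, 65537)

    def constructible(n):
        """A regular n-gon is constructible with straightedge and compass iff
        n is a power of two times a product of distinct Fermat primes."""
        while n % 2 == 0:
            n //= 2
        for p in FERMAT_PRIMES:
            if n % p == 0:
                n //= p
                if n % p == 0:  # a repeated Fermat prime factor is not allowed
                    return False
        return n == 1

    print([n for n in range(3, 30) if constructible(n)])
    # [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]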
Quantum mechanics
Beginning with the work of Hugh Montgomery and Freeman Dyson in the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels of quantum systems. Prime numbers are also significant in quantum information science, thanks to mathematical structures such as mutually unbiased bases and symmetric informationally complete positive-operator-valued measures.
Biology
The evolutionary strategy used by cicadas of the genus Magicicada makes use of prime numbers. These insects spend most of their lives as grubs underground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles. In contrast, the multi-year periods between flowering in bamboo plants are hypothesized to be smooth numbers, having only small prime numbers in their factorizations.
Arts and literature
Prime numbers have influenced many artists and writers. The French composer Olivier Messiaen used prime numbers to create ametrical music through "natural phenomena". In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–1950), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen, this way of composing was "inspired by the movements of nature, movements of free and unequal durations". In his science fiction novel Contact, scientist Carl Sagan suggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with American astronomer Frank Drake in 1975. In the novel The Curious Incident of the Dog in the Night-Time by Mark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen with Asperger syndrome. Prime numbers are used as a metaphor for loneliness and isolation in the Paolo Giordano novel The Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers.
Mathematics
Counting and numbers
null
23670
https://en.wikipedia.org/wiki/Perfect%20number
Perfect number
In number theory, a perfect number is a positive integer that is equal to the sum of its positive proper divisors, that is, divisors excluding the number itself. For instance, 6 has proper divisors 1, 2 and 3, and 1 + 2 + 3 = 6, so 6 is a perfect number. The next perfect number is 28, since 1 + 2 + 4 + 7 + 14 = 28. The first four perfect numbers are 6, 28, 496 and 8128. The sum of proper divisors of a number is called its aliquot sum, so a perfect number is one that is equal to its aliquot sum. Equivalently, a perfect number is a number that is half the sum of all of its positive divisors; in symbols, σ(n) = 2n, where σ is the sum-of-divisors function. This definition is ancient, appearing as early as Euclid's Elements (VII.22) where it is called teleios arithmos (perfect, ideal, or complete number). Euclid also proved a formation rule (IX.36) whereby q(q + 1)/2 is an even perfect number whenever q is a prime of the form 2^p − 1 for positive integer p—what is now called a Mersenne prime. Two millennia later, Leonhard Euler proved that all even perfect numbers are of this form. This is known as the Euclid–Euler theorem. It is not known whether there are any odd perfect numbers, nor whether infinitely many perfect numbers exist.
History
In about 300 BC Euclid showed that if 2^p − 1 is prime then 2^(p−1)(2^p − 1) is perfect. The first four perfect numbers were the only ones known to early Greek mathematics, and the mathematician Nicomachus noted 8128 as early as around AD 100. In modern language, Nicomachus states without proof that every perfect number is of the form 2^(n−1)(2^n − 1) where 2^n − 1 is prime. He seems to be unaware that n itself has to be prime. He also says (wrongly) that the perfect numbers end in 6 or 8 alternately. (The first 5 perfect numbers end with digits 6, 8, 6, 8, 6; but the sixth also ends in 6.) Philo of Alexandria in his first-century book "On the creation" mentions perfect numbers, claiming that the world was created in 6 days and the moon orbits in 28 days because 6 and 28 are perfect. Philo is followed by Origen, and by Didymus the Blind, who adds the observation that there are only four perfect numbers that are less than 10,000. (Commentary on Genesis 1. 14–19). St Augustine defines perfect numbers in City of God (Book XI, Chapter 30) in the early 5th century AD, repeating the claim that God created the world in 6 days because 6 is the smallest perfect number. The Egyptian mathematician Ismail ibn Fallūs (1194–1252) mentioned the next three perfect numbers (33,550,336; 8,589,869,056; and 137,438,691,328) and listed a few more which are now known to be incorrect. The first known European mention of the fifth perfect number is a manuscript written between 1456 and 1461 by an unknown mathematician. In 1588, the Italian mathematician Pietro Cataldi identified the sixth (8,589,869,056) and the seventh (137,438,691,328) perfect numbers, and also proved that every perfect number obtained from Euclid's rule ends with a 6 or an 8.
Even perfect numbers
Euclid proved that 2^(p−1)(2^p − 1) is an even perfect number whenever 2^p − 1 is prime (Elements, Prop. IX.36). For example, the first four perfect numbers are generated by the formula 2^(p−1)(2^p − 1), with p a prime number, as follows: for p = 2, 2 × 3 = 6; for p = 3, 4 × 7 = 28; for p = 5, 16 × 31 = 496; and for p = 7, 64 × 127 = 8128. Prime numbers of the form 2^p − 1 are known as Mersenne primes, after the seventeenth-century monk Marin Mersenne, who studied number theory and perfect numbers. For 2^p − 1 to be prime, it is necessary that p itself be prime. However, not all numbers of the form 2^p − 1 with p a prime are prime; for example, 2^11 − 1 = 2047 = 23 × 89 is not a prime number. In fact, Mersenne primes are very rare: of the primes p up to 68,874,199, 2^p − 1 is prime for only 48 of them.
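The Lucas–Lehmer test described in the prime number article makes it cheap to decide which exponents give Mersenne primes, and hence which even perfect numbers Euclid's rule produces. A minimal sketch (illustrative; p = 2 is special-cased because the test below applies to odd prime exponents):

    def lucas_lehmer(p):
        """Deterministic primality test for the Mersenne number 2^p - 1,
        valid for odd prime exponents p."""
        m = 2**p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    exponents = [2] + [p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)]
    print(exponents)  # [2, 3, 5, 7, 13, 17, 19] -- 11 fails: 2^11 - 1 = 2047 = 23 * 89
    print([2**(p - 1) * (2**p - 1) for p in exponents])
    # [6, 28, 496, 8128, 33550336, 8589869056, 137438691328]
    # The last two are Cataldi's sixth and seventh perfect numbers.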
While Nicomachus had stated (without proof) that all perfect numbers were of the form 2^(n−1)(2^n − 1) where 2^n − 1 is prime (though he stated this somewhat differently), Ibn al-Haytham (Alhazen) circa AD 1000 was unwilling to go that far, declaring instead (also without proof) that the formula yielded only every even perfect number. It was not until the 18th century that Leonhard Euler proved that the formula will yield all the even perfect numbers. Thus, there is a one-to-one correspondence between even perfect numbers and Mersenne primes; each Mersenne prime generates one even perfect number, and vice versa. This result is often referred to as the Euclid–Euler theorem. An exhaustive search by the GIMPS distributed computing project has shown that the first 48 even perfect numbers are 2^(p−1)(2^p − 1) for p = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609 and 57885161. Four higher perfect numbers have also been discovered, namely those for which p = 74207281, 77232917, 82589933 and 136279841. Although it is still possible there may be others within this range, initial but exhaustive tests by GIMPS have revealed no other perfect numbers for p below 109332539. In total, 52 Mersenne primes are known, and therefore 52 even perfect numbers (the largest of which is 2^136279840 × (2^136279841 − 1), with 82,048,640 digits). It is not known whether there are infinitely many perfect numbers, nor whether there are infinitely many Mersenne primes. As well as having the form 2^(p−1)(2^p − 1), each even perfect number is the (2^p − 1)th triangular number (and hence equal to the sum of the integers from 1 to 2^p − 1) and the 2^(p−1)th hexagonal number. Furthermore, each even perfect number except for 6 is the ((2^p + 1)/3)th centered nonagonal number and is equal to the sum of the first 2^((p−1)/2) odd cubes (odd cubes up to the cube of 2^((p+1)/2) − 1). Even perfect numbers (except 6) are of the form 1 + 9T, where T is a triangular number, with each resulting triangular number (after subtracting 1 from the perfect number and dividing the result by 9) ending in 3 or 5, the sequence starting with 3, 55, 903, 3727815, … It follows that adding the digits of any even perfect number (except 6), then adding the digits of the resulting number, and repeating this process until a single digit (called the digital root) is obtained, always produces the number 1. For example, the digital root of 8128 is 1, because 8 + 1 + 2 + 8 = 19, 1 + 9 = 10, and 1 + 0 = 1. This works with all perfect numbers 2^(p−1)(2^p − 1) with odd prime p and, in fact, with all numbers of the form 2^(m−1)(2^m − 1) for odd integer (not necessarily prime) m. Owing to their form, every even perfect number is represented in binary form as p ones followed by p − 1 zeros; for example, 6 = 110, 28 = 11100, and 496 = 111110000 in binary. Thus every even perfect number is a pernicious number. Every even perfect number is also a practical number (cf. Related concepts).
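Both the binary pattern and the digital-root property are quick to check; a small sketch (an illustration, not from the article):

    def digital_root(n):
        while n >= 10:
            n = sum(int(digit) for digit in str(n))
        return n

    for n in (6, 28, 496, 8128, 33550336):
        print(n, bin(n)[2:], digital_root(n))
    # 28 -> 11100 and 496 -> 111110000: p ones followed by p - 1 zeros;
    # every listed perfect number except 6 has digital root 1.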
Many of the properties proved about odd perfect numbers also apply to Descartes numbers, and Pace Nielsen has suggested that sufficient study of those numbers may lead to a proof that no odd perfect numbers exist. Any odd perfect number N must satisfy the following conditions: N > 10^1500. N is not divisible by 105. N is of the form N ≡ 1 (mod 12) or N ≡ 117 (mod 468) or N ≡ 81 (mod 324). The largest prime factor of N is greater than 10^8 and less than (3N)^(1/3). The second largest prime factor is greater than 10^4 and less than (2N)^(1/5). The third largest prime factor is greater than 100. N has at least 101 prime factors and at least 10 distinct prime factors. If 3 does not divide N, then N has at least 12 distinct prime factors. N is of the form N = q^α p1^(2e1) ⋯ pk^(2ek), where q, p1, ..., pk are distinct odd primes and q ≡ α ≡ 1 (mod 4) (Euler). The smallest prime factor of N is bounded above in terms of the number k of distinct prime factors, and at least one of the prime powers dividing N exceeds 10^62. Furthermore, several minor results are known about the exponents e1, ..., ek: not all ei ≡ 1 (mod 3); not all ei ≡ 2 (mod 5); if all ei ≡ 1 (mod 3) or 2 (mod 5), then the smallest prime factor of N must lie between 10^8 and 10^1000; more generally, if all 2ei + 1 have a prime factor in a given finite set S, then the smallest prime factor of N must be smaller than an effectively computable constant depending only on S; (e1, ..., ek) ≠ (1, ..., 1, 3), (1, ..., 1, 5), (1, ..., 1, 6); and if all the exponents are equal to a common value e, then e cannot be 3, 5, 24, 6, 8, 11, 14 or 18. In 1888, Sylvester stated that the existence of any odd perfect number, its escape, so to say, from the complex web of conditions which hem it in on all sides, would be little short of a miracle.
Minor results
All even perfect numbers have a very precise form; odd perfect numbers either do not exist or are rare. There are a number of results on perfect numbers that are actually quite easy to prove but nevertheless superficially impressive; some of them also come under Richard Guy's strong law of small numbers: The only even perfect number of the form n³ + 1 is 28. 28 is also the only even perfect number that is a sum of two positive cubes of integers. The reciprocals of the divisors of a perfect number N must add up to 2 (to get this, take the definition of a perfect number, σ(n) = 2n, and divide both sides by n): for 6, we have 1/6 + 1/3 + 1/2 + 1/1 = 2; for 28, we have 1/28 + 1/14 + 1/7 + 1/4 + 1/2 + 1/1 = 2, etc. The number of divisors of a perfect number (whether even or odd) must be even, because N cannot be a perfect square. From these two results it follows that every perfect number is an Ore's harmonic number. The even perfect numbers are not trapezoidal numbers; that is, they cannot be represented as the difference of two positive non-consecutive triangular numbers. There are only three types of non-trapezoidal numbers: even perfect numbers, powers of two, and the numbers of the form 2^(k−1)(2^k + 1) formed as the product of a Fermat prime 2^k + 1 with a power of two in a similar way to the construction of even perfect numbers from Mersenne primes. The number of perfect numbers less than n is less than c√n, where c > 0 is a constant. In fact it is o(√n), using little-o notation. Every even perfect number ends in 6 or 28, base ten; and, with the only exception of 6, ends in 1 in base 9. Therefore, in particular the digital root of every even perfect number other than 6 is 1. The only square-free perfect number is 6.
Related concepts
The sum of proper divisors gives various other kinds of numbers. Numbers where the sum is less than the number itself are called deficient, and where it is greater than the number, abundant. These terms, together with perfect itself, come from Greek numerology.
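The deficient/perfect/abundant trichotomy just defined is easy to compute; a minimal sketch (function names are supplied for illustration):

    def aliquot_sum(n):
        """Sum of the proper divisors of n (those less than n itself)."""
        return sum(d for d in range(1, n) if n % d == 0)

    def classify(n):
        s = aliquot_sum(n)
        return "perfect" if s == n else "abundant" if s > n else "deficient"

    print([(n, classify(n)) for n in (6, 8, 12, 28, 496)])
    # [(6, 'perfect'), (8, 'deficient'), (12, 'abundant'), (28, 'perfect'), (496, 'perfect')]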
A pair of numbers which are the sum of each other's proper divisors are called amicable, and larger cycles of numbers are called sociable. A positive integer such that every smaller positive integer is a sum of distinct divisors of it is a practical number. By definition, a perfect number is a fixed point of the restricted divisor function s(n) = σ(n) − n, and the aliquot sequence associated with a perfect number is a constant sequence. All perfect numbers are also S-perfect numbers, or Granville numbers. A semiperfect number is a natural number that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number. Most abundant numbers are also semiperfect; abundant numbers which are not semiperfect are called weird numbers.
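For instance, the classical amicable pair 220 and 284 can be checked directly with the aliquot sum defined in the sketch above:

    def aliquot_sum(n):
        return sum(d for d in range(1, n) if n % d == 0)

    # Each number of an amicable pair is the aliquot sum of the other.
    print(aliquot_sum(220), aliquot_sum(284))  # 284 220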
Mathematics
Sums and products
null
23690
https://en.wikipedia.org/wiki/Phosphate
Phosphate
In chemistry, a phosphate is an anion, salt, functional group or ester derived from a phosphoric acid. It most commonly means orthophosphate, a derivative of orthophosphoric acid, also known as phosphoric acid, H3PO4. The phosphate or orthophosphate ion PO4^3− is derived from phosphoric acid by the removal of three protons H+. Removal of one proton gives the dihydrogen phosphate ion H2PO4^−, while removal of two protons gives the hydrogen phosphate ion HPO4^2−. These names are also used for salts of those anions, such as ammonium dihydrogen phosphate and trisodium phosphate. In organic chemistry, phosphate or orthophosphate is an organophosphate, an ester of orthophosphoric acid of the form PO4RR′R″ where one or more hydrogen atoms are replaced by organic groups. An example is trimethyl phosphate, PO(OCH3)3. The term also refers to the trivalent functional group OP(O−)3 in such esters. Phosphates may contain sulfur in place of one or more oxygen atoms (thiophosphates and organothiophosphates). Orthophosphates are especially important among the various phosphates because of their key roles in biochemistry, biogeochemistry, and ecology, and their economic importance for agriculture and industry. The addition and removal of phosphate groups (phosphorylation and dephosphorylation) are key steps in cell metabolism. Orthophosphates can condense to form pyrophosphates.
Chemical properties
The phosphate ion PO4^3− has a molar mass of 94.97 g/mol, and consists of a central phosphorus atom surrounded by four oxygen atoms in a tetrahedral arrangement. It is the conjugate base of the hydrogen phosphate ion HPO4^2−, which in turn is the conjugate base of the dihydrogen phosphate ion H2PO4^−, which in turn is the conjugate base of orthophosphoric acid, H3PO4. Many phosphates are soluble in water at standard temperature and pressure. The sodium, potassium, rubidium, caesium, and ammonium phosphates are all water-soluble. Most other phosphates are only slightly soluble or are insoluble in water. As a rule, the hydrogen and dihydrogen phosphates are slightly more soluble than the corresponding phosphates.
Equilibria in solution
In water solution, orthophosphoric acid and its three derived anions coexist according to the dissociation and recombination equilibria H3PO4 ⇌ H2PO4^− + H+ (pKa1 ≈ 2.15), H2PO4^− ⇌ HPO4^2− + H+ (pKa2 ≈ 7.20), and HPO4^2− ⇌ PO4^3− + H+ (pKa3 ≈ 12.37); values are at 25 °C and 0 ionic strength. The pKa values are the pH values where the concentration of each species is equal to that of its conjugate bases. At pH 1 or lower, the phosphoric acid is practically undissociated. Around pH 4.7 (mid-way between the first two pKa values) the dihydrogen phosphate ion, H2PO4^−, is practically the only species present. Around pH 9.8 (mid-way between the second and third pKa values) the monohydrogen phosphate ion, HPO4^2−, is the only species present. At pH 13 or higher, the acid is completely dissociated as the phosphate ion, PO4^3−. This means that salts of the mono- and di-phosphate ions can be selectively crystallised from aqueous solution by setting the pH value to either 4.7 or 9.8. In effect, H3PO4, H2PO4^− and HPO4^2− behave as separate weak acids because the successive pKa differ by more than 4. Phosphate can form many polymeric ions such as pyrophosphate, P2O7^4−, and triphosphate, P3O10^5−. The various metaphosphate ions (which are usually long linear polymers) have an empirical formula of PO3^− and are found in many compounds.
Biochemistry of phosphates
In biological systems, phosphorus can be found as free phosphate anions in solution (inorganic phosphate) or bound to organic molecules as various organophosphates. Inorganic phosphate is generally denoted Pi and at physiological (homeostatic) pH primarily consists of a mixture of HPO4^2− and H2PO4^− ions.
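These equilibria can be sketched numerically. The following illustration (function name is ours; it assumes the approximate pKa values quoted above) computes the fraction of each species at a given pH from the successive dissociation constants:

    def phosphate_fractions(pH, pKas=(2.15, 7.20, 12.37)):
        """Equilibrium fractions of H3PO4, H2PO4-, HPO4 2- and PO4 3- at a given pH."""
        K1, K2, K3 = (10.0 ** -pK for pK in pKas)
        h = 10.0 ** -pH
        terms = (h**3, h**2 * K1, h * K1 * K2, K1 * K2 * K3)
        total = sum(terms)
        return tuple(t / total for t in terms)

    for pH in (1.0, 4.7, 7.0, 7.4, 9.8, 13.0):
        print(pH, [round(f, 3) for f in phosphate_fractions(pH)])
    # At pH 7.0 the output is roughly 60:40 H2PO4- to HPO4 2-, consistent with
    # the cytosol proportions discussed in the biochemistry section.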
At a neutral pH, as in the cytosol (pH = 7.0), the concentrations of orthophosphoric acid and its three anions stand in the proportions dictated by the pKa values: [H2PO4]− outnumbers H3PO4 by a factor of about 10^4.9, [PO4]3− is present only in traces, and [HPO4]2− : [H2PO4]− ≈ 0.6. Thus, only [H2PO4]− and [HPO4]2− ions are present in significant amounts in the cytosol (62% [H2PO4]−, 38% [HPO4]2−). In extracellular fluid (pH = 7.4), this proportion is inverted (61% [HPO4]2−, 39% [H2PO4]−). Inorganic phosphate can also be present as pyrophosphate anions [P2O7]4−, which give orthophosphate by hydrolysis: [P2O7]4− + H2O ⇌ 2 [HPO4]2−. Organic phosphates are commonly found in the form of esters as nucleotides (e.g. AMP, ADP, and ATP) and in DNA and RNA. Free orthophosphate anions can be released by the hydrolysis of the phosphoanhydride bonds in ATP or ADP. These phosphorylation and dephosphorylation reactions are the immediate storage and source of energy for many metabolic processes. ATP and ADP are often referred to as high-energy phosphates, as are the phosphagens in muscle tissue. Similar reactions exist for the other nucleoside diphosphates and triphosphates. Bones and teeth An important occurrence of phosphates in biological systems is as the structural material of bone and teeth. These structures are made of crystalline calcium phosphate in the form of hydroxyapatite. The hard dense enamel of mammalian teeth may contain fluoroapatite, a hydroxy calcium phosphate where some of the hydroxyl groups have been replaced by fluoride ions. Medical and biological research uses Phosphates are medicinal salts of phosphorus. Some phosphates are used to make urine more acidic, which helps treat many urinary tract infections. Some phosphates are also used to prevent the formation of calcium stones in the urinary tract. For patients who are unable to get enough phosphorus in their daily diet, usually because of certain disorders or diseases, phosphates are used as dietary supplements. Injectable phosphates can only be handled by qualified health care providers. Plant metabolism Plants take up phosphorus through several pathways: the arbuscular mycorrhizal pathway and the direct uptake pathway. Adverse health effects Hyperphosphatemia, or a high blood level of phosphates, is associated with elevated mortality in the general population. The most common cause of hyperphosphatemia in people, dogs, and cats is kidney failure. In cases of hyperphosphatemia, limiting consumption of phosphate-rich foods, such as some meats and dairy items, and of foods with a high phosphate-to-protein ratio, such as soft drinks, fast food, processed foods, condiments, and other products containing phosphate-salt additives, is advised. Phosphates induce vascular calcification, and a high concentration of phosphates in blood was found to be a predictor of cardiovascular events. Production Geological occurrence Phosphates are the naturally occurring form of the element phosphorus, found in many phosphate minerals. In mineralogy and geology, phosphate refers to a rock or ore containing phosphate ions. Inorganic phosphates are mined to obtain phosphorus for use in agriculture and industry. The largest global producer and exporter of phosphates is Morocco. Within North America, the largest deposits lie in the Bone Valley region of central Florida, the Soda Springs region of southeastern Idaho, and the coast of North Carolina. Smaller deposits are located in Montana, Tennessee, Georgia, and South Carolina. The small island nation of Nauru and its neighbor Banaba Island, which once had massive phosphate deposits of the highest quality, have been mined excessively.
Rock phosphate can also be found in Egypt, Israel, Palestine, Western Sahara, Navassa Island, Tunisia, Togo, and Jordan, countries that have large phosphate-mining industries. Phosphorite mines are primarily found in North America (the United States, especially Florida, with lesser deposits in North Carolina, Idaho, and Tennessee); Africa (Morocco, Algeria, Egypt, Niger, Senegal, Togo, Tunisia, and Mauritania); the Middle East (Saudi Arabia, Jordan, Israel, Syria, Iran, and Iraq, at the town of Akashat near the Jordanian border); Central Asia (Kazakhstan); and Oceania (Australia, Makatea, Nauru, and Banaba Island). In 2007, at the current rate of consumption, the supply of phosphorus was estimated to run out in 345 years. However, some scientists thought that a "peak phosphorus" would occur in 30 years, and Dana Cordell from the Institute for Sustainable Futures said that at "current rates, reserves will be depleted in the next 50 to 100 years". Reserves refer to the amount assumed recoverable at current market prices. In 2012 the USGS estimated world reserves at 71 billion tons, while 0.19 billion tons were mined globally in 2011. Phosphorus comprises 0.1% by mass of the average rock (while, for perspective, its typical concentration in vegetation is 0.03% to 0.2%), and consequently there are quadrillions of tons of phosphorus in Earth's 3×10^19-ton crust, albeit at predominantly lower concentration than the deposits counted as reserves, which are inventoried and cheaper to extract. If it is assumed that the phosphate minerals in phosphate rock are mainly hydroxyapatite and fluoroapatite, then phosphate minerals contain roughly 18.5% phosphorus by weight, and if phosphate rock contains around 20% of these minerals, the average phosphate rock has roughly 3.7% phosphorus by weight. Some phosphate rock deposits, such as Mulberry in Florida, are notable for their inclusion of significant quantities of radioactive uranium isotopes. This is a concern because radioactivity can be released into surface waters from application of the resulting phosphate fertilizer. In December 2012, Cominco Resources announced an updated JORC-compliant resource of their Hinda project in Congo-Brazzaville of 531 million tons, making it the largest measured and indicated phosphate deposit in the world. Around 2018, Norway discovered phosphate deposits almost equal to those in the rest of Earth combined. In July 2022 China announced quotas on phosphate exportation. The largest importers, in millions of metric tons of phosphate, are Brazil (3.2), India (2.9) and the USA (1.6). Mining The three principal phosphate producer countries (China, Morocco and the United States) account for about 70% of world production. Ecology In ecological terms, because of its important role in biological systems, phosphate is a highly sought after resource. Once used, it is often a limiting nutrient in environments, and its availability may govern the rate of growth of organisms. This is generally true of freshwater environments, whereas nitrogen is more often the limiting nutrient in marine (seawater) environments. Addition of high levels of phosphate to environments and to micro-environments in which it is typically rare can have significant ecological consequences. For example, blooms in the populations of some organisms at the expense of others, and the collapse of populations deprived of resources such as oxygen (see eutrophication) can occur.
In the context of pollution, phosphates are one component of total dissolved solids, a major indicator of water quality, but not all phosphorus is in a molecular form that algae can break down and consume. Calcium hydroxyapatite and calcite precipitates can be found around bacteria in alluvial topsoil; because clay minerals promote biomineralization, the combined presence of bacteria and clay minerals favors the formation of such precipitates. Phosphate deposits can contain significant amounts of naturally occurring heavy metals. Mining operations processing phosphate rock can leave tailings piles containing elevated levels of cadmium, lead, nickel, copper, chromium, and uranium. Unless carefully managed, these waste products can leach heavy metals into groundwater or nearby estuaries. Uptake of these substances by plants and marine life can lead to concentration of toxic heavy metals in food products.
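As a rough check of the phosphorus-content arithmetic quoted earlier (about 18.5% phosphorus by weight in the apatite minerals, hence about 3.7% in rock that is 20% apatite), here is a short Python sketch; it assumes the mineral is fluoroapatite, Ca5(PO4)3F, and uses standard atomic weights.

```python
# Back-of-the-envelope check of the phosphorus-content figures quoted above,
# assuming the rock's phosphate mineral is fluoroapatite, Ca5(PO4)3F.

ATOMIC_WEIGHT = {"Ca": 40.078, "P": 30.974, "O": 15.999, "F": 18.998}  # g/mol

# Ca5(PO4)3F contains 5 Ca, 3 P, 12 O, and 1 F per formula unit.
mineral_mass = (5 * ATOMIC_WEIGHT["Ca"] + 3 * ATOMIC_WEIGHT["P"]
                + 12 * ATOMIC_WEIGHT["O"] + 1 * ATOMIC_WEIGHT["F"])
p_fraction_mineral = 3 * ATOMIC_WEIGHT["P"] / mineral_mass
print(f"P in fluoroapatite: {p_fraction_mineral:.1%}")    # ~18.4%

mineral_in_rock = 0.20   # assume the rock is ~20% apatite by mass
print(f"P in average rock: {mineral_in_rock * p_fraction_mineral:.1%}")  # ~3.7%
```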
Physical sciences
Phosphoric oxyanions
Chemistry
23692
https://en.wikipedia.org/wiki/Prime%20number%20theorem
Prime number theorem
In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function). The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function (the number of primes less than or equal to N) and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1/log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N). Statement Let π(x) be the prime-counting function defined to be the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x/log x is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the quotient of the two functions π(x) and x/log x as x increases without bound is 1: lim (x → ∞) π(x)/(x/log x) = 1, known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as π(x) ~ x/log x. This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x increases without bound. Instead, the theorem states that x/log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound. The prime number theorem is equivalent to the statement that the nth prime number p_n satisfies p_n ~ n log(n), the asymptotic notation meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, the 2×10^17th prime number is 8512677386048191063, and (2×10^17)log(2×10^17) ≈ 7.967×10^18, a relative error of about 6.4%. On the other hand, the following asymptotic relations are logically equivalent: lim (x → ∞) π(x) log(x)/x = 1 and lim (x → ∞) π(x) log(π(x))/x = 1. As outlined below, the prime number theorem is also equivalent to ϑ(x) ~ x and ψ(x) ~ x, where ϑ and ψ are the first and the second Chebyshev functions respectively, and to M(x) = o(x), where M(x) = Σ (n ≤ x) μ(n) is the Mertens function. History of the proof of the asymptotic law of prime numbers Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x/log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
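A quick numerical illustration of the statement above: the following Python sketch (names are illustrative) sieves for π(x) and compares it with x/log x. The ratio tends to 1, but slowly, which is why the error terms discussed later matter.

```python
# Compare the prime-counting function pi(x) with x/log(x).
from math import log

def prime_pi(n: int) -> int:
    """pi(n): the number of primes <= n, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for k in range(1, 8):
    x = 10 ** k
    pi_x = prime_pi(x)
    approx = x / log(x)
    print(f"x = 10^{k}: pi(x) = {pi_x:>8}  x/log x = {approx:>11.1f}"
          f"  ratio = {pi_x / approx:.4f}")
```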
In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function ζ(s), for real values of the argument "s", as in works of Leonhard Euler, as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as x goes to infinity of π(x)/(x/log x) exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded below and above by 0.92129 and 1.10555, for all sufficiently large x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2. An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, chiefly that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending Riemann's ideas, two proofs of the asymptotic law of the distribution of prime numbers were found independently by Jacques Hadamard and Charles Jean de la Vallée Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is nonzero for all complex values of the variable s that have the form s = 1 + it with t > 0. During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). Hadamard's and de la Vallée Poussin's original proofs are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by the American mathematician Donald J. Newman. Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis. Proof sketch Here is a sketch of the proof referred to in one of Terence Tao's lectures. Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by ψ(x) = Σ (p^k ≤ x) log p, where the sum runs over all prime powers p^k not exceeding x. This is sometimes written as ψ(x) = Σ (n ≤ x) Λ(n), where Λ(n) is the von Mangoldt function, namely Λ(n) = log p if n = p^k for some prime p and some integer k ≥ 1, and Λ(n) = 0 otherwise. It is now relatively easy to check that the PNT is equivalent to the claim that lim (x → ∞) ψ(x)/x = 1. Indeed, this follows from the easy estimates ψ(x) = Σ (p ≤ x) ⌊log x/log p⌋ log p ≤ Σ (p ≤ x) log x = π(x) log x and (using big O notation), for any ε > 0, ψ(x) ≥ Σ (x^(1−ε) ≤ p ≤ x) log p ≥ Σ (x^(1−ε) ≤ p ≤ x) (1 − ε) log x = (1 − ε)(π(x) + O(x^(1−ε))) log x.
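The reformulation just described is easy to test numerically. This Python sketch computes ψ(x) directly from the von Mangoldt weights (log p for every prime power p^k ≤ x) and shows ψ(x)/x approaching 1; the helper name is illustrative.

```python
# The second Chebyshev function psi(x): sum of log p over prime powers
# p^k <= x, illustrating the equivalent form psi(x)/x -> 1 of the PNT.
from math import log

def psi(x: int) -> float:
    """Chebyshev's psi(x), accumulated prime by prime."""
    total = 0.0
    is_prime = bytearray([1]) * (x + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, x + 1):
        if is_prime[p]:
            for multiple in range(p * p, x + 1, p):
                is_prime[multiple] = 0
            q = p
            while q <= x:          # every power p, p^2, ... <= x adds log p
                total += log(p)
                q *= p
    return total

for k in range(2, 7):
    x = 10 ** k
    print(f"x = 10^{k}: psi(x)/x = {psi(x) / x:.4f}")
```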
The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function. It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation −ζ′(s)/ζ(s) = Σ (n ≥ 1) Λ(n) n^(−s) for Re(s) > 1. A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x > 1 the equation ψ(x) = x − Σ_ρ x^ρ/ρ − log(2π) holds, where the sum is over all zeros (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms. The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: Σ (n ≥ 1) x^(−2n)/(2n) = −(1/2) log(1 − 1/x²), which vanishes for large x. The nontrivial zeros, namely those on the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1. Non-vanishing on Re(s) = 1 To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula ζ(s) = Π_p (1 − p^(−s))^(−1) for Re(s) > 1. This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and log ζ(s) = −Σ_p log(1 − p^(−s)) = Σ (p, n) p^(−ns)/n. Write s = x + iy; then log|ζ(x + iy)| = Σ (p, n) cos(n y log p)/(n p^(nx)). Now observe the identity 3 + 4 cos φ + cos 2φ = 2(1 + cos φ)² ≥ 0, so that 3 log|ζ(x)| + 4 log|ζ(x + iy)| + log|ζ(x + 2iy)| ≥ 0, i.e. ζ(x)³ |ζ(x + iy)|⁴ |ζ(x + 2iy)| ≥ 1 for all x > 1. Suppose now that ζ(1 + iy) = 0. Certainly y is not zero, since ζ(s) has a simple pole at s = 1. Suppose that x > 1 and let x tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(x + 2iy) stays analytic, the left hand side in the previous inequality tends to 0, a contradiction. Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for ψ(x) does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D.J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove. Newman's proof of the prime number theorem D. J. Newman gives a quick proof of the prime number theorem (PNT). The proof is "non-elementary" by virtue of relying on complex analysis, but uses only elementary techniques from a first course in the subject: Cauchy's integral formula, Cauchy's integral theorem and estimates of complex integrals. Here is a brief sketch of this proof. See the references for the complete details. The proof uses the same preliminaries as in the previous section except instead of the function ψ, the Chebyshev function ϑ(x) = Σ (p ≤ x) log p is used, which is obtained by dropping some of the terms from the series for ψ. Similar to the argument in the previous proof based on Tao's lecture, we can show that ϑ(x) ≤ π(x) log x, and ϑ(x) ≥ (1 − ε)(π(x) + O(x^(1−ε))) log x for any 0 < ε < 1. Thus, the PNT is equivalent to lim (x → ∞) ϑ(x)/x = 1. Likewise instead of −ζ′(s)/ζ(s) the function Φ(s) = Σ_p (log p) p^(−s) is used, which is obtained by dropping some terms in the series for −ζ′(s)/ζ(s). The functions Φ(s) and −ζ′(s)/ζ(s) differ by a function holomorphic on the line Re(s) = 1. Since, as was shown in the previous section, ζ(s) has no zeroes on the line Re(s) = 1, Φ(s) − 1/(s − 1) has no singularities on Re(s) = 1.
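The key inequality ζ(x)³|ζ(x + iy)|⁴|ζ(x + 2iy)| ≥ 1 can be sampled numerically with mpmath's zeta function. The sketch below takes y near the height of the first nontrivial zero and lets x approach 1 from above; the parameter choices are illustrative only.

```python
# Numerical sampling of the inequality used in the non-vanishing argument:
# zeta(x)^3 * |zeta(x+iy)|^4 * |zeta(x+2iy)| >= 1 for x > 1.
from mpmath import mp, zeta, mpc

mp.dps = 30
y = 14.134725  # near the imaginary part of the first nontrivial zero

for x in (2.0, 1.5, 1.1, 1.01, 1.001):
    val = (zeta(x) ** 3
           * abs(zeta(mpc(x, y))) ** 4
           * abs(zeta(mpc(x, 2 * y))))
    print(f"x = {x:6.3f}: product = {float(val):.6g}  (always >= 1)")
```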
One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that ϑ(x)/x is bounded. This is proved using an ingenious and easy method due to Chebyshev. Integration by parts shows how ϑ(x) and Φ(s) are related: for Re(s) > 1, Φ(s) = ∫ (1 to ∞) x^(−s) dϑ(x) = s ∫ (1 to ∞) ϑ(x) x^(−s−1) dx. Newman's method proves the PNT by showing the integral I = ∫ (1 to ∞) (ϑ(x) − x)/x² dx converges, and therefore the integrand goes to zero as x → ∞, which is the PNT. In general, the convergence of the improper integral does not imply that the integrand goes to zero at infinity, since it may oscillate, but since ϑ is increasing, it is easy to show in this case. To show the convergence of I, for Re(z) > 0 let g_T(z) = ∫ (0 to T) f(t) e^(−zt) dt and g(z) = ∫ (0 to ∞) f(t) e^(−zt) dt, where f(t) = ϑ(e^t) e^(−t) − 1; then g(z) = Φ(z + 1)/(z + 1) − 1/z, which is equal to a function holomorphic on the line Re(z) = 0. The convergence of the integral I, and thus the PNT, is proved by showing that lim (T → ∞) g_T(0) = g(0). This involves a change of order of limits since it can be written lim (T → ∞) lim (z → 0) g_T(z) = lim (z → 0) lim (T → ∞) g_T(z), and is therefore classified as a Tauberian theorem. The difference g(0) − g_T(0) is expressed using Cauchy's integral formula and then shown to be small for large T by estimating the integrand. Fix R > 0 and δ > 0 such that g(z) is holomorphic in the region where |z| ≤ R and Re(z) ≥ −δ, and let C be the boundary of this region. Since 0 is in the interior of the region, Cauchy's integral formula gives g(0) − g_T(0) = (1/(2πi)) ∮_C (g(z) − g_T(z)) F(z) dz/z, where F(z) = e^(zT)(1 + z²/R²) is the factor introduced by Newman, which does not change the integral since F is entire and F(0) = 1. To estimate the integral, break the contour C into two parts, C = C₊ + C₋, where C₊ = C ∩ {z : Re(z) > 0} and C₋ = C ∩ {z : Re(z) < 0}, so that the difference is expressed by three integrals: g − g_T over C₊, and g and g_T separately over C₋. Since ϑ(x)/x, and hence f(t), is bounded, let B be an upper bound for the absolute value of f(t). This bound together with the estimate for |F(z)| on |z| = R gives that the first integral in absolute value is at most B/R. The integrand over C₋ in the second integral (the g_T part) is entire, so by Cauchy's integral theorem, the contour can be modified to a semicircle of radius R in the left half-plane without changing the integral, and the same argument as for the first integral gives the absolute value of the second integral is at most B/R. Finally, letting T → ∞, the third integral goes to zero since e^(zT), and hence F(z), goes to zero on the contour. Combining the two estimates and the limit we get lim sup (T → ∞) |g(0) − g_T(0)| ≤ 2B/R. This holds for any R, so lim (T → ∞) g_T(0) = g(0), and the PNT follows. Prime-counting function in terms of the logarithmic integral In a handwritten note on a reprint of his 1838 paper "Sur l'usage des séries infinies dans la théorie des nombres", which he mailed to Gauss, Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to π(x) is given by the offset logarithmic integral function Li(x), defined by Li(x) = ∫ (2 to x) dt/log t = li(x) − li(2). Indeed, this integral is strongly suggestive of the notion that the "density" of primes around t should be 1/log t. This function is related to the logarithm by the asymptotic expansion li(x) ~ (x/log x) Σ (k ≥ 0) k!/(log x)^k = x/log x + x/(log x)² + 2x/(log x)³ + ⋯. So, the prime number theorem can also be written as π(x) ~ Li(x). In fact, in another paper in 1899 de la Vallée Poussin proved that π(x) = Li(x) + O(x e^(−a√(log x))) as x → ∞, for some positive constant a, where O(...) is the big O notation. This has been improved to π(x) = li(x) + O(x exp(−c (log x)^(3/5)/(log log x)^(1/5))) for some explicitly computable positive constant c. In 2016, Trudgian proved an explicit upper bound for the difference between π(x) and li(x): |π(x) − li(x)| ≤ 0.2795 (x/(log x)^(3/4)) exp(−√(log x/6.455)) for x ≥ 229. The connection between the Riemann zeta function and π(x) is one reason the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901 that if the Riemann hypothesis is true, the error term in the above relation can be improved to O(√x log x) (this last estimate is in fact equivalent to the Riemann hypothesis).
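To see how much better li(x) does than x/log x, the following sketch compares both against a few known values of π(x) (hard-coded below), evaluating li via mpmath; the differences match the table further down.

```python
# Compare pi(x) with x/log x and with the logarithmic integral li(x).
from math import log
from mpmath import li   # li(x): logarithmic integral, integral of dt/log t

PI_X = {10**4: 1229, 10**6: 78498, 10**8: 5761455}  # known values of pi(x)

for x, pi_x in PI_X.items():
    print(f"x = 10^{len(str(x)) - 1}:"
          f"  pi(x) = {pi_x}"
          f"  x/log x off by {pi_x - x / log(x):+.0f}"
          f"  li(x) off by {pi_x - float(li(x)):+.1f}")
```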
The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld, assuming the Riemann hypothesis: |π(x) − li(x)| < (√x log x)/(8π) for all x ≥ 2657. He also derived a similar bound for the Chebyshev prime-counting function ψ: |ψ(x) − x| < (√x (log x)²)/(8π) for all x ≥ 73.2. This latter bound has been shown to express a variance to mean power law (when regarded as a random function over the integers) and 1/f noise and to also correspond to the Tweedie compound Poisson distribution. (The Tweedie distributions represent a family of scale invariant distributions that serve as foci of convergence for a generalization of the central limit theorem.) A lower bound is also derived by J. E. Littlewood, assuming the Riemann hypothesis: |π(x) − li(x)| exceeds a constant multiple of (√x/log x) log log log x for infinitely many x. The logarithmic integral li(x) is larger than π(x) for "small" values of x. This is because it is (in some sense) counting not primes, but prime powers, where a power p^n of a prime p is counted as 1/n of a prime. This suggests that li(x) should usually be larger than π(x) by roughly (1/2) li(√x), and in particular should always be larger than π(x). However, in 1914, Littlewood proved that π(x) − li(x) changes sign infinitely often. The first value of x where π(x) exceeds li(x) is probably around x ≈ 10^316; see the article on Skewes' number for more details. (On the other hand, the offset logarithmic integral Li(x) is smaller than π(x) already for x = 2; indeed, Li(2) = 0, while π(2) = 1.) Elementary proofs In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis. This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though Wiener's proof ultimately relies on properties of the Riemann zeta function on the line Re(s) = 1, where complex analysis must be used. In March 1948, Atle Selberg established, by "elementary" means, the asymptotic formula ϑ(x) log x + Σ (p ≤ x) (log p) ϑ(x/p) = 2x log x + O(x), where ϑ(x) = Σ (p ≤ x) log p for primes p. By July of that year, Selberg and Paul Erdős had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point. These proofs effectively laid to rest the notion that the PNT was "deep" in that sense, and showed that technically "elementary" methods were more powerful than had been believed to be the case. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see an article by Dorian Goldfeld. There is some debate about the significance of Erdős and Selberg's result. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory, so it is not clear exactly in what sense their proof is "elementary". Although it does not use complex analysis, it is in fact much more technical than the standard proof of PNT. One possible definition of an "elementary" proof is "one that can be carried out in first-order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second-order but not first-order methods, but such theorems are rare to date. Erdős and Selberg's proof can certainly be formalized in Peano arithmetic, and in 1994, Charalambos Cornaros and Costas Dimitracopoulos proved that their proof can be formalized in a very weak fragment of PA, namely IΔ0 + exp. However, this does not address the question of whether or not the standard proof of PNT can be formalized in PA.
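Selberg's symmetry formula above can be checked numerically. The sketch below evaluates ϑ(x) log x + Σ (p ≤ x) (log p) ϑ(x/p) at x = 10^5 and compares it with 2x log x; the error divided by x stays bounded, consistent with the O(x) term. Helper names are illustrative.

```python
# Numerical check of Selberg's symmetry formula:
#   theta(x) log x + sum_{p <= x} log p * theta(x/p) = 2 x log x + O(x),
# where theta(y) is the sum of log p over primes p <= y.
from bisect import bisect_right
from math import log

def primes_up_to(n: int):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

x = 10 ** 5
primes = primes_up_to(x)
logs = [log(p) for p in primes]
prefix = [0.0]
for lp in logs:
    prefix.append(prefix[-1] + lp)        # prefix[k] = sum of first k log p

def theta(y: float) -> float:
    return prefix[bisect_right(primes, y)]

lhs = theta(x) * log(x) + sum(lp * theta(x / p) for p, lp in zip(primes, logs))
print(f"LHS / (2 x log x) = {lhs / (2 * x * log(x)):.4f}")
print(f"error term (LHS - 2 x log x)/x = {(lhs - 2 * x * log(x)) / x:+.3f}")
```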
A more recent "elementary" proof of the prime number theorem uses ergodic theory, due to Florian Richter. The prime number theorem is obtained there in an equivalent form: that the Cesàro sum of the values of the Liouville function is zero. The Liouville function is λ(n) = (−1)^Ω(n), where Ω(n) is the number of prime factors, with multiplicity, of the integer n. Bergelson and Richter (2022) then obtain this form of the prime number theorem from an ergodic theorem which they prove: Let X be a compact metric space, T a continuous self-map of X, and μ a T-invariant Borel probability measure for which (X, μ, T) is uniquely ergodic. Then, for every x in X and every continuous function f on X, the averages (1/π(N)) Σ (primes p ≤ N) f(T^p x) converge to ∫_X f dμ as N → ∞. This ergodic theorem can also be used to give "soft" proofs of results related to the prime number theorem, such as the Pillai–Selberg theorem and Erdős–Delange theorem. Computer verifications In 2005, Avigad et al. employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT. This was the first machine-verified proof of the PNT. Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of. In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis. By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize "a direct, modern and elegant proof instead of the more involved 'elementary' Erdős–Selberg argument". Prime number theorem for arithmetic progressions Let π_{d,a}(x) denote the number of primes in the arithmetic progression a, a + d, a + 2d, a + 3d, ... that are less than x. Dirichlet and Legendre conjectured, and de la Vallée Poussin proved, that if a and d are coprime, then π_{d,a}(x) ~ Li(x)/φ(d), where φ is Euler's totient function. In other words, the primes are distributed evenly among the residue classes [a] modulo d with gcd(a, d) = 1. This is stronger than Dirichlet's theorem on arithmetic progressions (which only states that there is an infinity of primes in each class) and can be proved using similar methods used by Newman for his proof of the prime number theorem. The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes. Bennett et al. proved the following estimate that has explicit constants (Theorem 1.3): Let d ≥ 3 be an integer and let a be an integer that is coprime to d. Then there are positive constants c_φ and x_φ such that |π_{d,a}(x) − Li(x)/φ(d)| < c_φ x/(log x)² for all x ≥ x_φ, where c_φ and x_φ are explicitly given functions of d. Prime number race Although we have in particular π_{4,1}(x) ~ π_{4,3}(x), empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at x = 26,861. However Littlewood showed in 1914 that there are infinitely many sign changes for the function π_{4,1}(x) − π_{4,3}(x), so the lead in the race switches back and forth infinitely many times. The phenomenon that π_{4,3}(x) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π_{c,a}(x) and π_{c,b}(x) change places when a and b are coprime to c. Granville and Martin give a thorough exposition and survey. Another example is the distribution of the last digit of prime numbers. Except for 2 and 5, all prime numbers end in 1, 3, 7, or 9. Dirichlet's theorem states that asymptotically, 25% of all primes end in each of these four digits. However, empirical evidence shows that, for a given limit, there tend to be slightly more primes that end in 3 or 7 than end in 1 or 9 (a generalization of Chebyshev's bias). This follows from the fact that 1 and 9 are quadratic residues modulo 10, while 3 and 7 are quadratic nonresidues modulo 10.
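The mod-4 race described above is easy to reproduce. This sketch counts primes in the residue classes 1 and 3 modulo 4 and reports the first prime at which the class-1 count takes the lead, which should be 26,861 as stated.

```python
# The "prime number race" modulo 4: primes = 1 (mod 4) vs primes = 3 (mod 4).
def primes_up_to(n: int):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

count_1, count_3 = 0, 0
first_reversal = None
for p in primes_up_to(30000):
    if p % 4 == 1:
        count_1 += 1
    elif p % 4 == 3:
        count_3 += 1
    if first_reversal is None and count_1 > count_3:
        first_reversal = p     # first point where class 1 takes the lead

print(f"up to 30000: {count_1} primes = 1 (mod 4), {count_3} primes = 3 (mod 4)")
print(f"first prime where class 1 (mod 4) leads: {first_reversal}")
```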
Non-asymptotic bounds on the prime-counting function The prime number theorem is an asymptotic result. It gives an ineffective bound on π(x) as a direct consequence of the definition of the limit: for all ε > 0, there is an S such that for all x > S, (1 − ε) x/log x < π(x) < (1 + ε) x/log x. However, better bounds on π(x) are known, for instance Pierre Dusart's x/log x (1 + 1/log x) < π(x) < x/log x (1 + 1/log x + 2.51/(log x)²). The first inequality holds for all x ≥ 599 and the second one for x ≥ 355991. The proof by de la Vallée Poussin implies the following bound: For every ε > 0, there is an S such that for all x > S, x/(log x − (1 − ε)) < π(x) < x/(log x − (1 + ε)). The value ε = 3 gives a weak but sometimes useful bound for x ≥ 55: x/(log x + 2) < π(x) < x/(log x − 4). In Pierre Dusart's thesis there are stronger versions of this type of inequality that are valid for larger x. Later in 2010, Dusart proved x/(log x − 1) < π(x) for x ≥ 5393, and π(x) < x/(log x − 1.1) for x ≥ 60184. Note that the first of these supersedes the ε-form of the condition on the lower bound. Approximations for the nth prime number As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by p_n: p_n ~ n log n. A better approximation is by Cesàro (1894): p_n = n(log n + log log n − 1 + (log log n − 2)/log n − ((log log n)² − 6 log log n + 11)/(2 (log n)²) + o(1/(log n)²)). Again considering the 2×10^17th prime number 8512677386048191063, assuming the trailing error term is zero gives an estimate whose first 5 digits match the true value; the relative error is about 0.46 parts per million. Cipolla (1902) showed that these are the leading terms of an infinite series which may be truncated at arbitrary degree, with correction terms of the form P_k(log log n)/(log n)^k, where each P_k is a degree-k monic polynomial (up to sign and a constant factor; P_1(x) = x − 2, P_2(x) = x² − 6x + 11, and so on). Rosser's theorem states that p_n > n log n. Dusart (1999) found tighter bounds using the form of the Cesàro/Cipolla approximations but varying the lowest-order constant term, that is, using n(log n + log log n + w) with the constant −1 replaced by a parameter w: for example, n(log n + log log n − 1) < p_n for n ≥ 2, while p_n < n(log n + log log n − 0.9484) for n ≥ 39017. The upper bounds can be extended to smaller n by loosening the parameter; for example, p_n < n(log n + log log n) for all n ≥ 6. Axler (2019) extended this to higher order, establishing analogous bounds that include the (log log n − 2)/log n term; again, the bound on n may be decreased by loosening the parameter. Table of π(x), x / log x, and li(x) The table compares exact values of π(x) to the two approximations x/log x and li(x). The approximation difference columns are rounded to the nearest integer, but the "% error" columns are computed based on the unrounded approximations. The last column, x/π(x), is the average prime gap below x.
{| class="wikitable col1left" style="text-align: right" !rowspan=2 scope=col| !rowspan=2 scope=col| !rowspan=2 scope=col| !rowspan=2 scope=col| !colspan=2 scope=colgroup| % error !rowspan=2 scope=col| |- !scope=col| !scope=col| |- | 10 | 4 | 0 | 2 |8.22% |42.606% | 2.500 |- | 102 | 25 | 3 | 5 |14.06% |18.597% | 4.000 |- | 103 | 168 | 23 | 10 |14.85% |5.561% | 5.952 |- | 104 | 1,229 | 143 | 17 |12.37% |1.384% | 8.137 |- | 105 | 9,592 | 906 | 38 |9.91% |0.393% | 10.425 |- | 106 | 78,498 | 6,116 | 130 |8.11% |0.164% | 12.739 |- | 107 | 664,579 | 44,158 | 339 |6.87% |0.051% | 15.047 |- | 108 | 5,761,455 | 332,774 | 754 |5.94% |0.013% | 17.357 |- | 109 | 50,847,534 | 2,592,592 | 1,701 |5.23% |3.34 % | 19.667 |- | 1010 | 455,052,511 | 20,758,029 | 3,104 |4.66% |6.82 % | 21.975 |- | 1011 | 4,118,054,813 | 169,923,159 | 11,588 |4.21% |2.81 % | 24.283 |- | 1012 | 37,607,912,018 | 1,416,705,193 | 38,263 |3.83% |1.02 % | 26.590 |- | 1013 | 346,065,536,839 | 11,992,858,452 | 108,971 |3.52% |3.14 % | 28.896 |- | 1014 | | 102,838,308,636 | 314,890 |3.26% |9.82 % | 31.202 |- | 1015 | | 891,604,962,452 | 1,052,619 |3.03% |3.52 % | 33.507 |- | 1016 | | | 3,214,632 |2.83% |1.15 % | 35.812 |- | 1017 | | | 7,956,589 |2.66% |3.03 % | 38.116 |- | 1018 | | | 21,949,555 |2.51% |8.87 % | 40.420 |- | 1019 | | | 99,877,775 |2.36% |4.26 % | 42.725 |- | 1020 | | | 222,744,644 |2.24% |1.01 % | 45.028 |- | 1021 | | | 597,394,254 |2.13% |2.82 % | 47.332 |- | 1022 | | | 1,932,355,208 |2.03% |9.59 % | 49.636 |- | 1023 | | | 7,250,186,216 |1.94% |3.76 % | 51.939 |- | 1024 | | | 17,146,907,278 |1.86% |9.31 % | 54.243 |- | 1025 | | | 55,160,980,939 |1.78% |3.21 % | 56.546 |- | 1026 | | | 155,891,678,121 |1.71% |9.17 % | 58.850 |- | 1027 | | | 508,666,658,006 |1.64% |3.11 % | 61.153 |- | 1028 | | | |1.58% |9.05 % | 63.456 |- | 1029 | | | |1.53% |2.99 % | 65.759 |} The value for was originally computed assuming the Riemann hypothesis; it has since been verified unconditionally. Analogue for irreducible polynomials over a finite field There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem. To state it precisely, let be the finite field with elements, for some fixed , and let be the number of monic irreducible polynomials over whose degree is equal to . That is, we are looking at polynomials with coefficients chosen from , which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that If we make the substitution , then the right hand side is just which makes the analogy clearer. Since there are precisely monic polynomials of degree (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree is selected randomly, then the probability of it being irreducible is about . One can even prove an analogue of the Riemann hypothesis, namely that The proofs of these statements are far simpler than in the classical case. It involves a short, combinatorial argument, summarised as follows: every element of the degree extension of is a root of some irreducible polynomial whose degree divides ; by counting these roots in two different ways one establishes that where the sum is over all divisors of . Möbius inversion then yields where is the Möbius function. 
(This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2.
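The counting argument just sketched, N_n = (1/n) Σ (d | n) μ(n/d) q^d, is simple to implement. The Python sketch below (function names are illustrative) tabulates N_n over GF(2) and compares with the analogue of the prime number theorem, q^n/n.

```python
# Count monic irreducible polynomials of degree n over GF(q) via Moebius
# inversion: N_n = (1/n) * sum over d | n of mu(n/d) * q^d.
def mobius(m: int) -> int:
    """Moebius function mu(m), by trial factorization."""
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:        # squared prime factor => mu = 0
                return 0
            result = -result
        d += 1
    return -result if m > 1 else result

def irreducible_count(q: int, n: int) -> int:
    total = sum(mobius(n // d) * q ** d for d in range(1, n + 1) if n % d == 0)
    return total // n

q = 2
for n in range(1, 11):
    print(f"n = {n:2d}: N_n = {irreducible_count(q, n):4d}   q^n/n = {q**n / n:8.1f}")
```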
Mathematics
Other
null
23703
https://en.wikipedia.org/wiki/Potential%20energy
Potential energy
In physics, potential energy is the energy held by an object because of its position relative to other objects, stresses within itself, its electric charge, or other factors. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality. Common types of potential energy include the gravitational potential energy of an object, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge in an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J). Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, and is in turn called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function. Overview There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration. Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = −ΔU, where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep. Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field as potential energy. If the external force is removed, the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing the body to fall. Consider a ball of mass m dropped from a height h. The acceleration of free fall is approximately constant, so the ball's weight force mg is constant.
The product of the force (the weight mg) and the vertical displacement h gives the work done, which is equal to the gravitational potential energy, thus U = mgh. The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position. History From around 1840 scientists sought to define and understand energy and work. The term "potential energy" was coined by William Rankine, a Scottish engineer and physicist, in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential" going back to work by Aristotle. In his 1867 discussion of the same topic Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of (1/2)mv². Once this hypothesis became widely accepted, the term "actual energy" gradually faded. Work and potential energy Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points x_A and x_B to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫_C F · dx = U(x_A) − U(x_B), where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B. The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces. Derivable from a potential In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that F = ∇U′(x). This means that the units of U′ must be joules. In this case, work along the curve is given by W = ∫_C F · dx = ∫_C ∇U′ · dx, which can be evaluated using the gradient theorem to obtain W = U′(x_B) − U′(x_A). This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path. Potential energy is traditionally defined as the negative of this scalar field, U(x) = −U′(x), so that work by the force field decreases potential energy, that is W = U(x_A) − U(x_B). In this case, the application of the del operator to the work function yields ∇W = ∇U′ = −∇U = F, and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field.
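Path independence, the defining property used above, can be illustrated numerically: integrate the same conservative force along two different curves with the same endpoints and observe equal work. A minimal sketch with NumPy, assuming constant near-Earth gravity and arbitrary illustrative parameter values:

```python
# Numerical illustration that the work of a conservative force is path
# independent: integrate F . dr along two different paths A -> B.
import numpy as np

m, g = 2.0, 9.8

def force(r):
    """Constant near-Earth gravity: F = (0, 0, -m g) = -grad(m g z)."""
    return np.array([0.0, 0.0, -m * g])

def work(path, n=20000):
    """Line integral of F . dr along path(t), t in [0, 1] (midpoint rule)."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    mids = 0.5 * (pts[:-1] + pts[1:])
    dr = np.diff(pts, axis=0)
    return float(sum(force(r) @ d for r, d in zip(mids, dr)))

# Two different curves with the same endpoints A = (0,0,0), B = (1,0,3).
straight = lambda t: np.array([t, 0.0, 3.0 * t])
wiggly = lambda t: np.array([t, np.sin(2 * np.pi * t), 3.0 * t ** 2])

print(f"straight path: W = {work(straight):.3f} J")   # both ~ -m g * 3 = -58.8 J
print(f"wiggly path:   W = {work(wiggly):.3f} J")
```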
Computing potential energy Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve γ(t) from γ(a) = A to γ(b) = B, and computing the work along it. For the force field F, let v = dγ/dt; then the gradient theorem yields W = ∫ (a to b) F · v dt = U(A) − U(B). The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is P(t) = −∇U · v = F · v. Examples of work that can be computed from potential functions are gravity and spring forces. Potential energy for near-Earth gravity For small height changes, gravitational potential energy can be computed using U = mgh, where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules. In classical physics, gravity exerts a constant downward force F = (0, 0, −mg) on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory r(t) = (x(t), y(t), z(t)), such as the track of a roller coaster, is calculated using its velocity, v = (v_x, v_y, v_z), to obtain W = ∫ F · v dt = −mg ∫ v_z dt = −mg(z(t₂) − z(t₁)), where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve r(t). Potential energy for a linear spring A horizontal spring exerts a force F = −kx that is proportional to its deformation in the axial or x direction. The work of this spring on a body moving along the space curve s(t) = (x(t), y(t), z(t)) is calculated using its velocity, v = (v_x, v_y, v_z), to obtain W = ∫ F · v dt = −k ∫ x v_x dt = −(1/2)kx². For convenience, consider contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, x v_x, is x²/2. The function U(x) = (1/2)kx² is called the potential energy of a linear spring. Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy. Potential energy for gravitational forces between two bodies The gravitational potential function, also known as gravitational potential energy, is U = −GMm/r. The negative sign follows the convention that work is gained from a loss of potential energy. Derivation The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation F = −(GMm/r²) r̂, where r̂ is a vector of length 1 pointing from M to m and G is the gravitational constant. Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t₁) to r(t₂) is given by W = −∫ (GMm/r²) r̂ · v dt = GMm (1/r(t₂) − 1/r(t₁)). The position and velocity of the mass m are given by r = r e_r and v = ṙ e_r + r θ̇ e_t, where e_r and e_t are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for the work of gravity to W = −∫ (GmM/r²) ṙ dt = GmM (1/r(t₂) − 1/r(t₁)). This calculation uses the fact that d/dt (1/r) = −ṙ/r².
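A small worked example of the two potential functions derived above, U = mgh and U = (1/2)kx², including a midpoint-rule check that the work done by the spring force F = −kx equals −ΔU (all parameter values are illustrative):

```python
# Potential energies from the formulas above: U = m g h for near-Earth
# gravity and U = (1/2) k x^2 for a linear spring, with a numerical check
# that the work done by the spring force F = -k x equals -(1/2) k x^2.

m, g, h = 1.2, 9.8, 5.0
print(f"gravity: U = m g h = {m * g * h:.2f} J")

k, x_end, n = 40.0, 0.3, 100000   # spring constant (N/m), final stretch (m)
dx = x_end / n
work_by_spring = sum(-k * (i + 0.5) * dx * dx for i in range(n))  # midpoint rule
print(f"spring: work by spring = {work_by_spring:.4f} J")
print(f"        -(1/2) k x^2   = {-0.5 * k * x_end**2:.4f} J")
```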
Potential energy for electrostatic forces between two bodies The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's Law F = (1/(4πε₀)) (Qq/r²) r̂, where r̂ is a vector of length 1 pointing from Q to q and ε₀ is the vacuum permittivity. The work W required to move q from A to any point B in the electrostatic force field is given by the potential function U(r) = (1/(4πε₀)) (Qq/r). Reference level The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience. Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions. Gravitational potential energy Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount. Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact. The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail. Local approximation The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant g (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the equation for work, W = Fd. The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied with the vertical distance it is moved (remember W = Fd).
The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is U = mgh, where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object. Hence, the potential difference is ΔU = mg Δh. General formula However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance r between the two bodies. Using that definition, the gravitational potential energy of a system of masses M and m at a distance r using the Newtonian constant of gravitation G is U = −GMm/r + K, where K is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that K = 0 (i.e. U = 0 in relation to a point at infinity) makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below. Given this formula for U, the total potential energy of a system of bodies is found by summing, for all pairs of two bodies, the potential energy of the system of those two bodies. Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level, we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. Negative gravitational energy As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: r = 0 and r = ∞. The choice of U = 0 at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative. The singularity at r = 0 in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with U = 0 for r = 0, would result in potential energy being positive, but infinitely large for all nonzero values of r, and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of U = 0 at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first.
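A short numerical sketch of the general formula with the K = 0 convention: U = −GMm/r is negative at the Earth's surface, and the energy needed to reach U = 0 at infinity is GMm/R, which leads to the familiar escape speed (constants rounded):

```python
# The general formula U = -G M m / r with the U = 0 at infinity convention:
# lifting a 1 kg mass from the Earth's surface to infinity costs G M m / R.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # mean radius of the Earth, m
m = 1.0              # test mass, kg

U_surface = -G * M * m / R
print(f"U at the surface: {U_surface / 1e6:.1f} MJ (negative, per convention)")
print(f"energy to escape: {-U_surface / 1e6:.1f} MJ")

v_escape = (2 * G * M / R) ** 0.5
print(f"escape speed: {v_escape / 1000:.1f} km/s")   # ~11.2 km/s
```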
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this. Uses Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction. Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window. Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls. Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES). Chemical potential energy Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat, same is the case with digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. Electric potential energy An object can have potential energy by virtue of its electric charge and several forces related to their presence. There are two main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy). 
Electrostatic potential energy Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by F = (1/(4πε₀)) (Qq/r²) r̂, where r̂ is a vector of length 1 pointing from Q to q and ε₀ is the vacuum permittivity. If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby. The work W required to move q from A to any point B in the electrostatic force field is given by U(r) = (1/(4πε₀)) (Qq/r), typically given in joules (J). A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge. Magnetic potential energy The energy of a magnetic moment m in an externally produced magnetic B-field B is U = −m · B. The magnetic potential energy of a magnetization M in a field is U = −(1/2) ∫ M · B dV, where the integral can be over all space or, equivalently, where M is nonzero. Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is in the opposite direction to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart. Nuclear potential energy Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay. Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space.
If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by \( \phi \) or \( V \), corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is \( U = -\frac{GMm}{r}. \) The gravitational potential (specific energy) of the two bodies is \( \psi = -\left(\frac{GM}{r} + \frac{Gm}{r}\right) = -\frac{G(M+m)}{r} = \frac{U}{\mu}, \) where \( \mu = \frac{Mm}{M+m} \) is the reduced mass. The work done against gravity by moving an infinitesimal mass from point A with \( U = a \) to point B with \( U = b \) is \( b - a \), and the work done going back the other way is \( a - b \), so that the total work done in moving from A to B and returning to A is \( W_{A \to B \to A} = (b - a) + (a - b) = 0. \) If the potential is redefined at A to be \( a + c \) and the potential at B to be \( b + c \), where c is a constant (i.e. c can be any number, positive or negative, but it must be the same at A as it is at B), then the work done going from A to B is \( W_{A \to B} = (b + c) - (a + c) = b - a \) as before. In practical terms, this means that one can set the zero of \( U \) and \( \phi \) anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section). A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.
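To make the gradient relationship concrete, here is a minimal numerical sketch (not from the article; the mass and radii are illustrative values). It checks two claims made above: the force is the negative gradient of the potential energy, and adding a constant c to U does not change the work done between two points.

```python
# Gravitational potential energy of a pair (M, m) a distance r apart,
# plus an arbitrary zero-point offset c.
G, M, m = 6.674e-11, 5.972e24, 1.0   # SI units; M is roughly Earth's mass

def U(r, c=0.0):
    return -G * M * m / r + c

def F_radial(r, h=1.0):
    """Radial force as the negative numerical gradient of U."""
    return -(U(r + h) - U(r - h)) / (2 * h)

r = 7.0e6  # metres from Earth's centre (an illustrative radius)
print(F_radial(r))        # ~ -G*M*m/r**2 (negative: attraction, pointing inward)
print(-G * M * m / r**2)  # analytic value for comparison

# Work A -> B is U(B) - U(A); the offset c cancels, as the text shows.
rA, rB = 7.0e6, 8.0e6
print(U(rB) - U(rA))
print(U(rB, c=123.0) - U(rA, c=123.0))  # identical
```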
Physical sciences
Classical mechanics
null
23704
https://en.wikipedia.org/wiki/Pyramid
Pyramid
A pyramid () is a structure whose visible surfaces are triangular in broad outline and converge toward the top, making the appearance roughly a pyramid in the geometric sense. The base of a pyramid can be of any polygon shape, such as triangular or quadrilateral, and its lines either filled or stepped. A pyramid has the majority of its mass closer to the ground, with less mass towards the pyramidion at the apex. This is due to the gradual decrease in the cross-sectional area along the vertical axis with increasing elevation (a short calculation at the end of this article quantifies this). This offers a weight distribution that allowed early civilizations to create monumental structures. Civilizations in many parts of the world have built pyramids. The largest pyramid by volume is the Mesoamerican Great Pyramid of Cholula, in the Mexican state of Puebla. For millennia, the largest structures on Earth were pyramids—first the Red Pyramid in the Dashur Necropolis and then the Great Pyramid of Khufu, both in Egypt—the latter is the only extant example of the Seven Wonders of the Ancient World. Ancient monuments West Asia Mesopotamia The Mesopotamians built the earliest pyramidal structures, called ziggurats. In ancient times, these were brightly painted in gold/bronze. They were constructed of sun-dried mud-brick, and little remains of them. Ziggurats were built by the Sumerians, Babylonians, Elamites, Akkadians, and Assyrians. Each ziggurat was part of a temple complex that included other buildings. The ziggurat's precursors were raised platforms that date from the Ubaid period of the fourth millennium BC. The earliest ziggurats began near the end of the Early Dynastic Period. The original pyramidal structure, the Anu ziggurat, dates to around 4000 BC. The White Temple was built on top of it circa 3500 BC. Built in receding tiers upon a rectangular, oval, or square platform, the ziggurat was a pyramidal structure with a flat top. Sun-baked bricks made up the core of the ziggurat, with facings of fired bricks on the outside. The facings were often glazed in different colors and may have had astrological significance. Kings sometimes had their names engraved on them. The number of tiers ranged from two to seven. It is assumed that they had shrines at the top, but no archaeological evidence supports this and the only textual evidence is from Herodotus. Access to the shrine would have been by a series of ramps on one side of the ziggurat or by a spiral ramp from base to summit. Africa Egypt The most famous African pyramids are in Egypt — huge structures built of bricks or stones, primarily limestone, some of which are among the world's largest constructions. They are shaped in reference to the sun's rays. Most had a smoothed white limestone surface. Many of the facing stones have fallen or were removed and used for construction in Cairo. The capstone was usually made of limestone, granite or basalt, and some were plated with electrum. Ancient Egyptians built pyramids from 2700 BC until around 1700 BC. The first pyramid was erected during the Third Dynasty by the Pharaoh Djoser and his architect Imhotep. This step pyramid consisted of six stacked mastabas. Early kings such as Snefru built pyramids, with subsequent kings adding to the number until the end of the Middle Kingdom. The age of the pyramids reached its zenith at Giza in 2575–2150 BC. The last king to build royal pyramids was Ahmose, with later kings hiding their tombs in the hills, such as those in the Valley of the Kings in Luxor's West Bank.
In Medinat Habu and Deir el-Medina, smaller pyramids were built by individuals. Smaller pyramids with steeper sides were also built by the Nubians who ruled Egypt in the Late Period. The Great Pyramid of Giza is the largest in Egypt and one of the largest in the world. It was the tallest structure in the world until Lincoln Cathedral was finished in 1311 AD. The Great Pyramid is the only extant one of the Seven Wonders of the Ancient World. Ancient Egyptian pyramids were, in most cases, placed west of the river Nile because the divine pharaoh's soul was meant to join with the sun during its descent before continuing with the sun in its eternal round. As of 2008, some 135 pyramids had been discovered in Egypt, most located near Cairo. Sudan While African pyramids are commonly associated with Egypt, Sudan has 220 extant pyramids, the most in the world. Nubian pyramids were constructed (roughly 240 of them) at three sites in Sudan to serve as tombs for the kings and queens of Napata and Meroë. The pyramids of Kush, also known as Nubian pyramids, have different characteristics than those of Egypt: the Nubian pyramids had steeper sides than the Egyptian ones. Pyramids were built in Sudan as late as 200 AD. Sahel The Tomb of Askia, in Gao, Mali, is believed to be the burial place of Askia Mohammad I, one of the Songhai Empire's most prolific emperors. It was built at the end of the fifteenth century and is designated as a UNESCO World Heritage Site. UNESCO describes the tomb as an example of the monumental mud-building traditions of the West African Sahel. The complex includes the pyramidal tomb, two mosques, a cemetery and an assembly ground. At 17 metres (56 ft) in height it is the largest pre-colonial architectural monument in Gao. It is a notable example of the Sudano-Sahelian architectural style that later spread throughout the region. Nigeria Among the unique structures of Igbo culture were the Nsude pyramids, in the Nigerian town of Nsude, northern Igboland. Ten pyramidal structures were built of clay/mud, formed as circular stacks of decreasing circumference rising to the top. The structures were temples for the god Ala, who was believed to reside there. A stick was placed at the top to represent the god's residence. The structures were laid in groups of five parallel to each other. Because they were built of clay/mud like the Deffufa of Nubia, periodic reconstruction has been required over time. Europe Greece Pausanias (2nd century AD) mentions two buildings resembling pyramids: one, 19 kilometres (12 mi) southwest of a still standing structure at Hellenikon, a common tomb for soldiers who died in a legendary struggle for the throne of Argos, and another that he was told was the tomb of Argives killed in a battle around 669/8 BC. Neither survives, and no evidence indicates that they resembled Egyptian pyramids. At least two surviving pyramid-like structures are available to study, one at Hellenikon and the other at Ligourio/Ligurio, a village near the ancient theatre of Epidaurus. These buildings have inwardly sloping walls but bear no other resemblance to Egyptian pyramids. They had large central rooms (unlike Egyptian pyramids), and the Hellenikon structure is rectangular rather than square, which means that the sides could not have met at a point.
The stone used to build these structures was limestone quarried locally, cut to fit, not into freestanding blocks like those of the Great Pyramid of Giza. These structures were dated from pot shards excavated from the floor and grounds, with the latest estimates around the 5th and 4th centuries. Normally this technique is used for dating pottery, but researchers used it to try to date stone flakes from the structure walls. This launched a debate about whether or not these structures are actually older than those of Egypt, part of the Black Athena controversy. Lefkowitz criticised this research, suggesting that some of it was done not to determine the reliability of the dating method, as was suggested, but to back up a claim and to make points about pyramids and Greek civilization. She claimed that not only were the results imprecise, but that other structures mentioned in the research are not in fact pyramids, e.g. a tomb alleged to be the tomb of Amphion and Zethus near Thebes, a structure at Stylidha (Thessaly) which is a long wall, etc. She raised the possibility that the stones that were dated might have been recycled from earlier constructions. She also claimed that earlier research from the 1930s, confirmed in the 1980s by Fracchia, was ignored. Liritzis responded that Lefkowitz failed to understand and misinterpreted the methodology. Spain The Pyramids of Güímar refer to six rectangular pyramid-shaped, terraced structures built from lava without mortar. They are located in the district of Chacona, part of the town of Güímar on the island of Tenerife in the Canary Islands. The structures have been dated to the 19th century and their function explained as a byproduct of contemporary agricultural techniques. Autochthonous Guanche traditions as well as surviving images indicate that similar structures (also known as "Morras", "Majanos", "Molleros", or "Paredones") were built in many locations on the island; however, over time they were dismantled and used as building material. Güímar itself hosted nine pyramids, only six of which survive. Roman Empire The 27-metre-high Pyramid of Cestius was built by the end of the 1st century BC and survives close to the Porta San Paolo. Another, named Meta Romuli, stood in the Ager Vaticanus (today's Borgo) but was destroyed at the end of the 15th century. Medieval Europe Pyramids were occasionally used in Christian architecture of the feudal era, e.g. as the tower of Oviedo's Gothic Cathedral of San Salvador. Americas Peru Andean cultures used pyramids in various architectural structures, such as the ones in Caral, Túcume and Chavín de Huantar, constructed around the same time as the early Egyptian pyramids. Mesoamerica Several Mesoamerican cultures built pyramid-shaped structures. Mesoamerican pyramids were usually stepped, with temples on top, more similar to the Mesopotamian ziggurat than the Egyptian pyramid. The largest by volume is the Great Pyramid of Cholula, in the Mexican state of Puebla. Constructed from the 3rd century BC to the 9th century AD, this pyramid is the world's largest monument, and is still not fully excavated. The third largest pyramid in the world, the Pyramid of the Sun at Teotihuacan, is also located in Mexico. An unusual pyramid with a circular plan survives at the site of Cuicuilco, now inside Mexico City and mostly covered with lava from an eruption of the Xitle Volcano in the 1st century BC. Several circular stepped pyramids called Guachimontones survive in Teuchitlán, Jalisco. Pyramids in Mexico were often used for human sacrifice.
Harner stated that for the dedication of the Great Pyramid of Tenochtitlan in 1487, "one source states 20,000, another 72,344, and several give 80,400" as the number of humans sacrificed. United States Many pre-Columbian Native American societies of ancient North America built large pyramidal earth structures known as platform mounds. Among the largest and best-known of these structures is Monks Mound at the site of Cahokia in what became Illinois, completed around 1100 AD. It has a base larger than that of the Great Pyramid. Many mounds underwent repeated episodes of expansion. They are believed to have played a central role in the mound-building peoples' religious life. Documented uses include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms. Cultures that built substructure mounds include the Troyville culture, Coles Creek culture, Plaquemine culture and Mississippian cultures. Asia China has many square flat-topped mound tombs. The first emperor, Qin Shi Huang, who unified the seven pre-imperial kingdoms, was buried under a large mound outside modern-day Xi'an. In the following centuries about a dozen more Han dynasty royal persons were also buried under flat-topped pyramidal earthworks. India Numerous giant granite temple pyramids were built in South India during the Chola Empire, many of which remain in use. Examples include the Brihadisvara Temple at Thanjavur, the Brihadisvara Temple at Gangaikonda Cholapuram, and the Airavatesvara Temple at Darasuram. However, the largest temple by area is the Ranganathaswamy Temple in Srirangam, Tamil Nadu. The Thanjavur temple was built by Raja Raja Chola in the 11th century. The Brihadisvara Temple was declared a World Heritage Site by UNESCO in 1987; the Temple of Gangaikondacholapuram and the Airavatesvara Temple at Darasuram were added in 2004. Indonesia Austronesian megalithic culture in Indonesia featured earth and stone step pyramid structures called punden berundak. These were discovered in Pangguyangan near Cisolok and in Cipari near Kuningan. The stone pyramids were based on beliefs that mountains and high places were the abode of the spirits of the ancestors. The step pyramid is the basic design of the 8th-century Borobudur Buddhist monument in Central Java. However, later Java temples were influenced by Indian Hindu architecture, as exemplified by the spires of Prambanan temple. In the 15th century, during the late Majapahit period, Java saw the revival of indigenous Austronesian elements, as displayed by Sukuh temple, which somewhat resembles Mesoamerican pyramids, and by the stepped pyramids of Mount Penanggungan. East Asia, Southeast Asia and Central Asia In east Asia, Buddhist stupas were usually represented as tall pagodas; however, some pyramidal stupas survive. One theory is that these pyramids were inspired by the Borobudur monument through Sumatran and Javanese monks. A similar Buddhist monument survives in Vrang, Tajikistan. At least nine Buddhist step pyramids survive: four from the former Gyeongsang Province of Korea, three from Japan, one from Indonesia (Borobudur) and one from Tajikistan. Oceania Several pyramids were erected throughout the Pacific islands, such as Puʻukoholā Heiau in Hawaii, the Pulemelei Mound in Samoa, and Nan Madol in Pohnpei. Modern pyramids Two pyramid-shaped tombs were erected in Maudlin's Cemetery, Ireland, c. 1840, belonging to the De Burgh family.
The Louvre Pyramid in Paris, France, in the court of the Louvre Museum, is a 20.6 metre (about 70 foot) glass structure that acts as a museum entrance. It was designed by the American architect I. M. Pei and completed in 1989. The Pyramide Inversée (Inverted Pyramid) is displayed in the underground Louvre shopping mall. The Tama-Re village was an Egyptian-themed set of buildings and monuments built near Eatonton, Georgia by Nuwaubians in 1993; it was mostly demolished after being sold in 2005. The Luxor Hotel in Las Vegas, United States, is a 30-story pyramid. The 32-story Memphis Pyramid stands in Memphis, Tennessee (the city was named after the Egyptian capital, whose name was derived from that of one of its pyramids). Built in 1991, it was the home court for the University of Memphis men's basketball program and the National Basketball Association's Memphis Grizzlies until 2004. It was not regularly used as a sports or entertainment venue after 2007, and in 2015 it was re-purposed as a Bass Pro Shops megastore. The Walter Pyramid, home of the basketball and volleyball teams of the California State University, Long Beach campus in California, United States, is an 18-story-tall blue true pyramid. The 48-story Transamerica Pyramid in San Francisco, California, designed by William Pereira, is a city symbol. The 105-story Ryugyong Hotel is in Pyongyang, North Korea. A former museum/monument in Tirana, Albania is commonly known as the "Pyramid of Tirana". It differs from typical pyramids in having a radial rather than square or rectangular shape, and gently sloped sides that make it short in comparison to the size of its base. The Slovak Radio Building in Bratislava, Slovakia is an inverted pyramid. The Palace of Peace and Reconciliation is in Astana, Kazakhstan. The three pyramids of Moody Gardens are in Galveston, Texas. The Co-Op Bank Pyramid or Stockport Pyramid in Stockport, England is a large pyramid-shaped office block. (The surrounding part of the valley of the upper Mersey has sometimes been called the "Kings Valley" after Egypt's Valley of the Kings.) The Ames Monument in southeastern Wyoming honors the brothers who financed the Union Pacific Railroad. The Trylon, a triangular pyramid, was erected for the 1939 World's Fair in Flushing, Queens and demolished after the Fair closed. The Ballandean Pyramid, at Ballandean in rural Queensland, is a 15-metre folly pyramid made from blocks of local granite. The Karlsruhe Pyramid is a pyramid made of red sandstone, located in the centre of the market square of Karlsruhe, Germany. It was erected in 1823–1825 over the vault of the city's founder, Margrave Charles III William (1679–1738). The Muttart Conservatory greenhouses are in Edmonton, Alberta. The Sunway Pyramid shopping mall is in Selangor, Malaysia. The Hanoi Museum features an overall design based on an inverted pyramid. The Ha! Ha! Pyramid by artist Jean-Jules Soucy in La Baie, Quebec is made out of 3,000 give way signs. The culture-entertainment complex known as the Pyramid is in Kazan, Russia. The Time pyramid in Wemding, Germany is a pyramid begun in 1993 and scheduled for completion in the year 3183. Triangle is a proposed skyscraper in Paris. The Shimizu Mega-City Pyramid is a proposed project for the construction of a massive pyramid over Tokyo Bay in Japan. The Donkin Memorial was erected on a Xhosa reserve in 1820 by Cape Governor Sir Rufane Shaw Donkin in memory of his late wife Elizabeth, in Port Elizabeth, South Africa. The pyramid is used in many different coats-of-arms associated with Port Elizabeth.
Modern mausoleums With the Egyptian Revival movement in the nineteenth and early twentieth centuries, pyramids became more common in funerary architecture. The tomb of Quintino Sella, outside the monumental cemetery of Oropa, is pyramid-shaped. This style was popular with tycoons in the US. The Schoenhofen Pyramid Mausoleum (1889) in Chicago and Hunt's Tomb (1930) in Phoenix, Arizona are notable examples. Some people have had pyramid tombs built for themselves; Nicolas Cage bought a pyramid tomb for himself in a famed New Orleans graveyard.
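To quantify the claim in this article's opening — that a pyramid holds most of its mass near its base — here is a small sketch using the standard geometry of a square pyramid, assuming uniform density. The cross-section side shrinks linearly with height, so the area falls off quadratically.

```python
# Fraction of a uniform-density pyramid's volume lying below a given height.
# The portion above height h is a similar pyramid scaled by (1 - h/H), so it
# holds (1 - h/H)**3 of the total volume.
def volume_fraction_below(h_frac: float) -> float:
    """Fraction of total volume below height h_frac * H (0 <= h_frac <= 1)."""
    return 1.0 - (1.0 - h_frac) ** 3

for f in (0.25, 0.5, 0.75):
    print(f"below {f:.0%} of the height: {volume_fraction_below(f):.1%} of the volume")
# below 25% of the height: 57.8% of the volume
# below 50% of the height: 87.5% of the volume
# below 75% of the height: 98.4% of the volume
```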
Technology
Ceremonial buildings
null
23738
https://en.wikipedia.org/wiki/Propeller
Propeller
A propeller (often called a screw if on a ship or an airscrew if on an aircraft) is a device with a rotating hub and radiating blades that are set at a pitch to form a helical spiral which, when rotated, exerts linear thrust upon a working fluid such as water or air. Propellers are used to pump fluid through a pipe or duct, or to create thrust to propel a boat through water or an aircraft through air. The blades are shaped so that their rotational motion through the fluid causes a pressure difference between the two surfaces of the blade by Bernoulli's principle, which exerts force on the fluid. Most marine propellers are screw propellers with helical blades rotating on a propeller shaft with an approximately horizontal axis. History Early developments The principle employed in using a screw propeller is derived from stern sculling. In sculling, a single blade is moved through an arc, from side to side, taking care to keep presenting the blade to the water at the effective angle. The innovation introduced with the screw propeller was the extension of that arc through more than 360° by attaching the blade to a rotating shaft. Propellers can have a single blade, but in practice there is nearly always more than one so as to balance the forces involved. The origin of the screw propeller starts at least as early as Archimedes (c. 287 – c. 212 BC), who used a screw to lift water for irrigation and bailing boats, so famously that it became known as Archimedes' screw. It was probably an application of spiral movement in space (spirals were a special study of Archimedes) to a hollow segmented water-wheel used for irrigation by Egyptians for centuries. A flying toy, the bamboo-copter, was enjoyed in China beginning around 320 AD. Later, Leonardo da Vinci adopted the screw principle to drive his theoretical helicopter, sketches of which involved a large canvas screw overhead. In 1661, Toogood and Hays proposed using screws for waterjet propulsion, though not as a propeller. Robert Hooke in 1681 designed a horizontal watermill which was remarkably similar to the Kirsten-Boeing vertical-axis propeller designed almost two and a half centuries later, in 1928; two years later Hooke modified the design to provide motive power for ships through water. In 1693 a Frenchman named Du Quet invented a screw propeller, which was tried that year but later abandoned. In 1752, the Academie des Sciences in Paris granted Burnelli a prize for a design of a propeller-wheel. At about the same time, the French mathematician Alexis-Jean-Pierre Paucton suggested a water propulsion system based on the Archimedean screw. In 1771, steam-engine inventor James Watt in a private letter suggested using "spiral oars" to propel boats, although he did not use them with his steam engines, or ever implement the idea. One of the first practical and applied uses of a propeller was on a submarine dubbed Turtle, designed in New Haven, Connecticut, in 1775 by Yale student and inventor David Bushnell, with the help of clock maker, engraver, and brass foundryman Isaac Doolittle. Bushnell's brother Ezra Bushnell and ship's carpenter and clock maker Phineas Pratt constructed the hull in Saybrook, Connecticut. On the night of September 6, 1776, Sergeant Ezra Lee piloted Turtle in an attack on HMS Eagle in New York Harbor. Turtle also has the distinction of being the first submarine used in battle.
Bushnell later described the propeller in an October 1787 letter to Thomas Jefferson: "An oar formed upon the principle of the screw was fixed in the forepart of the vessel its axis entered the vessel and being turned one way rowed the vessel forward but being turned the other way rowed it backward. It was made to be turned by the hand or foot." The brass propeller, like all the brass and moving parts on Turtle, was crafted by Isaac Doolittle of New Haven. In 1785, Joseph Bramah of England proposed a propeller solution of a rod going through the underwater aft of a boat attached to a bladed propeller, though he never built it. In February 1800, Edward Shorter of London proposed using a similar propeller attached to a rod angled down and temporarily deployed from the deck above the waterline, thus requiring no water seal, and intended only to assist becalmed sailing vessels. He tested it on a transport ship at Gibraltar and Malta. In 1802, American lawyer and inventor John Stevens built a boat with a rotary steam engine coupled to a four-bladed propeller, but abandoned propellers due to the inherent danger of the high-pressure steam engines then in use. His subsequent vessels were paddle-wheeled boats. By 1827, Czech inventor Josef Ressel had invented a screw propeller with multiple blades on a conical base. He tested it in February 1826 on a manually-driven ship and successfully used it on a steamboat in 1829. His 48-ton ship Civetta reached 6 knots; this was the first ship successfully propelled by an Archimedes-type screw. His experiments were banned by the police after a steam engine accident. Ressel, a forestry inspector, held an Austro-Hungarian patent for his propeller. The screw propeller was an improvement over paddlewheels, as it was not affected by ship motions or changes in draft. John Patch, a mariner in Yarmouth, Nova Scotia, developed a two-bladed, fan-shaped propeller in 1832 and publicly demonstrated it in 1833, propelling a row boat across Yarmouth Harbour and a small coastal schooner at Saint John, New Brunswick, but his patent application in the United States was rejected until 1849 because he was not an American citizen. His efficient design drew praise in American scientific circles, but by then he faced multiple competitors. Screw propellers Despite experimentation with screw propulsion before the 1830s, few of these inventions were pursued to the testing stage, and those that were proved unsatisfactory for one reason or another. In 1835, two inventors in Britain, John Ericsson and Francis Pettit Smith, began working separately on the problem. Smith was first to take out a screw propeller patent, on 31 May, while Ericsson, a gifted Swedish engineer then working in Britain, filed his patent six weeks later. Smith quickly built a small model boat to test his invention, which was demonstrated first on a pond at his Hendon farm, and later at the Royal Adelaide Gallery of Practical Science in London, where it was seen by the Secretary of the Navy, Sir William Barrow. Having secured the patronage of a London banker named Wright, Smith then built a canal boat of six tons burthen called Francis Smith, which was fitted with his wooden propeller and demonstrated on the Paddington Canal from November 1836 to September 1837.
By a fortuitous accident, the wooden propeller of two turns was damaged during a voyage in February 1837, and to Smith's surprise the broken propeller, which now consisted of only a single turn, doubled the boat's previous speed, from about four miles an hour to eight. Smith subsequently filed a revised patent in keeping with this accidental discovery. In the meantime, Ericsson built a screw-propelled steamboat, Francis B. Ogden, in 1837, and demonstrated his boat on the River Thames to senior members of the British Admiralty, including Surveyor of the Navy Sir William Symonds. In spite of the boat achieving a speed of 10 miles an hour, comparable with that of existing paddle steamers, Symonds and his entourage were unimpressed. The Admiralty maintained the view that screw propulsion would be ineffective in ocean-going service, while Symonds himself believed that screw-propelled ships could not be steered efficiently. Following this rejection, Ericsson built a second, larger screw-propelled boat, Robert F. Stockton, and had her sailed in 1839 to the United States, where he was soon to gain fame as the designer of the U.S. Navy's first screw-propelled warship, USS Princeton. Apparently aware of the Royal Navy's view that screw propellers would prove unsuitable for seagoing service, Smith determined to prove this assumption wrong. In September 1837, he took his small vessel (now fitted with an iron propeller of a single turn) to sea, steaming from Blackwall, London to Hythe, Kent, with stops at Ramsgate, Dover and Folkestone. On the way back to London on the 25th, Smith's craft was observed making headway in stormy seas by officers of the Royal Navy. This revived the Admiralty's interest, and Smith was encouraged to build a full-size ship to more conclusively demonstrate the technology. SS Archimedes was built in 1838 by Henry Wimshurst of London as the world's first steamship to be driven by a screw propeller. The Archimedes had considerable influence on ship development, encouraging the adoption of screw propulsion by the Royal Navy, in addition to her influence on commercial vessels. Trials with Smith's Archimedes led to a tug-of-war competition in 1845 between the screw-driven HMS Rattler and the paddle steamer HMS Alecto, with Rattler pulling Alecto backward. The Archimedes also influenced the design of Isambard Kingdom Brunel's SS Great Britain in 1843, then the world's largest ship and the first screw-propelled steamship to cross the Atlantic Ocean, in August 1845. HMS Erebus and HMS Terror were both heavily modified to become the first Royal Navy ships to have steam-powered engines and screw propellers. Both participated in Franklin's lost expedition, last seen in July 1845 near Baffin Bay. Screw propeller design stabilized in the 1880s. Aircraft The Wright brothers pioneered the twisted aerofoil shape of modern aircraft propellers. They realized that an air propeller is essentially similar to a wing, and verified this using wind tunnel experiments. They introduced a twist in their blades to keep the angle of attack constant along the span. Their blades were only 5% less efficient than those used 100 years later. Understanding of low-speed propeller aerodynamics was complete by the 1920s, although increased power and smaller diameters added design constraints. Alberto Santos-Dumont, another early pioneer, applied the knowledge he gained from experience with airships to make a propeller with a steel shaft and aluminium blades for his 14-bis biplane. Some of his designs used a bent aluminium sheet for blades, thus creating an airfoil shape.
They were heavily undercambered, and this plus the absence of lengthwise twist made them less efficient than the Wright propellers. Even so, this may have been the first use of aluminium in the construction of an airscrew. Theory In the nineteenth century, several theories concerning propellers were proposed. The momentum theory or disk actuator theory – a theory describing a mathematical model of an ideal propeller – was developed by W.J.M. Rankine (1865), A.G. Greenhill (1888) and R.E. Froude (1889). The propeller is modelled as an infinitely thin disc, inducing a constant velocity along the axis of rotation and creating a flow around the propeller. A screw turning through a solid will have zero "slip"; but as a propeller screw operates in a fluid (either air or water), there will be some losses. The most efficient propellers are large-diameter, slow-turning screws, such as on large ships; the least efficient are small-diameter and fast-turning (such as on an outboard motor). Using Newton's laws of motion, one may usefully think of a propeller's forward thrust as a reaction proportional to the mass of fluid sent backward per unit time and to the speed the propeller adds to that mass; in practice there is more loss associated with producing a fast jet than with creating a heavier, slower jet (a numerical sketch of this appears at the end of this article). (The same applies in aircraft, in which larger-diameter turbofan engines tend to be more efficient than earlier, smaller-diameter turbofans, and even smaller turbojets, which eject less mass at greater speeds.) Propeller geometry The geometry of a marine screw propeller is based on a helicoidal surface. This may form the face of the blade, or the faces of the blades may be described by offsets from this surface. The back of the blade is described by offsets from the helicoid surface in the same way that an aerofoil may be described by offsets from the chord line. The pitch surface may be a true helicoid or one having a warp to provide a better match of angle of attack to the wake velocity over the blades. A warped helicoid is described by specifying the shape of the radial reference line and the pitch angle in terms of radial distance. The traditional propeller drawing includes four parts: a side elevation (which defines the rake, the variation of blade thickness from root to tip, a longitudinal section through the hub, and a projected outline of a blade onto a longitudinal centreline plane), an expanded blade view, a pitch diagram, and a transverse view. The expanded blade view shows the section shapes at their various radii, with their pitch faces drawn parallel to the base line and thickness parallel to the axis. The outline indicated by a line connecting the leading and trailing tips of the sections depicts the expanded blade outline. The pitch diagram shows the variation of pitch with radius from root to tip. The transverse view shows the transverse projection of a blade and the developed outline of the blade. The blades are the foil-section plates that develop thrust when the propeller is rotated. The hub is the central part of the propeller, which connects the blades together and fixes the propeller to the shaft; this is called the boss in the UK. Rake is the angle of the blade to a radius perpendicular to the shaft. Skew is the tangential offset of the line of maximum thickness to a radius. The propeller characteristics are commonly expressed as dimensionless ratios: Pitch ratio PR = propeller pitch/propeller diameter, or P/D Disk area A0 = πD²/4 Expanded area ratio = AE/A0, where expanded area AE = expanded area of all blades outside of the hub.
Developed area ratio = AD/A0, where developed area AD = developed area of all blades outside of the hub Projected area ratio = AP/A0, where projected area AP = projected area of all blades outside of the hub Mean width ratio = (area of one blade outside the hub/length of the blade outside the hub)/diameter Blade width ratio = maximum width of a blade/diameter Blade thickness fraction = thickness of a blade produced to the shaft axis/diameter Cavitation Cavitation is the formation of vapor bubbles in water near a moving propeller blade, in regions of very low pressure. It can occur if an attempt is made to transmit too much power through the screw, or if the propeller is operating at a very high speed. Cavitation can waste power, create vibration and wear, and cause damage to the propeller. It can occur in many ways on a propeller. The two most common types of propeller cavitation are suction-side surface cavitation and tip vortex cavitation. Suction-side surface cavitation forms when the propeller is operating at high rotational speeds or under heavy load (high blade lift coefficient). The pressure on the upstream surface of the blade (the "suction side") can drop below the vapor pressure of the water, resulting in the formation of a vapor pocket. Under such conditions, the change in pressure between the downstream surface of the blade (the "pressure side") and the suction side is limited, and eventually reduced as the extent of cavitation is increased. When most of the blade surface is covered by cavitation, the pressure difference between the pressure side and suction side of the blade drops considerably, as does the thrust produced by the propeller. This condition is called "thrust breakdown". Operating the propeller under these conditions wastes energy, generates considerable noise, and, as the vapor bubbles collapse, rapidly erodes the screw's surface due to localized shock waves against the blade surface. Tip vortex cavitation is caused by the extremely low pressures formed at the core of the tip vortex. The tip vortex is caused by fluid wrapping around the tip of the propeller, from the pressure side to the suction side. Tip vortex cavitation typically occurs before suction-side surface cavitation and is less damaging to the blade, since the vapor collapses not on the blade itself but some distance downstream. Types of propellers Variable-pitch propeller Variable-pitch propellers may be either controllable (controllable-pitch propellers) or automatically feathering (folding propellers). Variable-pitch propellers have significant advantages over the fixed-pitch variety, namely: the ability to select the most effective blade angle for any given speed; when motorsailing, the ability to coarsen the blade angle to attain the optimum drive from wind and engines; the ability to move astern (in reverse) much more efficiently (fixed props perform very poorly in astern); and the ability to "feather" the blades to give the least resistance when not in use (for example, when sailing). For large airplanes, if an engine becomes uncontrollable, the ability to feather the propeller is necessary to prevent the propeller from spinning so fast that it breaks apart. Skewback propeller An advanced type of propeller used on the American Los Angeles-class submarine as well as the German Type 212 submarine is called a skewback propeller.
As in the scimitar blades used on some aircraft, the blade tips of a skewback propeller are swept back against the direction of rotation. In addition, the blades are tilted rearward along the longitudinal axis, giving the propeller an overall cup-shaped appearance. This design preserves thrust efficiency while reducing cavitation, and thus makes for a quiet, stealthy design. A small number of ships use propellers with winglets similar to those on some airplane wings, reducing tip vortices and improving efficiency. Modular propeller A modular propeller provides more control over the boat's performance: there is no need to replace an entire propeller when only the pitch needs changing or a blade is damaged. Adjustable pitch allows boaters to obtain better performance at different altitudes, in water sports, or while cruising. Voith Schneider propeller Voith Schneider propellers use four untwisted straight blades turning around a vertical axis instead of helical blades, and can provide thrust in any direction at any time, at the cost of higher mechanical complexity. Shaftless A rim-driven thruster integrates an electric motor into a ducted propeller. The cylindrical duct acts as the stator, while the tips of the blades act as the rotor. Such thrusters typically provide high torque, operate at low RPM, and produce less noise. The system does not require a shaft, reducing weight. Units can be placed at various locations around the hull and operated independently, e.g., to aid in maneuvering. The absence of a shaft also allows alternative rear hull designs. Toroidal Twisted-toroid (ring-shaped) propellers, first invented over 120 years ago, replace conventional blades with closed, ring-shaped blades. They are significantly quieter (particularly at audible frequencies) and more efficient than traditional propellers for both air and water applications. The design distributes the vortices generated by the propeller across the entire shape, causing them to dissipate faster in the atmosphere. Damage protection Shaft protection For smaller engines, such as outboards, where the propeller is exposed to the risk of collision with heavy objects, the propeller often includes a device that is designed to fail when overloaded; the device or the whole propeller is sacrificed so that the more expensive transmission and engine are not damaged. Typically in smaller and older engines, a narrow shear pin through the drive shaft and propeller hub transmits the power of the engine at normal loads. The pin is designed to shear when the propeller is put under a load that could damage the engine. After the pin is sheared the engine is unable to provide propulsive power to the boat until a new shear pin is fitted. In larger and more modern engines, a rubber bushing transmits the torque of the drive shaft to the propeller's hub. Under a damaging load the friction of the bushing in the hub is overcome and the rotating propeller slips on the shaft, preventing overloading of the engine's components. After such an event the rubber bushing may be damaged. If so, it may continue to transmit reduced power at low revolutions, but may provide no power, due to reduced friction, at high revolutions. Also, the rubber bushing may perish over time, leading to its failure under loads below its designed failure load. Whether a rubber bushing can be replaced or repaired depends upon the propeller; some cannot. Some can, but need special equipment to insert the oversized bushing for an interference fit.
Others can be replaced easily. The "special equipment" usually consists of a funnel, a press and rubber lubricant (soap). If one does not have access to a lathe, an improvised funnel can be made from steel tube and car body filler; as the filler is only subject to compressive forces, it is able to do a good job. Often, the bushing can be drawn into place with nothing more complex than a couple of nuts, washers and a threaded rod. A more serious problem with this type of propeller is a "frozen-on" spline bushing, which makes propeller removal impossible. In such cases the propeller must be heated in order to deliberately destroy the rubber insert. Once the propeller is removed, the splined tube can be cut away with a grinder and a new spline bushing is then required. To prevent a recurrence of the problem, the splines can be coated with an anti-seize anti-corrosion compound. In some modern propellers, a hard polymer insert called a drive sleeve replaces the rubber bushing. The splined or other non-circular cross-section of the sleeve inserted between the shaft and propeller hub transmits the engine torque to the propeller, rather than friction. The polymer is weaker than the components of the propeller and engine, so it fails before they do when the propeller is overloaded. It fails completely under excessive load, but can easily be replaced. Weed hatches and rope cutters Whereas the propeller on a large ship will be immersed in deep water and free of obstacles and flotsam, yachts, barges and river boats often suffer propeller fouling by debris such as weed, ropes, cables, nets and plastics. British narrowboats invariably have a weed hatch over the propeller; once the narrowboat is stationary, the hatch may be opened to give access to the propeller, enabling debris to be cleared. Yachts and river boats rarely have weed hatches; instead they may fit a rope cutter that fits around the prop shaft and rotates with the propeller. These cutters clear the debris and obviate the need for divers to attend manually to the fouling. Several forms of rope cutters are available: a simple sharp-edged disc that cuts like a razor; a rotor with two or more projecting blades that slice against a fixed blade, cutting with a scissor action; and a serrated rotor with a complex cutting edge made up of sharp edges and projections. Propeller variations A cleaver is a type of propeller design especially used for boat racing. Its leading edge is formed round, while the trailing edge is cut straight. It provides little bow lift, so it can be used on boats that do not need much bow lift, for instance hydroplanes, which naturally have enough hydrodynamic bow lift. To compensate for the lack of bow lift, a hydrofoil may be installed on the lower unit. Hydrofoils reduce bow lift and help to get a boat out of the hole and onto plane.
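The Theory section's claim that a heavier, slower jet wastes less energy than a fast jet can be made concrete with the actuator-disk (Froude) momentum theory mentioned there. The sketch below is minimal and the thrust, ship speed and water density figures are illustrative assumptions, not data from any real vessel.

```python
import math

# Actuator-disk (Froude) momentum theory: the propeller is an infinitely thin
# disc accelerating the flow from V0 upstream to Ve downstream. For a thrust T,
# T = 0.5 * rho * A * (Ve**2 - V0**2), and the ideal efficiency is
# eta = 2 * V0 / (V0 + Ve) -- so a larger disc (heavier, slower jet) beats a
# smaller one (lighter, faster jet) at the same thrust.
RHO_WATER = 1025.0  # kg/m^3, seawater (assumed)

def ideal_efficiency(thrust_n: float, diameter_m: float, v0: float) -> float:
    area = math.pi * diameter_m**2 / 4.0  # disk area A0 = pi * D^2 / 4
    ve = math.sqrt(v0**2 + 2.0 * thrust_n / (RHO_WATER * area))
    return 2.0 * v0 / (v0 + ve)

thrust = 2.0e5  # N, illustrative
speed = 7.0     # m/s ship speed, illustrative
for d in (2.0, 4.0, 6.0):
    print(f"D = {d} m: ideal efficiency = {ideal_efficiency(thrust, d, speed):.1%}")
# D = 2 m gives roughly 69%, D = 4 m roughly 88%, D = 6 m roughly 94%.
```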
Technology
Rigid components
null
23740
https://en.wikipedia.org/wiki/Toxin
Toxin
A toxin is a naturally occurring poison produced by the metabolic activities of living cells or organisms. Toxins occur especially as proteins, often conjugated. The term was first used by organic chemist Ludwig Brieger (1849–1919) and is derived from the word toxic. Toxins can be small molecules, peptides, or proteins that are capable of causing disease on contact with or absorption by body tissues, by interacting with biological macromolecules such as enzymes or cellular receptors. They vary greatly in their toxicity, ranging from usually minor (such as a bee sting) to potentially fatal even at extremely low doses (such as botulinum toxin). Terminology Toxins are often distinguished from other chemical agents strictly by their biological origin. Less strict understandings embrace naturally occurring inorganic toxins, such as arsenic. Other understandings embrace synthetic analogs of naturally occurring organic poisons as toxins, and may or may not embrace naturally occurring inorganic poisons. It is therefore important to confirm usage if a common understanding is critical. Toxins are a subset of toxicants; the term toxicant is preferred when the poison is man-made and therefore artificial. A synthetic copy of a naturally occurring toxin is arguably still a toxin, as it is identical to its natural counterpart; the debate is one of linguistic semantics. The word toxin does not specify a method of delivery (as opposed to venom, a toxin delivered via a bite, sting, etc.). Poison is a related but broader term that encompasses both toxins and toxicants; poisons may enter the body through any means, typically inhalation, ingestion, or skin absorption. Toxin, toxicant, and poison are often used interchangeably despite these subtle differences in definition. The term toxungen has also been proposed to refer to toxins that are delivered onto the body surface of another organism without an accompanying wound. A rather informal terminology relates individual toxins to the anatomical location where their effects are most notable: genitotoxins damage the urinary or reproductive organs; hemotoxins cause destruction of red blood cells (hemolysis); phototoxins cause dangerous photosensitivity; hepatotoxins affect the liver; and neurotoxins affect the nervous system. On a broader scale, toxins may be classified as either exotoxins, excreted by an organism, or endotoxins, which are released mainly when bacteria are lysed. Biological The term "biotoxin" is sometimes used to explicitly confirm a biological origin, as opposed to environmental or anthropogenic origins. Biotoxins can be classified by their mechanism of delivery as poisons (passively transferred via ingestion, inhalation, or absorption across the skin), toxungens (actively transferred to the target's surface by spitting, spraying, or smearing), or venoms (delivered through a wound generated by a bite, sting, or other such action). They can also be classified by their source, such as fungal biotoxins, microbial toxins, plant biotoxins, or animal biotoxins. Toxins produced by microorganisms are important virulence determinants responsible for microbial pathogenicity and/or evasion of the host immune response. Biotoxins vary greatly in purpose and mechanism, and can be highly complex (the venom of the cone snail can contain over 100 unique peptides, which target specific nerve channels or receptors).
Biotoxins in nature have two primary functions: predation (as in the spider, snake, scorpion, jellyfish, and wasp) and defense (as in the bee, ant, termite, wasp, poison dart frog, and plants producing toxins). The toxins used for defense by species such as the poison dart frog can also be used for medicinal purposes. Some of the better-known types of biotoxins include: Cyanotoxins, produced by cyanobacteria. Dinotoxins, produced by dinoflagellates. Necrotoxins, which cause necrosis (i.e., death) in the cells they encounter; necrotoxins spread through the bloodstream, and in humans, skin and muscle tissues are most sensitive to them. Organisms that possess necrotoxins include the brown recluse or "fiddle back" spider; most rattlesnakes and vipers, which produce phospholipase and various trypsin-like serine proteases; the puff adder; and the "flesh-eating" bacterium Streptococcus pyogenes, which causes necrotizing fasciitis and produces a pore-forming toxin. Neurotoxins, which primarily affect the nervous systems of animals; this group generally consists of ion channel toxins that disrupt ion channel conductance. Organisms that possess neurotoxins include the black widow spider, most scorpions, the box jellyfish, elapid snakes, the cone snail, the blue-ringed octopus, venomous fish, frogs, Palythoa coral, and various types of algae, cyanobacteria and dinoflagellates. Myotoxins, small, basic peptides found in snake and lizard venoms, which cause muscle tissue damage by a non-enzymatic, receptor-based mechanism. Organisms that possess myotoxins include rattlesnakes and the Mexican beaded lizard. Cytotoxins, which are toxic at the level of individual cells, either in a non-specific fashion or only in certain types of living cells. Examples include ricin, from castor beans; apitoxin, from honey bees; T-2 mycotoxin, from certain toxic mushrooms; cardiotoxin III, from the Chinese cobra; and hemotoxins, from vipers. Weaponry Many living organisms employ toxins offensively or defensively. A relatively small number of toxins are known to have the potential to cause widespread sickness or casualties. They are often inexpensive and easily available, and in some cases it is possible to refine them outside the laboratory. As biotoxins act quickly and are highly toxic even at low doses, they can be more efficient than chemical agents. Due to these factors, it is vital to raise awareness of the clinical symptoms of biotoxin poisoning, and to develop effective countermeasures including rapid investigation, response, and treatment. Environmental The term "environmental toxin" can sometimes explicitly include synthetic contaminants such as industrial pollutants and other artificially made toxic substances. As this contradicts most formal definitions of the term "toxin", it is important to confirm what the researcher means when encountering the term outside of microbiological contexts. Environmental toxins from food chains that may be dangerous to human health include: paralytic shellfish poisoning (PSP), amnesic shellfish poisoning (ASP), diarrheal shellfish poisoning (DSP), and neurotoxic shellfish poisoning (NSP). Research In general, when scientists determine the amount of a substance that may be hazardous for humans, animals and/or the environment, they determine the amount of the substance likely to trigger effects and, if possible, establish a safe level.
In Europe, the European Food Safety Authority has produced risk assessments for more than 4,000 substances in over 1,600 scientific opinions, and it provides open-access summaries of human health, animal health and ecological hazard assessments in its OpenFoodTox database. The OpenFoodTox database can be used to screen potential new foods for toxicity. The Toxicology and Environmental Health Information Program (TEHIP) at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to toxin-related resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP is also responsible for the Toxicology Data Network (TOXNET), an integrated system of toxicology and environmental health databases that are available free of charge on the web. TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs.
Biology and health sciences
Types
Health
23743
https://en.wikipedia.org/wiki/Phanerozoic
Phanerozoic
The Phanerozoic is the current and the latest of the four geologic eons in the Earth's geologic time scale, covering the time period from 538.8 million years ago to the present. It is the eon during which abundant animal and plant life has proliferated, diversified and colonized various niches on the Earth's surface, beginning with the Cambrian period, when animals first developed hard shells that can be clearly preserved in the fossil record. The time before the Phanerozoic, collectively called the Precambrian, is now divided into the Hadean, Archean and Proterozoic eons. The time span of the Phanerozoic starts with the sudden appearance of fossilised evidence of a number of animal phyla; the evolution of those phyla into diverse forms; the evolution of plants; the evolution of fish, arthropods and molluscs; the terrestrial colonization and evolution of insects, chelicerates, myriapods and tetrapods; and the development of modern flora dominated by vascular plants. During this time span, tectonic forces which move the continents had collected them into a single landmass known as Pangaea (the most recent supercontinent), which then separated into the current continental landmasses. Etymology The term "Phanerozoic" was coined in 1930 by the American geologist George Halcott Chadwick (1876–1953), deriving from the Ancient Greek words φανερός (phanerós), meaning "visible", and ζωή (zōḗ), meaning "life". The name reflects the fact that life was once believed to have begun in the Cambrian, the first period of this eon, owing to the absence of any then-known Precambrian fossil record. However, trace fossils of flourishing complex life from the Ediacaran period (the Avalon explosion) of the preceding Proterozoic eon have since been discovered, and the modern scientific consensus is that complex life (in the form of placozoans and primitive sponges such as Otavia) has existed at least since the Tonian period, and that the earliest known life forms (simple prokaryotic microbial mats) appeared on the ocean floor during the earlier Archean eon. Proterozoic–Phanerozoic boundary The Proterozoic–Phanerozoic boundary is at 538.8 million years ago. In the 19th century, the boundary was set at the time of appearance of the first abundant animal (metazoan) fossils, but trace fossils of several hundred groups (taxa) of complex soft-bodied metazoa from the preceding Ediacaran period of the Proterozoic eon, known as the Avalon explosion, have been identified since the systematic study of those forms started in the 1950s. The transition from the largely sessile Precambrian biota to the active mobile Cambrian biota occurred early in the Phanerozoic. Eras of the Phanerozoic The Phanerozoic is divided into three eras: the Paleozoic, Mesozoic and Cenozoic, which are further subdivided into 12 periods. The Paleozoic features the evolution of the three most prominent animal phyla, arthropods, molluscs and chordates, the last of which includes fish, amphibians and the fully terrestrial amniotes (synapsids and sauropsids). The Mesozoic features the evolution of crocodilians, turtles, dinosaurs (including birds), lepidosaurs (lizards and snakes) and mammals. The Cenozoic begins with the extinction of all non-avian dinosaurs, pterosaurs and marine reptiles, and features the great diversification of birds and mammals. Humans appeared and evolved during the most recent part of the Cenozoic.
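The era/period subdivision just described maps naturally onto a small lookup table. A sketch follows, using the rounded boundary ages quoted in the sections below; since this article does not detail the Cenozoic's own periods, they are collapsed into a single placeholder entry (an assumption for illustration).

```python
# Phanerozoic periods with (start, end) ages in millions of years ago (Ma),
# rounded as in the surrounding text. The Cenozoic entry is a placeholder,
# since its periods are not detailed in this article.
PERIODS = [
    ("Paleozoic", "Cambrian", 539, 485),
    ("Paleozoic", "Ordovician", 485, 444),
    ("Paleozoic", "Silurian", 444, 419),
    ("Paleozoic", "Devonian", 419, 359),
    ("Paleozoic", "Carboniferous", 359, 299),
    ("Paleozoic", "Permian", 299, 252),
    ("Mesozoic", "Triassic", 252, 201),
    ("Mesozoic", "Jurassic", 201, 145),
    ("Mesozoic", "Cretaceous", 145, 66),
    ("Cenozoic", "(Cenozoic periods)", 66, 0),
]

def era_and_period(age_ma: float) -> tuple[str, str]:
    """Return the (era, period) containing an age given in Ma.

    A boundary age (e.g. 252) matches the earlier period first.
    """
    for era, period, start, end in PERIODS:
        if end <= age_ma <= start:
            return era, period
    raise ValueError("age is outside the Phanerozoic")

print(era_and_period(150))  # ('Mesozoic', 'Jurassic')
print(era_and_period(300))  # ('Paleozoic', 'Carboniferous')
```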
Paleozoic Era The Paleozoic is a time in Earth's history when active complex life forms evolved, took their first foothold on dry land, and when the forerunners of all multicellular life on Earth began to diversify. There are six periods in the Paleozoic era: Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. Cambrian Period The Cambrian is the first period of the Paleozoic Era and ran from 539 million to 485 million years ago. The Cambrian sparked a rapid expansion in the diversity of animals, in an event known as the Cambrian explosion, during which the greatest number of animal body plans evolved in a single period in the history of Earth. Complex algae evolved, and the fauna was dominated by armoured arthropods (such as trilobites and radiodontids) and, to a lesser extent, shelled cephalopods (such as orthocones). Almost all phyla of marine animals evolved in this period. During this time, the supercontinent Pannotia began to break up, most of which later recombined into the supercontinent Gondwana. Ordovician Period The Ordovician spans from 485 million to 444 million years ago. The Ordovician was a time in Earth's history in which many groups still prevalent today evolved or diversified, such as primitive nautiloids, vertebrates (then only jawless fish) and corals. This process is known as the Great Ordovician Biodiversification Event, or GOBE. Trilobites began to be replaced by articulate brachiopods, and crinoids also became an increasingly important part of the fauna. The first arthropods crept ashore to colonise Gondwana, a continent empty of animal life. A group of freshwater green algae, the streptophytes, also survived being washed ashore and began to colonize the flood plains and riparian zones, giving rise to primitive land plants. By the end of the Ordovician, Gondwana had moved from the equator to the South Pole, and Laurentia had collided with Baltica, closing the Iapetus Ocean. The glaciation of Gondwana resulted in a major drop in sea level, killing off all life that had become established along its coast. Glaciation caused an icehouse Earth, leading to the Ordovician–Silurian extinction, during which 60% of marine invertebrates and 25% of families became extinct. Though one of the deadliest mass extinctions in Earth's history, the O–S extinction did not cause profound ecological changes between the periods. Silurian Period The Silurian spans from 444 million to 419 million years ago, and saw a warming from an icehouse Earth. This period saw the mass diversification of fish, as jawless fish became more numerous and early jawed fish and freshwater species appeared in the fossil record. Arthropods remained abundant, and some groups, such as eurypterids, became apex predators in the ocean. Fully terrestrial life established itself on land, including early fungi, arachnids, hexapods and myriapods. The evolution of vascular plants (mainly spore-producing plants such as Cooksonia) allowed land plants to gain a foothold further inland as well. During this time, there were four continents: Gondwana (Africa, South America, Australia, Antarctica, India), Laurentia (North America with parts of Europe), Baltica (the rest of Europe), and Siberia (Northern Asia). Devonian Period The Devonian spans from 419 million to 359 million years ago. Also informally known as the "Age of the Fish", the Devonian features a huge diversification of fish, including the jawless conodonts and ostracoderms as well as jawed fish such as the armored placoderms (e.g.
Dunkleosteus), the spiny acanthodians and early bony fish. The Devonian also saw the first appearance of modern fish groups such as the chondrichthyans (cartilaginous fish) and osteichthyans (bony fish), the latter of which include two clades: the actinopterygians (ray-finned fish) and sarcopterygians (lobe-finned fish). One lineage of sarcopterygians, Rhipidistia, evolved the first four-limbed vertebrates, which would eventually become tetrapods. On land, plant groups diversified after the Silurian-Devonian Terrestrial Revolution; the first woody ferns and the earliest seed plants evolved during this period. By the Middle Devonian, shrub-like forests of lycophytes, horsetails and progymnosperms existed. This greening event also allowed the diversification of arthropods as they took advantage of the new habitat. Near the end of the Devonian, 70% of all species became extinct in a sequence of mass extinction events, collectively known as the Late Devonian extinction. Carboniferous Period The Carboniferous spans from 359 million to 299 million years ago. Tropical swamps dominated the Earth, and the vast numbers of trees sequestered much of the carbon that became coal deposits (hence the name Carboniferous and the term "coal forest"). About 90% of all coal beds were deposited in the Carboniferous and Permian periods, which represent just 2% of the Earth's geologic history. The high oxygen levels produced by these wetland rainforests allowed arthropods, normally limited in size by their respiratory systems, to proliferate and increase in size. Tetrapods also diversified during the Carboniferous: semiaquatic amphibians such as the temnospondyls flourished, and one lineage developed extraembryonic membranes that allowed their eggs to survive outside of the water. These tetrapods, the amniotes, included the first sauropsids (the lineage that produced the reptiles, including dinosaurs and birds) and synapsids (the ancestors of mammals). Throughout the Carboniferous, there was a cooling pattern, which eventually led to the glaciation of Gondwana, as much of it was situated around the South Pole. This event is known as the Permo-Carboniferous Glaciation and resulted in a major loss of coal forests, known as the Carboniferous rainforest collapse. Permian Period The Permian spans from 299 million to 252 million years ago and was the last period of the Paleozoic era. At its beginning, all landmasses came together to form the supercontinent Pangaea, surrounded by one expansive ocean called Panthalassa. The Earth was relatively dry compared to the Carboniferous, with harsh seasons, as the climate of the interior of Pangaea was not moderated by large bodies of water. Amniotes still flourished and diversified in the new dry climate, particularly synapsids such as Dimetrodon, Edaphosaurus and the therapsids, which gave rise to the ancestors of modern mammals. The first conifers evolved during this period and went on to dominate the terrestrial landscape. The Permian ended with at least one mass extinction, an event sometimes known as "the Great Dying", caused by large floods of lava (the Siberian Traps in Russia and the Emeishan Traps in China). This extinction was the largest in Earth's history and led to the loss of 95% of all species of life. Mesozoic Era The Mesozoic ranges from 252 million to 66 million years ago.
Also referred to as the Age of Reptiles, Age of Dinosaurs or Age of Conifers, the Mesozoic featured the first time the sauropsids ascended to ecological dominance over the synapsids, as well as the diversification of many modern ray-finned fish, insects, molluscs (particularly the coleoids), tetrapods and plants. The Mesozoic is subdivided into three periods: the Triassic, Jurassic and Cretaceous. Triassic Period The Triassic ranges from 252 million to 201 million years ago. The Triassic is mostly a transitional recovery period between the desolate aftermath of the Permian extinction and the lush Jurassic Period. It has three major epochs: Early Triassic, Middle Triassic, and Late Triassic. The Early Triassic lasted from 252 million to 247 million years ago, and was a hot and arid epoch in the aftermath of the Permian extinction. Many tetrapods during this epoch represented a disaster fauna, a group of survivor animals with low diversity and cosmopolitanism (wide geographic ranges). Temnospondyls recovered first and evolved into large aquatic predators during the Triassic. Other reptiles also diversified rapidly, with aquatic reptiles such as ichthyosaurs and sauropterygians proliferating in the seas. On land, the first true archosaurs appeared, including pseudosuchians (crocodile relatives) and avemetatarsalians (bird/dinosaur relatives). The Middle Triassic spans from 247 million to 237 million years ago. The Middle Triassic featured the beginnings of the break-up of Pangaea as rifting commenced in northern Pangaea. The northern part of the Tethys Ocean, the Paleotethys Ocean, had become a passive basin, but a spreading center was active in the southern part of the Tethys Ocean, the Neotethys Ocean. Phytoplankton, corals, crustaceans and many other marine invertebrates had recovered from the Permian extinction by the end of the Middle Triassic. Meanwhile, on land, reptiles continued to diversify, conifer forests flourished, and the first flies appeared. The Late Triassic spans from 237 million to 201 million years ago. Following the bloom of the Middle Triassic, the Late Triassic was initially warm and arid with a strong monsoon climate, with most precipitation limited to coastal regions and high latitudes. This changed late in the Carnian age, with a roughly two-million-year-long wet interval that transformed the arid continental interior into lush alluvial forests. The first true dinosaurs appeared early in the Late Triassic, and pterosaurs evolved a bit later. Other large reptilian competitors to the dinosaurs were wiped out by the Triassic–Jurassic extinction event, in which most archosaurs (excluding crocodylomorphs, pterosaurs and dinosaurs), most therapsids (except cynodonts) and almost all large amphibians became extinct, as well as 34% of marine life, in the fourth mass extinction event. The cause of the extinction is debated, but likely resulted from eruptions of the Central Atlantic Magmatic Province (CAMP) large igneous province. Jurassic Period The Jurassic ranges from 201 million to 145 million years ago, and features three major epochs: Early Jurassic, Middle Jurassic and Late Jurassic. The Early Jurassic Epoch spans from 201 million to 174 million years ago. The climate was much more humid than during the Triassic, and as a result, the world was warm and partially tropical, though possibly with short colder intervals. Plesiosaurs, ichthyosaurs and ammonites dominated the seas, while dinosaurs, pterosaurs and other reptiles dominated the land, with species such as Dilophosaurus at the apex.
Crocodylomorphs evolved into aquatic forms, pushing the remaining large amphibians to near extinction. True mammals were present during the Jurassic but remained small, with average body masses of less than until the end of the Cretaceous. The Middle Jurassic Epoch spans from 174 million to 163 million years ago. Conifer savannahs made up a large portion of the world's forests. In the oceans, plesiosaurs were quite common, and ichthyosaurs were flourishing. The Late Jurassic Epoch spans from 163 million to 145 million years ago. The Late Jurassic featured a severe extinction of sauropods in the northern continents, alongside many ichthyosaurs. However, the Jurassic-Cretaceous boundary did not strongly impact most forms of life. Cretaceous Period The Cretaceous is the Phanerozoic's longest period and the last period of the Mesozoic. It spans from 145 million to 66 million years ago, and is divided into two epochs: Early Cretaceous and Late Cretaceous. The Early Cretaceous Epoch spans from 145 million to 100 million years ago. Dinosaurs continued to be abundant, with groups such as tyrannosauroids, avialans (birds), marginocephalians, and ornithopods seeing early glimpses of later success. Other tetrapods, such as stegosaurs and ichthyosaurs, declined significantly, and sauropods were restricted to southern continents. The Late Cretaceous Epoch spans from 100 million to 66 million years ago. The Late Cretaceous featured a cooling trend that would continue into the Cenozoic Era. Eventually, the tropical climate was restricted to the equator, and areas beyond the tropic lines featured more seasonal climates. Dinosaurs still thrived, as new species such as Tyrannosaurus, Ankylosaurus, Triceratops and hadrosaurs dominated the food web. Whether or not pterosaurs went into a decline as birds radiated is debated; however, many families survived until the end of the Cretaceous, alongside new forms such as the gigantic Quetzalcoatlus. Mammals diversified despite their small sizes, with metatherians (marsupials and kin) and eutherians (placentals and kin) coming into their own. In the oceans, mosasaurs diversified to fill the role of the now-extinct ichthyosaurs, alongside huge plesiosaurs such as Elasmosaurus. Also, the first flowering plants evolved. At the end of the Cretaceous, the Deccan Traps and other volcanic eruptions were poisoning the atmosphere. While this continued, a large meteor is thought to have smashed into Earth, creating the Chicxulub Crater and causing the event known as the K–Pg extinction, the fifth and most recent mass extinction event, during which 75% of life on Earth became extinct, including all non-avian dinosaurs. Every living thing with a body mass over 10 kilograms became extinct, and the Age of Dinosaurs came to an end. Cenozoic Era The Cenozoic featured the rise of mammals and birds as the dominant classes of animals, as the end of the Age of Dinosaurs left significant open niches. There are three divisions of the Cenozoic: Paleogene, Neogene and Quaternary. Paleogene Period The Paleogene spans from the extinction of the non-avian dinosaurs, some 66 million years ago, to the dawn of the Neogene 23 million years ago. It features three epochs: Paleocene, Eocene and Oligocene. The Paleocene Epoch began with the K–Pg extinction event, and the early part of the Paleocene saw the recovery of the Earth from that event.
The continents began to take their modern shapes, but most continents (and India) remained separated from each other: Africa and Eurasia were separated by the Tethys Sea, and the Americas were separated by the Panamanian Seaway (as the Isthmus of Panama had not yet formed). This epoch featured a general warming trend that peaked at the Paleocene-Eocene Thermal Maximum, and the earliest modern jungles expanded, eventually reaching the poles. The oceans were dominated by sharks, as the large reptiles that had once ruled had become extinct. Mammals diversified rapidly, but most remained small. The largest tetrapod carnivores during the Paleocene were reptiles, including crocodyliforms, choristoderans and snakes. Titanoboa, the largest known snake, lived in South America during the Paleocene. The Eocene Epoch ranged from 56 million to 34 million years ago. In the early Eocene, most land mammals were small and lived in cramped jungles, much like in the Paleocene. Among them were early primates, whales and horses, along with many other early forms of mammals. The climate was warm and humid, with little temperature gradient from pole to pole. In the Middle Eocene, the Antarctic Circumpolar Current formed when South America and Australia both separated from Antarctica to open up the Drake Passage and Tasmanian Passage, disrupting ocean currents worldwide, resulting in global cooling and causing the jungles to shrink. More modern forms of mammals continued to diversify with the cooling climate, even as more archaic forms died out. By the end of the Eocene, whales such as Basilosaurus had become fully aquatic. The late Eocene saw the return of pronounced seasons, which caused the expansion of savanna-like areas and the earliest substantial grasslands. At the transition between the Eocene and Oligocene epochs there was a significant extinction event, the cause of which is debated. The Oligocene Epoch spans from 34 million to 23 million years ago. The Oligocene was an important transitional period between the tropical world of the Eocene and more modern ecosystems. This period featured a global expansion of grasses, which led to many new species taking advantage of them, including the first elephants, felines, canines, marsupials and many other species still prevalent today. Many other plant species also evolved during this epoch, such as evergreen trees. The long-term cooling continued, and seasonal rainfall patterns became established. Mammals continued to grow larger. Paraceratherium, one of the largest land mammals ever to live, evolved during this epoch, along with many other perissodactyls. Neogene Period The Neogene spans from 23.03 million to 2.58 million years ago. It features two epochs: the Miocene and the Pliocene. The Miocene spans from 23.03 million to 5.333 million years ago and is a period in which grasses spread further, coming to dominate a large portion of the world and diminishing forests in the process. Kelp forests evolved, leading to the evolution of new species such as sea otters. During this time, perissodactyls thrived and evolved into many different varieties. Alongside them were the apes, which evolved into some 30 species. Overall, arid and mountainous land dominated most of the world, as did grazers. The Tethys Sea finally closed with the creation of the Arabian Peninsula, leaving in its wake the Black, Red, Mediterranean and Caspian seas. This only increased aridity. Many new plants evolved, and 95% of modern seed plants evolved in the mid-Miocene.
The Pliocene lasted from 5.333 million to 2.58 million years ago. The Pliocene featured dramatic climatic changes, which ultimately led to modern species and plants. The Mediterranean Sea dried up for hundreds of thousands of years during the Messinian salinity crisis. Along with these major geological events, Africa saw the appearance of Australopithecus, the ancestor of Homo. The Isthmus of Panama formed, and animals migrated between North and South America, wreaking havoc on the local ecology. Climatic changes brought savannas, which continue to spread across the world, the Indian monsoons, deserts in East Asia, and the beginnings of the Sahara Desert. The Earth's continents and seas moved into their present shapes. The world map has not changed much since, save for changes brought about by the Quaternary glaciation such as Lake Agassiz (precursor of the Great Lakes). Quaternary Period The Quaternary spans from 2.58 million years ago to the present day, and is the shortest geological period in the Phanerozoic Eon. It features modern animals and dramatic changes in the climate. It is divided into two epochs: the Pleistocene and the Holocene. The Pleistocene lasted from 2.58 million to 11,700 years ago. This epoch was marked by a series of glacial periods (ice ages) as a result of the cooling trend that started in the mid-Eocene. There were numerous separate glaciation periods marked by the advance of ice caps as far south as 40 degrees N latitude in mountainous areas. Meanwhile, Africa experienced a trend of desiccation which resulted in the creation of the Sahara, Namib and Kalahari deserts. Mammoths, giant ground sloths, dire wolves, sabre-toothed cats and archaic humans such as Homo erectus were common and widespread during the Pleistocene. Anatomically modern humans, Homo sapiens, began migrating out of East Africa in at least two waves, the first as early as 270,000 years ago. After the Toba supervolcano eruption in Sumatra 74,000 years ago caused a global population bottleneck among humans, a second wave of Homo sapiens migration successfully repopulated every continent except Antarctica. As the Pleistocene drew to a close, a major extinction wiped out much of the world's megafauna, including non-sapiens human species such as Homo neanderthalensis and Homo floresiensis. All the continents were affected, but Africa was impacted to a lesser extent and retained many large animals such as elephants, rhinoceroses and hippopotamuses. The extent to which Homo sapiens was involved in this megafaunal extinction is debated. The Holocene began 11,700 years ago, at the end of the Younger Dryas, and lasts until the present day. All recorded history and so-called "human history" lies within the boundaries of the Holocene epoch. Human activity is blamed for an ongoing mass extinction that began roughly 10,000 years ago, though the species becoming extinct have only been recorded since the Industrial Revolution. This is sometimes referred to as the "Sixth Extinction", with hundreds of species having gone extinct due to human activities such as overhunting, habitat destruction and the introduction of invasive species. Biodiversity It has been demonstrated that changes in biodiversity through the Phanerozoic correlate much better with the hyperbolic model (widely used in demography and macrosociology) than with exponential and logistic models (traditionally used in population biology and extensively applied to fossil biodiversity as well).
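To make the contrast between these three models concrete, the sketch below integrates each one numerically. The parameter values and the simple Euler integrator are arbitrary illustrations, not fits to the fossil record; the point is only the qualitative difference in feedback structure discussed next.

```python
# dN/dt = r*N            exponential: first-order positive feedback
# dN/dt = r*N*(1 - N/K)  logistic: adds resource-limited negative feedback
# dN/dt = a*N**2         hyperbolic: second-order positive feedback
# All parameter values are arbitrary illustrations, not fits to fossil data.

def simulate(rate, n0=1.0, t_end=3.0, dt=1e-4):
    """Integrate dN/dt = rate(N) with a plain Euler step; returns final N."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += rate(n) * dt
    return n

print(simulate(lambda n: 0.5 * n))                 # exponential: e^(0.5*3) ~ 4.5
print(simulate(lambda n: 0.5 * n * (1 - n / 10)))  # logistic: ~3.3, en route to K = 10
print(simulate(lambda n: 0.3 * n * n))             # hyperbolic: ~10 and accelerating

# The exact hyperbolic solution N(t) = N0 / (1 - a*N0*t) diverges at
# t = 1/(a*N0) (here t ~ 3.33), which is why hyperbolic fits imply
# accelerating, not merely exponential, growth in diversity.
```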
The exponential and logistic models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants), by a negative feedback that arises from resource limitation, or by both. The hyperbolic model implies a second-order positive feedback. The hyperbolic pattern of human population growth arises from a quadratic positive feedback, caused by the interaction of the population size and the rate of technological growth. The character of biodiversity growth in the Phanerozoic Eon can be similarly accounted for by a feedback between diversity and community-structure complexity. It has been suggested that the similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the superposition of cyclical and random dynamics on a hyperbolic trend. Climate Across the Phanerozoic, the dominant driver of long-term climatic change was the concentration of carbon dioxide in the atmosphere, though some studies have suggested a decoupling of carbon dioxide and palaeotemperature, particularly during cold intervals of the Phanerozoic. Phanerozoic carbon dioxide concentrations have been governed partially by a 26-million-year oceanic crustal cycle. Since the Devonian, large swings in carbon dioxide of 2,000 ppm or more have been uncommon over short timescales. Variations in global temperature were limited by negative feedbacks in the phosphorus cycle, wherein increased phosphorus input into the ocean would increase surficial biological productivity, which would in turn enhance iron redox cycling and thus remove phosphorus from seawater; this maintained a relatively stable rate of removal of carbon from the atmosphere and ocean via organic carbon burial. The climate also controlled the availability of phosphate through its regulation of rates of continental and seafloor weathering. Major global temperature variations of more than 7 °C during the Phanerozoic were strongly associated with mass extinctions.
Peroxide
In chemistry, peroxides are a group of compounds with the structure R−O−O−R, where the R's represent a radical (a portion of a complete molecule, not necessarily a free radical) and the O's are single oxygen atoms. The oxygen atoms are joined to each other and to the adjacent elements through single covalent bonds, denoted by dashes or lines. The O−O group in a peroxide is often called the peroxide group, though some nomenclature discrepancies exist. This linkage is recognized as a common polyatomic ion, and exists in many molecules. General structure The characteristic structure of any regular peroxide is the oxygen-oxygen covalent single bond, which connects the two main atoms together. In the event that the molecule has no chemical substituents, the peroxide group has a net charge of −2. Each oxygen atom then carries a charge of negative one: six of its assigned electrons are nonbonding lone-pair electrons, and one is its half-share of the O−O bonding pair, so each atom is assigned the equivalent of seven valence electrons, one more than neutral oxygen's six, reducing the oxygens and giving them a negative charge. This charge is affected by the addition of other elements, with the properties and structure changing depending on the added group(s). Common forms The most common peroxide is hydrogen peroxide (H2O2), colloquially known simply as "peroxide". It is marketed as solutions in water at various concentrations. Many organic peroxides are known as well. In addition to hydrogen peroxide, some other major classes of peroxides are: Peroxy acids, the peroxy derivatives of many familiar acids, examples being peroxymonosulfuric acid and peracetic acid, and their salts, one example of which is potassium peroxydisulfate. Main group peroxides, compounds with the linkage E−O−O−E (E = main group element). Metal peroxides, examples being barium peroxide (BaO2), sodium peroxide (Na2O2) and zinc peroxide (ZnO2). Organic peroxides, compounds with the linkage C−O−O−C or C−O−O−H. One example is tert-butyl hydroperoxide. Nomenclature The linkage between the oxygen atoms is known as a peroxy group (sometimes called a peroxo group, peroxyl group, or peroxy linkage). The nomenclature of the peroxy group is somewhat variable, and exists as an exception to the rules of naming polyatomic ions. This is because, when it was discovered, it was believed to be monatomic. The term was introduced by Thomas Thomson in 1804 for a compound combined with as much oxygen as possible, or the oxide with the greatest quantity of oxygen.
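The electron bookkeeping described under "General structure" follows the standard formal-charge rule, FC = (valence electrons) − (nonbonding electrons) − (bonding electrons)/2. A minimal sketch follows; the Lewis-structure counts in it (three lone pairs and one single bond per oxygen in the unsubstituted peroxide ion) are textbook values assumed for illustration, not figures taken from this article.

```python
# Standard formal-charge rule: FC = valence - nonbonding - bonding/2.
# The Lewis counts below (three lone pairs and one O-O single bond per
# oxygen in the O2^2- ion) are textbook values assumed for illustration.

def formal_charge(valence: int, nonbonding: int, bonding: int) -> int:
    return valence - nonbonding - bonding // 2

fc_per_oxygen = formal_charge(valence=6, nonbonding=6, bonding=2)
print(fc_per_oxygen)        # -> -1 for each oxygen
print(2 * fc_per_oxygen)    # -> -2, the net charge of the unsubstituted group
```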
Platypus
The platypus (Ornithorhynchus anatinus), sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia, including Tasmania. The platypus is the sole living representative or monotypic taxon of its family Ornithorhynchidae and genus Ornithorhynchus, though a number of related species appear in the fossil record. Together with the four species of echidna, it is one of the five extant species of monotremes, mammals that lay eggs instead of giving birth to live young. Like other monotremes, the platypus has a sense of electrolocation, which it uses to detect prey in cloudy water. It is one of the few species of venomous mammals, as the male platypus has a spur on the hind foot that delivers an extremely painful venom. The unusual appearance of this egg-laying, duck-billed, beaver-tailed, otter-footed mammal at first baffled European naturalists. In 1799, the first scientists to examine a preserved platypus body judged it a fake made of several animals sewn together. The unique features of the platypus make it important in the study of evolutionary biology, and a recognisable and iconic symbol of Australia. It is culturally significant to several Aboriginal peoples, who also used to hunt it for food. It has appeared as a national mascot, features on the reverse of the Australian twenty-cent coin, and is an emblem of the state of New South Wales. The platypus was hunted for its fur, but it has been a legally protected species in all states where it occurs since 1912. Its population is not under severe threat, although captive-breeding programs have had only limited success, and it is vulnerable to pollution. It is classified as a near-threatened species by the IUCN, but a November 2020 report has recommended that it be upgraded to threatened species under the federal EPBC Act, due to habitat destruction and declining numbers in all states. Taxonomy and naming Australian Aboriginal people name or have named the platypus in various ways depending on Australian indigenous languages and dialects. Among the recorded names are: boondaburra, mallingong, tambreet, watjarang (names in Yass, Murrumbidgee, and Tumut), tohunbuck (region of Goomburra, Darling Downs), dulaiwarrung or dulai warrung (Woiwurrung language, Wurundjeri, Victoria), djanbang (Bundjalung, Queensland), djumulung (Yuin language, Yuin, New South Wales), maluŋgaŋ (Ngunnawal language, Ngunnawal, Australian Capital Territory), biladurang, wamul, dyiimalung, oornie, dungidany (Wiradjuri language, Wiradjuri, Victoria and New South Wales), oonah, and others. The name chosen and approved in Palawa kani (the reconstructed Tasmanian language) is larila. When the platypus was first encountered by Europeans in 1798, a pelt and sketch were sent back to Great Britain by Captain John Hunter, the second Governor of New South Wales. British scientists' initial hunch was that the attributes were a hoax. George Shaw, who produced the first description of the animal in the Naturalist's Miscellany in 1799, stated it was impossible not to entertain doubts as to its genuine nature, and Robert Knox believed it might have been produced by some Asian taxidermist. It was thought somebody had sewn a duck's beak onto the body of a beaver-like animal. Shaw even took a pair of scissors to the dried skin to check for stitches. The common name "platypus" literally means 'flat-foot', deriving from the Greek word platýpous (πλατύπους), from platýs (πλατύς, 'broad, wide, flat') and poús (πούς, 'foot').
Shaw initially assigned the species the Linnaean name Platypus anatinus when he described it, but the genus term was quickly discovered to already be in use as the name of the wood-boring ambrosia beetle genus Platypus. It was independently described as Ornithorhynchus paradoxus by Johann Blumenbach in 1800 (from a specimen given to him by Sir Joseph Banks), and following the rules of priority of nomenclature, it was later officially recognised as Ornithorhynchus anatinus. There is no universally agreed plural form of "platypus" in the English language. Scientists generally use "platypuses" or simply "platypus". Alternatively, the term "platypi" is also used for the plural, although this is a form of pseudo-Latin; going by the word's Greek roots the plural would be "platypodes". Early British settlers called it by many names, such as "watermole", "duckbill", and "duckmole". Occasionally it is specifically called the "duck-billed platypus". The scientific name Ornithorhynchus anatinus literally means 'duck-like bird-snout', deriving its genus name from the Greek root ornith- (órnis, 'bird') and the word rhýnchos ('snout', 'beak'). Its species name is derived from the Latin anatinus ('duck-like'), from anas ('duck'). Description In David Collins's account of the new colony, 1788–1801, he describes "an amphibious animal, of the mole species", with a drawing. The body and the broad, flat tail of the platypus are covered with dense, brown, biofluorescent fur that traps a layer of insulating air to keep the animal warm. The fur is waterproof, and textured like that of a mole. The platypus's tail stores fat reserves, an adaptation also found in the Tasmanian devil. Webbing is more significant on the front feet, which when walking on land are folded up in knuckle-walking to protect the webbing. The elongated snout and lower jaw are covered in soft skin, forming the bill. The nostrils are located on the snout's dorsal surface, while the eyes and ears are just behind the snout in a groove which closes underwater. Platypuses can give a low growl when disturbed, and a range of vocalisations have been reported in captivity. Size varies considerably in different regions, with average weight from ; males have an average length of , while females are smaller at . This variation does not seem to follow any particular climatic rule and may be due to other factors such as predation and human encroachment. The platypus has an average body temperature of about 32 °C (90 °F), lower than the typical 37 °C (99 °F) of placental mammals. Research suggests this has been a gradual adaptation to harsh environmental conditions among the few marginal surviving monotreme species, rather than a general characteristic of past monotremes. In addition to laying eggs, the anatomy, ontogeny, and genetics of monotremes show traces of similarity to reptiles and birds. The platypus has a reptilian gait, with legs on the sides of the body rather than underneath. The platypus's genes are a possible evolutionary link between the mammalian XY and bird/reptile ZW sex-determination systems, as one of the platypus's five X chromosomes contains the DMRT1 gene, which birds possess on their Z chromosome. As in all true mammals, the tiny bones that conduct sound in the middle ear are fully incorporated into the skull, rather than lying in the jaw as in pre-mammalian synapsids. However, the external opening of the ear still lies at the base of the jaw.
The platypus has extra bones in the shoulder girdle, including an interclavicle not found in other mammals. As in many other aquatic and semiaquatic vertebrates, the bones show osteosclerosis, increasing their density to provide ballast. The platypus jaw is constructed differently from that of other mammals, and the jaw-opening muscle is different. Modern platypus young have three teeth in each of the maxillae (one premolar and two molars) and dentaries (three molars), which they lose before or just after leaving the breeding burrow; adults instead develop heavily keratinised food-grinding pads called ceratodontes. The first upper and third lower cheek teeth of platypus nestlings are small, each having one principal cusp, while the other teeth have two main cusps. Venom While both male and female platypuses are born with hind-ankle spurs, only the males' spurs deliver venom. It is powerful enough to kill smaller animals such as dogs, and though it is not lethal to humans, it can inflict weeks of agony. Edema rapidly develops around the wound and gradually spreads through the affected limb, and may develop into an excruciating hyperalgesia (heightened sensitivity to pain) persisting for days or even months. The venom is composed largely of defensin-like proteins (DLPs) produced by the immune system, three of which are unique to the platypus. In other animals, defensins kill pathogenic bacteria and viruses, but in platypuses they have also been co-opted into a venom used against predators. The venom is produced in the crural glands of the male, which are kidney-shaped alveolar glands connected by a thin-walled duct to a calcaneal spur on each hind limb. The female platypus, in common with echidnas, has rudimentary spur buds that do not develop (dropping off before the end of the first year) and lacks functional crural glands. Venom production rises among males during the breeding season, and it may be used to assert dominance. Similar spurs are found on many archaic mammal groups, indicating that this was an ancient general characteristic among mammals. Electrolocation Monotremes are the only mammals (apart from the Guiana dolphin) known to have a sense of electroreception, and the platypus's electroreception is the most sensitive of any monotreme. Feeding by neither sight nor smell, the platypus closes its eyes, ears, and nose when it dives. Digging in the bottom of streams with its bill, its electroreceptors detect tiny electric currents generated by the muscular contractions of its prey, enabling it to distinguish between animate and inanimate objects. Experiments have shown the platypus will even react to an "artificial shrimp" if a small electric current is passed through it. The electroreceptors are located in rostrocaudal rows in the skin of the bill, while mechanoreceptors for touch are uniformly distributed across the bill. The electrosensory area of the cerebral cortex lies within the tactile somatosensory area, and some cortical cells receive input from both electroreceptors and mechanoreceptors, suggesting that the platypus perceives electric fields much as it does touch. These receptors in the bill dominate the somatotopic map of the platypus brain, in the same way human hands dominate the Penfield homunculus map. The platypus can sense the direction of an electric source, perhaps by comparing differences in signal strength across the sheet of electroreceptors, enhanced by the characteristic side-to-side motion of the animal's head while hunting.
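A toy signal-processing sketch of the two cues discussed here, bearing from the left-right difference in electric signal strength and (as the next sentence notes) range from the electric-to-pressure time lag, is given below. It is purely illustrative: the sensor geometry is invented, and the 1500 m/s figure is the usual speed of sound in water, not a value from this article; nothing here models actual platypus neurophysiology.

```python
# Toy model only: two "receptors" on either side of the bill compare the
# strength of an electric signal to give a crude bearing, and the lag
# between the (effectively instantaneous) electric pulse and the slower
# pressure wave gives a range. Geometry and numbers are invented.

SOUND_SPEED_WATER = 1500.0  # m/s, usual value for water (outside figure)

def bearing_index(left_amplitude: float, right_amplitude: float) -> float:
    """Normalized left-right contrast in [-1, 1]: negative = source to the
    left, positive = source to the right, 0 = straight ahead."""
    return (right_amplitude - left_amplitude) / (right_amplitude + left_amplitude)

def range_estimate(lag_seconds: float) -> float:
    """Distance implied by the electric-to-pressure arrival lag, in metres."""
    return SOUND_SPEED_WATER * lag_seconds

# Prey slightly to the right, whose pressure pulse arrives 0.2 ms late:
print(bearing_index(0.8, 1.0))   # -> 0.11 (weakly to the right)
print(range_estimate(2e-4))      # -> 0.3 m
```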
It may also be able to determine the distance of moving prey from the time lag between the arrival of their electrical pulses and the slower mechanical pressure pulses. Monotreme electrolocation for hunting in murky waters may be tied to their tooth loss. The extinct Obdurodon was electroreceptive, but unlike the modern platypus it foraged pelagically. Eyes Recent studies suggest that the eyes of the platypus are more similar to those of the Pacific hagfish or Northern Hemisphere lampreys than to those of most tetrapods. The eyes also contain double cones, unlike those of most mammals. Although the platypus's eyes are small and not used under water, several features indicate that vision was important for its ancestors. The corneal surface and the adjacent surface of the lens are flat, while the posterior surface of the lens is steeply curved, similar to the eyes of other aquatic mammals such as otters and sea lions. A temporal (ear side) concentration of retinal ganglion cells, important for binocular vision, indicates a vestigial role in predation, though the actual visual acuity is insufficient for such activities. Limited acuity is matched by low cortical magnification, a small lateral geniculate nucleus, and a large optic tectum, suggesting that the visual midbrain plays a more important role than the visual cortex, as in some rodents. These features suggest that the platypus has adapted to an aquatic and nocturnal lifestyle, developing its electrosensory system at the cost of its visual system. This contrasts with the small number of electroreceptors in the short-beaked echidna, which dwells in dry environments, while the long-beaked echidna, which lives in moist environments, is intermediate between the other two monotremes. Biofluorescence In 2020, research revealed that platypus fur gives off a bluish-green biofluorescent glow under black (ultraviolet) light. Distribution, ecology, and behaviour The platypus is semiaquatic, inhabiting small streams and rivers over an extensive range from the cold highlands of Tasmania and the Australian Alps to the tropical rainforests of coastal Queensland as far north as the base of the Cape York Peninsula. Inland, its distribution is not well known. It was considered extinct on the South Australian mainland, with the last sighting recorded at Renmark in 1975. In the 1980s, John Wamsley created a platypus breeding program in Warrawong Sanctuary (see below), which subsequently closed. In 2017 there were some unconfirmed sightings downstream from the sanctuary, and in October 2020 a nesting platypus was filmed inside the recently reopened sanctuary. There is a population on Kangaroo Island, introduced in the 1920s, said to stand at 150 individuals in the Rocky River region of Flinders Chase National Park. In the 2019–20 Australian bushfire season, large portions of the island burnt, decimating wildlife. However, SA Department for Environment and Water recovery teams worked to reinstate their habitat, with a number of sightings reported by April 2020. The platypus is no longer found in the main Murray–Darling Basin, possibly due to declining water quality from land clearing and irrigation, although it is found in the Goulburn River in Victoria. Along the coastal river systems, its distribution is unpredictable: it is absent from some relatively healthy rivers, yet present in some quite degraded ones, for example the lower Maribyrnong. In captivity, platypuses have survived to 30 years of age, and wild specimens have been recaptured at 24 years old. Mortality rates for adults in the wild appear to be low.
Natural predators include snakes, water rats, goannas, hawks, owls, and eagles. Low platypus numbers in northern Australia are possibly due to predation by crocodiles. The introduction of red foxes in 1845 for sport hunting may have had some impact on its numbers on the mainland. The platypus is generally nocturnal and crepuscular, but can be active on overcast days. Its habitat bridges rivers and the riparian zone, where it finds both prey and river banks in which to dig resting and nesting burrows. It may have a range of up to , with a male's home range overlapping those of three or four females. The platypus is an excellent swimmer and spends much of its time in the water foraging for food. It has a swimming style unique among mammals, propelling itself by alternate strokes of the front feet, while the webbed hind feet are held against the body and used only for steering, along with the tail. It can maintain its relatively low body temperature of about 32 °C (90 °F) while foraging for hours in water below 5 °C (41 °F). Dives normally last around 30 seconds, with an estimated aerobic limit of 40 seconds, followed by 10 to 20 seconds at the surface between dives. The platypus rests in a short, straight burrow in the riverbank about above water level, its oval entrance-hole often hidden under a tangle of roots. It may sleep up to 14 hours per day after half a day of diving. Diet The platypus is a carnivore, feeding on annelid worms, insect larvae, freshwater shrimp, and yabbies (crayfish) that it digs out of the riverbed with its snout or catches while swimming. It carries prey to the surface in cheek-pouches before eating it. It eats about 20% of its own weight each day, which requires it to spend an average of 12 hours daily looking for food. Reproduction The species has a single breeding season, between June and October, with some local variation. Investigations have found both resident and transient platypuses, and suggest a polygynous mating system. Females are believed to become sexually mature in their second year, with breeding observed in animals over nine years old. During copulation, the male grasps the female's tail with his bill, wraps his tail around her, then grips her neck or shoulder, everts his penis through his cloaca, and inserts it into her urogenital sinus. He takes no part in nesting, living in his year-round resting burrow. After mating, the female constructs a deep, elaborate nesting burrow up to long. She drags fallen leaves and reeds to the burrow tucked underneath her curled tail, using folded wet leaves to soften the tunnel floor and to line the nest at the end with bedding. The male platypus has penile spines and an asymmetrical glans penis, with the right side smaller than the left. The female has two ovaries, but only the left one is functional. She lays one to three (usually two) small, leathery eggs (similar to those of reptiles), about in diameter and slightly rounder than bird eggs. The eggs develop in utero for about 28 days, with only about 10 days of external incubation (in contrast to a chicken egg, which spends about one day in the tract and 21 days externally). The female curls around the incubating eggs, which develop in three phases. In the first, the embryo has no functional organs and relies on the yolk sac for sustenance, until the sac is absorbed. During the second phase, the digits develop, and in the last phase, the egg tooth appears.
At first, European naturalists could hardly believe that the female platypus lays eggs, but this was finally confirmed by William Hay Caldwell in 1884. Most mammal zygotes go through holoblastic cleavage, splitting into multiple divisible daughter cells. However, monotremes like the platypus, along with reptiles and birds, undergo meroblastic cleavage, in which the ovum does not split completely. The cells at the edge of the yolk remain continuous with the egg's cytoplasm, allowing the yolk and embryo to exchange waste and nutrients with the egg through the cytoplasm. Young platypuses are called "puggles". Newly hatched platypuses are vulnerable, blind, and hairless, and are fed on the mother's milk, which provides all the requirements for growth and development. The platypus's mammary glands lack teats, with milk released through pores in the skin. The milk pools in grooves on the mother's abdomen, allowing the young to lap it up. After they hatch, the offspring are milk-fed for three to four months. During incubation and weaning, the mother initially leaves the burrow only for short periods to forage. She leaves behind her a number of thin soil plugs along the length of the burrow, possibly to protect the young from predators; pushing past these on her return squeezes water from her fur and allows the burrow to remain dry. After about five weeks, the mother begins to spend more time away from her young, and at around four months, the young emerge from the burrow. A platypus is born with teeth, but these drop out at a very early age, leaving the horny plates it uses to grind food. Evolution The platypus and other monotremes were very poorly understood, and some of the 19th-century myths that grew up around them (for example, that the monotremes were "inferior" or quasireptilian) still endure. In 1947, William King Gregory theorised that placental mammals and marsupials may have diverged earlier, with a subsequent branching dividing the monotremes and marsupials, but later research and fossil discoveries have suggested this is incorrect. In fact, modern monotremes are the survivors of an early branching of the mammal tree, and a later branching is thought to have led to the marsupial and placental groups. Molecular clock and fossil dating suggest platypuses split from echidnas around 19–48 million years ago. The oldest discovered fossil of the modern platypus dates back to about 100,000 years ago, during the Quaternary period, though a limb bone of Ornithorhynchus is known from Pliocene-aged strata. The extinct monotremes Teinolophos and Steropodon from the Cretaceous were once thought to be closely related to the modern platypus, but are now considered more basal taxa. The fossilised Steropodon was discovered in New South Wales and is composed of an opalised lower jawbone with three molar teeth (whereas the adult contemporary platypus is toothless). The molar teeth were initially thought to be tribosphenic, which would have supported a variation of Gregory's theory, but later research has suggested that, while they have three cusps, they evolved under a separate process. The fossil jaw of Teinolophos is thought to be about 110 million years old, making it the oldest mammal fossil found in Australia. Unlike the modern platypus (and echidnas), Teinolophos lacked a beak.
In 2024, Late Cretaceous (Cenomanian)-aged fossil specimens of actual early platypus relatives were recovered from the same rocks as Steropodon, including the basal Opalios and the more derived Dharragarra, the latter of which may be the oldest member of the platypus stem-lineage, as it retains the same dental formula found in Cenozoic platypus relatives. Monotrematum and Patagorhynchus, two other fossil relatives of the platypus, are known from the latest Cretaceous (Maastrichtian) and the mid-Paleocene of Argentina, indicating that some monotremes managed to colonize South America from Australia when the two continents were connected via Antarctica. These are also considered potential members of the platypus stem-lineage. The closest fossil relative of the platypus was Obdurodon, known from the late Oligocene to the Miocene of Australia. It closely resembled the modern platypus, aside from the presence of molar teeth. A fossilised tooth of the giant platypus Obdurodon tharalkooschild was dated to 5–15 million years ago. Judging by the tooth, the animal measured 1.3 metres long, making it the largest platypus on record. The loss of teeth in the modern platypus has long been enigmatic, as a distinctive lower molar tooth row was previously present in its lineage for over 95 million years. Even its closest relative, Obdurodon, which otherwise closely resembles the platypus, retained this tooth row. More recent studies indicate that this tooth loss was a geologically very recent event, occurring only around the Plio-Pleistocene (around 2.5 million years ago), when the rakali, a large semiaquatic rodent, colonized Australia from New Guinea. The platypus, which previously fed on a wide array of hard- and soft-bodied prey, was outcompeted by the rakali over hard-bodied prey such as crayfish and mussels. This competition may have selected for the loss of teeth in the platypus and their replacement by horny pads, as a way of specializing in softer-bodied prey, over which the rakali did not compete with it. Genome Because of the early divergence from the therian mammals and the low numbers of extant monotreme species, the platypus is a frequent subject of research in evolutionary biology. In 2004, researchers at the Australian National University discovered that the platypus has ten sex chromosomes, compared with two (XY) in most other mammals. These ten chromosomes form five unique pairs of XY in males and XX in females, i.e. males are XYXYXYXYXY. One of the X chromosomes of the platypus has great homology to the bird Z chromosome. The platypus genome also has both reptilian and mammalian genes associated with egg fertilisation. Though the platypus lacks the mammalian sex-determining gene SRY, a study found that the mechanism of sex determination is the AMH gene on the oldest Y chromosome. A draft version of the platypus genome sequence was published in Nature on 8 May 2008, revealing both reptilian and mammalian elements, as well as two genes found previously only in birds, amphibians, and fish. More than 80% of the platypus's genes are common to the other mammals whose genomes have been sequenced. An updated genome, the most complete on record, was published in 2021, together with the genome of the short-beaked echidna. Conservation Status and threats Except for its loss from the state of South Australia, the platypus occupies the same general distribution as it did prior to European settlement of Australia.
However, local changes and fragmentation of distribution due to human modification of its habitat are documented. Its historical abundance is unknown and its current abundance difficult to gauge, but it is assumed to have declined in numbers, although as of 1998 it was still considered common over most of its range. The species was extensively hunted for its fur until the early years of the 20th century. Although the species gained legal protections beginning in Victoria in 1890 and throughout Australia by 1912, until about 1950 it was still at risk of drowning in the nets of inland fisheries. The International Union for Conservation of Nature recategorised its status as "near threatened" in 2016. The species is protected by law, but the only state in which it is listed as endangered is South Australia, under the National Parks and Wildlife Act 1972. In November 2020 a recommendation was made to list the platypus as a vulnerable species across all states, and a vulnerable listing was made official in Victoria under the state's Flora and Fauna Guarantee Act 1988 on 10 January 2021. Habitat destruction The platypus is not considered to be in immediate danger of extinction, because conservation measures have been successful, but it could be adversely affected by habitat disruption caused by dams, irrigation, pollution, netting, and trapping. Reduction of watercourse flows and water levels through excessive droughts and extraction of water for industrial, agricultural, and domestic supplies are also considered a threat. The IUCN lists the platypus on its Red List as "Near Threatened", as assessed in 2016, when it was estimated that numbers had been reduced by about 30 percent on average since European settlement. The animal is listed as endangered in South Australia, but it is not covered at all under the federal EPBC Act. Researchers have worried for years that declines have been greater than assumed. In January 2020, researchers from the University of New South Wales presented evidence that the platypus is at risk of extinction, due to a combination of extraction of water resources, land clearing, climate change and severe drought. The study predicted that, considering current threats, the animals' abundance would decline by 47–66% and metapopulation occupancy by 22–32% over 50 years, causing "extinction of local populations across about 40% of the range". Under climate change projections to 2070, reduced habitat due to drought would lead to 51–73% reduced abundance and 36–56% reduced metapopulation occupancy within 50 years. These predictions suggested that the species would fall under the "Vulnerable" classification. The authors stressed the need for national conservation efforts, which might include conducting more surveys, tracking trends, reducing threats and improving river management to ensure healthy platypus habitat. Co-author Gilad Bino is concerned that the estimates of the 2016 baseline numbers could be wrong, and that numbers may have already been reduced by as much as half. A November 2020 report by scientists from the University of New South Wales, funded by a research grant from the Australian Conservation Foundation in collaboration with the World Wildlife Fund Australia and the Humane Society International Australia, revealed that platypus habitat in Australia had shrunk by 22 per cent in the previous 30 years, and recommended that the platypus be listed as a threatened species under the EPBC Act.
Declines in population had been greatest in NSW, in particular in the Murray–Darling basin. Disease Platypuses generally suffer from few diseases in the wild; however, as of 2008 there was concern in Tasmania about the potential impacts of a disease caused by the fungus Mucor amphibiorum. The disease (termed mucormycosis) affects only Tasmanian platypuses, and had not been observed in platypuses in mainland Australia. Affected platypuses can develop skin lesions or ulcers on various parts of their bodies, including their backs, tails, and legs. Mucormycosis can kill platypuses, with death arising from secondary infection and from impairment of the animals' ability to maintain body temperature and forage efficiently. The Biodiversity Conservation Branch at the Department of Primary Industries and Water collaborated with NRM north and University of Tasmania researchers to determine the impacts of the disease on Tasmanian platypuses, as well as the mechanism of transmission and spread of the disease. Wildlife sanctuaries Much of the world was introduced to the platypus in 1939 when National Geographic Magazine published an article on the platypus and the efforts to study and raise it in captivity. The latter is a difficult task, and only a few young have been successfully raised since, notably at Healesville Sanctuary in Victoria. The leading figure in these efforts was David Fleay, who established a platypusary (a simulated stream in a tank) at the Healesville Sanctuary, where breeding was successful in 1943. In 1972, he found a dead baby about 50 days old, which had presumably been born in captivity, at his wildlife park at Burleigh Heads on the Gold Coast, Queensland. Healesville repeated its success in 1998 and again in 2000 with a similar stream tank. Since 2008, platypuses have bred regularly at Healesville, including second-generation breeding (captive-born animals themselves breeding in captivity). Taronga Zoo in Sydney bred twins in 2003, and breeding was again successful there in 2006. Captivity As of 2019, the only platypuses in captivity outside of Australia are in the San Diego Zoo Safari Park in the U.S. state of California. Three attempts were made to bring the animals to the Bronx Zoo, in 1922, 1947, and 1958; of these, only two of the animals introduced in 1947, Penelope and Cecil, lived longer than eighteen months. Human interactions Usage Aboriginal Australians used to hunt platypuses for food (their fatty tails being particularly nutritious), while after colonisation, Europeans hunted them for fur from the late 19th century until 1912, when this was prohibited by law. In addition, European researchers captured and killed platypuses or removed their eggs, partly in order to increase scientific knowledge, but also to gain prestige and outcompete rivals from different countries. Cultural references The platypus has been a subject in the Dreamtime stories of Aboriginal Australians, some of whom believed the animal was a hybrid of a duck and a water rat. According to one story from the upper Darling River, the major animal groups, the land animals, water animals and birds, all competed for the platypus to join their respective groups, but the platypus ultimately decided not to join any of them, feeling that he did not need to be part of a group to be special, and wished to remain friends with all of those groups. Another Dreaming story from the upper Darling tells of a young duck which ventured too far, ignoring the warnings of her tribe, and was kidnapped by a large water-rat called Biggoon.
After eventually managing to escape, she returned and laid two eggs which hatched into strange furry creatures, so they were all banished and went to live in the mountains. The platypus is also used by some Aboriginal peoples as a totem, which is to them "a natural object, plant or animal that is inherited by members of a clan or family as their spiritual emblem", and the animal holds special meaning as a totem animal for the Wadi Wadi people, who live along the Murray River. Because of their cultural significance and importance in connection to country, the platypus is protected and conserved by these Indigenous peoples. The platypus has often been used as a symbol of Australia's cultural identity. In the 1940s, live platypuses were given to allies in the Second World War, in order to strengthen ties and boost morale. Platypuses have been used several times as mascots: Syd the platypus was one of the three mascots chosen for the Sydney 2000 Summer Olympics, along with an echidna and a kookaburra; Expo Oz the platypus was the mascot for World Expo 88, which was held in Brisbane in 1988; and Hexley the platypus is the mascot for the Darwin operating system, the BSD-based core of macOS and other operating systems from Apple Inc. Since the introduction of decimal currency to Australia in 1966, the embossed image of a platypus, designed and sculpted by Stuart Devlin, has appeared on the reverse (tails) side of the 20-cent coin. The platypus has frequently appeared on Australian postage stamps, most recently in the 2015 "Native Animals" series and the 2016 "Australian Animals Monotremes" series. In the American animated series Phineas and Ferb, the title characters own a pet bluish-green platypus named Perry who, unknown to them, is a secret agent. The choice was inspired by the animal's underuse in media, as well as a wish to exploit its striking appearance; additionally, show creator Dan Povenmire, who also wrote the character's theme song, said that its opening lyrics are based on the introductory sentence of the Platypus article on Wikipedia, copying the "semiaquatic egg-laying mammal" phrase word for word and appending the phrase "of action"; however, the article did not include "egg-laying mammal" in its lead sentence until 2014, several years after the song was released. As a character, Perry has been well received by both fans and critics. Coincidentally, real platypuses show a similar cyan colour when seen under ultraviolet lighting.
Paramagnetism
Paramagnetism is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. Paramagnetic materials include most chemical elements and some compounds; they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer. Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist. Due to their spin, unpaired electrons have a magnetic dipole moment and act like tiny magnets. An external magnetic field causes the electrons' spins to align parallel to the field, causing a net attraction. Paramagnetic materials include aluminium, oxygen, titanium, and iron oxide (FeO). Therefore, a simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field because thermal motion randomizes the spin orientations. (Some paramagnetic materials retain spin disorder even at absolute zero, meaning they are paramagnetic in the ground state, i.e. in the absence of thermal motion.) Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnetic materials is non-linear and much stronger, so that it is easily observed, for instance, in the attraction between a refrigerator magnet and the iron of the refrigerator itself. Relation to electron spins Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. 
However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum. If there is sufficient energy exchange between neighboring dipoles, they will interact, and may spontaneously align or anti-align and form magnetic domains, resulting in ferromagnetism (permanent magnets) or antiferromagnetism, respectively. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature, and in antiferromagnets above their Néel temperature. At these temperatures, the available thermal energy simply overcomes the interaction energy between the spins. In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of 10⁻³ to 10⁻⁵ for most paramagnets, but may be as high as 10⁻¹ for synthetic paramagnets such as ferrofluids. Delocalization In conductive materials, the electrons are delocalized, that is, they travel through the solid more or less as free electrons. Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands. In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons. When a magnetic field is applied, the conduction band splits into a spin-up and a spin-down band due to the difference in magnetic potential energy for spin-up and spin-down electrons. Since the Fermi level must be identical for both bands, this means that there will be a small surplus of the type of spin in the band that moved downwards. This effect is a weak form of paramagnetism known as Pauli paramagnetism. The effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms. Stronger forms of magnetism usually require localized rather than itinerant electrons. However, in some cases a band structure can result in which there are two delocalized sub-bands with states of opposite spins that have different energies. If one sub-band is preferentially filled over the other, one can have itinerant ferromagnetic order. This situation usually only occurs in relatively narrow (d-)bands, which are poorly delocalized. s and p electrons Generally, strong delocalization in a solid due to large overlap with neighboring wave functions means that there will be a large Fermi velocity; this means that the number of electrons in a band is less sensitive to shifts in that band's energy, implying a weak magnetism. This is why s- and p-type metals are typically either Pauli-paramagnetic or, as in the case of gold, even diamagnetic. In the latter case the diamagnetic contribution from the closed-shell inner electrons simply wins over the weak paramagnetic term of the almost free electrons. d and f electrons Stronger magnetic effects are typically only observed when d or f electrons are involved. The latter in particular are usually strongly localized. Moreover, the size of the magnetic moment on a lanthanide atom can be quite large, as it can carry up to 7 unpaired electrons in the case of gadolinium(III) (hence its use in MRI). The high magnetic moments associated with lanthanides are one reason why superstrong magnets are typically based on elements like neodymium or samarium. Molecular localization The picture above is a generalization, as it pertains to materials with an extended lattice rather than a molecular structure. Molecular structure can also lead to localization of electrons.
Although there are usually energetic reasons why a molecular structure results such that it does not exhibit partly filled orbitals (i.e. unpaired spins), some non-closed shell moieties do occur in nature. Molecular oxygen is a good example. Even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior. The unpaired spins reside in orbitals derived from oxygen p wave functions, but the overlap is limited to the one neighbor within each O2 molecule. The distances to other oxygen atoms in the lattice remain too large to lead to delocalization, and the magnetic moments remain unpaired. Theory The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. The paramagnetic response then has two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Both descriptions are given below. Curie's law For low levels of magnetization, the magnetization of paramagnets follows what is known as Curie's law, at least approximately. This law indicates that the susceptibility, $\chi$, of paramagnetic materials is inversely proportional to their temperature, i.e. that materials become more magnetic at lower temperatures. The mathematical expression is: $\mathbf{M} = \chi \mathbf{H} = \frac{C}{T}\,\mathbf{H}$ where: $\mathbf{M}$ is the resulting magnetization, measured in amperes/meter (A/m), $\chi$ is the volume magnetic susceptibility (dimensionless), $\mathbf{H}$ is the auxiliary magnetic field (A/m), $T$ is absolute temperature, measured in kelvins (K), $C$ is a material-specific Curie constant (K). Curie's law is valid under the commonly encountered conditions of low magnetization ($\mu_\mathrm{B} H \lesssim k_\mathrm{B} T$), but does not apply in the high-field/low-temperature regime where saturation of magnetization occurs ($\mu_\mathrm{B} H \gtrsim k_\mathrm{B} T$) and magnetic dipoles are all aligned with the applied field. When the dipoles are aligned, increasing the external field will not increase the total magnetization, since there can be no further alignment. For a paramagnetic ion with noninteracting magnetic moments with angular momentum J, the Curie constant is related to the individual ions' magnetic moments by $C = \frac{\mu_0\, n\, \mu_{\mathrm{eff}}^2}{3 k_\mathrm{B}}$, where n is the number of atoms per unit volume. The parameter $\mu_{\mathrm{eff}}$ is interpreted as the effective magnetic moment per paramagnetic ion. If one uses a classical treatment with molecular magnetic moments represented as discrete magnetic dipoles, μ, a Curie law expression of the same form will emerge with μ appearing in place of $\mu_{\mathrm{eff}}$. When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with d3 or high-spin d5 configurations, the effective magnetic moment takes the form $\mu_{\mathrm{eff}} = g_e \sqrt{S(S+1)}\,\mu_\mathrm{B} \approx \sqrt{N_u(N_u+2)}\,\mu_\mathrm{B}$ (taking $S = N_u/2$, with g-factor $g_e$ = 2.0023... ≈ 2), where $N_u$ is the number of unpaired electrons. In other transition metal complexes this yields a useful, if somewhat cruder, estimate. When the Curie constant is null, second-order effects that couple the ground state with the excited states can also lead to a paramagnetic susceptibility independent of the temperature, known as Van Vleck susceptibility.
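As a quick worked example of the spin-only formula (standard textbook numbers, not taken from this article): a high-spin d5 ion such as Mn²⁺ has $N_u = 5$ unpaired electrons, so $\mu_{\mathrm{eff}} = \sqrt{5 \times 7}\,\mu_\mathrm{B} = \sqrt{35}\,\mu_\mathrm{B} \approx 5.92\,\mu_\mathrm{B}$, close to the moments of about 5.9 $\mu_\mathrm{B}$ measured for many Mn²⁺ salts. Pauli paramagnetism For some alkali metals and noble metals, conduction electrons are weakly interacting and delocalized in space forming a Fermi gas. For these materials one contribution to the magnetic response comes from the interaction between the electron spins and the magnetic field known as Pauli paramagnetism.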
For a small magnetic field $\mathbf{H}$, the additional energy per electron from the interaction between an electron spin and the magnetic field is given by: $\Delta E^{\pm} = \pm \mu_0 \mu_\mathrm{B} H$ where $\mu_0$ is the vacuum permeability, $\mu_e$ is the electron magnetic moment, $\mu_\mathrm{B}$ is the Bohr magneton, $\hbar$ is the reduced Planck constant, and the g-factor cancels with the spin $S_z = \pm\hbar/2$. The $\pm$ indicates that the sign is positive (negative) when the electron spin component in the direction of $\mathbf{H}$ is parallel (antiparallel) to the magnetic field. For low temperatures with respect to the Fermi temperature $T_\mathrm{F}$ (around 10⁴ kelvins for metals), the number density of electrons $n^{\pm}$ pointing parallel (antiparallel) to the magnetic field can be written as: $n^{\pm} \approx \frac{n}{2} \mp \frac{\mu_0 \mu_\mathrm{B}}{2}\, g(E_\mathrm{F})\, H$ with $n$ the total free-electron density and $g(E_\mathrm{F})$ the electronic density of states (number of states per energy per volume) at the Fermi energy $E_\mathrm{F}$. In this approximation the magnetization is given as the magnetic moment of one electron times the difference in densities: $M = \mu_\mathrm{B}\,(n^{-} - n^{+}) = \mu_0 \mu_\mathrm{B}^2\, g(E_\mathrm{F})\, H$ which yields a positive paramagnetic susceptibility independent of temperature: $\chi_\mathrm{P} = \mu_0 \mu_\mathrm{B}^2\, g(E_\mathrm{F})$. The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with the Landau diamagnetic susceptibility, which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field, while the Landau susceptibility comes from the spatial motion of the electrons, and it is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes, as the effective mass of the charge carriers $m^{*}$ can differ from the electron mass $m_e$. The magnetic response calculated for a gas of electrons is not the full picture, as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the de Haas–van Alphen effect. Pauli paramagnetism is named after the physicist Wolfgang Pauli. Before Pauli's theory, the lack of a strong Curie paramagnetism in metals was an open problem, as the leading Drude model could not account for this contribution without the use of quantum statistics. Pauli paramagnetism and Landau diamagnetism are essentially applications of the spin and the free electron model; the first is due to the intrinsic spin of electrons, the second to their orbital motion.
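As a hedged follow-up step (the standard free-electron-gas result, not spelled out in the text above): for a three-dimensional free-electron gas the density of states at the Fermi energy is $g(E_\mathrm{F}) = 3n/(2E_\mathrm{F})$, so that $\chi_\mathrm{P} = \frac{3\, n\, \mu_0 \mu_\mathrm{B}^2}{2 E_\mathrm{F}} = \frac{3\, n\, \mu_0 \mu_\mathrm{B}^2}{2 k_\mathrm{B} T_\mathrm{F}}$. Since $T_\mathrm{F}$ is of order 10⁴ K, this is smaller than the room-temperature Curie susceptibility of the same density of free moments by a factor of roughly $T/T_\mathrm{F}$, which is why Pauli paramagnetism is so weak. Examples of paramagnets Materials that are called "paramagnets" are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. In principle any system that contains atoms, ions, or molecules with unpaired spins can be called a paramagnet, but the interactions between them need to be carefully considered. Systems with minimal interactions The narrowest definition would be: a system with unpaired spins that do not interact with each other. In this narrowest sense, the only pure paramagnet is a dilute gas of monatomic hydrogen atoms. Each atom has one non-interacting unpaired electron. A gas of lithium atoms already possesses two paired core electrons that produce a diamagnetic response of opposite sign. Strictly speaking Li is a mixed system therefore, although admittedly the diamagnetic component is weak and often neglected. In the case of heavier elements the diamagnetic contribution becomes more important and in the case of metallic gold it dominates the properties.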
The element hydrogen is virtually never called 'paramagnetic' because the monatomic gas is stable only at extremely high temperature; H atoms combine to form molecular H2 and, in so doing, the magnetic moments are lost (quenched) because the spins pair. Hydrogen is therefore diamagnetic, and the same holds true for many other elements. Although the electronic configuration of the individual atoms (and ions) of most elements contains unpaired spins, they are not necessarily paramagnetic, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons because f (especially 4f) orbitals are radially contracted and they overlap only weakly with orbitals on adjacent atoms. Consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered. Thus, condensed-phase paramagnets are only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers. There are two classes of materials for which this holds: Molecular materials with an (isolated) paramagnetic center. Good examples are coordination complexes of d- or f-metals or proteins with such centers, e.g. myoglobin. In such materials the organic part of the molecule acts as an envelope shielding the spins from their neighbors. Small molecules can be stable in radical form; oxygen O2 is a good example. Such systems are quite rare because they tend to be rather reactive. Dilute systems. Dissolving a paramagnetic species in a diamagnetic lattice at small concentrations, e.g. Nd3+ in CaCl2, will separate the neodymium ions at large enough distances that they do not interact. Such systems are of prime importance for what can be considered the most sensitive method to study paramagnetic systems: EPR. Systems with interactions As stated above, many materials that contain d- or f-elements do retain unquenched spins. Salts of such elements often show paramagnetic behavior, but at low enough temperatures the magnetic moments may order. It is not uncommon to call such materials 'paramagnets' when referring to their paramagnetic behavior above their Curie or Néel points, particularly if such temperatures are very low or have never been properly measured. Even for iron it is not uncommon to say that it becomes a paramagnet above its relatively high Curie point. In that case the Curie point is seen as a phase transition between a ferromagnet and a 'paramagnet'. The word paramagnet now merely refers to the linear response of the system to an applied field, the temperature dependence of which requires an amended version of Curie's law, known as the Curie–Weiss law: $\chi = \frac{C}{T - \theta}$ This amended law includes a term θ that describes the exchange interaction that is present, albeit overcome by thermal motion. The sign of θ depends on whether ferro- or antiferromagnetic interactions dominate, and it is seldom exactly zero, except in the dilute, isolated cases mentioned above. Obviously, the paramagnetic Curie–Weiss description above $T_\mathrm{N}$ or $T_\mathrm{C}$ is a rather different interpretation of the word 'paramagnet', as it does not imply the absence of interactions, but rather that the magnetic structure is random in the absence of an external field at these sufficiently high temperatures. Even if θ is close to zero, this does not mean that there are no interactions, just that the aligning ferro- and the anti-aligning antiferromagnetic ones cancel.
An additional complication is that the interactions are often different in different directions of the crystalline lattice (anisotropy), leading to complicated magnetic structures once ordered. Randomness of the structure also applies to the many metals that show a net paramagnetic response over a broad temperature range. They do not follow a Curie-type law as a function of temperature, however; often they are more or less temperature independent. This type of behavior is of an itinerant nature and is better called Pauli paramagnetism, but it is not unusual to see, for example, the metal aluminium called a "paramagnet", even though interactions are strong enough to give this element very good electrical conductivity. Superparamagnets Some materials show induced magnetic behavior that follows a Curie-type law but with exceptionally large values for the Curie constants. These materials are known as superparamagnets. They are characterized by a strong ferromagnetic or ferrimagnetic type of coupling into domains of a limited size that behave independently of one another. The bulk properties of such a system resemble those of a paramagnet, but on a microscopic level they are ordered. The materials do show an ordering temperature above which the behavior reverts to ordinary paramagnetism (with interaction). Ferrofluids are a good example, but the phenomenon can also occur inside solids, e.g., when dilute paramagnetic centers are introduced in a strong itinerant medium of ferromagnetic coupling, such as when Fe is substituted in TlCu2Se2 or the alloy AuFe. Such systems contain ferromagnetically coupled clusters that freeze out at lower temperatures. They are also called mictomagnets.
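As a rough, hedged estimate (not from the article) of why the Curie constants become so large: the Curie constant scales as the number density of independent moments times the square of the moment per unit, $C \propto n\,\mu^2$. If N ferromagnetically locked spins of moment μ form one rigid cluster, the density of independent units drops from $n$ to $n/N$ while the moment per unit grows to $N\mu$, giving $C \propto \frac{n}{N}\,(N\mu)^2 = N\, n\, \mu^2$, an enhancement by the factor N, which for domains of many thousands of coupled spins is very large indeed.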
Physical sciences
Magnetostatics
Physics
23773
https://en.wikipedia.org/wiki/Pascal%20%28programming%20language%29
Pascal (programming language)
Pascal is an imperative and procedural programming language, designed by Niklaus Wirth as a small, efficient language intended to encourage good programming practices using structured programming and data structuring. It is named after French mathematician, philosopher and physicist Blaise Pascal. Pascal was developed on the pattern of the ALGOL 60 language. Wirth was involved in the process to improve the language as part of the ALGOL X efforts and proposed a version named ALGOL W. This was not accepted, and the ALGOL X process bogged down. In 1968, Wirth decided to abandon the ALGOL X process and further improve ALGOL W, releasing this as Pascal in 1970. On top of ALGOL's scalars and arrays, Pascal enables defining complex datatypes and building dynamic and recursive data structures such as lists, trees and graphs. Pascal has strong typing on all objects, which means that one type of data cannot be converted to or interpreted as another without explicit conversions. Unlike C (and also unlike most other languages in the C-family), Pascal allows nested procedure definitions to any level of depth, and also allows most kinds of definitions and declarations inside subroutines (procedures and functions). A program is thus syntactically similar to a single procedure or function. This is similar to the block structure of ALGOL 60, but restricted from arbitrary block statements to just procedures and functions. Pascal became very successful in the 1970s, notably on the burgeoning minicomputer market. Compilers were also available for many microcomputers as the field emerged in the late 1970s. It was widely used as a teaching language in university-level programming courses in the 1980s, and also used in production settings for writing commercial software during the same period. It was displaced by the C programming language during the late 1980s and early 1990s as UNIX-based systems became popular, and especially with the release of C++. A derivative named Object Pascal designed for object-oriented programming was developed in 1985. This was used by Apple Computer (for the Lisa and Macintosh machines) and Borland in the late 1980s and later developed into Delphi on the Microsoft Windows platform. Extensions to the Pascal concepts led to the languages Modula-2 and Oberon, both developed by Wirth. History Earlier efforts Much of the history of computer language design during the 1960s can be traced to the ALGOL 60 language. ALGOL was developed during the 1950s with the explicit goal of being able to clearly describe algorithms. It included a number of features for structured programming that remain common in languages to this day. Shortly after its introduction, in 1962 Wirth began working on his dissertation with Helmut Weber on the Euler programming language. Euler was based on ALGOL's syntax and many concepts but was not a derivative. Its primary goal was to add dynamic lists and types, allowing it to be used in roles similar to Lisp. The language was published in 1965. By this time, a number of problems in ALGOL had been identified, notably the lack of a standardized string system. The group tasked with maintaining the language had begun the ALGOL X process to identify improvements, calling for submissions. Wirth and Tony Hoare submitted a conservative set of modifications to add strings and clean up some of the syntax. These were considered too minor to be worth using as the new standard ALGOL, so Wirth wrote a compiler for the language, which became named ALGOL W. 
The ALGOL X efforts would go on to choose a much more complex language, ALGOL 68. The complexity of this language led to considerable difficulty producing high-performance compilers, and it was not widely used in the industry. This left an opening for newer languages. Pascal Pascal was influenced by the ALGOL W efforts, with the explicit goals of teaching programming in a structured fashion and of developing system software. A generation of students used Pascal as an introductory language in undergraduate courses. Other goals included providing a reliable and efficient tool for writing large programs, and bridging the gap between scientific and commercial programming, as represented by the then-widespread languages Fortran and COBOL, with a general-purpose language. One of the early successes for the language was the introduction of UCSD Pascal, a version that ran on a custom operating system that could be ported to different platforms. A key platform was the Apple II, where it saw widespread use as Apple Pascal. This led to Pascal becoming the primary high-level language used for development on the Apple Lisa and, later, the Macintosh. Parts of the original Macintosh operating system were hand-translated into Motorola 68000 assembly language from the Pascal source code. The typesetting system TeX by Donald Knuth was written in WEB, the original literate programming system, based on DEC PDP-10 Pascal. Successful commercial applications like Adobe Photoshop were written in Macintosh Programmer's Workshop Pascal, while applications like Total Commander, Skype and Macromedia Captivate were written in Delphi (Object Pascal). Apollo Computer used Pascal as the systems programming language for its operating systems beginning in 1980. Variants of Pascal have also been used for everything from research projects to PC games and embedded systems. Newer Pascal compilers exist and are widely used. Dialects Wirth's example compiler meant to propagate the language, the Pascal-P system, used a subset of the language, designed to be the minimal subset that could compile itself. The idea was that this could allow bootstrapping the compiler, which would then be extended to full Pascal language status. This was done with several compilers, but one notable exception was UCSD Pascal, which was based on Pascal-P2. It kept the subset status of the language, based on the idea that this would run better on the then-new microprocessors with limited memory. UCSD also converted the Pascal-P2 interpreter into a "byte machine", again because it would be a better fit for byte-oriented microprocessors. UCSD Pascal formed the basis of many systems, including Apple Pascal. Borland Pascal was not based on the UCSD codebase, but arrived during the popular period of UCSD and matched many of its features. This started the line that led to Delphi and the compatible open-source compiler FPC/Lazarus. The ISO standard for Pascal, ISO 7185, was published in 1983 and was widely implemented and used on mainframes, minicomputers, and IBM PCs and compatibles, from 16-bit to 32-bit systems. The two dialects of Pascal most in use towards the end of the 20th century and up until today are the ISO 7185 standard version and the Delphi/Turbo Pascal versions (of which the two Borland versions are mostly compatible with each other). Much of the early history of Pascal is documented in the Pascal User's Group newsletters.
Object Pascal During work on the Lisa, Larry Tesler began corresponding with Wirth on the idea of adding object-oriented extensions to the language, to make Pascal a Multi-paradigm programming language. This led initially to Clascal, introduced in 1983. As the Lisa program faded and was replaced by the Macintosh, a further version was created and named Object Pascal. This was introduced on the Mac in 1985 as part of the MacApp application framework, and became Apple's main development language into the early 1990s. The Object Pascal extensions were added to Turbo Pascal with the release of version 5.5 in 1989. Over the years, Object Pascal became the basis of the Delphi system for Microsoft Windows, which is still used for developing Windows applications, and can cross-compile code to other systems. Free Pascal is an open source, cross-platform alternative with its own graphical IDE called Lazarus. Implementations Early compilers The first Pascal compiler was designed in Zürich for the CDC 6000 series mainframe computer family. Niklaus Wirth reports that a first attempt to implement it in FORTRAN 66 in 1969 was unsuccessful due to FORTRAN 66's inadequacy to express complex data structures. The second attempt was implemented in a C-like language (Scallop by Max Engeli) and then translated by hand (by R. Schild) to Pascal itself for boot-strapping. It was operational by mid-1970. Many Pascal compilers since have been similarly self-hosting, that is, the compiler is itself written in Pascal, and the compiler is usually capable of recompiling itself when new features are added to the language, or when the compiler is to be ported to a new environment. The GNU Pascal compiler is one notable exception, being written in C. The first successful port of the CDC Pascal compiler to another mainframe was completed by Welsh and Quinn at the Queen's University of Belfast (QUB) in 1972. The target was the International Computers Limited (ICL) 1900 series. This compiler, in turn, was the parent of the Pascal compiler for the Information Computer Systems (ICS) Multum minicomputer. The Multum port was developed – with a view to using Pascal as a systems programming language – by Findlay, Cupples, Cavouras and Davis, working at the Department of Computing Science in Glasgow University. It is thought that Multum Pascal, which was completed in the summer of 1973, may have been the first 16-bit implementation. A completely new compiler was completed by Welsh et al. at QUB in 1977. It offered a source-language diagnostic feature (incorporating profiling, tracing and type-aware formatted postmortem dumps) that was implemented by Findlay and Watt at Glasgow University. This implementation was ported in 1980 to the ICL 2900 series by a team based at Southampton University and Glasgow University. The Standard Pascal Model Implementation was also based on this compiler, having been adapted, by Welsh and Hay at Manchester University in 1984, to check rigorously for conformity to the BSI 6192/ISO 7185 Standard and to generate code for a portable abstract machine. The first Pascal compiler written in North America was constructed at the University of Illinois under Donald B. Gillies for the PDP-11 and generated native machine code. 
The Pascal-P system To propagate the language rapidly, a compiler porting kit was created in Zürich that included a compiler that generated so-called p-code for a virtual stack machine, i.e., code that lends itself to reasonably efficient interpretation, along with an interpreter for that code – the Pascal-P system. The P-system compilers were named Pascal-P1, Pascal-P2, Pascal-P3, and Pascal-P4. Pascal-P1 was the first version, and Pascal-P4 was the last to come from Zürich. The name Pascal-P1 was coined after the fact, to distinguish this version from the many different sources of Pascal-P that existed. The compiler was redesigned to enhance portability, and issued as Pascal-P2. This code was later enhanced to become Pascal-P3, with an intermediate code backward compatible with Pascal-P2, and Pascal-P4, which was not backward compatible. The Pascal-P4 compiler–interpreter can still be run and compiled on systems compatible with original Pascal (as can Pascal-P2). However, it only accepts a subset of the Pascal language. Pascal-P5, created outside the Zürich group, accepts the full Pascal language and includes ISO 7185 compatibility. Pascal-P6 is a follow-on to Pascal-P5 that, along with other features, aims to be a compiler for specific CPUs, including AMD64. UCSD Pascal branched off from Pascal-P2, which Kenneth Bowles used to create the interpretive UCSD p-System. It was one of three operating systems available at the launch of the original IBM Personal Computer. UCSD Pascal used an intermediate code based on byte values, and thus was one of the earliest bytecode compilers; Pascal-P1 through Pascal-P4 were not, being based instead on the CDC 6600's 60-bit word length. Apple Pascal was released in 1979 for the Apple II and Apple III computer systems. It was an implementation of, or largely based on, UCSD Pascal. A compiler based on the Pascal-P4 compiler, which created native binary object files, was released for the IBM System/370 mainframe computer by the Australian Atomic Energy Commission; it was named the AAEC Pascal 8000 Compiler after the abbreviation of the name of the commission. Object Pascal and Turbo Pascal Apple Computer created its own Lisa Pascal for the Lisa Workshop in 1982, and ported the compiler to the Apple Macintosh and MPW in 1985. In 1985 Larry Tesler, in consultation with Niklaus Wirth, defined Object Pascal, and these extensions were incorporated in both the Lisa Pascal and Mac Pascal compilers. In the 1980s, Anders Hejlsberg wrote the Blue Label Pascal compiler for the Nascom-2. A reimplementation of this compiler for the IBM PC was marketed under the names Compas Pascal and PolyPascal before it was acquired by Borland and renamed Turbo Pascal. Turbo Pascal became hugely popular, thanks to an aggressive pricing strategy, having one of the first full-screen IDEs, and very fast turnaround time (just seconds to compile, link, and run). It was written and highly optimized entirely in assembly language, making it smaller and faster than much of the competition. In 1986, Hejlsberg ported Turbo Pascal to the Macintosh and incorporated Apple's Object Pascal extensions into Turbo Pascal. These extensions were then added back into the PC version of Turbo Pascal for version 5.5. Around the same time, Microsoft also implemented an Object Pascal compiler. Turbo Pascal 5.5 had a large influence on the Pascal community, which began concentrating mainly on the IBM PC in the late 1980s. Many PC hobbyists in search of a structured replacement for BASIC used this product.
It also began to be adopted by professional developers. Around the same time a number of concepts were imported from C to let Pascal programmers use the C-based application programming interface (API) of Microsoft Windows directly. These extensions included null-terminated strings, pointer arithmetic, function pointers, an address-of operator, and unsafe typecasts. Turbo Pascal and other derivatives with unit or module structures are modular programming languages. However, Turbo Pascal does not provide a nested module concept or qualified import and export of specific symbols. Other variants Super Pascal adds non-numeric labels, a return statement, and expressions as names of types. TMT Pascal was the first Borland-compatible compiler for 32-bit MS-DOS compatible protected mode, OS/2, and Win32. It extends the language with function and operator overloading. The universities of Wisconsin–Madison, Zürich, Karlsruhe, and Wuppertal developed the Pascal-SC and Pascal-XSC (Extensions for Scientific Computation) compilers, aimed at programming numerical computations. Development for Pascal-SC started in 1978 supporting ISO 7185 Pascal level 0, but level 2 support was added at a later stage. Pascal-SC originally targeted the Z80 processor, but was later rewritten for DOS (x86) and 68000. Pascal-XSC has at various times been ported to Unix (Linux, SunOS, HP-UX, AIX) and Microsoft/IBM (DOS with EMX, OS/2, Windows) operating systems. It operates by generating intermediate C source code which is then compiled to a native executable. Some of the Pascal-SC language extensions have been adopted by GNU Pascal. Pascal Sol was designed around 1983 by a French team to implement a Unix-like system named Sol. It was standard Pascal level-1 (with parameterized array bounds), but the definition allowed alternative keywords and predefined identifiers in French, and the language included a few extensions to ease system programming (e.g. an equivalent to lseek). The Sol team later moved on to the ChorusOS project to design a distributed operating system. IP Pascal is an implementation of the Pascal programming language that began on Micropolis DOS, but was moved rapidly to CP/M-80 running on the Z80. It was moved to the 80386 machine types in 1994, and exists today as Windows XP and Linux implementations. In 2008, the system was brought up to a new level and the resulting language was termed "Pascaline" (after Pascal's calculator). It includes objects, namespace controls, dynamic arrays, and many other extensions, and generally features the same functionality and type protection as C#. It is the only such implementation that is also compatible with the original Pascal implementation, which is standardized as ISO 7185. Language constructs Pascal, in its original form, is a purely procedural language and includes the traditional array of ALGOL-like control structures with reserved words such as if, then, else, while, for, and case, applying to a single statement or a begin-end statement block. Pascal also has data structuring constructs not included in the original ALGOL 60 types, like records, variants, pointers, enumerations, sets, and procedure pointers. Such constructs were in part inherited from or inspired by Simula 67, ALGOL 68, Niklaus Wirth's own ALGOL W, and suggestions by C. A. R. Hoare. Pascal programs start with the program keyword with a list of external file descriptors as parameters (not required in Turbo Pascal etc.); then follows the main block bracketed by the begin and end keywords.
Semicolons separate statements, and the full stop (i.e., a period) ends the whole program (or unit). Letter case is ignored in Pascal source. Here is an example of the source code in use for a very simple "Hello, World!" program: program HelloWorld(output); begin WriteLn('Hello, World!') {No ";" is required after the last statement of a block; adding one adds a "null statement" to the program, which is ignored by the compiler.} end. Data types A type declaration in Pascal is used to define the range of values that a variable of that type is capable of storing. It also defines the set of operations that may be performed on variables of that type. The predefined types are integer, real, Boolean, and char. The range of values allowed for the basic types (except Boolean) is implementation defined. Functions are provided for some data conversions. For conversion of real to integer, the following functions are available: round (which rounds to integer using banker's rounding) and trunc (which rounds towards zero). The programmer has the freedom to define other commonly used data types (e.g. byte, string, etc.) in terms of the predefined types using Pascal's type declaration facility, for example type byte = 0..255; signed_byte = -128..127; string = packed array[1..255] of char; Often-used types like byte and string are already defined in many implementations. Normally the system will use a word to store the data. For instance, the type byte may be stored in a machine integer (32 bits, perhaps) rather than an 8-bit value. Pascal does not contain language elements that allow the basic storage types to be defined more granularly. This capability was included in a number of Pascal extensions and follow-on languages, while others, like Modula-2, expanded the built-in set to cover most machine data types like 16-bit integers. The packed keyword tells the compiler to use the most efficient method of storage for the structured data types: sets, arrays and records, rather than using one word for each element. Packing may slow access on machines that do not offer easy access to parts of a word. Subrange types Subranges of any ordinal data type (any simple type except real) can also be made: var x : 1..10; y : 'a'..'z'; Set types In contrast with other programming languages of its time, Pascal supports a set type: var Set1 : set of 1..10; Set2 : set of 'a'..'z'; Sets are a fundamental concept in modern mathematics, and they may be used in many algorithms. Such a feature is useful and may be faster than an equivalent construct in a language that does not support sets. For example, for many Pascal compilers: if i in [5..10] then ... executes faster than: if (i > 4) and (i < 11) then ... Sets of non-contiguous values can be particularly useful, in terms of both performance and readability: if i in [0..3, 7, 9, 12..15] then ... For these examples, which involve sets over small domains, the improved performance is usually achieved by the compiler representing set variables as bit vectors. The set operators can then be implemented efficiently as bitwise machine code operations. Record types An example of a Pascal record type: type car = record length: integer; width: integer end; An example of a variant record type: type Shape = (Circle, Square, Triangle); Dimensions = record case Figure: Shape of Circle: (Diameter: real); Square: (Width: real); Triangle: (Side: real; Angle1, Angle2: 0..360) end; Variant records allow several fields of the record to overlap each other to save space; a brief usage sketch follows.
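A minimal sketch of how the Dimensions variant record above might be used (a hypothetical demo program; the tag field selects which variant's fields are meaningful):

program UseShape(output);
type
  Shape = (Circle, Square, Triangle);
  Dimensions = record
    case Figure : Shape of
      Circle : (Diameter : real);
      Square : (Width : real);
      Triangle : (Side : real; Angle1, Angle2 : 0..360)
  end;
var
  d : Dimensions;
begin
  d.Figure := Circle;  { select the Circle variant via the tag field }
  d.Diameter := 2.5;   { only the Circle field is meaningful now }
  if d.Figure = Circle then
    WriteLn('Diameter: ', d.Diameter:4:1)
end.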
Type declarations Types can be defined from other types using type declarations: type x = integer; y = x; ... Further, complex types can be constructed from simple types: type a = array[1..10] of integer; b = record x : integer; y : char {extra semicolon not strictly required} end; c = file of a; File type type a = file of integer; b = record x : integer; y : char end; c = file of b; As shown in the example above, Pascal files are sequences of components. Every file has a buffer variable, which is denoted by f^. The procedures get (for reading) and put (for writing) move the buffer variable to the next element. Read is introduced such that read(f, x) is the same as x := f^; get(f);. Write is introduced such that write(f, x) is the same as f^ := x; put(f);. The type text is predefined as file of char. While the buffer variable could be used for inspecting the next character to be used (checking for a digit before reading an integer, say), this led to serious problems with interactive programs in early implementations, but was solved later with the "lazy I/O" concept, which waits until the file buffer variable is actually accessed before performing file operations. Pointer types Pascal supports the use of pointers: type pNode = ^Node; Node = record a : integer; b : char; c : pNode end; var NodePtr : pNode; IntPtr : ^integer; Here the variable NodePtr is a pointer to the data type Node, a record. A pointer type can reference a type that has not yet been declared; this forward declaration is an exception to the rule that things must be declared before they are used. To create a new record and assign the value 10 and character A to the fields a and b in the record, and to initialise the pointer c to the null pointer ("NIL" in Pascal), the statements would be: new(NodePtr); ... NodePtr^.a := 10; NodePtr^.b := 'A'; NodePtr^.c := nil; ... This could also be done using the with statement, as follows: new(NodePtr); ... with NodePtr^ do begin a := 10; b := 'A'; c := nil end; ... Inside the scope of the with statement, a and b refer to the subfields of the record pointer NodePtr and not to the record Node or the pointer type pNode. Linked lists, stacks and queues can be created by including a pointer type field (c) in the record; a short sketch follows below. Unlike many languages that feature pointers, Pascal only allows pointers to reference dynamically created variables that are anonymous, and does not allow them to reference standard static or local variables. Pointers also must have an associated type, and a pointer to one type is not compatible with a pointer to another type (e.g. a pointer to a char is not compatible with a pointer to an integer). This helps eliminate the type security issues inherent with other pointer implementations, particularly those used for PL/I or C. It also removes some risks caused by dangling pointers, but the ability to dynamically deallocate referenced space by using the dispose function (which has the same effect as the free library function found in C) means that the risk of dangling pointers has not been eliminated as it has in languages such as Java and C#, which provide automatic garbage collection (but which do not eliminate the related problem of memory leaks). Some of these restrictions can be lifted in newer dialects.
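Here is the promised sketch of a singly linked list (a hypothetical demo program using only standard Pascal new and dispose):

program ListDemo(output);
type
  pNode = ^Node;
  Node = record
    a : integer;
    c : pNode
  end;
var
  head, p : pNode;
  i : integer;
begin
  head := nil;
  for i := 1 to 3 do  { push 1, 2, 3 onto the front; the list ends up 3, 2, 1 }
  begin
    new(p);
    p^.a := i;
    p^.c := head;
    head := p
  end;
  while head <> nil do  { walk the list, printing and disposing each node }
  begin
    WriteLn(head^.a);
    p := head;
    head := head^.c;
    dispose(p)
  end
end.

Control structures Pascal is a structured programming language, meaning that the flow of control is structured into standard statements, usually without 'goto' commands.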
while a <> b do WriteLn('Waiting'); if a > b then WriteLn('Condition met') {no semicolon allowed before else} else WriteLn('Condition not met'); for i := 1 to 10 do {no semicolon here as it would detach the next statement} WriteLn('Iteration: ', i); repeat a := a + 1 until a = 10; case i of 0 : Write('zero'); 1 : Write('one'); 2 : Write('two'); 3,4,5,6,7,8,9,10: Write('?') end; Procedures and functions Pascal structures programs into procedures and functions. Generally, a procedure is used for its side effects, whereas a function is used for its return value. program Printing(output); var i : integer; procedure PrintAnInteger(j : integer); begin ... end; function triple(const x: integer): integer; begin triple := x * 3 end; begin { main program } ... PrintAnInteger(i); PrintAnInteger(triple(i)) end. Procedures and functions can be nested to any depth, and the 'program' construct is the logical outermost block. By default, parameters are passed by value. If 'var' precedes a parameter's name, it is passed by reference. Each procedure or function can have its own declarations of goto labels, constants, types, variables, and other procedures and functions, which must all be in that order. This ordering requirement was originally intended to allow efficient single-pass compilation. However, in some dialects (such as Delphi) the strict ordering requirement of declaration sections has been relaxed. Semicolons as statement separators Pascal adopted many language syntax features from the ALGOL language, including the use of a semicolon as a statement separator. This is in contrast to other languages, such as PL/I and C, which use the semicolon as a statement terminator. No semicolon is needed before the end keyword of a record type declaration, a block, or a case statement; before the until keyword of a repeat statement; and before the else keyword of an if statement. The presence of an extra semicolon was not permitted in early versions of Pascal. However, the addition of ALGOL-like empty statements in the 1973 Revised Report and later changes to the language in ISO 7185:1983 now allow for optional semicolons in most of these cases. A semicolon is still not permitted immediately before the else keyword in an if statement, because the else follows a single statement, not a statement sequence. In the case of nested ifs, a semicolon cannot be used to avoid the dangling else problem (where the inner if does not have an else, but the outer if does) by putatively terminating the nested if with a semicolon – this instead terminates both if clauses. Instead, an explicit begin...end block must be used. Resources Compilers and interpreters Several Pascal compilers and interpreters are available for general use: Delphi is Embarcadero's (formerly Borland/CodeGear) flagship rapid application development (RAD) product. It uses the Object Pascal language (termed 'Delphi' by Borland), descended from Pascal, to create applications for Windows, macOS, iOS, and Android. The .NET support that existed from D8 through D2005, D2006, and D2007 has been terminated, and replaced by a new language (Prism, which is rebranded Oxygene, see below) that is not fully backward compatible. In recent years Unicode support and generics were added (D2009, D2010, Delphi XE). Free Pascal is a cross-platform compiler written in Object Pascal (and is self-hosting). It is aimed at providing a convenient and powerful compiler, both able to compile legacy applications and to be the means to develop new ones. 
It is distributed under the GNU General Public License (GNU GPL), while the packages and runtime library come under a modified GNU Lesser General Public License (GNU LGPL). In addition to compatibility modes for Turbo Pascal, Delphi, and Mac Pascal, it has its own procedural and object-oriented syntax modes with support for extended features such as operator overloading. It supports many platforms and operating systems. Current versions also feature an ISO mode. Turbo51 is a free Pascal compiler for the Intel 8051 family of microcontrollers, with Turbo Pascal 7 syntax. Oxygene (formerly named Chrome) is an Object Pascal compiler for the .NET and Mono platforms. It was created and is sold by RemObjects Software, and was sold for a while by Embarcadero as the backend compiler of Prism. Kylix was a descendant of Delphi, with support for the Linux operating system and an improved object library. It is no longer supported. The compiler and IDE are now available for non-commercial use. GNU Pascal Compiler (GPC) is the Pascal compiler of the GNU Compiler Collection (GCC). The compiler is written in C, the runtime library mostly in Pascal. Distributed under the GNU General Public License, it runs on many platforms and operating systems. It supports the ANSI/ISO standard languages and has partial Turbo Pascal dialect support. One of the more notable omissions is the absence of a fully Turbo Pascal-compatible (short)string type. Support for Borland Delphi and other language variants is quite limited. There is some support for Mac Pascal, however. Virtual Pascal was created by Vitaly Miryanov in 1995 as a native OS/2 compiler compatible with Borland Pascal syntax. It was then developed commercially by fPrint, which added Win32 support, and in 2000 it became freeware. Today it can compile for Win32, OS/2, and Linux, and is mostly compatible with Borland Pascal and Delphi. Development was canceled on April 4, 2005. Pascal-P4 compiler is the basis for many subsequent Pascal-implemented-in-Pascal compilers. It implements a subset of full Pascal. Pascal-P5 compiler is an ISO 7185 (full Pascal) adaptation of Pascal-P4. Pascal-P6 compiler is an extended-Pascal adaptation of Pascal-P5, following the Pascaline language specification. Smart Mobile Studio is a Pascal to HTML5/JavaScript compiler. Turbo Pascal was the dominant Pascal compiler for PCs during the 1980s and early 1990s, popular both because of its powerful extensions and extremely short compilation times. Turbo Pascal was compactly written and could compile, run, and debug all from memory without accessing disk. Slow floppy disk drives were common for programmers at the time, further magnifying Turbo Pascal's speed advantage. Currently, older versions of Turbo Pascal (up to 5.5) are available for free download from Borland's site. IP Pascal implements the language "Pascaline" (named after Pascal's calculator), which is a highly extended Pascal compatible with original Pascal according to ISO 7185. It features modules with namespace control, including parallel tasking modules with semaphores, objects, dynamic arrays of any dimensions that are allocated at runtime, overloads, overrides, and many other extensions. IP Pascal has a built-in portability library that is custom-tailored to the Pascal language. For example, a standard text output application from the original 1970s Pascal can be recompiled to work in a window and even have graphical constructs added. Pascal-XT was created by Siemens for their mainframe operating systems BS2000 and SINIX.
PocketStudio is a Pascal subset compiler and RAD tool for Palm OS and MC68xxx processors with some of its own extensions to assist interfacing with the Palm OS API. It resembles Delphi and Lazarus with a visual form designer, an object inspector and a source code editor. MIDletPascal – A Pascal compiler and IDE that generates small and fast Java bytecode specifically designed to create software for mobiles. Vector Pascal is a language for SIMD instruction sets such as MMX and AMD 3DNow!, supporting all Intel and AMD processors, and Sony's PlayStation 2 Emotion Engine. Morfik Pascal allows the development of Web applications entirely written in Object Pascal (both server and browser side). WDSibyl – Visual Development Environment and Pascal compiler for Win32 and OS/2. PP Compiler, a compiler for Palm OS that runs directly on the handheld computer. CDC 6000 Pascal compiler is the source code for the first (CDC 6000) Pascal compiler. Pascal-S. AmigaPascal is a free Pascal compiler for Amiga systems. VSI Pascal for OpenVMS (formerly HP Pascal for OpenVMS, Compaq Pascal, DEC Pascal, VAX Pascal and originally VAX-11 Pascal) is a Pascal compiler that runs on OpenVMS systems. It was also supported under Tru64. VSI Pascal for OpenVMS is compatible with ISO/IEC 7185:1990 Pascal as well as some of ISO/IEC 10206:1990 Extended Pascal, and also includes its own extensions. The compiler frontend is implemented in BLISS. Stony Brook Pascal+ was a 16-bit (later 32-bit) optimizing compiler for DOS and OS/2, marketed as a direct replacement for Turbo Pascal, but producing code that executed at least twice as fast. IDEs Dev-Pascal is a Pascal IDE that was designed in Borland Delphi and which supports Free Pascal and GNU Pascal as backends. Lazarus is a free Delphi-like visual cross-platform IDE for rapid application development (RAD). Based on Free Pascal, Lazarus is available for numerous platforms including Linux, FreeBSD, macOS and Microsoft Windows. Fire (macOS) and Water (Windows) are IDEs for Oxygene and the Elements compiler. Libraries WOL Library for creating GUI applications with the Free Pascal Compiler. Standards ISO/IEC 7185:1990 Pascal In 1983, the language was standardized in the international standard IEC/ISO 7185 and several local country-specific standards, including the American ANSI/IEEE 770X3.97-1983, and ISO 7185:1983. These two standards differed only in that the ISO standard included a "level 1" extension for conformant arrays (an array where the boundaries of the array are not known until run time), where ANSI did not allow for this extension to the original (Wirth version) language. In 1989, ISO 7185 was revised (ISO 7185:1990) to correct various errors and ambiguities found in the original document. ISO 7185 was stated to be a clarification of Wirth's 1974 language as detailed by the User Manual and Report [Jensen and Wirth], but was also notable for adding "Conformant Array Parameters" as level 1 of the standard, level 0 being Pascal without conformant arrays. This addition was made at the request of C. A. R. Hoare, and with the approval of Niklaus Wirth. The precipitating cause was that Hoare wanted to create a Pascal version of the (NAG) Numerical Algorithms Library, which had originally been written in FORTRAN, and found that it was not possible to do so without an extension that would allow array parameters of varying size. Similar considerations motivated the inclusion in ISO 7185 of the facility to specify the parameter types of procedural and functional parameters.
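A hedged sketch of what a level-1 conformant-array parameter looks like (ISO 7185 level 1 syntax; the identifier names are illustrative):

function Sum(var a : array [lo..hi : integer] of real) : real;
var
  i : integer;
  s : real;
begin
  { lo and hi are bound identifiers supplied by the actual parameter,
    so one function body handles real arrays of any index range }
  s := 0;
  for i := lo to hi do
    s := s + a[i];
  Sum := s
end;

A caller can pass an array[1..10] of real or an array[1..1000] of real to the same function, which is precisely the flexibility the NAG library port required.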
Niklaus Wirth himself referred to the 1974 language as "the Standard", for example, to differentiate it from the machine-specific features of the CDC 6000 compiler. This language was documented in The Pascal Report, the second part of the "Pascal users manual and report". On the large machines (mainframes and minicomputers) Pascal originated on, the standards were generally followed. On the IBM PC, they were not. On IBM PCs, the Borland standards Turbo Pascal and Delphi have the greatest number of users. Thus, it is typically important to understand whether a particular implementation corresponds to the original Pascal language, or a Borland dialect of it. The IBM PC versions of the language began to differ with the advent of UCSD Pascal, an interpreted implementation that featured several extensions to the language, along with several omissions and changes. Many UCSD language features survive today, including in Borland's dialect. ISO/IEC 10206:1990 Extended Pascal In 1990, an extended Pascal standard was created as ISO/IEC 10206, which is identical in technical content to IEEE/ANSI 770X3.160-1989. As of 2019, support for Extended Pascal in the Free Pascal Compiler was planned. Variations Niklaus Wirth's Zürich version of Pascal was issued outside ETH in two basic forms: the CDC 6000 compiler source, and a porting kit called the Pascal-P system. The Pascal-P compiler left out several features of the full language that were not required to bootstrap the compiler. For example, procedures and functions used as parameters, undiscriminated variant records, packing, dispose, interprocedural gotos and other features of the full compiler were omitted. UCSD Pascal, under Professor Kenneth Bowles, was based on the Pascal-P2 kit, and consequently shared several of the Pascal-P language restrictions. UCSD Pascal was later adopted as Apple Pascal, and continued through several versions there. Although UCSD Pascal actually expanded the subset Pascal in the Pascal-P kit by adding back standard Pascal constructs, it was still not a complete standard installation of Pascal. In the early 1990s, Alan Burns and Geoff Davies developed Pascal-FC, an extension to PL/0 (from Niklaus Wirth's book Algorithms + Data Structures = Programs). Several constructs were added to use Pascal-FC as a teaching tool for concurrent programming (such as semaphores, monitors, channels, remote invocation and resources). To be able to demonstrate concurrency, the compiler output (a kind of P-code) could then be executed on a virtual machine. This virtual machine not only simulated a normal – fair – environment, but could also simulate extreme conditions (unfair mode). Borland-like Pascal compilers Borland's Turbo Pascal, written by Anders Hejlsberg, was written in assembly language independent of UCSD and the Zürich compilers. However, it adopted much of the same subset and extensions as the UCSD compiler. This is probably because the UCSD system was the most common Pascal system suitable for developing applications on the resource-limited microprocessor systems available at that time. The shrink-wrapped Turbo Pascal version 3 and later incarnations, including Borland's Object Pascal and Delphi and non-Borland near-compatibles, became popular with programmers, including shareware authors, and so the SWAG library of Pascal code features a large amount of code written with such versions as Delphi in mind.
Software products (compilers and IDE/rapid application development (RAD) tools) in this category: Turbo Pascal – "TURBO.EXE" up to version 7, and Turbo Pascal for Windows ("TPW") and Turbo Pascal for Macintosh. Pure Pascal and HiSpeed Pascal – two Pascal language environments for the Atari ST range of computers. Borland Pascal 7 – A professional version of the Turbo Pascal line which targeted both DOS and Windows. Object Pascal – an extension of the Pascal language that was developed at Apple Computer by a team led by Larry Tesler in consultation with Niklaus Wirth, the inventor of Pascal; its features were added to Borland's Turbo Pascal for Macintosh and, in 1989, to Turbo Pascal 5.5 for DOS. Delphi – Object Pascal is essentially its underlying language. Free Pascal compiler (FPC) – Free Pascal adopted the standard dialect of Borland Pascal programmers, Borland Turbo Pascal and, later, Delphi. PascalABC.NET – a new generation Pascal programming language including compiler and IDE. Borland Kylix is a compiler and IDE formerly sold by Borland, but later discontinued. It is a Linux version of the Borland Delphi software development environment and C++Builder. Lazarus – similar to Kylix in function, is a free cross-platform visual IDE for RAD using the Free Pascal compiler, which supports dialects of Object Pascal to varying degrees. Virtual Pascal – VP2/1 is a fully Borland Pascal– and Borland Delphi–compatible 32-bit Pascal compiler for OS/2 and Windows 32 (with a Linux version "on the way"). Sybil is an open source Delphi-like IDE and compiler; implementations include: WDSibyl for Microsoft Windows and OS/2, a commercial Borland Pascal compatible environment released by a company named Speedsoft that was later developed into a Delphi-like rapid application development (RAD) environment named Sybil and then open sourced under the GPL when that company closed down; Open Sybil, which is an ongoing project, an open source tool for OS/2 and eCS that was originally based on Speedsoft's WDSibyl Sibyl Portable Component Classes (SPCC) and Sibyl Visual Development Tool (SVDE) sources, but now its core is IBM System Object Model (SOM), WPS and OpenDoc. List of related standards ISO 8651-2:1988 Information processing systems – Computer graphics – Graphical Kernel System (GKS) language bindings – Part 2: Pascal. Reception Pascal generated a wide variety of responses in the computing community, both critical and complimentary. Early criticism Wirth's initial definition of the language was widely criticized. In particular, Nico Habermann commented in his "Critical Comments on the Programming Language Pascal" (1973) that many of its constructs were poorly defined, particularly regarding data types, ranges, structures, and goto. Later, Brian Kernighan, who popularized the C language, outlined his criticisms of Pascal in 1981 in his article "Why Pascal is Not My Favorite Programming Language". The most serious problem Kernighan described was that array sizes and string lengths were part of the type, so it was not possible to write a function that would accept variable-length arrays or even strings as parameters. This made it unfeasible to write, for example, a sorting library. Kernighan also criticized the unpredictable order of evaluation of Boolean expressions, poor library support, and lack of static variables, and raised a number of smaller issues. Also, he stated that the language did not provide any simple constructs to "escape" (knowingly and forcibly ignore) restrictions and limitations; the sketch below makes the array-size point concrete.
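A small illustrative program (hypothetical type and procedure names; original, level-0 Pascal semantics) showing Kernighan's array-size complaint:

program SizeDemo(output);
type
  Vec10 = array[1..10] of real;
  Vec20 = array[1..20] of real;
var
  a : Vec10;
  b : Vec20;
  i : integer;

{ PrintVec accepts only Vec10; in original Pascal a second, nearly
  identical procedure would be needed to handle Vec20 }
procedure PrintVec(var v : Vec10);
var
  j : integer;
begin
  for j := 1 to 10 do
    Write(v[j]:6:2);
  WriteLn
end;

begin
  for i := 1 to 10 do
    a[i] := i;
  PrintVec(a)
  { PrintVec(b) would be rejected: Vec20 is a distinct type }
end.

The level-1 conformant-array extension sketched earlier is exactly the escape hatch the standardizers later added for this problem.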
More general complaints from other sources noted that the scope of declarations was not clearly defined in the original language definition, which sometimes had serious consequences when using forward declarations to define pointer types, or when record declarations led to mutual recursion, or when an identifier may or may not have been used in an enumeration list. Another difficulty was that, like ALGOL 60, the language did not allow procedures or functions passed as parameters to predefine the expected type of their parameters. Rising popularity in the 1970s and 1980s In the two decades after 1975, Pascal gained increasing attention and became a major programming language for important platforms (including the Apple II, Apple III, Apple Lisa, Commodore systems, Z-80-based machines and the IBM PC) due to the availability of UCSD Pascal and Turbo Pascal. Despite the initial criticisms, Pascal continued to evolve, and most of Kernighan's points do not apply to versions of the language which were enhanced to be suitable for commercial product development, such as Borland's Turbo Pascal. As Kernighan predicted in his article, most of the extensions to fix these issues were incompatible from compiler to compiler. Since the early 1990s, however, most of the varieties seem to have condensed into two categories: ISO and Borland-like. Extended Pascal addresses many of these early criticisms. It supports variable-length strings, variable initialization, separate compilation, short-circuit Boolean operators, and default (otherwise) clauses for case statements. Some of the problems arising from the differences in the implementations of Pascal were later partly solved by the advent of Free Pascal, which supports several dialects with mode directives.
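As one small example of the Extended Pascal additions listed above, the otherwise clause gives case statements a default arm. This is a sketch, with invented names; Borland dialects spell the same idea else, and Free Pascal accepts otherwise as well.

    program classify(input, output);
    { Sketch of the ISO 10206 "otherwise" default clause in a case
      statement; program and identifier names are illustrative only. }
    var
      ch: char;
    begin
      read(ch);
      case ch of
        '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
          writeln('digit');
        otherwise
          writeln('not a digit')
      end
    end.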
Technology
"Historical" languages
null
23776
https://en.wikipedia.org/wiki/Paint
Paint
Paint is a material or mixture that, when applied to a solid material and allowed to dry, adds a film-like layer. As art, this is used to create an image or images known as a painting. Paint can be made in many colors and types. Most paints are either oil-based or water-based, and each has distinct characteristics. Primitive forms of paint were used tens of thousands of years ago in cave paintings. Clean-up solvents are also different for water-based paint than for oil-based paint. Water-based paints and oil-based paints will cure differently based on the outside ambient temperature of the object being painted (such as a house). Usually, the object being painted must be above a minimum application temperature, although some manufacturers of external paints/primers claim their products can be applied even at lower temperatures. History Paint was used in some of the earliest known human artworks. Some cave paintings drawn with red or yellow ochre, hematite, manganese oxide, and charcoal may have been made by early Homo sapiens as long as 40,000 years ago. Paint may be even older. In 2003 and 2004, South African archeologists reported finds in Blombos Cave of a 100,000-year-old human-made ochre-based mixture that could have been used like paint. Further excavation in the same cave resulted in the 2011 report of a complete toolkit for grinding pigments and making a primitive paint-like substance. Interior walls at the 5,000-year-old Ness of Brodgar have been found to incorporate individual stones painted in yellows, reds, and oranges, using ochre pigment made of hematite mixed with animal fat, milk or eggs. Ancient colored walls at Dendera, Egypt, which were exposed for years to the elements, still possess their brilliant color, as vivid as when they were painted about 2,000 years ago. The Egyptians mixed their colors with a gummy substance and applied them separately from each other without any blending or mixture. They appear to have used six colors: white, black, blue, red, yellow, and green. They first covered the area entirely with white, then traced the design in black, leaving out the lights of the ground color. They used minium for red, generally of a dark tinge. The oldest known oil paintings are Buddhist murals created circa 650 AD. The works are located in cave-like rooms carved from the cliffs of Afghanistan's Bamiyan Valley, "using walnut and poppy seed oils." Pliny mentions some painted ceilings in his day in the town of Ardea, which had been made before the foundation of Rome. After the lapse of so many centuries, he expressed great surprise and admiration at their freshness. In the 13th century, oil was used to detail tempera paintings. In the 14th century, Cennino Cennini described a painting technique utilizing tempera painting covered by light layers of oil. The slow-drying properties of organic oils were commonly known to early European painters. However, the difficulty in acquiring and working the materials meant that they were rarely used (and indeed, the slow drying was seen as a disadvantage). Tempera paint was made with egg yolk, and therefore the substance would harden and adhere to the surface to which it was applied. The pigment was made from plants, sand, and different soils. Most paints use either oil or water as a base (the diluent, solvent, or vehicle for the pigment). The Flemish-trained or influenced Antonello da Messina, whom Vasari wrongly credited with the introduction of oil paint to Italy, does seem to have improved the formula by adding litharge, or lead(II) oxide.
A still extant example of 17th-century house oil painting is Ham House in Surrey, England, where a primer was used along with several undercoats and an elaborate decorative overcoat; the pigment and oil mixture would have been ground into a paste with a mortar and pestle. The painters did the process by hand, which exposed them to lead poisoning due to the white-lead powder. In 1718, Marshall Smith invented a "Machine or Engine for the Grinding of Colors" in England. It is not known precisely how it operated, but it was a device that dramatically increased the efficiency of pigment grinding. Soon, a company called Emerton and Manby was advertising exceptionally low-priced paints that had been ground with labor-saving technology. By the time the Industrial Revolution was properly under way, in the mid-18th century, paint was being ground in steam-powered mills, and an alternative to lead-based pigments had been found in a white derivative of zinc oxide. Interior house painting increasingly became the norm as the 19th century progressed, both for decorative reasons and because the paint was effective in preventing the walls rotting from damp. Linseed oil was also increasingly used as an inexpensive binder. In 1866, Sherwin-Williams in the United States opened as a large paint-maker and invented a paint that could be used from the tin without preparation. It was only when the stimulus of World War II created a shortage of linseed oil in the supply market that artificial resins, or alkyds, were invented. Cheap and easy to make, they held the color well and lasted for a long time. Types Pigmented Through the 20th century, paints used pigments, typically suspended in a liquid. Structural In the 21st century, "paints" that used structural color were created. Aluminum flakes dotted with smaller aluminum nanoparticles could be tuned to produce arbitrary colors by adjusting the nanoparticle sizes rather than picking and mixing minerals to do so. These paints weighed a tiny fraction of the weight of conventional paints, a particular advantage in air and road vehicles. They reflect heat from sunlight and do not break down outdoors. Preliminary experiments suggest such paint can reduce temperatures by 20 to 30 degrees Fahrenheit compared with conventional paint. Its constituents are also less toxic. Making the paint starts with a thin double-sided mirror. The researchers deposited metallic nanoparticles on both sides of the sheet. Large sheets were then ground to produce small flakes. Components Vehicle The vehicle is composed of the binder or, if it is necessary to thin the binder with a diluent such as solvent or water, of the combination of binder and diluent. In the latter case, once the paint has dried or cured, very nearly all of the diluent has evaporated and only the binder is left on the coated surface. Thus, an important quantity in coatings formulation is the "vehicle solids", sometimes called the "resin solids" of the formula. This is the proportion of the wet coating weight that is binder, i.e., the polymer backbone of the film that will remain after drying or curing is complete. The volume of the paint after it has dried, when only the solids remain, is expressed as the volume solids. Binder or film former The binder is the film-forming component of paint. It is the only component that is always present among all the various types of formulations. Many binders are too thick to be applied as-is and must be thinned. The type of thinner, if present, varies with the binder. The binder imparts properties such as gloss, durability, flexibility, and toughness.
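The "vehicle solids" quantity described above is simple arithmetic: the binder weight divided by the total wet weight of the coating. The sketch below illustrates the calculation; the batch numbers are hypothetical and not a real formulation.

    program vehiclesolids(output);
    { Illustrative arithmetic only; the numbers are hypothetical. }
    const
      binder  = 30.0;  { kg of binder (the film former) in the batch }
      diluent = 20.0;  { kg of solvent or water; evaporates on drying }
      pigment = 50.0;  { kg of pigment plus filler }
    var
      wet, solids: real;
    begin
      wet := binder + diluent + pigment;
      { vehicle solids: share of the wet coating weight that is binder,
        i.e. the part that remains in the film after drying }
      solids := binder / wet;
      writeln('vehicle solids = ', solids * 100.0 : 5 : 1, ' %')
    end.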
Binders include synthetic or natural resins such as alkyds, acrylics, vinyl-acrylics, vinyl acetate/ethylene (VAE), polyurethanes, polyesters, melamine resins, epoxies, silanes, siloxanes, or oils. Binders can be categorized according to the mechanisms for film formation. Thermoplastic mechanisms include drying and coalescence. Drying refers to simple evaporation of the solvent or thinner to leave a coherent film behind. Coalescence refers to a mechanism that involves drying followed by actual interpenetration and fusion of formerly discrete particles. Thermoplastic film-forming mechanisms are sometimes described as "thermoplastic cure", but that is a misnomer because no chemical curing reactions are required to knit the film. On the other hand, thermosetting mechanisms are true curing mechanisms involving chemical reaction(s) among the polymers that make up the binder. Thermoplastic Mechanisms Some films are formed by simply cooling the binder. For example, encaustic or wax paints are liquid when warm, and harden upon cooling. In many cases, they re-soften or liquify if reheated. Paints that dry by solvent evaporation and contain the solid binder dissolved in a solvent are known as lacquers. A solid film forms when the solvent evaporates. Because no chemical crosslinking is involved, the film can re-dissolve in solvent; lacquers are therefore unsuitable for applications where chemical resistance is important. Classic nitrocellulose lacquers fall into this category, as do non-grain-raising stains composed of dyes dissolved in solvent. Performance varies by formulation, but lacquers generally tend to have better UV resistance and lower corrosion resistance than comparable systems that cure by polymerization or coalescence. The paint type known as emulsion in the UK and latex in the United States is a water-borne dispersion of sub-micrometer polymer particles. These terms in their respective countries cover all paints that use synthetic polymers such as acrylic, vinyl acrylic (PVA), styrene acrylic, etc. as binders. The term "latex" in the context of paint in the United States simply means an aqueous dispersion; latex rubber from the rubber tree is not an ingredient. These dispersions are prepared by emulsion polymerization. Such paints cure by a process called coalescence, in which first the water, and then the trace (coalescing) solvent, evaporate; the binder particles then draw together, soften, and fuse into irreversibly bound networked structures, so that the paint cannot redissolve in the solvent/water that originally carried it. The residual surfactants in the paint, as well as hydrolytic effects with some polymers, cause the paint to remain susceptible to softening and, over time, degradation by water. The general term latex paint is usually used in the United States, while the term emulsion paint is used for the same products in the UK, where the term latex paint is not used at all. Thermosetting Mechanisms Paints that cure by polymerization are generally one- or two-package coatings that polymerize by way of a chemical reaction and cure into a cross-linked film. Depending on composition, they may need to dry first by evaporation of solvent. Classic two-package epoxies or polyurethanes would fall into this category. The "drying oils", counter-intuitively, cure by a crosslinking reaction even if they are not put through an oven cycle and seem to dry in air.
The film formation mechanism of the simplest examples involves first the evaporation of solvents, followed by reaction with oxygen from the environment over a period of days, weeks, or even months to create a crosslinked network. Classic alkyd enamels would fall into this category. Oxidative cure coatings are catalyzed by metal complex driers such as cobalt naphthenate, though cobalt octoate is more common. Recent environmental requirements restrict the use of volatile organic compounds (VOCs), and alternative means of curing have been developed, generally for industrial purposes. UV curing paints, for example, enable formulation with very low amounts of solvent, or even none at all. This can be achieved because the monomers and oligomers used in the coating have relatively low molecular weight, and are therefore low enough in viscosity to enable good fluid flow without the need for additional thinner. If solvent is present in significant amounts, it is generally mostly evaporated first, and then crosslinking is initiated by ultraviolet light. Similarly, powder coatings contain no solvent. Flow and cure are produced by heating of the substrate after electrostatic application of the dry powder. Combination mechanisms So-called "catalyzed lacquers" or "crosslinking latex" coatings are designed to form films by a combination of methods: classic drying plus a curing reaction that benefits from the catalyst. There are also paints called plastisols/organosols, which are made by blending PVC granules with a plasticiser. These are stoved, and the mix coalesces. Diluent or solvent or thinner The main purposes of the diluent are to dissolve the polymer and adjust the viscosity of the paint. It is volatile and does not become part of the paint film. It also controls flow and application properties, and in some cases can affect the stability of the paint while in the liquid state. Its main function is as the carrier for the non-volatile components. To spread heavier oils (for example, linseed) as in oil-based interior house paint, a thinner oil is required. These volatile substances impart their properties temporarily; once the solvent has evaporated, the remaining paint is fixed to the surface. This component is optional: some paints have no diluent. Water is the main diluent for water-borne paints, even the co-solvent types. Solvent-borne, also called oil-based, paints can have various combinations of organic solvents as the diluent, including aliphatics, aromatics, alcohols, ketones and white spirit. Specific examples are organic solvents such as petroleum distillate, esters, glycol ethers, and the like. Sometimes volatile low-molecular-weight synthetic resins also serve as diluents. Pigment, dye and filler Pigments are solid particles or flakes incorporated in the paint, usually to contribute color to the paint film. Pigments impart color by selective absorption of certain wavelengths of light and/or by scattering or reflecting light. The particle size of the pigment is critical to the light-scattering mechanism. The size of such particles can be measured with a Hegman gauge. Dyes, on the other hand, are dissolved in the paint and impart color only by the selective absorption mechanism. Paints can be formulated with only pigments, only dyes, both, or neither. Pigments can also be used to give the paint special physical or optical properties, as opposed to imparting color, in which case they are called functional pigments. Fillers or extenders are an important class of the functional pigments.
These are typically used to build film thickness and/or reduce the cost of the paint, or they can impart toughness and texture to the film. Fillers are usually cheap and inert materials, such as diatomaceous earth, talc, lime, barytes, clay, etc. Floor paints that must resist abrasion may contain fine quartz sand as a filler. Sometimes, a single pigment can serve both decorative and functional purposes. For example, some decorative pigments protect the substrate from the harmful effects of ultraviolet light by making the paint opaque to these wavelengths, i.e., by selectively absorbing them. These hiding pigments include titanium dioxide, phthalo blue, red iron oxide, and many others. Some pigments are toxic, such as the lead pigments that are used in lead paint. Paint manufacturers began replacing white lead pigments with titanium white (titanium dioxide) before lead was banned in paint for residential use in 1978 by the US Consumer Product Safety Commission. The titanium dioxide used in most paints today is often coated with silica/alumina/zirconium for various reasons, such as better exterior durability, or better hiding performance (opacity) promoted by more optimal spacing within the paint film. Micaceous iron oxide (MIO) is another alternative to lead for the protection of steel, giving more protection against water and light damage than most paints. When MIO pigments are ground into fine particles, most cleave into shiny layers, which reflect light, thus minimizing UV degradation and protecting the resin binder. Most pigments used in paint tend to be spherical, but lamellar pigments, such as glass flake and MIO, have overlapping plates, which impede the path of water molecules. For optimum performance, MIO should have a high content of thin flake-like particles resembling mica. ISO 10601 sets two levels of MIO content. MIO is often derived from a form of hematite. Pigments can be classified as either natural or synthetic. Natural pigments are taken from the earth or from plant sources and include colorants such as metal oxides or carbon black, as well as various clays, calcium carbonate, mica, silicas, and talcs. Synthetics include a host of colorants created in the lab as well as engineered molecules, calcined clays, blanc fixe, precipitated calcium carbonate, and synthetic pyrogenic silicas. The pigments and dyes that are used as colorants are classified by chemical type using the Color Index system, which is commercially significant. Additives Besides the three main categories of ingredients (binder, diluent, pigment), paint can have a wide variety of miscellaneous additives, which are usually added in small amounts, yet provide a significant effect on the product. Some examples include additives to modify texture or surface tension, improve flow properties, improve the finished appearance, increase wet edge, improve pigment stability, impart antifreeze properties, control foaming, control skinning, create acrylic pouring cells, etc. Other types of additives include catalysts, thickeners, stabilizers, emulsifiers, texturizers, adhesion promoters, UV stabilizers, flatteners (de-glossing agents), biocides to fight bacterial growth, and the like. Additives normally do not significantly alter the percentages of individual components in a formulation. Color changing Various technologies exist for making paints that change color. Thermochromic inks and coatings contain materials that change conformation when heat is applied or removed, and so they change color.
Liquid crystals have been used in such paints, such as in the thermometer strips and tapes used in aquaria and in novelty/promotional thermal cups and straws. Photochromic materials are used to make eyeglasses and other products. Similar to thermochromic molecules, photochromic molecules change conformation when light energy is applied or removed, and so they change color. Color-changing paints can also be made by adding halochromic compounds or other organic pigments. One patent cites the use of these indicators in wall coating applications for light-colored paints. When the paint is wet, it is pink in color, but upon drying it regains its original white color. As cited in the patent, this property of the paint enables two or more coats to be applied on a wall properly and evenly: the previous coats, having dried, are white, whereas the new wet coat is distinctly pink. Ashland Inc. introduced foundry refractory coatings based on a similar principle in 2005. Electrochromic paints change color in response to an applied electric current. Car manufacturer Nissan has reportedly been working on an electrochromic paint based on particles of paramagnetic iron oxide. When subjected to an electromagnetic field, the paramagnetic particles change spacing, modifying their color and reflective properties. The electromagnetic field would be formed using the conductive metal of the car body. Electrochromic paints can be applied to plastic substrates as well, using a different coating chemistry. The technology involves using special dyes that change conformation when an electric current is applied across the film itself. This new technology has been used to achieve glare protection at the touch of a button in passenger airplane windows. Color can also change depending on viewing angle, using iridescence, as for example in ChromaFlair. Art Since the time of the Renaissance, siccative (drying) oil paints, primarily linseed oil, have been the most commonly used kind of paint in fine art applications; oil paint is still common today. However, in the 20th century, new water-borne paints, such as acrylic paints, entered the market with the development of acrylic and other latex paints. Milk paints (also called casein), where the medium is derived from the natural emulsion that is milk, were common in the 19th century and are still used. Egg tempera (where the medium is an emulsion of raw egg yolk mixed with oil), used by the earliest Western artists, remains in use as well, as do encaustic wax-based paints. Gouache is an opaque variant of watercolor, a medium based around varying levels of translucency; both paints use gum arabic as the binder and water as a thinner. Gouache is also known as 'designer color' or 'body color'. Poster paint is a distemper paint that has been used primarily in the creation of student works, or by children. There are varying brands of poster paint, and the quality differs by brand; less expensive brands will often crack or fade over time if they are left on a poster for an extended period. Application Paint can be applied as a solid, a gas, a gaseous suspension (aerosol) or a liquid. Techniques vary depending on the practical or artistic results desired. As a solid (usually used in industrial and automotive applications), the paint is applied as a very fine powder, then baked at high temperature. This melts the powder and causes it to adhere to the surface.
The reasons for doing this involve the chemistries of the paint, the surface itself, and perhaps even the chemistry of the substrate (the object being painted). This is called "powder coating" an object. In a gas-phase application, the coating composition is introduced (if gaseous), vaporized (if liquid) or sublimed (if solid), then deposited on a distant substrate, often under vacuum. These applications are classed broadly into physical vapor deposition methods, like sputtering or vacuum deposition, in which solid or liquid starting materials produce a vapor that condenses on the substrate; or chemical vapor deposition methods, in which gaseous starting materials chemically react with the substrate to form a coating. These techniques are especially important in the electronics and optical industries. As a gaseous suspension, liquid paint is aerosolized by the force of compressed air or by the action of high-pressure compression of the paint itself, and the paint is turned into small droplets that travel to the article to be painted. Alternate methods are airless spray, hot spray, hot airless spray, and any of these with an electrostatic spray included. There are numerous electrostatic methods available. The reasons for doing this include: the application mechanism is air, and thus no solid object touches the object being painted; the distribution of the paint is uniform, so there are no sharp lines; it is possible to deliver very small amounts of paint; multiple items can be painted at once quickly and efficiently; a chemical (typically a solvent) can be sprayed along with the paint to dissolve together both the delivered paint and the chemicals on the surface of the object being painted; and some chemical reactions in paint involve the orientation of the paint molecules. Expression In a liquid application, paint can be applied by direct application using brushes, paint rollers, blades, scrapers, other instruments, or body parts such as fingers and thumbs. Rollers generally have a handle that allows different lengths of poles to be attached, allowing painting at different heights. Generally, roller application requires two coats for an even color. A roller with a thicker nap is used to apply paint on uneven surfaces. Edges are often finished with an angled brush. For a flat (matte) finish, a 1/2-inch nap roller would most likely be used. For an eggshell finish, a 3/8-inch nap roller would most likely be used. For a satin or pearl finish, a 3/8-inch nap roller would most likely be used. For a semi-gloss or gloss finish, a 3/16-inch nap roller would most likely be used. After liquid paint is applied, there is an interval during which it can be blended with additional painted regions (at the "wet edge"), called the "open time". The open time of an oil- or alkyd-based emulsion paint can be extended by adding white spirit, similar glycols such as Dowanol (propylene glycol ether), or open-time prolongers. This can also facilitate the mixing of different wet paint layers for aesthetic effect. Latex and acrylic emulsions require the use of drying retardants suitable for water-based coatings. Depending on the quality and type of liquid paint used, the open time will vary. Oil paints, for instance, are renowned for their open time, as they allow artists to blend colors for extended periods without having to add any extending agents. Dipping used to be the norm for objects such as filing cabinets, but this has been replaced by high-speed air turbine-driven bells with electrostatic spray.
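The roller-nap rules of thumb listed earlier in this section amount to a small lookup table. The sketch below simply encodes that list; the program and type names are invented for illustration.

    program napchoice(output);
    { Encodes the roller-nap rules of thumb from the text above;
      finish names and nap sizes are taken directly from that list. }
    type
      finish = (flat, eggshell, satinpearl, semigloss);
    var
      f: finish;
    begin
      for f := flat to semigloss do
        case f of
          flat:       writeln('flat (matte):        1/2 inch nap');
          eggshell:   writeln('eggshell:            3/8 inch nap');
          satinpearl: writeln('satin or pearl:      3/8 inch nap');
          semigloss:  writeln('semi-gloss or gloss: 3/16 inch nap')
        end
    end.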
Car bodies are primed using a cathodic electrophoretic primer, which is applied by charging the body, depositing a layer of primer. The uncharged residue is rinsed off and the primer is stoved. Many paints tend to separate when stored, the heavier components settling to the bottom, and require mixing before use. Some paint outlets have machines for mixing the paint by shaking the can vigorously for a few minutes. The opacity and the film thickness of paint may be measured using a drawdown card. Water-based paints tend to be the easiest to clean up after use; the brushes and rollers can be cleaned with soap and water. Proper disposal of leftover paint is a challenge. Sometimes it can be recycled: old paint may be usable for a primer coat or an intermediate coat, and paints of similar chemistry can be mixed to make a larger amount of a uniform color. To dispose of paint, it can be dried and disposed of in the domestic waste stream, provided that it contains no prohibited substances (see the container). Disposal of liquid paint usually requires special handling; it should be treated as hazardous waste and disposed of according to local regulations. Product variants Primer is a preparatory coating put on materials before applying the paint itself. The primed surface ensures better adhesion of the paint, thereby increasing the durability of the paint and providing improved protection for the painted surface. Suitable primers also may block and seal stains, or hide a color that is to be painted over. Emulsion paints are water-based paints in which the paint material is dispersed in a liquid that consists mainly of water. For suitable purposes this has advantages in fast drying, low toxicity, low cost, easier application, and easier cleaning of equipment, among other factors. Varnish and shellac are in effect paints without pigment; they provide a protective coating without substantially changing the color of the surface, though they can emphasize the color of the material. Wood stain is a type of paint that is formulated to be very "thin", meaning low in viscosity, so that the pigment soaks into a material such as wood rather than remaining in a film on the surface. Stain is mainly dispersed pigment or dissolved dye plus binder material in a solvent. It is designed to add color without providing a surface coating. Lacquer is a solvent-based paint or varnish that produces an especially hard, durable finish. Usually it is a rapidly drying formulation. Enamel paint is formulated to give an especially hard, usually glossy, finish. Some enamel paints contain fine glass powder or metal flake instead of the color pigments in standard oil-based paints. Enamel paint sometimes is mixed with varnish or urethane to improve its shine and hardness. A glaze is an additive used with paint to slow drying time and increase translucency, as in faux painting and for some artistic effects. A roof coating is a fluid that sets as an elastic membrane that can stretch without harm. It provides UV protection to polyurethane foam and is widely used in roof restoration. Fingerpaints are formulations suitable for application with the fingers; they are popular for use by children in primary school activities. Inks are similar to paints, except that they are typically made using finely ground pigments or dyes and are not designed to leave a thick film of binder. They are used largely for writing, printing, or calligraphy. Anti-graffiti coatings are used to defeat the marking of surfaces by graffiti artists or vandals.
There are two categories of anti-graffiti coatings: sacrificial and non-bonding. Sacrificial coatings are clear coatings that allow the removal of graffiti, usually by washing the surface with high-pressure water that removes the graffiti together with the coating (hence the term "sacrificial"). After removal of the graffiti, the sacrificial coating must be re-applied for continued protection. Such sacrificial protective coatings are most commonly used on natural-looking masonry surfaces, such as statuary and marble walls, and on rougher surfaces that are difficult to clean. Non-bonding coatings are clear, high-performance coatings, usually catalyzed polyurethanes, that do not bond strongly to paints used for graffiti. Graffiti on such a surface can be removed with a solvent wash, without damaging either the underlying surface or the protective non-bonding coating. These coatings work best on smooth surfaces, and are especially useful on decorative surfaces such as mosaics or painted murals, which might be expected to suffer harm from high-pressure sprays. Urine-repellent paint is a very hydrophobic (water-repellent) paint. It has been used by cities and other property owners to deter men from urinating against walls, as the urine splashes back on their shoes instead of dripping down the wall. Anti-climb paint is a non-drying paint that appears normal but is extremely slippery. It is useful on drainpipes and ledges to deter burglars and vandals from climbing them, and is found in many public places. When a person attempts to climb objects coated with the paint, it rubs off onto the climber and makes climbing difficult. Anti-fouling paint, or bottom paint, prevents barnacles and other marine organisms from adhering to the hulls of ships. Insulative or insulating paint reduces the rate of thermal transfer through the surface to which it is applied. One type of formulation is based on the addition of hollow microspheres to any suitable type of paint. Anti-slip paint contains chemicals or grit to increase the friction of a surface so as to decrease the risk of slipping, particularly in wet conditions. Road marking paint is formulated specially for marking and painting road traffic signs and lines, forming a durable coating film on the road surface. It must be fast-drying, provide a thick coating, and resist wear and slipping, especially in wet conditions. Luminous paint or luminescent paint is paint that exhibits luminescence. In other words, it gives off visible light through fluorescence, phosphorescence, or radioluminescence. Chalk paint is a decorative paint used in home decor to achieve looks such as shabby chic or vintage. Finish types Flat Finish paint is generally used on ceilings or walls that are in bad shape. This finish is useful for hiding imperfections in walls, and it is economical for effectively covering large areas. However, this finish is not easily washable and is subject to staining. Matte Finish is generally similar to flat finish, but such paints commonly offer superior washability and coverage. (See Gloss and matte paint.) Eggshell Finish has some sheen, supposedly like that of the shell of an egg. This finish provides great washability but is not very effective at hiding imperfections on walls and similar surfaces. The eggshell finish is valued for bathrooms because it is washable and water-repellent, so it tends not to peel in a wet environment.
Pearl (Satin) Finish is very durable in terms of washability and resistance to moisture, even in comparison to an eggshell finish. It protects walls from dirt, moisture, and stains. Accordingly, it is exceptionally valuable for bathrooms, furniture, and kitchens, but it is shinier than eggshell, so it is even more prone to show imperfections. It has a soft, velvety appearance suited to creating a luxurious feel in a room. Satin paint is also very durable and easy to clean, making it ideal for high-traffic areas like kitchens and bathrooms. Semi-Gloss Finish typically is used on trim to emphasize detail and elegance, and to show off woodwork, such as on doors and furniture. It provides a shiny surface and good protection from moisture and stains on walls. Its gloss does, however, emphasize imperfections on the walls and similar surfaces. It is popular in schools and factories where washability and durability are the main considerations. High-gloss paint is a highly glossy form of paint that is light-reflecting and has a mirror-like look. It pairs well with other finishes. While it is highly durable and easy to clean, high-gloss paint is known for making imperfections like scratches and dents clearly visible. Failure The main reasons for paint failure after application are poor application technique and improper preparation of the surface. Defects or degradation can be attributed to: Dilution This usually occurs when the dilution of the paint is not done per the manufacturer's recommendation. There can be over-dilution or under-dilution, as well as dilution with the incorrect diluent. Contamination Foreign contaminants can cause various film defects. Peeling/Blistering Most commonly due to improper surface treatment before application and inherent moisture/dampness present in the substrate. The degree of blistering can be assessed according to ISO 4628 Part 2 or ASTM Method D714 (Standard Test Method for Evaluating Degree of Blistering of Paints). Chalking Chalking is the progressive powdering of the paint film on the painted surface. The primary reason for the problem is polymer degradation of the paint matrix due to exposure to UV radiation in sunshine and condensation from dew. The degree of chalking varies: epoxies react quickly, while acrylics and polyurethanes can remain unchanged for long periods. The degree of chalking can be assessed according to International Standard ISO 4628 Part 6 or 7, or American Society for Testing and Materials (ASTM) Method D4214 (Standard Test Methods for Evaluating the Degree of Chalking of Exterior Paint Films). Cracking Cracking of the paint film is due to unequal expansion or contraction of paint coats. It usually happens when the coats of paint are not allowed to cure/dry completely before the next coat is applied. The degree of cracking can be assessed according to International Standard ISO 4628 Part 4 or ASTM Method D661 (Standard Test Method for Evaluating Degree of Cracking of Exterior Paints). Cracking can also occur when the paint is applied to a surface that is incompatible or unstable. For instance, clay that has not dried completely when painted will cause the paint to crack due to the residual moisture in the clay. Erosion Erosion is very quick chalking. It occurs due to external agents like air, water, etc. It can be evaluated using ASTM Method D662 (Standard Test Method for Evaluating Degree of Erosion of Exterior Paints).
The generation of acid by fungal species can be a significant component of the erosion of painted surfaces. The fungus Aureobasidium pullulans is known for damaging wall paints. Dangers Volatile organic compounds (VOCs) in paint are considered harmful to the environment and especially to people who work with them on a regular basis. Extensive exposure to these vapors has been strongly related to organic solvent syndrome, although a definitive relation has yet to be fully established. The controversial solvent 2-butoxyethanol is also used in paint production. Jurisdictions such as Canada, China, the EU, India, the United States, and South Korea have definitions for VOCs in place, along with regulations to limit the use of VOCs in consumer products such as paint. In the US, environmental regulations, consumer demand, and advances in technology led to the development of low-VOC and zero-VOC paints and finishes. These new paints are widely available and meet or exceed the old high-VOC products in performance and cost-effectiveness while having significantly less impact on human and environmental health. Globally, the most widely accepted standard for acceptable levels of VOC in paint is Green Seal's GS-11 standard from the US, which defines different acceptable VOC levels for different types of paint based on use case and performance requirements. A polychlorinated biphenyl (PCB) was reported, in a study published in 2009, in air samples collected in Chicago, Philadelphia, the Arctic, and several sites around the Great Lakes. PCB is a global pollutant and was measured in the wastewater effluent from paint production. The widespread distribution of PCB suggests volatilization of this compound from surfaces, roofs, etc. PCB is present in consumer goods including newspapers, magazines, and cardboard boxes, which usually contain color pigments. Therefore, a hypothesis exists that PCB congeners are present as a byproduct in some current commercial pigments. Research is ongoing to remove heavy metals from paint formulations completely. Environmental impact of plastics in paints The ongoing scrutiny of the environmental impact of plastics in paint production is reminiscent of previous investigations into the use of lead in paints. This assessment is driven by accumulating evidence that underscores the role of paint as a significant contributor to microplastic pollution. In 2019, of the 44.4 million tons of globally produced paint, 95 percent was plastic-based. Further, a 2022 study by Environmental Action revealed that approximately 58 percent of the microplastics found in oceans and waterways could be traced back to paint. Efforts to mitigate this environmental issue have spurred the development and exploration of alternatives to plastic-based paints, such as those derived from linseed, walnut, milk, and limewash. However, their cost is a significant deterrent to the widespread adoption of these environmentally friendly alternatives. As of 2023, a gallon of plastic-based paint may cost around $20 to $30, while the price of specialized paint, such as graphene- and lime-based products, ranges from $34 to $114 per gallon, underlining the financial challenges associated with transitioning away from plastic-based paints.
Technology
Artist's tools
null
23798
https://en.wikipedia.org/wiki/Poincar%C3%A9%20conjecture
Poincaré conjecture
In the mathematical field of geometric topology, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century. The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In papers posted to the arXiv repository in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture (and the more powerful geometrization conjecture of William Thurston). Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work. Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize and the Leroy P. Steele Prize for Seminal Contribution to Research. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006. The Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million for the conjecture's resolution. He declined the award, saying that Hamilton's contribution had been equal to his own. Overview The Poincaré conjecture was a mathematical problem in the field of geometric topology. In terms of the vocabulary of that field, it says the following: Poincaré conjecture. Every three-dimensional topological manifold which is closed, connected, and has trivial fundamental group is homeomorphic to the three-dimensional sphere. Familiar shapes, such as the surface of a ball (which is known in mathematics as the two-dimensional sphere) or of a torus, are two-dimensional. The surface of a ball has trivial fundamental group, meaning that any loop drawn on the surface can be continuously deformed to a single point. By contrast, the surface of a torus has nontrivial fundamental group, as there are loops on the surface which cannot be so deformed. Both are topological manifolds which are closed (meaning that they have no boundary and take up a finite region of space) and connected (meaning that they consist of a single piece). Two closed manifolds are said to be homeomorphic when it is possible for the points of one to be reallocated to the other in a continuous way. Because the (non)triviality of the fundamental group is known to be invariant under homeomorphism, it follows that the two-dimensional sphere and torus are not homeomorphic. The two-dimensional analogue of the Poincaré conjecture says that any two-dimensional topological manifold which is closed and connected but non-homeomorphic to the two-dimensional sphere must possess a loop which cannot be continuously contracted to a point. (This is illustrated by the example of the torus, as above.)
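In the standard notation of algebraic topology (supplied here for convenience, not taken from the source text), the statement above can be written compactly as:

    % M a topological 3-manifold; \pi_1 is the fundamental group,
    % and \cong on the right denotes homeomorphism.
    \[
      M^3 \ \text{closed and connected}, \quad \pi_1(M) = 1
      \;\Longrightarrow\; M \cong S^3 .
    \]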
This analogue is known to be true via the classification of closed and connected two-dimensional topological manifolds, which had been understood in various forms since the 1860s. In higher dimensions, the closed and connected topological manifolds do not have a straightforward classification, precluding an easy resolution of the Poincaré conjecture. History Poincaré's question In the 1800s, Bernhard Riemann and Enrico Betti initiated the study of topological invariants of manifolds. They introduced the Betti numbers, which associate to any manifold a list of nonnegative integers. Riemann had shown that a closed connected two-dimensional manifold is fully characterized by its Betti numbers. As part of his 1895 paper Analysis Situs (announced in 1892), Poincaré showed that Riemann's result does not extend to higher dimensions. To do this he introduced the fundamental group as a novel topological invariant, and was able to exhibit examples of three-dimensional manifolds which have the same Betti numbers but distinct fundamental groups. He posed the question of whether the fundamental group is sufficient to topologically characterize a manifold (of given dimension), although he made no attempt to pursue the answer, saying only that it would "demand lengthy and difficult study". The primary purpose of Poincaré's paper was the interpretation of the Betti numbers in terms of his newly introduced homology groups, along with the Poincaré duality theorem on the symmetry of Betti numbers. Following criticism of the completeness of his arguments, he released a number of subsequent "supplements" to enhance and correct his work. The closing remark of his second supplement, published in 1900, said: In order to avoid making this work too prolonged, I confine myself to stating the following theorem, the proof of which will require further developments: Each polyhedron which has all its Betti numbers equal to 1 and all its tables orientable is simply connected, i.e., homeomorphic to a hypersphere. (In a modern language, taking note of the fact that Poincaré is using the terminology of simple-connectedness in an unusual way, this says that a closed connected oriented manifold with the homology of a sphere must be homeomorphic to a sphere.) This modified his negative generalization of Riemann's work in two ways. Firstly, he was now making use of the full homology groups and not only the Betti numbers. Secondly, he narrowed the scope of the problem from asking if an arbitrary manifold is characterized by topological invariants to asking whether the sphere can be so characterized. However, after publication he found his announced theorem to be incorrect. In his fifth and final supplement, published in 1904, he proved this with the counterexample of the Poincaré homology sphere, which is a closed connected three-dimensional manifold which has the homology of the sphere but whose fundamental group has 120 elements. This example made it clear that homology is not powerful enough to characterize the topology of a manifold. In the closing remarks of the fifth supplement, Poincaré modified his erroneous theorem to use the fundamental group instead of homology: One question remains to be dealt with: is it possible for the fundamental group of V to reduce to the identity without V being simply connected? [...] However, this question would carry us too far away.
In this remark, as in the closing remark of the second supplement, Poincaré used the term "simply connected" in a way which is at odds with modern usage, as well as his own 1895 definition of the term. (According to modern usage, Poincaré's question is a tautology, asking if it is possible for a manifold to be simply connected without being simply connected.) However, as can be inferred from context, Poincaré was asking whether the triviality of the fundamental group uniquely characterizes the sphere. Throughout the work of Riemann, Betti, and Poincaré, the topological notions in question are not defined or used in a way that would be recognized as precise from a modern perspective. Even the key notion of a "manifold" was not used in a consistent way in Poincaré's own work, and there was frequent confusion between the notion of a topological manifold, a PL manifold, and a smooth manifold. For this reason, it is not possible to read Poincaré's questions unambiguously. It is only through the formalization and vocabulary of topology as developed by later mathematicians that Poincaré's closing question has been understood as the "Poincaré conjecture" as stated in the preceding section. However, despite its usual phrasing in the form of a conjecture, proposing that all manifolds of a certain type are homeomorphic to the sphere, Poincaré only posed an open-ended question, without venturing to conjecture one way or the other. Moreover, there is no evidence as to which way he believed his question would be answered. Solutions In the 1930s, J. H. C. Whitehead claimed a proof but then retracted it. In the process, he discovered some examples of simply-connected (indeed contractible, i.e. homotopy equivalent to a point) non-compact 3-manifolds not homeomorphic to Euclidean 3-space R3, the prototype of which is now called the Whitehead manifold. In the 1950s and 1960s, other mathematicians attempted proofs of the conjecture only to discover that they contained flaws. Influential mathematicians such as Georges de Rham, R. H. Bing, Wolfgang Haken, Edwin E. Moise, and Christos Papakyriakopoulos attempted to prove the conjecture. In 1958, R. H. Bing proved a weak version of the Poincaré conjecture: if every simple closed curve of a compact 3-manifold is contained in a 3-ball, then the manifold is homeomorphic to the 3-sphere. Bing also described some of the pitfalls in trying to prove the Poincaré conjecture. Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true. Over time, the conjecture gained the reputation of being particularly tricky to tackle. John Milnor commented that sometimes the errors in false proofs can be "rather subtle and difficult to detect". Work on the conjecture improved the understanding of 3-manifolds. Experts in the field were often reluctant to announce proofs and tended to view any such announcement with skepticism. The 1980s and 1990s witnessed some well-publicized fallacious proofs (which were not actually published in peer-reviewed form). An exposition of attempts to prove this conjecture can be found in the non-technical book Poincaré's Prize by George Szpiro. Dimensions The classification of closed surfaces gives an affirmative answer to the analogous question in two dimensions. For dimensions greater than three, one can pose the Generalized Poincaré conjecture: is a homotopy n-sphere homeomorphic to the n-sphere?
A stronger assumption than simple-connectedness is necessary; in dimensions four and higher there are simply-connected, closed manifolds which are not homotopy equivalent to an n-sphere. Historically, while the conjecture in dimension three seemed plausible, the generalized conjecture was thought to be false. In 1961, Stephen Smale shocked mathematicians by proving the Generalized Poincaré conjecture for dimensions greater than four, and extended his techniques to prove the fundamental h-cobordism theorem. In 1982, Michael Freedman proved the Poincaré conjecture in four dimensions. Freedman's work left open the possibility that there is a smooth four-manifold homeomorphic to the four-sphere which is not diffeomorphic to the four-sphere. This so-called smooth Poincaré conjecture, in dimension four, remains open and is thought to be very difficult. Milnor's exotic spheres show that the smooth Poincaré conjecture is false in dimension seven, for example. These earlier successes in higher dimensions left the case of three dimensions in limbo. The Poincaré conjecture was essentially true in both dimension four and all higher dimensions for substantially different reasons. In dimension three, the conjecture had an uncertain reputation until the geometrization conjecture put it into a framework governing all 3-manifolds. Hamilton's program and solution Hamilton's program was started in his 1982 paper in which he introduced the Ricci flow on a manifold and showed how to use it to prove some special cases of the Poincaré conjecture. In the following years, he extended this work but was unable to prove the conjecture. The actual solution was not found until Grigori Perelman published his papers. In late 2002 and 2003, Perelman posted three papers on arXiv. In these papers, he sketched a proof of the Poincaré conjecture and a more general conjecture, Thurston's geometrization conjecture, completing the Ricci flow program outlined earlier by Richard S. Hamilton. From May to July 2006, several groups presented papers that filled in the details of Perelman's proof of the Poincaré conjecture, as follows: Bruce Kleiner and John W. Lott posted a paper on arXiv in May 2006 which filled in the details of Perelman's proof of the geometrization conjecture, following partial versions which had been publicly available since 2003. Their manuscript was published in the journal "Geometry and Topology" in 2008. A small number of corrections were made in 2011 and 2013; for instance, the first version of their published paper made use of an incorrect version of Hamilton's compactness theorem for Ricci flow. Huai-Dong Cao and Xi-Ping Zhu published a paper in the June 2006 issue of the Asian Journal of Mathematics with an exposition of the complete proof of the Poincaré and geometrization conjectures. The opening paragraph of their paper led some observers to interpret Cao and Zhu as taking credit for Perelman's work. They later posted a revised version, with new wording, on arXiv. In addition, a page of their exposition was essentially identical to a page in one of Kleiner and Lott's early publicly available drafts; this was also amended in the revised version, together with an apology by the journal's editorial board. John Morgan and Gang Tian posted a paper on arXiv in July 2006 which gave a detailed proof of just the Poincaré Conjecture (which is somewhat easier than the full geometrization conjecture) and expanded this to a book.
All three groups found that the gaps in Perelman's papers were minor and could be filled in using his own techniques. On August 22, 2006, the ICM awarded Perelman the Fields Medal for his work on the Ricci flow, but Perelman refused the medal. John Morgan spoke at the ICM on the Poincaré conjecture on August 24, 2006, declaring that "in 2003, Perelman solved the Poincaré Conjecture". In December 2006, the journal Science honored the proof of the Poincaré conjecture as the Breakthrough of the Year and featured it on its cover. Ricci flow with surgery Hamilton's program for proving the Poincaré conjecture involves first putting a Riemannian metric on the unknown simply connected closed 3-manifold. The basic idea is to try to "improve" this metric; for example, if the metric can be improved enough so that it has constant positive curvature, then according to classical results in Riemannian geometry, it must be the 3-sphere. Hamilton prescribed the "Ricci flow equation" for improving the metric, ∂g/∂t = −2R, where g is the metric and R its Ricci curvature; one hopes that, as the time t increases, the manifold becomes easier to understand. Ricci flow expands the negative curvature part of the manifold and contracts the positive curvature part. In some cases, Hamilton was able to show that this works; for example, his original breakthrough was to show that if the Riemannian manifold has positive Ricci curvature everywhere, then the above procedure can only be followed for a bounded interval of parameter values, t ∈ [0, T) with T < ∞, and more significantly, that there are rescaling numbers c(t) such that, as t → T, the rescaled Riemannian metrics c(t)g(t) smoothly converge to one of constant positive curvature. According to classical Riemannian geometry, the only simply-connected compact manifold which can support a Riemannian metric of constant positive curvature is the sphere. So, in effect, Hamilton showed a special case of the Poincaré conjecture: if a compact simply-connected 3-manifold supports a Riemannian metric of positive Ricci curvature, then it must be diffeomorphic to the 3-sphere. If, instead, one only has an arbitrary Riemannian metric, the Ricci flow equations must lead to more complicated singularities. Perelman's major achievement was to show that, if one takes a certain perspective, any singularities that appear in finite time can only look like shrinking spheres or cylinders. With a quantitative understanding of this phenomenon, he cuts the manifold along the singularities, splitting the manifold into several pieces, and then continues with the Ricci flow on each of these pieces. This procedure is known as Ricci flow with surgery. Perelman provided a separate argument based on curve shortening flow to show that, on a simply-connected compact 3-manifold, any solution of the Ricci flow with surgery becomes extinct in finite time. An alternative argument, based on the min-max theory of minimal surfaces and geometric measure theory, was provided by Tobias Colding and William Minicozzi. Hence, in the simply-connected context, the above finite-time phenomenon of Ricci flow with surgery is all that is relevant. In fact, this is even true if the fundamental group is a free product of finite groups and cyclic groups. This condition on the fundamental group turns out to be necessary and sufficient for finite time extinction.
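Written out in standard notation (supplied here for convenience, not taken from the source text), the evolution equation and Hamilton's 1982 convergence result read:

    % Ricci flow: the metric g(t) evolves by its Ricci curvature.
    \[
      \frac{\partial}{\partial t}\, g_{ij}(t) \;=\; -2\, R_{ij}(t)
    \]
    % Hamilton (1982): if (M, g(0)) is a closed 3-manifold with
    % positive Ricci curvature, the flow exists on a maximal
    % interval [0, T) with T < \infty, and there are constants c_t
    % such that the rescaled metrics c_t g(t) converge smoothly,
    % as t -> T, to a metric of constant positive curvature;
    % hence M is diffeomorphic to the 3-sphere when simply connected.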
It is equivalent to saying that the prime decomposition of the manifold has no acyclic components and turns out to be equivalent to the condition that all geometric pieces of the manifold have geometries based on the two Thurston geometries S2 × R and S3. In the context that one makes no assumption about the fundamental group whatsoever, Perelman made a further technical study of the limit of the manifold for infinitely large times, and in so doing, proved Thurston's geometrization conjecture: at large times, the manifold has a thick-thin decomposition, whose thick piece has a hyperbolic structure, and whose thin piece is a graph manifold. Due to Perelman's and Colding and Minicozzi's results, however, these further results are unnecessary in order to prove the Poincaré conjecture. Solution On November 11, 2002, Russian mathematician Grigori Perelman posted the first of a series of three eprints on arXiv outlining a solution of the Poincaré conjecture. Perelman's proof uses a modified version of the Ricci flow program developed by Richard S. Hamilton. In August 2006, Perelman was awarded, but declined, the Fields Medal (worth $15,000 CAD) for his work on the Ricci flow. On March 18, 2010, the Clay Mathematics Institute awarded Perelman the $1 million Millennium Prize in recognition of his proof. Perelman rejected that prize as well. Perelman proved the conjecture by deforming the manifold using the Ricci flow (which behaves similarly to the heat equation that describes the diffusion of heat through an object). The Ricci flow usually deforms the manifold towards a rounder shape, except for some cases where it stretches the manifold apart from itself towards what are known as singularities. Perelman and Hamilton then chop the manifold at the singularities (a process called "surgery"), causing the separate pieces to form into ball-like shapes. Major steps in the proof involve showing how manifolds behave when they are deformed by the Ricci flow, examining what sort of singularities develop, determining whether this surgery process can be completed, and establishing that the surgery need not be repeated infinitely many times. The first step is to deform the manifold using the Ricci flow. The Ricci flow was defined by Richard S. Hamilton as a way to deform manifolds. The formula for the Ricci flow is an imitation of the heat equation, which describes the way heat flows in a solid. Like the heat flow, Ricci flow tends towards uniform behavior. Unlike the heat flow, the Ricci flow could run into singularities and stop functioning. A singularity in a manifold is a place where it is not differentiable: like a corner or a cusp or a pinching. The Ricci flow was only defined for smooth differentiable manifolds. Hamilton used the Ricci flow to prove that some compact manifolds were diffeomorphic to spheres, and he hoped to apply it to prove the Poincaré conjecture. He needed to understand the singularities. Hamilton created a list of possible singularities that could form, but he was concerned that some singularities might lead to difficulties. He wanted to cut the manifold at the singularities and paste in caps, and then run the Ricci flow again, so he needed to understand the singularities and show that certain kinds of singularities do not occur. Perelman discovered the singularities were all very simple: consider that a cylinder is formed by 'stretching' a circle along a line in another dimension; repeating that process with spheres instead of circles essentially gives the form of the singularities.
Perelman proved this using something called the "Reduced Volume", which is closely related to an eigenvalue of a certain elliptic equation. Sometimes, an otherwise complicated operation reduces to multiplication by a scalar (a number). Such numbers are called eigenvalues of that operation. Eigenvalues are closely related to vibration frequencies and are used in analyzing a famous problem: can you hear the shape of a drum? Essentially, an eigenvalue is like a note being played by the manifold. Perelman proved this note goes up as the manifold is deformed by the Ricci flow. This helped him eliminate some of the more troublesome singularities that had concerned Hamilton, particularly the cigar soliton solution, which looked like a strand sticking out of a manifold with nothing on the other side. In essence, Perelman showed that all the strands that form can be cut and capped and none stick out on one side only. Completing the proof, Perelman takes any compact, simply connected, three-dimensional manifold without boundary and starts to run the Ricci flow. This deforms the manifold into round pieces with strands running between them. He cuts the strands and continues deforming the manifold until, eventually, he is left with a collection of round three-dimensional spheres. Then, he rebuilds the original manifold by connecting the spheres together with three-dimensional cylinders, morphs them into a round shape, and sees that, despite all the initial confusion, the manifold was, in fact, homeomorphic to a sphere. One immediate question was how one could be sure that infinitely many cuts would not be necessary, with the cutting progressing forever. Perelman proved this cannot happen by using minimal surfaces on the manifold. A minimal surface is one on which any local deformation increases area; a familiar example is a soap film spanning a bent loop of wire. Hamilton had shown that the area of a minimal surface decreases as the manifold undergoes Ricci flow. Perelman verified what happened to the area of the minimal surface when the manifold was sliced. He proved that, eventually, the area is so small that any cut after the area is that small can only be chopping off three-dimensional spheres and not more complicated pieces. Sormani describes this as a battle with a Hydra in Szpiro's book. This last part of the proof appeared in Perelman's third and final paper on the subject.
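For reference, the evolution equation at the heart of Hamilton's program can be set beside the heat equation it imitates, as discussed above. This side-by-side presentation is ours, not Perelman's; the notation is standard (g_ij is the evolving Riemannian metric, R_ij its Ricci curvature, u a temperature distribution):

% Hamilton's Ricci flow evolves a metric by its Ricci curvature, in formal
% analogy with the heat equation, which evolves a temperature by its Laplacian;
% both tend to smooth out irregularities of the evolving object.
\[
  \frac{\partial}{\partial t}\, g_{ij} = -2\, R_{ij}
  \quad \text{(Ricci flow)},
  \qquad
  \frac{\partial u}{\partial t} = \Delta u
  \quad \text{(heat equation)}.
\]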
Mathematics
Geometry
null
23799
https://en.wikipedia.org/wiki/Power%20set
Power set
In mathematics, the power set (or powerset) of a set S is the set of all subsets of S, including the empty set and S itself. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set. The powerset of S is variously denoted as P(S), 𝒫(S), ℘(S), ℙ(S), or 2^S. Any subset of P(S) is called a family of sets over S. Example If S is the set {x, y, z}, then all the subsets of S are {} (also denoted ∅, the empty set or the null set), {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is {{}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z}}. Properties If S is a finite set with the cardinality |S| = n (i.e., the number of all elements in the set S is n), then the number of all the subsets of S is |P(S)| = 2^n. This fact, as well as the reason for the notation 2^S denoting the power set P(S), is demonstrated below. An indicator function or a characteristic function of a subset A of a set S with the cardinality |S| = n is a function from S to the two-element set {0, 1}, denoted as I_A: S → {0, 1}, and it indicates whether an element of S belongs to A or not; if x in S belongs to A, then I_A(x) = 1, and I_A(x) = 0 otherwise. Each subset A of S is identified by or equivalent to the indicator function I_A, and {0, 1}^S, as the set of all the functions from S to {0, 1}, consists of all the indicator functions of all the subsets of S. In other words, {0, 1}^S is equivalent or bijective to the power set P(S). Since each element in S corresponds to either 0 or 1 under any function in {0, 1}^S, the number of all the functions in {0, 1}^S is 2^n. Since the number 2 can be defined as {0, 1} (see, for example, von Neumann ordinals), P(S) is also denoted as 2^S. Obviously |2^S| = 2^|S| holds. Generally speaking, X^Y is the set of all functions from Y to X and |X^Y| = |X|^|Y|. Cantor's diagonal argument shows that the power set of a set (whether infinite or not) always has strictly higher cardinality than the set itself (or informally, the power set must be larger than the original set). In particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers (see Cardinality of the continuum). The power set of a set S, together with the operations of union, intersection and complement, is a Σ-algebra over S and can be viewed as the prototypical example of a Boolean algebra. In fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras, this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra (see Stone's representation theorem). The power set of a set S forms an abelian group when it is considered with the operation of symmetric difference (with the empty set as the identity element and each set being its own inverse), and a commutative monoid when considered with the operation of intersection (with the entire set S as the identity element). It can hence be shown, by proving the distributive laws, that the power set considered together with both of these operations forms a Boolean ring. Representing subsets as functions In set theory, X^Y is the notation representing the set of all functions from Y to X. As "2" can be defined as {0, 1} (see, for example, von Neumann ordinals), 2^S (i.e., {0, 1}^S) is the set of all functions from S to {0, 1}. As shown above, 2^S and the power set of S, P(S), are considered identical set-theoretically. This equivalence can be applied to the example above, in which S = {x, y, z}, to get the isomorphism with the binary representations of numbers from 0 to 2^n − 1, with n being the number of elements in the set S, or |S| = n.
First, the enumerated set {(x, 1), (y, 2), (z, 3)} is defined, in which the number in each ordered pair represents the position of the paired element of S in a sequence of binary digits, such as {x, y} → 011(2); x of S is located at the first position from the right of this sequence and y is at the second from the right, and 1 in the sequence means the element of S corresponding to its position in the sequence exists in the subset of S for the sequence, while 0 means it does not. For the whole power set of S, we get: {} → 000(2) = 0, {x} → 001(2) = 1, {y} → 010(2) = 2, {x, y} → 011(2) = 3, {z} → 100(2) = 4, {x, z} → 101(2) = 5, {y, z} → 110(2) = 6, and {x, y, z} → 111(2) = 7. Such an injective mapping from P(S) to integers is arbitrary, so this representation of all the subsets of S is not unique, but the sort order of the enumerated set does not change its cardinality. (E.g., {(y, 1), (z, 2), (x, 3)} can be used to construct another injective mapping from P(S) to the integers without changing the number of one-to-one correspondences.) However, such finite binary representation is only possible if S can be enumerated. (In this example, x, y, and z are enumerated with 1, 2, and 3 respectively as the positions of the binary digit sequences.) The enumeration is possible even if S has an infinite cardinality (i.e., the number of elements in S is infinite), such as the set of integers or rationals, but not possible for example if S is the set of real numbers, in which case we cannot enumerate all irrational numbers. Relation to binomial theorem The binomial theorem is closely related to the power set. A k-elements combination from some set is another name for a k-elements subset, so the number of combinations, denoted as C(n, k) (also called binomial coefficient), is the number of subsets with k elements in a set with n elements; in other words, it's the number of sets with k elements which are elements of the power set of a set with n elements. For example, the power set of a set with three elements has: C(3, 0) = 1 subset with 0 elements (the empty subset), C(3, 1) = 3 subsets with 1 element (the singleton subsets), C(3, 2) = 3 subsets with 2 elements (the complements of the singleton subsets), and C(3, 3) = 1 subset with 3 elements (the original set itself). Using this relationship, we can compute |2^S| using the formula |2^S| = Σ_{k=0}^{|S|} C(|S|, k). Therefore, one can deduce the following identity, assuming |S| = n: |2^S| = Σ_{k=0}^{n} C(n, k) = 2^n. Recursive definition If S is a finite set, then a recursive definition of P(S) proceeds as follows: If S = {}, then P(S) = {{}}. Otherwise, let e ∈ S and T = S \ {e}; then P(S) = P(T) ∪ {F ∪ {e} : F ∈ P(T)}. In words: The power set of the empty set is a singleton whose only element is the empty set. For a non-empty set S, let e be any element of the set and T its relative complement; then the power set of S is a union of a power set of T and a power set of T each of whose elements is expanded with the element e. Subsets of limited cardinality The set of subsets of S of cardinality less than or equal to κ is sometimes denoted by P_κ(S) or [S]^{≤κ}, and the set of subsets with cardinality strictly less than κ is sometimes denoted P_{<κ}(S) or [S]^{<κ}. Similarly, the set of non-empty subsets of S might be denoted by P_{≥1}(S) or P^+(S). Power object A set can be regarded as an algebra having no nontrivial operations or defining equations. From this perspective, the idea of the power set of X as the set of subsets of X generalizes naturally to the subalgebras of an algebraic structure or algebra. The power set of a set, when ordered by inclusion, is always a complete atomic Boolean algebra, and every complete atomic Boolean algebra arises as the lattice of all subsets of some set. The generalization to arbitrary algebras is that the set of subalgebras of an algebra, again ordered by inclusion, is always an algebraic lattice, and every algebraic lattice arises as the lattice of subalgebras of some algebra. So in that regard, subalgebras behave analogously to subsets.
However, there are two important properties of subsets that do not carry over to subalgebras in general. First, although the subsets of a set form a set (as well as a lattice), in some classes it may not be possible to organize the subalgebras of an algebra as itself an algebra in that class, although they can always be organized as a lattice. Secondly, whereas the subsets of a set are in bijection with the functions from that set to the set {0, 1} = 2, there is no guarantee that a class of algebras contains an algebra that can play the role of 2 in this way. Certain classes of algebras enjoy both of these properties. The first property is more common; the case of having both is relatively rare. One class that does have both is that of multigraphs. Given two multigraphs G and H, a homomorphism h: G → H consists of two functions, one mapping vertices to vertices and the other mapping edges to edges. The set H^G of homomorphisms from G to H can then be organized as the graph whose vertices and edges are respectively the vertex and edge functions appearing in that set. Furthermore, the subgraphs of a multigraph G are in bijection with the graph homomorphisms from G to the multigraph Ω definable as the complete directed graph on two vertices (hence four edges, namely two self-loops and two more edges forming a cycle) augmented with a fifth edge, namely a second self-loop at one of the vertices. We can therefore organize the subgraphs of G as the multigraph Ω^G, called the power object of G. What is special about a multigraph as an algebra is that its operations are unary. A multigraph has two sorts of elements forming a set V of vertices and a set E of edges, and has two unary operations s, t: E → V giving the source (start) and target (end) vertices of each edge. An algebra all of whose operations are unary is called a presheaf. Every class of presheaves contains a presheaf Ω that plays the role for subalgebras that 2 plays for subsets. Such a class is a special case of the more general notion of elementary topos as a category that is closed (and moreover cartesian closed) and has an object Ω, called a subobject classifier. Although the term "power object" is sometimes used synonymously with exponential object Y^X, in topos theory Y is required to be Ω. Functors and quantifiers There is both a covariant and contravariant power set functor, P: Set → Set and P̄: Set^op → Set. The covariant functor is defined more simply, as the functor which sends a set S to P(S) and a morphism f: S → T (here, a function between sets) to the image morphism. That is, for A ⊆ S, P(f)(A) = f[A] = {f(x) : x ∈ A} ∈ P(T). Elsewhere in this article, the power set was defined as the set of functions of S into the set {0, 1} with 2 elements; formally, this defines a natural isomorphism P̄ ≅ Hom(−, 2). The contravariant power set functor is different from the covariant version in that it sends f to the preimage morphism, so that if f: S → T, then P̄(f): P(T) → P(S) sends B ⊆ T to f⁻¹[B]. This is because a general functor Hom(−, C) takes a morphism h: A → B to precomposition by h, so a function Hom(B, C) → Hom(A, C), which takes morphisms from B to C and takes them to morphisms from A to C, through B via h. In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier is the left adjoint.
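The indicator-function and binary-sequence correspondence described above translates directly into code. Below is a minimal illustrative Python sketch (the names power_set and power_set_recursive are invented for this example, not library functions): the first enumerates subsets via the 2^n indicator bit patterns, the second implements the recursive definition given earlier.

def power_set(s):
    # Enumerate all subsets of s via binary indicator sequences: subset k
    # contains s[i] exactly when bit i (from the right) of k is 1.
    s = list(s)
    n = len(s)
    for k in range(2 ** n):  # k runs from 0 to 2^n - 1
        yield {s[i] for i in range(n) if (k >> i) & 1}

def power_set_recursive(s):
    # Recursive definition: P({}) = {{}}; otherwise split off an element e
    # and take P(T) together with every member of P(T) extended by e.
    s = list(s)
    if not s:
        return [set()]
    e, rest = s[0], power_set_recursive(s[1:])
    return rest + [subset | {e} for subset in rest]

subsets = list(power_set(["x", "y", "z"]))
print(len(subsets))                  # 8 == 2**3, since |P(S)| = 2^|S|
print(sorted(map(sorted, subsets)))  # from [] up to ['x', 'y', 'z']
assert len(power_set_recursive("xyz")) == 8

With the enumeration x = 1, y = 2, z = 3 used in the article, subset index k in the first function reproduces the table {} → 000 through {x, y, z} → 111.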
Mathematics
Set theory
null
23842
https://en.wikipedia.org/wiki/Planets%20beyond%20Neptune
Planets beyond Neptune
Following the discovery of the planet Neptune in 1846, there was considerable speculation that another planet might exist beyond its orbit. The search began in the mid-19th century and continued at the start of the 20th with Percival Lowell's quest for Planet X. Lowell proposed the Planet X hypothesis to explain apparent discrepancies in the orbits of the giant planets, particularly Uranus and Neptune, speculating that the gravity of a large unseen ninth planet could have perturbed Uranus enough to account for the irregularities. Clyde Tombaugh's discovery of Pluto in 1930 appeared to validate Lowell's hypothesis, and Pluto was officially named the ninth planet. In 1978, Pluto was conclusively determined to be too small for its gravity to affect the giant planets, resulting in a brief search for a tenth planet. The search was largely abandoned in the early 1990s, when a study of measurements made by the Voyager 2 spacecraft found that the irregularities observed in Uranus's orbit were due to a slight overestimation of Neptune's mass. After 1992, the discovery of numerous small icy objects with orbits similar to or even wider than Pluto's led to a debate over whether Pluto should remain a planet, or whether it and its neighbours should, like the asteroids, be given their own separate classification. Although a number of the larger members of this group were initially described as planets, in 2006 the International Astronomical Union (IAU) reclassified Pluto and its largest neighbours as dwarf planets, leaving Neptune the farthest known planet in the Solar System. While the astronomical community widely agrees that Planet X, as originally envisioned, does not exist, the concept of an as-yet-unobserved planet has been revived by a number of astronomers to explain other anomalies observed in the outer Solar System. As of March 2014, observations with the WISE telescope had ruled out the possibility of a Saturn-sized object (95 Earth masses) out to 10,000 AU, and a Jupiter-sized (≈318 Earth masses) or larger object out to 26,000 AU. In 2014, based on similarities of the orbits of a group of recently discovered extreme trans-Neptunian objects, astronomers hypothesized the existence of a super-Earth or ice giant planet, 2 to 15 times the mass of the Earth, lying beyond 200 AU and possibly on a highly inclined orbit at some 1,500 AU. In 2016, further work showed this unknown distant planet is likely to be on an inclined, eccentric orbit that goes no closer than about 200 AU and no farther than about 1,200 AU from the Sun. The orbit is predicted to be anti-aligned to the clustered extreme trans-Neptunian objects. Because Pluto is no longer considered a planet by the IAU, this new hypothetical object has become known as Planet Nine. Early speculation In the 1840s, the French mathematician Urbain Le Verrier used Newtonian mechanics to analyse perturbations in the orbit of Uranus, and hypothesised that they were caused by the gravitational pull of a yet-undiscovered planet. Le Verrier predicted the position of this new planet and sent his calculations to German astronomer Johann Gottfried Galle. On 23 September 1846, the night following his receipt of the letter, Galle and his student Heinrich d'Arrest discovered Neptune, exactly where Le Verrier had predicted. There remained some slight discrepancies in the giant planets' orbits. These were taken to indicate the existence of yet another planet orbiting beyond Neptune.
Even before Neptune's discovery, some speculated that one planet alone was not enough to explain the discrepancy. On 17 November 1834, the British amateur astronomer the Reverend Thomas John Hussey reported a conversation he had had with French astronomer Alexis Bouvard to George Biddell Airy, the British Astronomer Royal. Hussey reported that when he suggested to Bouvard that the unusual motion of Uranus might be due to the gravitational influence of an undiscovered planet, Bouvard replied that the idea had occurred to him, and that he had corresponded with Peter Andreas Hansen, director of the Seeberg Observatory in Gotha, about the subject. Hansen's opinion was that a single body could not adequately explain the motion of Uranus, and he postulated that two planets lay beyond Uranus. In 1848, Jacques Babinet raised an objection to Le Verrier's calculations, claiming that Neptune's observed mass was smaller and its orbit larger than Le Verrier had initially predicted. He postulated, based largely on simple subtraction from Le Verrier's calculations, that another planet of roughly 12 Earth masses, which he named "Hyperion", must exist beyond Neptune. Le Verrier denounced Babinet's hypothesis, saying, "[There is] absolutely nothing by which one could determine the position of another planet, barring hypotheses in which imagination played too large a part." In 1850, James Ferguson, Assistant Astronomer at the United States Naval Observatory, noted that he had "lost" a star he had observed, GR1719k, which Lt. Matthew Maury, the superintendent of the Observatory, claimed was evidence that it must be a new planet. Subsequent searches failed to recover the "planet" in a different position, and in 1878, C. H. F. Peters, director of the Hamilton College Observatory in New York, showed that the star had not in fact vanished, and that the previous results had been due to human error. In 1879, Camille Flammarion noted that the comets 1862 III and 1889 III had aphelia of 47 and 49 AU, respectively, suggesting that they might mark the orbital radius of an unknown planet that had dragged them into an elliptical orbit. Astronomer George Forbes concluded on the basis of this evidence that two planets must exist beyond Neptune. He calculated, based on the fact that four comets possessed aphelia at around 100 AU and a further six with aphelia clustered at around 300 AU, the orbital elements of a pair of hypothetical trans-Neptunian planets. These elements accorded suggestively with those calculated independently by another astronomer, David Peck Todd, suggesting to many that they might be valid. However, sceptics argued that the orbits of the comets involved were still too uncertain to produce meaningful results. Some have considered Forbes's hypothesis a precursor to Planet Nine. In 1900 and 1901, Harvard College Observatory director William Henry Pickering led two searches for trans-Neptunian planets. The first was begun by Danish astronomer Hans Emil Lau who, after studying the data on the orbit of Uranus from 1690 to 1895, concluded that one trans-Neptunian planet alone could not account for the discrepancies in its orbit, and postulated the positions of two planets he believed were responsible. The second was launched when Gabriel Dallet suggested that a single trans-Neptunian planet lying at 47 AU could account for the motion of Uranus. Pickering agreed to examine plates for any suspected planets. In neither case were any found.
In 1902, after observing the orbits of comets with aphelia beyond Neptune, Theodor Grigull of Münster, Germany proclaimed the existence of a Uranus-sized planet at 50 AU with a 360-year period, which he named Hades, cross-checking with the deviations in the orbit of Uranus. In 1921, Grigull revised his orbital period to 310–330 years, to better fit the observed deviations. In 1909, Thomas Jefferson Jackson See, an astronomer with a reputation as an egocentric contrarian, opined "there is certainly one, most likely two and possibly three planets beyond Neptune". Tentatively naming the first planet "Oceanus", he placed their respective distances at 42, 56 and 72 AU from the Sun. He gave no indication as to how he determined their existence, and no known searches were mounted to locate them. In 1911, Indian astronomer Venkatesh P. Ketakar suggested the existence of two trans-Neptunian planets, which he named after the Hindu gods Brahma and Vishnu, by reworking the patterns observed by Pierre-Simon Laplace in the planetary satellites of Jupiter and applying them to the outer planets. The three inner Galilean moons of Jupiter, Io, Europa and Ganymede, are locked in a complicated 1:2:4 resonance called a Laplace resonance. Ketakar suggested that Uranus, Neptune and his hypothetical trans-Neptunian planets were also locked in Laplace-like resonances. This is incorrect; Uranus and Neptune, while in a near-2:1 resonance, are not in full resonance. His calculations predicted a mean distance for Brahma of 38.95 AU and an orbital period of 242.28 Earth years (3:4 resonance with Neptune). When Pluto was discovered 19 years later, its mean distance of 39.48 AU and orbital period of 248 Earth years were close to Ketakar's prediction (Pluto in fact has a 2:3 resonance with Neptune). Ketakar made no predictions for the orbital elements other than mean distance and period. It is not clear how Ketakar arrived at these figures, and his second planet, Vishnu, was never located. Planet X In 1894, with the help of William Pickering, Percival Lowell (a wealthy Bostonian) founded the Lowell Observatory in Flagstaff, Arizona. In 1906, convinced he could resolve the conundrum of Uranus's orbit, he began an extensive project to search for a trans-Neptunian planet, which he named Planet X, a name previously used by Gabriel Dallet. The X in the name represents an unknown and is pronounced as the letter, as opposed to the Roman numeral for 10 (at the time, Planet X would have been the ninth planet). Lowell's hope in tracking down Planet X was to establish his scientific credibility, which had eluded him due to his widely derided belief that channel-like features visible on the surface of Mars were canals constructed by an intelligent civilization. Lowell's first search focused on the ecliptic, the plane encompassed by the zodiac where the other planets in the Solar System lie. Using a 5-inch photographic camera, he manually examined over 200 three-hour exposures with a magnifying glass, and found no planets. At that time Pluto was too far above the ecliptic to be imaged by the survey. After revising his predicted possible locations, Lowell conducted a second search from 1914 to 1916. In 1915, he published his Memoir of a Trans-Neptunian Planet, in which he concluded that Planet X had a mass roughly seven times that of Earth—about half that of Neptune—and a mean distance from the Sun of 43 AU. He assumed Planet X would be a large, low-density object with a high albedo, like the giant planets. 
As a result, it would show a disc with diameter of about one arcsecond and an apparent magnitude between 12 and 13—bright enough to be spotted. Separately, in 1908, Pickering announced that, by analysing irregularities in Uranus's orbit, he had found evidence for a ninth planet. His hypothetical planet, which he termed "Planet O" (because it came after "N", i.e. Neptune), possessed a mean orbital radius of 51.9 AU and an orbital period of 373.5 years. Plates taken at his observatory in Arequipa, Peru, showed no evidence for the predicted planet, and British astronomer P. H. Cowell showed that the irregularities observed in Uranus's orbit virtually disappeared once the planet's displacement of longitude was taken into account. Lowell himself, despite his close association with Pickering, dismissed Planet O out of hand, saying, "This planet is very properly designated "O", [for it] is nothing at all." Unbeknownst to Pickering, four of the photographic plates taken in the search for "Planet O" by astronomers at the Mount Wilson Observatory in 1919 captured images of Pluto, though this was only recognised years later. Pickering went on to suggest many other possible trans-Neptunian planets up to the year 1932, which he named P, Q, R, S, T, and U; none were ever detected. Discovery of Pluto Lowell's sudden death in 1916 temporarily halted the search for Planet X. Failing to find the planet, according to one friend, "virtually killed him". Lowell's widow, Constance, engaged in a legal battle with the observatory over Lowell's legacy which halted the search for Planet X for several years. In 1925, the observatory obtained glass discs for a new wide-field telescope to continue the search, constructed with funds from Abbott Lawrence Lowell, Percival's brother. In 1929 the observatory's director, Vesto Melvin Slipher, summarily handed the job of locating the planet to Clyde Tombaugh, a 22-year-old Kansas farm boy who had only just arrived at the Lowell Observatory after Slipher had been impressed by a sample of his astronomical drawings. Tombaugh's task was to systematically capture sections of the night sky in pairs of images. Each image in a pair was taken two weeks apart. He then placed both images of each section in a machine called a blink comparator, which by exchanging images quickly created a time lapse illusion of the movement of any planetary body. To reduce the chances that a faster-moving (and thus closer) object be mistaken for the new planet, Tombaugh imaged each region near its opposition point, 180 degrees from the Sun, where the apparent retrograde motion for objects beyond Earth's orbit is at its strongest. He also took a third image as a control to eliminate any false results caused by defects in an individual plate. Tombaugh decided to image the entire zodiac, rather than focus on those regions suggested by Lowell. By the beginning of 1930, Tombaugh's search had reached the constellation of Gemini. On 18 February 1930, after searching for nearly a year and examining nearly 2 million stars, Tombaugh discovered a moving object on photographic plates taken on 23 January and 29 January of that year. A lesser-quality photograph taken on January 21 confirmed the movement. Upon confirmation, Tombaugh walked into Slipher's office and declared, "Doctor Slipher, I have found your Planet X." The object was just six degrees from one of two locations for Planet X Lowell had suggested; thus it seemed he had at last been vindicated. 
After the observatory obtained further confirmatory photographs, news of the discovery was telegraphed to the Harvard College Observatory on March 13, 1930. The new object was later precovered on photographs dating back to 19 March 1915. The decision to name the object Pluto was intended in part to honour Percival Lowell, as his initials made up the word's first two letters. After discovering Pluto, Tombaugh continued to search the ecliptic for other distant objects. He found hundreds of variable stars and asteroids, as well as two comets, but no further planets. Pluto loses Planet X title To the observatory's disappointment and surprise, Pluto showed no visible disc; it appeared as a point, no different from a star, and, at only 15th magnitude, was six times dimmer than Lowell had predicted, which meant it was either very small, or very dark. Because of Lowell's predictions, astronomers thought that Pluto would be massive enough to perturb planets. This led them to assume that its albedo could be no less than 0.07 (meaning that, at minimum, it would reflect 7% of the light that hit it), which would have made Pluto about as dark as asphalt, and similar in reflectivity to the least reflective planet, which is Mercury. This would have given Pluto an estimated mass of no more than 70% that of Earth. Observations also revealed that Pluto's orbit was very elliptical, far more than that of any other planet. Almost immediately, some astronomers questioned Pluto's status as a planet. Barely a month after its discovery was announced, on April 14, 1930, in an article in The New York Times, Armin O. Leuschner suggested that Pluto's dimness and high orbital eccentricity made it more similar to an asteroid or comet: "The Lowell result confirms the possible high eccentricity announced by us on April 5. Among the possibilities are a large asteroid greatly disturbed in its orbit by close approach to a major planet such as Jupiter, or it may be one of many long-period planetary objects yet to be discovered, or a bright cometary object." In that same article, Harvard Observatory director Harlow Shapley wrote that Pluto was a "member of the Solar System not comparable with known asteroids and comets, and perhaps of greater importance to cosmogony than would be another major planet beyond Neptune." In 1931, after examining the structure of the residuals of Uranus' longitude using a trigonometric formula, Ernest W. Brown asserted (in agreement with E. C. Bower) that the presumed irregularities in the orbit of Uranus could not be due to the gravitational effect of a more distant planet, and thus that Lowell's supposed prediction was "purely accidental". Throughout the mid-20th century, estimates of Pluto's mass were revised downward. In 1931, Nicholson and Mayall calculated its mass, based on its supposed effect on the giant planets, as roughly that of Earth; a value somewhat in accord with the 0.91 Earth mass calculated in 1942 by Lloyd R. Wylie at the US Naval Observatory, using the same assumptions. In 1949, Gerard Kuiper's measurements of Pluto's diameter with the 200-inch telescope at Mount Palomar Observatory led him to the conclusion that it was midway in size between Mercury and Mars and that its mass was most probably about 0.1 Earth mass. In 1973, based on the similarities in the periodicity and amplitude of brightness variation with Triton, Dennis Rawlins conjectured Pluto's mass must be similar to Triton's. 
In retrospect, the conjecture turns out to have been correct; it had been argued by astronomers Walter Baade and E. C. Bower as early as 1934. However, because Triton's mass was then believed to be roughly 2.5% of the Earth–Moon system (more than ten times its actual value), Rawlins's determination for Pluto's mass was similarly incorrect. It was nonetheless a meagre enough value for him to conclude Pluto was not Planet X. In 1976, Dale Cruikshank, Carl Pilcher, and David Morrison of the University of Hawaii analysed spectra from Pluto's surface and determined that it must contain methane ice, which is highly reflective. This meant that Pluto, far from being dark, was in fact exceptionally bright, and thus could have no more than a small fraction of an Earth mass. Pluto's size was finally determined conclusively in 1978, when American astronomer James W. Christy discovered its moon Charon. This enabled him, together with Robert Sutton Harrington of the U.S. Naval Observatory, to measure the mass of the Pluto–Charon system directly by observing the moon's orbital motion around Pluto. They determined Pluto's mass to be 1.31×10²² kg; roughly one five-hundredth that of Earth or one-sixth that of the Moon, and far too small to account for the observed discrepancies in the orbits of the outer planets. Lowell's prediction had been a coincidence: if there was a Planet X, it was not Pluto. Further searches for Planet X After 1978, a number of astronomers kept up the search for Lowell's Planet X, convinced that, because Pluto was no longer a viable candidate, an unseen tenth planet must have been perturbing the outer planets. In the 1980s and 1990s, Robert Harrington led a search to determine the real cause of the apparent irregularities. He calculated that any Planet X would be at roughly three times the distance of Neptune from the Sun; its orbit would be highly eccentric, and strongly inclined to the ecliptic—the planet's orbit would be at roughly a 32-degree angle from the orbital plane of the other known planets. This hypothesis was met with a mixed reception. Noted Planet X skeptic Brian G. Marsden of the Minor Planet Center pointed out that these discrepancies were a hundredth the size of those noticed by Le Verrier, and could easily be due to observational error. In 1972, Joseph Brady of the Lawrence Livermore National Laboratory studied irregularities in the motion of Halley's Comet. Brady claimed that they could have been caused by a Jupiter-sized planet beyond Neptune at 59 AU that is in a retrograde orbit around the Sun. However, both Marsden and Planet X proponent P. Kenneth Seidelmann attacked the hypothesis, showing that Halley's Comet randomly and irregularly ejects jets of material, causing changes to its own orbital trajectory, and that such a massive object as Brady's Planet X would have severely affected the orbits of known outer planets. Although its mission did not involve a search for Planet X, the IRAS space observatory made headlines briefly in 1983 due to an "unknown object" that was at first described as "possibly as large as the giant planet Jupiter and possibly so close to Earth that it would be part of this Solar System". Further analysis revealed that of several unidentified objects, nine were distant galaxies and the tenth was "interstellar cirrus"; none were found to be Solar System bodies. In 1988, A. A. Jackson and R. M. Killen studied the stability of Pluto's resonance with Neptune by placing test "Planet X-es" with various masses and at various distances from Pluto.
Pluto and Neptune's orbits are in a 3:2 resonance, which prevents their collision or even any close approaches, regardless of their separation in the z axis. It was found that the hypothetical object's mass had to exceed 5 Earth masses to break the resonance, that the parameter space is quite large, and that a large variety of objects could have existed beyond Pluto without disturbing the resonance. Four test orbits of a trans-Plutonian planet were integrated forward for four million years in order to determine the effects of such a body on the stability of the Neptune–Pluto 3:2 resonance. Planets beyond Pluto with masses of 0.1 and 1.0 Earth masses in orbits at 48.3 and 75.5 AU, respectively, do not disturb the 3:2 resonance. Test planets of 5 Earth masses with semi-major axes of 52.5 and 62.5 AU disrupt the four-million-year libration of Pluto's argument of perihelion. Planet X disproved Harrington died in January 1993, without having found Planet X. Six months before, E. Myles Standish had used data from Voyager 2's 1989 flyby of Neptune, which had revised the planet's total mass downward by 0.5%—an amount comparable to the mass of Mars—to recalculate its gravitational effect on Uranus. When Neptune's newly determined mass was used in the Jet Propulsion Laboratory Developmental Ephemeris (JPL DE), the supposed discrepancies in the Uranian orbit, and with them the need for a Planet X, vanished. There are no discrepancies in the trajectories of any space probes such as Pioneer 10, Pioneer 11, Voyager 1, and Voyager 2 that can be attributed to the gravitational pull of a large undiscovered object in the outer Solar System. Today, most astronomers agree that Planet X, as Lowell defined it, does not exist. Discovery of further trans-Neptunian objects After the discovery of Pluto and Charon, no more trans-Neptunian objects (TNOs) were found until 15760 Albion in 1992. Since then, thousands of such objects have been discovered. Most are now recognized as part of the Kuiper belt, a swarm of icy bodies left over from the Solar System's formation that orbit near the ecliptic plane just beyond Neptune. Though none were as large as Pluto, some of these distant trans-Neptunian objects, such as Sedna, were initially described in the media as "new planets". In 2005, astronomer Mike Brown and his team announced the discovery of a trans-Neptunian object, later named Eris after the Greek goddess of discord and strife, then thought to be just barely larger than Pluto. Soon afterwards, a NASA Jet Propulsion Laboratory press release described the object as the "tenth planet". Eris was never officially classified as a planet, and the 2006 definition of planet defined both Eris and Pluto not as planets but as dwarf planets because they have not cleared their neighbourhoods. They do not orbit the Sun alone, but as part of a population of similarly sized objects. Pluto itself is now recognized as being a member of the Kuiper belt and the largest dwarf planet, larger than the more massive Eris. A number of astronomers, most notably Alan Stern, the head of NASA's New Horizons mission to Pluto, contend that the IAU's definition is flawed, and that Pluto, Eris, and all large trans-Neptunian objects should be considered planets in their own right. However, the discovery of Eris did not rehabilitate the Planet X theory because it is far too small to have significant effects on the outer planets' orbits.
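The resonance arithmetic here follows from Kepler's third law, which in Solar units reads P² = a³ (P in years, a in AU). Below is a minimal illustrative Python sketch, assuming approximate semi-major axes of 30.07 AU for Neptune and 39.48 AU for Pluto (the function name is invented for the example):

def period_years(a_au):
    # Kepler's third law in Solar units: P[yr]^2 = a[AU]^3
    return a_au ** 1.5

neptune = period_years(30.07)  # ~164.9 years
pluto = period_years(39.48)    # ~248.1 years, matching the value quoted earlier

# Pluto completes 2 orbits for every 3 of Neptune (the 2:3, or 3:2, resonance):
print(round(pluto / neptune, 3))  # ~1.504, very close to 3/2

The ratio is not exactly 1.5 because the quoted orbital elements are approximate; the libration of the resonance keeps the two bodies locked despite such small offsets.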
Subsequently proposed trans-Neptunian planets Although most astronomers accept that Lowell's Planet X does not exist, a number have revived the idea that a large unseen planet could create observable gravitational effects in the outer Solar System. These hypothetical objects are often referred to as "Planet X", although the conception of these objects may differ considerably from that proposed by Lowell. Orbits of distant objects Sedna's orbit When Sedna was discovered, its extreme orbit raised questions about its origin. Its perihelion is so distant (approximately 76 AU) that no currently observed mechanism can explain Sedna's eccentric distant orbit. It is too far from the planets to have been affected by the gravity of Neptune or the other giant planets and too bound to the Sun to be affected by outside forces such as the galactic tides. Hypotheses to explain its orbit include that it was affected by a passing star, that it was captured from another planetary system, or that it was tugged into its current position by a trans-Neptunian planet. The most obvious solution to determining Sedna's peculiar orbit would be to locate a number of objects in a similar region, whose various orbital configurations would provide an indication as to their history. If Sedna had been pulled into its orbit by a trans-Neptunian planet, any other objects found in its region would have a perihelion similar to Sedna's. Excitement of Kuiper belt orbits In 2008, Tadashi Mukai and Patryk Sofia Lykawka suggested that a distant Mars- or Earth-sized planet, currently in a highly eccentric orbit beyond 100 AU with an orbital period of about 1,000 years and an inclination of 20° to 40°, was responsible for the structure of the Kuiper belt. They proposed that the perturbations of this planet excited the eccentricities and inclinations of the trans-Neptunian objects, truncated the planetesimal disk at 48 AU, and detached the orbits of objects like Sedna from Neptune. During Neptune's migration, this planet is posited to have been captured in an outer resonance of Neptune and to have evolved into a higher perihelion orbit due to the Kozai mechanism, leaving the remaining trans-Neptunian objects on stable orbits. Elongated orbits of a group of Kuiper belt objects In 2012, Rodney Gomes modelled the orbits of 92 Kuiper belt objects and found that six of those orbits were far more elongated than the model predicted. He concluded that the simplest explanation was the gravitational pull of a distant planetary companion, such as a Neptune-sized object at 1,500 AU. This Neptune-sized object would cause the perihelia of objects with semi-major axes greater than 300 AU to oscillate, delivering them into planet-crossing orbits or detached orbits like Sedna's. Planet Nine In 2014, astronomers announced the discovery of 2012 VP113, a large object with a Sedna-like 4,200-year orbit and a perihelion of roughly 80 AU, which led them to suggest that it offered evidence of a potential trans-Neptunian planet. Trujillo and Sheppard argued that the orbital clustering of arguments of perihelia for it and other extremely distant TNOs suggests the existence of a "super-Earth" of between 2 and 15 Earth masses beyond 200 AU and possibly on an inclined orbit at 1,500 AU.
In 2014, astronomers at the Universidad Complutense in Madrid suggested that the available data actually indicate more than one trans-Neptunian planet; subsequent work further suggests that the evidence is robust, but that the signposts may be the semi-major axes and nodal distances rather than the longitudes of the ascending nodes and the arguments of perihelia. Additional work based on improved orbits of 39 objects still indicates that more than one perturber could be present and that one of them could orbit the Sun at 300–400 AU. On January 20, 2016, Brown and Konstantin Batygin published an article corroborating Trujillo and Sheppard's initial findings, proposing a super-Earth (dubbed Planet Nine) based on a statistical clustering of the arguments of perihelia (noted before) near zero and also ascending nodes near 113° of six distant trans-Neptunian objects. They estimated it to be ten times the mass of Earth (about 60% the mass of Neptune) with a semimajor axis of approximately 400–1,500 AU. Probability Even without gravitational evidence, Mike Brown, the discoverer of Sedna, has argued that Sedna's 12,000-year orbit means that probability alone suggests that an Earth-sized object exists beyond Neptune. Sedna's orbit is so eccentric that it spends only a small fraction of its orbital period near the Sun, where it can be easily observed. This means that unless its discovery was a freak accident, there is probably a substantial population of objects roughly Sedna's diameter yet to be observed in its orbital region. However, Brown notes that even though it might approach or exceed Earth in size, should such an object be found it would still be a "dwarf planet" by the current definition, because it would not have cleared its neighbourhood sufficiently. Kuiper cliff and "Planet Ten" Additionally, speculation of a possible trans-Neptunian planet has revolved around the so-called "Kuiper cliff". The Kuiper belt terminates suddenly at a distance of 48 AU from the Sun. Brunini and Melita have speculated that this sudden drop-off may be attributed to the presence of an object with a mass between those of Mars and Earth located beyond 48 AU. The presence of an object with a mass similar to that of Mars in a circular orbit at about 60 AU leads to a trans-Neptunian object population incompatible with observations. For instance, it would severely deplete the plutino population. Astronomers have not excluded the possibility of an object with a mass similar to that of Earth located farther out, on an eccentric and inclined orbit. Computer simulations by Patryk Lykawka of Kobe University have suggested that an object with a mass between 0.3 and 0.7 Earth masses, ejected outward by Neptune early in the Solar System's formation and currently in an elongated orbit beyond 100 AU from the Sun, could explain the Kuiper cliff and the peculiar detached objects such as Sedna. Although some astronomers, such as Renu Malhotra and David Jewitt, have cautiously supported these claims, others, such as Alessandro Morbidelli, have dismissed them as "contrived". Malhotra & Volk (2017) argued that an unexpected variance in inclination for KBOs farther than the cliff at 50 AU provided evidence of a possible Mars-sized planet residing at the edge of the Solar System, which many news sources began referring to as "Planet Ten". Shortly after it was proposed, Lorenzo Iorio showed that the hypothetical planet's existence cannot be ruled out by Cassini ranging data.
Starting in 2018, several surveys have discovered multiple objects located beyond the Kuiper Cliff. Some of these new discoveries are close to the heliopause (120 AU) or well beyond it. An analysis of the TNO data available prior to September 2023 shows that there is a gap at about 72 AU, far from any mean-motion resonances with Neptune. Such a gap may have been induced by a massive perturber located further away. Other proposed planets Tyche was a hypothetical gas giant proposed to be located in the Solar System's Oort cloud. It was first proposed in 1999 by astrophysicists John Matese, Patrick Whitman and Daniel Whitmire of the University of Louisiana at Lafayette. They argued that evidence of Tyche's existence could be seen in a supposed bias in the points of origin for long-period comets. In 2013, Matese and Whitmire re-evaluated the comet data and noted that Tyche, if it existed, would be detectable in the archive of data that was collected by NASA's Wide-field Infrared Survey Explorer (WISE) telescope. In 2014, NASA announced that the WISE survey had ruled out any object with Tyche's characteristics, indicating that Tyche as hypothesized by Matese, Whitman, and Whitmire does not exist. Conversely, in 1999, British astronomer John Murray theorized the existence of a Jupiter-sized planet similar to Tyche, 32,000 astronomical units away from the Sun in a retrograde orbit. Murray estimated that the planet would be located in the constellation of Delphinus. These parameters, also based on the orbits of various long-period comets, are different from those originally hypothesized by Matese, Whitman, and Whitmire for Tyche, and hence signify a different object. Unlike Tyche, this putative planet lies outside the 26,000 AU limit set by mid-infrared observations by the WISE telescope, but this limit can be as high as 82,000 AU based on albedo. A brown dwarf, for instance, would have a smaller albedo than a Jupiter analog. The oligarch theory of planet formation states that there were hundreds of planet-sized objects, known as oligarchs, in the early stages of the Solar System's evolution. In 2005, astronomer Eugene Chiang speculated that although some of these oligarchs became the planets we know today, most would have been flung outward by gravitational interactions. Some may have escaped the Solar System altogether to become free-floating planets, whereas others would be orbiting in a halo around the Solar System, with orbital periods of millions of years. This halo would lie between a third and a thirtieth of the distance to the Oort cloud from the Sun. In December 2015, astronomers at the Atacama Large Millimeter Array (ALMA) detected a brief series of 350 GHz pulses that they concluded must either be a series of independent sources, or a single, fast-moving source. Deciding that the latter was the most likely, they calculated based on its speed that, were it bound to the Sun, the object, which they named "Gna" after a fast-moving messenger goddess in Norse mythology, would be about 12–25 AU distant and have a dwarf planet-sized diameter of 220 to 880 km. However, if it were a rogue planet not gravitationally bound to the Sun, and as far away as 4,000 AU, it could be much larger. The paper was never formally accepted, and has been withdrawn until the detection is confirmed.
Scientists' reactions to the notice were largely sceptical; Mike Brown commented that, "If it is true that ALMA accidentally discovered a massive outer Solar System object in its tiny, tiny, tiny, field of view, that would suggest that there are something like 200,000 Earth-sized planets in the outer Solar System ... Even better, I just realized that this many Earth-sized planets existing would destabilize the entire Solar System and we would all die." Constraints on additional planets As of 2023, the following observations severely constrain the mass and distance of any possible additional Solar System planet: An analysis of mid-infrared observations with the WISE telescope has ruled out the possibility of a Saturn-sized object (95 Earth masses) out to 10,000 AU, and a Jupiter-sized or larger object out to 26,000 AU. WISE has continued to take more data since then, and NASA has invited the public to help search this data for evidence of planets beyond these limits, via the Backyard Worlds: Planet 9 citizen science project. Using modern data on the anomalous precession of the perihelia of Saturn, Earth, and Mars, Lorenzo Iorio concluded that any unknown planet with a mass of 0.7 times that of Earth must be farther than 350–400 AU; one with a mass of 2 times that of Earth, farther than 496–570 AU; and finally one with a mass of 15 times that of Earth, farther than 970–1,111 AU. Moreover, Iorio stated that the modern ephemerides of the Solar System's outer planets have provided even tighter constraints: no celestial body with a mass of 15 times that of Earth can exist closer than 1,100–1,300 AU. However, work by another group of astronomers using a more comprehensive model of the Solar System found that Iorio's conclusion was only partially correct. Their analysis of Cassini data on Saturn's orbital residuals found that observations were inconsistent with a planetary body with an orbit and mass similar to those of Batygin and Brown's Planet Nine having a true anomaly of −130° to −110°, or −65° to 85°. Furthermore, the analysis found that Saturn's orbit is slightly better explained if such a body is located at a particular true anomaly along that proposed orbit; at this location, it would be approximately 630 AU from the Sun. Using public data on the orbits of the extreme trans-Neptunian objects, it has been confirmed that a statistically significant (62σ) asymmetry between the shortest mutual ascending and descending nodal distances does exist; in addition, multiple highly improbable (p < 0.0002) correlated pairs of orbits with mutual nodal distances as low as 0.2 AU at 152 AU from the Solar System's barycentre or 1.3 AU at 339 AU have been found. Both findings suggest that massive perturbers may exist at hundreds of AUs from the Sun and are difficult to explain within the context of a uniform distribution of orbital orientations in the outermost Solar System.
Physical sciences
Solar System
Astronomy
23862
https://en.wikipedia.org/wiki/Python%20%28programming%20language%29
Python (programming language)
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a "batteries included" language due to its comprehensive standard library. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language and first released it in 1991 as Python 0.9.0. Python 2.0 was released in 2000. Python 3.0, released in 2008, was a major revision not completely backward-compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of Python 2. Python consistently ranks as one of the most popular programming languages, and has gained widespread use in the machine learning community. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Its implementation began in December 1989. Van Rossum shouldered sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from his responsibilities as Python's "benevolent dictator for life" (BDFL), a title the Python community bestowed upon him to reflect his long-term commitment as the project's chief decision-maker (he has since come out of retirement and is self-titled "BDFL-emeritus"). In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python is said to come from the British comedy series Monty Python's Flying Circus. Python 2.0 was released on 16 October 2000, with many major new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. No further security patches or other improvements will be released for it. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e. "2.7.18+" (plus 3.10), with the plus meaning (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, with some new semantics and changed syntax. At least every Python release since (now unsupported) 3.5 has added some syntax to the language, and a few later releases have dropped outdated modules, or changed semantics, at least in a minor way. As of late 2024, Python 3.13 is the latest stable release, and it and, for a few more months, 3.12 are the only releases with active support, including for bug fixes (as opposed to just security fixes); Python 3.9 is the oldest supported version of Python (albeit in the "security support" phase), due to Python 3.8 reaching end-of-life. Starting with 3.13, it and later versions have 2 years of full support (up from one and a half), followed by 3 years of security support (for the same total support period as before).
Security updates were expedited in 2021 (and again twice in 2022, with more fixes in 2023 and in September 2024 for Python 3.12.6 down to 3.8.20), since all Python versions (including 2.7) were insecure because of issues leading to possible remote code execution and web-cache poisoning. Python 3.10 added the | union type operator and the match and case keywords (for structural pattern matching statements). Python 3.11 expanded exception handling functionality, and Python 3.12 added the new keyword type. Notable changes in 3.11 from 3.10 include increased program execution speed and improved error reporting: Python 3.11 claims to be between 10 and 60% faster than Python 3.10, and Python 3.12 adds another 5% on top of that. Error messages were also improved (and improved again in 3.14), among many other changes. Python 3.13 introduces more syntax for types; a new and improved interactive interpreter (REPL), featuring multi-line editing and color support; an incremental garbage collector (producing shorter pauses for collection in programs with many objects, in addition to the speed improvements made in 3.11 and 3.12); an experimental just-in-time (JIT) compiler, which must be enabled explicitly to obtain the speed increase; and an experimental free-threaded build mode, which disables the global interpreter lock (GIL), allowing threads to run more concurrently; the latter feature is enabled with python3.13t or python3.13t.exe. Python 3.13 also introduces some changes in behavior, i.e. new "well-defined semantics", fixing bugs (plus many removals of deprecated classes, functions and methods, and removal of some of the C API and outdated modules): "The [old] implementation of locals() and frame.f_locals is slow, inconsistent and buggy [and it] has many corner cases and oddities. Code that works around those may need to be changed. Code that uses locals() for simple templating, or print debugging, will continue to work correctly." Some further standard library modules and many deprecated classes, functions and methods will be removed in Python 3.15 or 3.16. Python 3.11 added Sigstore digital verification signatures for all CPython artifacts (in addition to PGP); since the use of PGP has been criticized by security practitioners, Python is moving to Sigstore exclusively and dropping PGP from 3.14. Python 3.14 is now in alpha 3; regarding a possible change to annotations: "In Python 3.14, from __future__ import annotations will continue to work as it did before, converting annotations into strings." PEP 711 proposes PyBI, a standard format for distributing Python binaries. Python 3.15 will make UTF-8 mode the default; the mode exists in all current Python versions but currently has to be opted into. UTF-8 is already used by default for most things on Windows (and elsewhere), but not, for example, when opening files; enabling the mode makes code fully cross-platform, i.e. UTF-8 is used for everything on all platforms. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of its features support functional programming and aspect-oriented programming (including metaprogramming and metaobjects). Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a "glue language" because it can seamlessly integrate components written in other languages.
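A minimal sketch of this glue role, using the standard-library ctypes module to call a function from the C standard library (library lookup is platform-dependent, so this is illustrative rather than portable; on some platforms find_library may fail):

import ctypes
import ctypes.util

# Locate and load the C standard library; the name varies by platform.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature of C's abs() and call it from Python.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-42))  # 42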
Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Its design offers some support for functional programming in the Lisp tradition: it has map, filter and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Its core philosophy is summarized in the Zen of Python (PEP 20), which includes aphorisms such as: Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Readability counts. However, Python features regularly violate these principles and have received criticism for adding unnecessary language bloat. Responses to these criticisms are that the Zen of Python is a guideline rather than a rule. The addition of some new features has been so controversial that Guido van Rossum resigned as Benevolent Dictator for Life following vitriol over the addition of the assignment expression operator in Python 3.8. Nevertheless, rather than building all of its functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which espoused the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. In contrast to Perl's "there is more than one way to do it" motto, Python embraces a "there should be one—and preferably only one—obvious way to do it" philosophy. In practice, however, Python provides many ways to achieve the same task. There are, for example, at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote: "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers usually strive to avoid premature optimization and reject patches to non-critical parts of the CPython reference implementation that would offer marginal increases in speed at the cost of clarity. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to cross-compile to other languages, but doing so either fails to deliver the full speed-up that might be expected, since Python is a very dynamic language, or compiles only a restricted subset of Python, possibly with slightly changed semantics. Python's developers aim for it to be fun to use. This is reflected in its name—a tribute to the British comedy group Monty Python—and in occasionally playful approaches to tutorials and reference materials, such as the use of the terms "spam" and "eggs" (a reference to a Monty Python sketch) in examples, instead of the often-used "foo" and "bar".
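The playful register carries into example code; the sketch below uses the traditional spam and eggs names while illustrating the three coexisting string-formatting styles mentioned above (the values are arbitrary):

spam, eggs = "spam", 2

print("%s and %d eggs" % (spam, eggs))      # printf-style % operator
print("{} and {} eggs".format(spam, eggs))  # str.format() method
print(f"{spam} and {eggs} eggs")            # f-string (Python 3.6+)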
"Pythonic" code may use Python idioms well, be natural or show fluency in the language, or conform with Python's minimalist philosophy and emphasis on readability. Code that is difficult to understand or reads like a rough transcription from another programming language is called unpythonic. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Indentation Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Statements and control flow Python's statements include: The assignment statement, using a single equals sign = The if statement, which conditionally executes a block of code, along with else and elif (a contraction of else if) The for statement, which iterates over an iterable object, capturing each element to a local variable for use by the attached block The while statement, which executes a block of code as long as its condition is true The try statement, which allows exceptions raised in its attached code block to be caught and handled by except clauses (or new syntax except* in Python 3.11 for exception groups); it also ensures that clean-up code in a finally block is always run regardless of how the block exits The raise statement, used to raise a specified exception or re-raise a caught exception The class statement, which executes a block of code and attaches its local namespace to a class, for use in object-oriented programming The def statement, which defines a function or method The with statement, which encloses a code block within a context manager (for example, acquiring a lock before it is run, then releasing the lock; or opening and closing a file), allowing resource-acquisition-is-initialization (RAII)-like behavior and replacing a common try/finally idiom The break statement, which exits a loop The continue statement, which skips the rest of the current iteration and continues with the next The del statement, which removes a variable—deleting the reference from the name to the value, and producing an error if the variable is referred to before it is redefined The pass statement, serving as a NOP, syntactically needed to create an empty code block The assert statement, used in debugging to check for conditions that should apply The yield statement, which returns a value from a generator function (and also an operator); used to implement coroutines The return statement, used to return a value from a function The import and from statements, used to import modules whose functions or variables can be used in the current program The match and case statements, an analog of the switch statement construct, that compares an expression against one or more cases as a control-of-flow measure. The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. 
The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations, and, according to Van Rossum, it never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, it can be passed through multiple stack levels. Expressions Python's expressions include:
The +, -, and * operators for mathematical addition, subtraction, and multiplication, which are similar to other languages; the behavior of division, however, differs. There are two types of division in Python: floor division (or integer division) // and floating-point division /.
The ** operator for exponentiation.
The + operator for string concatenation.
The * operator for duplicating a string a specified number of times.
The @ infix operator, intended to be used by libraries such as NumPy for matrix multiplication.
The syntax :=, called the "walrus operator", introduced in Python 3.8; it assigns values to variables as part of a larger expression.
In Python, == compares by value. Python's is operator may be used to compare object identities (comparison by reference), and comparisons may be chained—for example, a <= b <= c.
Python uses and, or, and not as Boolean operators.
Python has a type of expression named a list comprehension, and a more general expression named a generator expression.
Anonymous functions are implemented using lambda expressions; however, there may be only one expression in each body.
Conditional expressions are written as x if c else y (different in order of operands from the c ? x : y operator common to many other languages).
Python makes a distinction between lists and tuples. Lists are written as [1, 2, 3], are mutable, and cannot be used as the keys of dictionaries (dictionary keys must be immutable in Python). Tuples, written as (1, 2, 3), are immutable and thus can be used as keys of dictionaries, provided all of the tuple's elements are immutable. The + operator can be used to concatenate two tuples, which does not directly modify their contents, but produces a new tuple containing the elements of both. Thus, given the variable t initially equal to (1, 2, 3), executing t = t + (4, 5) first evaluates t + (4, 5), which yields (1, 2, 3, 4, 5), which is then assigned back to t—thereby effectively "modifying the contents" of t while conforming to the immutable nature of tuple objects. Parentheses are optional for tuples in unambiguous contexts.
Python features sequence unpacking, where multiple expressions, each evaluating to anything that can be assigned (to a variable, writable property, etc.), are associated in an identical manner to that forming tuple literals—and, as a whole, are put on the left-hand side of the equal sign in an assignment statement. The statement expects an iterable object on the right-hand side of the equal sign that produces the same number of values as the provided writable expressions; when it is iterated through, it assigns each of the produced values to the corresponding expression on the left.
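A brief sketch of sequence unpacking and of tuple "modification" as described above (the values are arbitrary):

# Sequence unpacking: any iterable yielding the right number of values works.
x, y, z = range(3)
x, y = y, x        # the idiomatic swap, via packing and unpacking

# Concatenating tuples builds a new object rather than mutating one.
t = (1, 2, 3)
t = t + (4, 5)     # t now names the new tuple (1, 2, 3, 4, 5)
print(x, y, z, t)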
Python has a "string format" operator % that functions analogously to printf format strings in C—e.g. evaluates to "spam=blah eggs=2". In Python 2.6+ and 3+, this was supplemented by the format() method of the str class, e.g. . Python 3.6 added "f-strings": . Strings in Python can be concatenated by "adding" them (with the same operator as for adding integers and floats), e.g. returns "spameggs". If strings contain numbers, they are added as strings rather than integers, e.g. returns "22". Python has various string literals: Delimited by single or double quotes; unlike in Unix shells, Perl, and Perl-influenced languages, single and double quotes work the same. Both use the backslash (\) as an escape character. String interpolation became available in Python 3.6 as "formatted string literals". Triple-quoted (beginning and ending with three single or double quotes), which may span multiple lines and function like here documents in shells, Perl, and Ruby. Raw string varieties, denoted by prefixing the string literal with r. Escape sequences are not interpreted; hence raw strings are useful where literal backslashes are common, such as regular expressions and Windows-style paths. (Compare "@-quoting" in C#.) Python has array index and array slicing expressions in lists, denoted as a[key], or . Indexes are zero-based, and negative indexes are relative to the end. Slices take elements from the start index up to, but not including, the stop index. The third slice parameter, called step or stride, allows elements to be skipped and reversed. Slice indexes may be omitted—for example, returns a copy of the entire list. Each element of a slice is a shallow copy. In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This leads to duplicating some functionality. For example: List comprehensions vs. for-loops Conditional expressions vs. if blocks The eval() vs. exec() built-in functions (in Python 2, exec is a statement); the former is for expressions, the latter is for statements Statements cannot be a part of an expression—so list and other comprehensions or lambda expressions, all being expressions, cannot contain statements. A particular case is that an assignment statement such as cannot form part of the conditional expression of a conditional statement. Methods Methods on objects are functions attached to the object's class; the syntax is, for normal methods and functions, syntactic sugar for . Python methods have an explicit self parameter to access instance data, in contrast to the implicit self (or this) in some other object-oriented programming languages (e.g., C++, Java, Objective-C, Ruby). Python also provides methods, often called dunder methods (due to their names beginning and ending with double-underscores), to allow user-defined classes to modify how they are handled by native operations including length, comparison, in arithmetic operations and type conversion. Typing Python uses duck typing and has typed objects but untyped variable names. Type constraints are not checked at compile time; rather, operations on an object may fail, signifying that it is not of a suitable type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are not well-defined (for example, adding a number to a string) rather than silently attempting to make sense of them. Python allows programmers to define their own types using classes, most often used for object-oriented programming. 
New instances of classes are constructed by calling the class (for example, SpamClass() or EggsClass()), and the classes are instances of the metaclass type (itself an instance of itself), allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes (both using the same syntax): old-style and new-style; current Python versions only support the semantics of the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Mypy also supports a Python compiler called mypyc, which leverages type annotations for optimization. Arithmetic operations Python has the usual symbols for arithmetic operators (+, -, *, /), the floor division operator // and the modulo operation % (where the remainder can be negative, e.g. 4 % -3 == -2). It also has ** for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and a matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the operators are infix, and + and - can also be unary, representing positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: Current Python (i.e. since 3.0) changed / to always be floating-point division, e.g. 5/2 == 2.5. The floor division operator // was introduced, so 7//3 == 2, -7//3 == -3, 7.5//3 == 2.0 and -7.5//3 == -3.0. Adding from __future__ import division causes a module used in Python 2.7 to use Python 3.0 rules for division (see above). In Python terms, / is true division (or simply division), and // is floor division; / before version 3.0 is classic division. Rounding towards negative infinity, though different from most languages, adds consistency. For instance, it means that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. However, maintaining the validity of this equation means that while the result of a%b is, as expected, in the half-open interval [0, b), where b is a positive integer, it has to lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses round-to-even: round(1.5) and round(2.5) both produce 2. Versions before 3 used round-away-from-zero: round(0.5) is 1.0, round(-0.5) is −1.0. Python allows Boolean expressions with multiple equality relations in a manner that is consistent with general use in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision and several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library, and the third-party library NumPy that further extends the native capabilities, it is frequently used as a scientific scripting language to aid in problems such as numerical data processing and manipulation.
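A few of these arithmetic behaviors shown concretely (expected outputs in the comments):

print(7 / 2)                   # 3.5  true division always yields a float
print(7 // 2, -7 // 2)         # 3 -4  floor division rounds toward negative infinity
print(-7 % 3, 7 % -3)          # 2 -2  the remainder takes the divisor's sign
print(round(1.5), round(2.5))  # 2 2   Python 3 rounds ties to even
print(2 ** 100)                # arbitrary-precision integers never overflow
print(1 < 2 < 3)               # True  chained comparison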
Function syntax Functions are created in Python using the def keyword. In Python, you define the function as if you were calling it, by typing the function name and then the parameters required. Here is an example of a function that prints whatever it is given:

def printer(input1, input2="already there"):
    print(input1)
    print(input2)

printer("hello")
# Example output:
# hello
# already there

To give a parameter a default value when no argument is supplied, use the variable-defining syntax inside the function definition, as with input2 above. Programming examples "Hello, World!" program:

print('Hello, world!')

Program to calculate the factorial of a positive integer:

n = int(input('Type a number, and its factorial will be printed: '))
if n < 0:
    raise ValueError('You must enter a non-negative integer')
factorial = 1
for i in range(2, n + 1):
    factorial *= i
print(factorial)

Libraries Python's large standard library provides tools suited to many tasks and is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. It includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules need altering or rewriting for variant implementations. The Python Package Index (PyPI), the official repository for third-party Python software, contains over 523,000 packages with a wide range of functionality. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL), permitting them to function as a command-line interpreter for which users enter statements sequentially and receive results immediately. Python also comes with an integrated development environment (IDE) called IDLE, which is more beginner-oriented. Other shells, including IDLE and IPython, add further abilities such as improved auto-completion, session state retention, and syntax highlighting. As well as standard desktop integrated development environments such as PyCharm, IntelliJ IDEA, and Visual Studio Code, there are web browser-based IDEs, including SageMath, for developing science- and math-related programs; PythonAnywhere, a browser-based IDE and hosting environment; and Canopy IDE, a commercial IDE emphasizing scientific computing. Implementations Reference implementation CPython is the reference implementation of Python. It is written in C, meeting the C89 standard (Python 3.11 uses C11) with several select C99 features. CPython includes its own C extensions, but third-party extensions are not limited to older C versions—e.g. they can be implemented with C11 or C++. CPython compiles Python programs into an intermediate bytecode which is then executed by its virtual machine.
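The standard-library dis module makes this bytecode visible; a minimal sketch (the exact instruction names vary between CPython versions, e.g. BINARY_ADD versus BINARY_OP):

import dis

def add(a, b):
    return a + b

# Print the bytecode instructions CPython generated for add().
dis.dis(add)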
CPython is distributed with a large standard library written in a mixture of C and native Python, and is available for many platforms, including Windows (starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5) and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, with an experimental installer), with unofficial support for VMS. Platform portability was one of its earliest priorities. (During Python 1 and 2 development, even OS/2 and Solaris were supported, but support has since been dropped for many platforms.) All current Python versions (i.e. since 3.7) only support operating systems with multi-threading support. Other implementations All alternative implementations have at least slightly different semantics (e.g. they may have unordered dictionaries, unlike all current Python versions) and interact differently with the larger Python ecosystem, for example in their support for the CPython C API, as with PyPy:
PyPy is a fast, compliant interpreter of Python 2.7 and 3.10. Its just-in-time compiler often brings a significant speed improvement over CPython, but some libraries written in C cannot be used with it. It has, for example, RISC-V support.
Codon is a language with an ahead-of-time (AOT) compiler that compiles a statically-typed Python-like language whose "syntax and semantics are nearly identical to Python's", though "there are some notable differences": e.g. it uses 64-bit machine integers for speed, rather than arbitrary-precision integers as in Python, and it claims speedups over CPython usually on the order of 10–100x. It compiles to machine code (via LLVM) and supports native multithreading. Codon can also compile to Python extension modules that can be imported and used from Python.
Stackless Python is a significant fork of CPython that implements microthreads; it does not use the call stack in the same way, thus allowing massively concurrent programs. PyPy also has a stackless version.
MicroPython and CircuitPython are Python 3 variants optimized for microcontrollers, including Lego Mindstorms EV3.
Pyston is a variant of the Python runtime that uses just-in-time compilation to speed up the execution of Python programs.
Cinder is a performance-oriented fork of CPython 3.8 that contains a number of optimizations, including bytecode inline caching, eager evaluation of coroutines, a method-at-a-time JIT, and an experimental bytecode compiler.
Snek Embedded Computing Language (compatible with e.g. 8-bit AVR microcontrollers such as the ATmega328P-based Arduino, as well as larger ones compatible with MicroPython) "is Python-inspired, but it is not Python. It is possible to write Snek programs that run under a full Python system, but most Python programs will not run under Snek." It is an imperative language that, unlike Python, does not include OOP/classes, and it simplifies to one number type with 32-bit single precision (similar to JavaScript, except smaller).
No longer supported implementations Other just-in-time Python compilers have been developed, but are now unsupported: Google began a project named Unladen Swallow in 2009, with the aim of speeding up the Python interpreter five-fold by using the LLVM, and of improving its multithreading ability to scale to thousands of cores, while ordinary implementations suffer from the global interpreter lock. Psyco is a discontinued just-in-time specializing compiler that integrates with CPython and transforms bytecode to machine code at runtime.
The emitted code is specialized for certain data types and is faster than the standard Python code. Psyco does not support Python 2.7 or later. PyS60 was a Python 2 interpreter for Series 60 mobile phones released by Nokia in 2005. It implemented many of the modules from the standard library and some additional modules for integrating with the Symbian operating system. The Nokia N900 also supports Python with GTK widget libraries, enabling programs to be written and run on the target device. Cross-compilers to other languages There are several compilers/transpilers to high-level object languages, with either unrestricted Python, a restricted subset of Python, or a language similar to Python as the source language:
Brython, Transcrypt and Pyjs (latest release in 2012) compile Python to JavaScript.
Cython compiles (a superset of) Python to C. The resulting code is also usable with Python via direct C-level API calls into the Python interpreter.
PyJL compiles/transpiles a subset of Python to "human-readable, maintainable, and high-performance Julia source code". Despite such claims of high performance, no tool can achieve them for arbitrary Python code; compiling unrestricted Python to a faster language or to machine code is known not to be possible unless the semantics of Python are changed, although in many cases a speedup is possible with few or no changes in the Python code. The generated Julia source code can then be used from Python or compiled to machine code.
Nuitka compiles Python into C. It works with Python 3.4 to 3.12 (and 2.6 and 2.7), for Python's main supported platforms (and Windows 7 or even Windows XP) and for Android. It claims complete support for Python 3.10, some support for 3.11 and 3.12, and experimental support for Python 3.13. It supports macOS, including Apple Silicon-based Macs. It is a free compiler, though it also has commercial add-ons (e.g. for hiding source code).
Numba is used from Python as a tool (enabled by adding a decorator to the relevant Python code): a JIT compiler that translates a subset of Python and NumPy code into fast machine code.
Pythran compiles a subset of Python 3 to C++ (C++11).
RPython can be compiled to C, and is used to build the PyPy interpreter of Python.
The Python → 11l → C++ transpiler compiles a subset of Python 3 to C++ (C++17).
Specialized: MyHDL is a Python-based hardware description language (HDL) that converts MyHDL code to Verilog or VHDL code.
Older projects (not usable with Python 3.x and the latest syntax): Google's Grumpy (latest release in 2017) transpiles Python 2 to Go. IronPython allows running Python 2.7 programs (and an alpha, released in 2021, is also available for "Python 3.4, although features and behaviors from later versions may be included") on the .NET Common Language Runtime. Jython compiles Python 2.7 to Java bytecode, allowing the use of Java libraries from a Python program. Pyrex (latest release in 2010) and Shed Skin (latest release in 2013) compile to C and C++ respectively.
Performance A performance comparison of various Python implementations on a non-numerical (combinatorial) workload was presented at EuroSciPy '13. Python's performance relative to other programming languages is also benchmarked by The Computer Language Benchmarks Game. Development Python's development is conducted largely through the Python Enhancement Proposal (PEP) process, the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8.
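As a small before-and-after sketch of the kind of conventions PEP 8 codifies (the function is invented; naming, spacing, and docstrings are the point):

# Not PEP 8: camel case for a function, no spacing, statement on the def line.
def CalcTot(x,y):return x+y

# PEP 8: snake_case names, spaces around operators, a docstring.
def calc_total(price, tax):
    """Return price plus tax."""
    return price + tax

print(calc_total(10, 2))  # 12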
Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with the development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented:
Backward-incompatible versions, where code is expected to break and needs to be manually ported. The first part of the version number is incremented. These releases happen infrequently—version 3.0 was released 8 years after 2.0. According to Guido van Rossum, a version 4.0 is very unlikely to ever happen.
Major or "feature" releases, which are largely compatible with the previous version but introduce new features. The second part of the version number is incremented. Starting with Python 3.9, these releases are expected to happen annually. Each major version is supported by bug fixes for several years after its release.
Bug-fix releases, which introduce no new features, occur about every three months and are made when a sufficient number of bugs have been fixed upstream since the last release. Security vulnerabilities are also patched in these releases. The third and final part of the version number is incremented.
Many alpha, beta, and release candidates are also released as previews and for testing before final releases. Although there is a rough schedule for each release, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running the large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies. Python 3.12 removed wstr, meaning some Python extensions need to be modified, and 3.10 added pattern matching to the language. Python 3.12 dropped some outdated modules, and more will be dropped in the future, having been deprecated as of 3.13; the already-deprecated array 'u' format code emits a DeprecationWarning since 3.13 and will be removed in Python 3.16 (the 'w' format code should be used instead). Part of ctypes is also deprecated, and http.server.CGIHTTPRequestHandler emits a DeprecationWarning and will be removed in 3.15; using that code already has a high potential for both security and functionality bugs. Parts of the typing module are deprecated, e.g. creating a typing.NamedTuple class using keyword arguments to denote the fields (and more) will be disallowed in Python 3.15. API documentation generators Tools that can generate documentation for a Python API include pydoc (available as part of the standard library), Sphinx, Pdoc and its forks, Doxygen, and Graphviz, among others. Naming Python's name is derived from the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs instead of the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines.
Users of Python are sometimes referred to as "Pythonistas". The prefix Py- is used to show that something is related to Python. Examples of the use of this prefix in names of Python applications or libraries include Pygame, a binding of Simple DirectMedia Layer to Python (commonly used to create games); PyQt and PyGTK, which bind Qt and GTK to Python respectively; and PyPy, a Python implementation originally written in Python. Popularity Since 2003, Python has consistently ranked in the top ten most popular programming languages in the TIOBE Programming Community Index, where it has been ranked the most popular language (ahead of C, C++, and Java). It was selected as Programming Language of the Year (for "the highest rise in ratings in a year") in 2007, 2010, 2018, and 2020 (the only language to have done so four times). Large organizations that use Python include Wikipedia, Google, Yahoo!, CERN, NASA, Facebook, Amazon, Instagram, Spotify, and some smaller entities like Industrial Light & Magic and ITA. The social news networking site Reddit was written mostly in Python. Organizations that partially use Python include Discord and Baidu. Uses Python can serve as a scripting language for web applications, e.g. via mod_wsgi for the Apache web server. With the Web Server Gateway Interface, a standard API has evolved to facilitate these applications. Web frameworks like Django, Pylons, Pyramid, TurboGears, web2py, Tornado, Flask, Bottle, and Zope support developers in the design and maintenance of complex applications. Pyjs and IronPython can be used to develop the client side of Ajax-based applications. SQLAlchemy can be used as a data mapper to a relational database. Twisted is a framework to program communications between computers, and is used (for example) by Dropbox. Libraries such as NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing, with specialized libraries such as Biopython and Astropy providing domain-specific functionality. SageMath is a computer algebra system with a notebook interface programmable in Python: its library covers many aspects of mathematics, including algebra, combinatorics, numerical mathematics, number theory, and calculus. OpenCV has Python bindings with a rich set of features for computer vision and image processing. Python is commonly used in artificial intelligence and machine learning projects with the help of libraries like TensorFlow, Keras, PyTorch, scikit-learn, and the logic language ProbLog. As a scripting language with a modular architecture, simple syntax, and rich text-processing tools, Python is often used for natural language processing. The combination of Python and Prolog has proved to be particularly useful for AI applications, with Prolog providing knowledge representation and reasoning capabilities. The Janus system, in particular, exploits the similarities between these two languages, in part because of their use of dynamic typing and the simple recursive nature of their data structures. Typical applications of this combination include natural language processing, visual query answering, geospatial reasoning, and handling of semantic web data. The Natlog system, implemented in Python, uses Definite Clause Grammars (DCGs) as prompt generators for text-to-text generators like GPT-3 and text-to-image generators like DALL-E or Stable Diffusion. Python can also be used to create graphical user interfaces (GUIs) with libraries like Tkinter.
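A minimal Tkinter sketch (it opens a window, so it needs a graphical display; the title and label text are arbitrary):

import tkinter as tk

root = tk.Tk()
root.title("Hello")
tk.Label(root, text="Hello from Tkinter").pack(padx=20, pady=20)
root.mainloop()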
Python has been successfully embedded in many software products as a scripting language, including in finite element method software such as Abaqus, 3D parametric modelers like FreeCAD, 3D animation packages such as 3ds Max, Blender, Cinema 4D, Lightwave, Houdini, Maya, modo, MotionBuilder, and Softimage, the visual effects compositor Nuke, 2D imaging programs like GIMP, Inkscape, Scribus and Paint Shop Pro, and musical notation programs like scorewriter and capella. GNU Debugger uses Python as a pretty printer to show complex structures such as C++ containers. Esri promotes Python as the best choice for writing scripts in ArcGIS. It has also been used in several video games, and has been adopted as the first of the three available programming languages in Google App Engine, the other two being Java and Go. Many operating systems include Python as a standard component. It ships with most Linux distributions, AmigaOS 4 (using Python 2.7), FreeBSD (as a package), NetBSD, and OpenBSD (as a package), and can be used from the command line (terminal). Many Linux distributions use installers written in Python: Ubuntu uses the Ubiquity installer, while Red Hat Linux and Fedora Linux use the Anaconda installer. Gentoo Linux uses Python in its package management system, Portage. Python is used extensively in the information security industry, including in exploit development. Most of the Sugar software for the One Laptop per Child XO, developed at Sugar Labs, is written in Python. The Raspberry Pi single-board computer project has adopted Python as its main user-programming language. LibreOffice includes Python and intends to replace Java with Python; its Python Scripting Provider has been a core feature since Version 4.0 of 7 February 2013. Languages influenced by Python Python's design and philosophy have influenced many other programming languages:
Boo uses indentation, a similar syntax, and a similar object model.
Cobra uses indentation and a similar syntax, and its Acknowledgements document lists Python first among the languages that influenced it.
CoffeeScript, a programming language that cross-compiles to JavaScript, has Python-inspired syntax.
ECMAScript–JavaScript borrowed iterators and generators from Python.
GDScript, a scripting language very similar to Python, is built into the Godot game engine.
Go is designed for the "speed of working in a dynamic language like Python" and shares the same syntax for slicing arrays.
Groovy was motivated by the desire to bring the Python design philosophy to Java.
Julia was designed to be "as usable for general programming as Python".
Mojo is a non-strict superset of Python (it is, e.g., still missing classes, and adds e.g. struct).
Nim uses indentation and similar syntax.
Ruby's creator, Yukihiro Matsumoto, has said: "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language."
Swift, a programming language developed by Apple, has some Python-inspired syntax.
Kotlin blends Python and Java features, minimizing boilerplate code for enhanced developer efficiency.
Python's development practices have also been emulated by other languages. For example, the practice of requiring a document describing the rationale for, and issues surrounding, a change to the language (in Python, a PEP) is also used in Tcl, Erlang, and Swift.
Technology
Scripting languages
null
23863
https://en.wikipedia.org/wiki/Pyridine
Pyridine
Pyridine is a basic heterocyclic organic compound with the chemical formula C5H5N. It is structurally related to benzene, with one methine group (=CH−) replaced by a nitrogen atom. It is a highly flammable, weakly alkaline, water-miscible liquid with a distinctive, unpleasant fish-like smell. Pyridine is colorless, but older or impure samples can appear yellow, due to the formation of extended, unsaturated polymeric chains, which show significant electrical conductivity. The pyridine ring occurs in many important compounds, including agrochemicals, pharmaceuticals, and vitamins. Historically, pyridine was produced from coal tar. As of 2016, it is synthesized on the scale of about 20,000 tons per year worldwide. Properties Physical properties Pyridine is diamagnetic. Its critical parameters are: pressure 5.63 MPa, temperature 619 K and volume 248 cm3/mol. In the temperature range 340–426 K its vapor pressure p can be described with the Antoine equation
\log_{10}(p/\mathrm{bar}) = A - \frac{B}{T + C}
where T is the temperature, A = 4.16272, B = 1371.358 K and C = −58.496 K. Structure The pyridine ring forms a hexagon. Slight variations of the C–C and C–N distances as well as of the bond angles are observed. Crystallography Pyridine crystallizes in an orthorhombic crystal system with space group Pna21 and lattice parameters a = 1752 pm, b = 897 pm, c = 1135 pm, and 16 formula units per unit cell (measured at 153 K). For comparison, crystalline benzene is also orthorhombic, with space group Pbca, a = 729.2 pm, b = 947.1 pm, c = 674.2 pm (at 78 K), but the number of molecules per cell is only 4. This difference is partly related to the lower symmetry of the individual pyridine molecule (C2v vs D6h for benzene). A trihydrate (pyridine·3H2O) is known; it also crystallizes in an orthorhombic system in the space group Pbca, with lattice parameters a = 1244 pm, b = 1783 pm, c = 679 pm and eight formula units per unit cell (measured at 223 K). Spectroscopy The optical absorption spectrum of pyridine in hexane consists of bands at the wavelengths of 195, 251, and 270 nm. With respective extinction coefficients (ε) of 7500, 2000, and 450 L·mol−1·cm−1, these bands are assigned to π → π*, π → π*, and n → π* transitions. The compound displays very low fluorescence. The 1H nuclear magnetic resonance (NMR) spectrum shows signals for the α- (δ 8.5), γ- (δ 7.5) and β-protons (δ 7). By contrast, the proton signal for benzene is found at δ 7.27. The larger chemical shifts of the α- and γ-protons in comparison to benzene result from the lower electron density at the α- and γ-positions, which can be derived from the resonance structures. The situation is rather similar for the 13C NMR spectra of pyridine and benzene: pyridine shows a triplet at δ(α-C) = 150 ppm, δ(β-C) = 124 ppm and δ(γ-C) = 136 ppm, whereas benzene has a single line at 129 ppm. All shifts are quoted for the solvent-free substances. Pyridine is conventionally detected by gas chromatography and mass spectrometry. Bonding Pyridine has a conjugated system of six π electrons that are delocalized over the ring. The molecule is planar and thus follows the Hückel criteria for aromatic systems. In contrast to benzene, the electron density is not evenly distributed over the ring, reflecting the negative inductive effect of the nitrogen atom. For this reason, pyridine has a dipole moment and a weaker resonance stabilization than benzene (resonance energy 117 kJ/mol in pyridine vs. 150 kJ/mol in benzene). The ring atoms in the pyridine molecule are sp2-hybridized.
The nitrogen is involved in the π-bonding aromatic system using its unhybridized p orbital. The lone pair is in an sp2 orbital, projecting outward from the ring in the same plane as the σ bonds. As a result, the lone pair does not contribute to the aromatic system, but it importantly influences the chemical properties of pyridine, as it easily supports bond formation via an electrophilic attack. However, because of the separation of the lone pair from the aromatic ring system, the nitrogen atom cannot exhibit a positive mesomeric effect. Many analogues of pyridine are known where N is replaced by other heteroatoms from the same column of the periodic table (see figure below). Substitution of one C–H in pyridine with a second N gives rise to the diazine heterocycles (C4H4N2), with the names pyridazine, pyrimidine, and pyrazine. History Impure pyridine was undoubtedly prepared by early alchemists by heating animal bones and other organic matter, but the earliest documented reference is attributed to the Scottish scientist Thomas Anderson. In 1849, Anderson examined the contents of the oil obtained through high-temperature heating of animal bones. Among other substances, he separated from the oil a colorless liquid with an unpleasant odor, from which he isolated pure pyridine two years later. He described it as highly soluble in water, readily soluble in concentrated acids and salts upon heating, and only slightly soluble in oils. Owing to its flammability, Anderson named the new substance pyridine, after the Greek πῦρ (pyr) meaning fire. The suffix -idine was added in compliance with the chemical nomenclature, as in toluidine, to indicate a cyclic compound containing a nitrogen atom. The chemical structure of pyridine was determined decades after its discovery. Wilhelm Körner (1869) and James Dewar (1871) suggested that, by analogy with the relationship between quinoline and naphthalene, the structure of pyridine is derived from benzene by substituting one C–H unit with a nitrogen atom. The suggestion by Körner and Dewar was later confirmed in an experiment where pyridine was reduced to piperidine with sodium in ethanol. In 1876, William Ramsay combined acetylene and hydrogen cyanide into pyridine in a red-hot iron-tube furnace. This was the first synthesis of a heteroaromatic compound. The first major synthesis of pyridine derivatives was described in 1881 by Arthur Rudolf Hantzsch. The Hantzsch pyridine synthesis typically uses a 2:1:1 mixture of a β-keto acid (often acetoacetate), an aldehyde (often formaldehyde), and ammonia or its salt as the nitrogen donor. First, a doubly hydrogenated (dihydro)pyridine is obtained, which is then oxidized to the corresponding pyridine derivative. Emil Knoevenagel showed that asymmetrically substituted pyridine derivatives can be produced with this process. The contemporary methods of pyridine production had low yields, and the increasing demand for the new compound prompted the search for more efficient routes. A breakthrough came in 1924, when the Russian chemist Aleksei Chichibabin invented a pyridine synthesis reaction based on inexpensive reagents. This method is still used for the industrial production of pyridine. Occurrence Pyridine is not abundant in nature, except in the leaves and roots of belladonna (Atropa belladonna) and in marshmallow (Althaea officinalis). Pyridine derivatives, however, are often part of biomolecules such as alkaloids.
In daily life, trace amounts of pyridine are components of the volatile organic compounds that are produced in roasting and canning processes, e.g. in fried chicken, sukiyaki, roasted coffee, potato chips, and fried bacon. Traces of pyridine can be found in Beaufort cheese, vaginal secretions, black tea, the saliva of those suffering from gingivitis, and sunflower honey. Trace amounts of up to 16 μg/m3 have been detected in tobacco smoke. Minor amounts of pyridine are released into the environment from some industrial processes such as steel manufacture, processing of oil shale, coal gasification, coking plants and incinerators. The atmosphere at oil shale processing plants can contain pyridine concentrations of up to 13 μg/m3, and levels of 53 μg/m3 were measured in the groundwater in the vicinity of a coal gasification plant. According to a study by the US National Institute for Occupational Safety and Health, about 43,000 Americans work in contact with pyridine. In foods Pyridine has historically been added to foods to give them a bitter flavour, although this practice is now banned in the U.S. It may still be added to ethanol to make it unsuitable for drinking. Production Historically, pyridine was extracted from coal tar or obtained as a byproduct of coal gasification. The process was labor-intensive and inefficient: coal tar contains only about 0.1% pyridine, and therefore a multi-stage purification was required, which further reduced the output. Nowadays, most pyridines are synthesized from ammonia, aldehydes, and nitriles, a few combinations of which are suited for pyridine itself. Various name reactions are also known, but they are not practiced on scale. In 1989, 26,000 tonnes of pyridine was produced worldwide. Other major derivatives are 2-, 3-, and 4-methylpyridine and 5-ethyl-2-methylpyridine; the combined scale of these alkylpyridines matches that of pyridine itself. Among the largest 25 production sites for pyridine, eleven are located in Europe (as of 1999). The major producers of pyridine include Evonik Industries, Rütgers Chemicals, Jubilant Life Sciences, Imperial Chemical Industries, and Koei Chemical. Pyridine production significantly increased in the early 2000s, with an annual production capacity of 30,000 tonnes in mainland China alone. The US–Chinese joint venture Vertellus is currently the world leader in pyridine production. Chichibabin synthesis The Chichibabin pyridine synthesis was reported in 1924, and the basic approach underpins several industrial routes. In its general form, the reaction involves the condensation of aldehydes, ketones, α,β-unsaturated carbonyl compounds, or any combination of the above, with ammonia or ammonia derivatives. Applications of the Chichibabin pyridine synthesis suffer from low yields, often about 30%; however, the precursors are inexpensive. In particular, unsubstituted pyridine is produced from formaldehyde and acetaldehyde. First, acrolein is formed in a Knoevenagel condensation from the acetaldehyde and formaldehyde. The acrolein then condenses with acetaldehyde and ammonia to give dihydropyridine, which is oxidized to pyridine. This process is carried out in the gas phase at 400–450 °C. Typical catalysts are modified forms of alumina and silica. The reaction has been tailored to produce various methylpyridines. Dealkylation and decarboxylation of substituted pyridines Pyridine can be prepared by dealkylation of alkylated pyridines, which are obtained as byproducts in the syntheses of other pyridines.
The oxidative dealkylation is carried out either using air over a vanadium(V) oxide catalyst, by vapor dealkylation over a nickel-based catalyst, or by hydrodealkylation with a silver- or platinum-based catalyst. Yields of pyridine of up to 93% can be achieved with the nickel-based catalyst. Pyridine can also be produced by the decarboxylation of nicotinic acid with copper chromite. Bönnemann cyclization The trimerization of one part of a nitrile molecule and two parts of acetylene into pyridine is called Bönnemann cyclization. This modification of the Reppe synthesis can be activated either by heat or by light. While the thermal activation requires high pressures and temperatures, the photoinduced cycloaddition proceeds at ambient conditions with CoCp2(cod) (Cp = cyclopentadienyl, cod = 1,5-cyclooctadiene) as a catalyst, and can be performed even in water. A series of pyridine derivatives can be produced in this way. When using acetonitrile as the nitrile, 2-methylpyridine is obtained, which can be dealkylated to pyridine. Other methods The Kröhnke pyridine synthesis provides a fairly general method for generating substituted pyridines using pyridine itself as a reagent that does not become incorporated into the final product. The reaction of pyridine with bromomethyl ketones gives the related pyridinium salt, wherein the methylene group is highly acidic. This species undergoes a Michael-like addition to α,β-unsaturated carbonyls in the presence of ammonium acetate to undergo ring closure and formation of the targeted substituted pyridine as well as pyridinium bromide. The Ciamician–Dennstedt rearrangement entails the ring-expansion of pyrrole with dichlorocarbene to 3-chloropyridine. In the Gattermann–Skita synthesis, a malonate ester salt reacts with dichloromethylamine. Other methods include the Boger pyridine synthesis and the Diels–Alder reaction of an alkene and an oxazole. Biosynthesis Several pyridine derivatives play important roles in biological systems. While its biosynthesis is not fully understood, nicotinic acid (vitamin B3) occurs in some bacteria, fungi, and mammals. Mammals synthesize nicotinic acid through oxidation of the amino acid tryptophan, where an intermediate product, the aniline derivative kynurenine, gives rise to the pyridine derivative quinolinate and then to nicotinic acid. By contrast, the bacteria Mycobacterium tuberculosis and Escherichia coli produce nicotinic acid by condensation of glyceraldehyde 3-phosphate and aspartic acid. Reactions Because of the electronegative nitrogen in the pyridine ring, pyridine enters less readily into electrophilic aromatic substitution reactions than benzene derivatives; in terms of its reactivity, pyridine instead resembles nitrobenzene. Correspondingly, pyridine is more prone to nucleophilic substitution, as evidenced by the ease of metalation by strong organometallic bases. The reactivity of pyridine can be divided among three chemical groups. With electrophiles, electrophilic substitution takes place, where pyridine expresses aromatic properties. With nucleophiles, pyridine reacts at positions 2 and 4 and thus behaves similarly to imines and carbonyls. The reaction with many Lewis acids results in addition to the nitrogen atom of pyridine, which is similar to the reactivity of tertiary amines. The ability of pyridine and its derivatives to oxidize, forming amine oxides (N-oxides), is also a feature of tertiary amines. The nitrogen center of pyridine features a basic lone pair of electrons.
This lone pair does not overlap with the aromatic π-system of the ring; consequently, pyridine is basic, having chemical properties similar to those of tertiary amines. Protonation gives pyridinium, C5H5NH+. The pKa of the conjugate acid (the pyridinium cation) is 5.25. The structures of pyridine and pyridinium are almost identical, and the pyridinium cation is isoelectronic with benzene. Pyridinium p-toluenesulfonate (PPTS) is an illustrative pyridinium salt; it is produced by treating pyridine with p-toluenesulfonic acid. In addition to protonation, pyridine undergoes N-centred alkylation, acylation, and N-oxidation. Pyridine and poly(4-vinylpyridine) have been shown to form conducting molecular wires with a remarkable polyenimine structure on UV irradiation, a process which accounts for at least some of the visible-light absorption by aged pyridine samples. These wires have been theoretically predicted to be both highly efficient electron donors and acceptors, and yet are resistant to air oxidation. Electrophilic substitutions Owing to the decreased electron density in the aromatic system, electrophilic substitutions are suppressed in pyridine and its derivatives. Friedel–Crafts alkylation or acylation usually fails for pyridine because it leads only to addition at the nitrogen atom. Substitutions usually occur at the 3-position, which is the most electron-rich carbon atom in the ring and is, therefore, more susceptible to electrophilic addition. Direct nitration of pyridine is sluggish. Pyridine derivatives wherein the nitrogen atom is screened sterically and/or electronically can be obtained by nitration with nitronium tetrafluoroborate (NO2BF4). In this way, 3-nitropyridine can be obtained via the synthesis of 2,6-dibromopyridine followed by nitration and debromination. Sulfonation of pyridine is even more difficult than nitration; however, pyridine-3-sulfonic acid can be obtained. Reaction with the SO3 group also facilitates addition of sulfur to the nitrogen atom, especially in the presence of a mercury(II) sulfate catalyst. In contrast to the sluggish nitrations and sulfonations, the bromination and chlorination of pyridine proceed well. Pyridine N-oxide Oxidation of pyridine occurs at nitrogen to give pyridine N-oxide. The oxidation can be achieved with peracids: C5H5N + RCO3H → C5H5NO + RCO2H. Some electrophilic substitutions on pyridine are usefully effected using pyridine N-oxide followed by deoxygenation. Addition of oxygen suppresses further reactions at the nitrogen atom and promotes substitution at the 2- and 4-carbons. The oxygen atom can then be removed, e.g., using zinc dust. Nucleophilic substitutions In contrast to the benzene ring, pyridine efficiently supports several nucleophilic substitutions; the reason is the relatively lower electron density of the carbon atoms of the ring. These reactions include substitutions with elimination of a hydride ion and elimination-additions with formation of an intermediate aryne configuration, and they usually proceed at the 2- or 4-position. Many nucleophilic substitutions occur more easily not with bare pyridine but with pyridine modified with bromine, chlorine, fluorine, or sulfonic acid fragments, which then become a leaving group. Fluorine is the best leaving group for the substitution with organolithium compounds. The attacking nucleophiles may be alkoxides, thiolates, amines, and ammonia (at elevated pressures). In general, the hydride ion is a poor leaving group and occurs only in a few heterocyclic reactions.
They include the Chichibabin reaction, which yields pyridine derivatives aminated at the 2-position. Here, sodium amide is used as the nucleophile, yielding 2-aminopyridine. The hydride ion released in this reaction combines with a proton of an available amino group, forming a hydrogen molecule. Analogous to benzene, nucleophilic substitutions to pyridine can result in the formation of pyridyne intermediates as heteroarynes. For this purpose, pyridine derivatives with good leaving groups can undergo elimination using strong bases such as sodium tert-butoxide and potassium tert-butoxide. The subsequent addition of a nucleophile to the triple bond has low selectivity, and the result is a mixture of the two possible adducts. Radical reactions Pyridine supports a series of radical reactions, which are used in its dimerization to bipyridines. Radical dimerization of pyridine with elemental sodium or Raney nickel selectively yields 4,4'-bipyridine or 2,2'-bipyridine, respectively, which are important precursor reagents in the chemical industry. One of the name reactions involving free radicals is the Minisci reaction. It can produce 2-tert-butylpyridine upon reacting pyridine with pivalic acid, silver nitrate, and ammonium persulfate in sulfuric acid with a yield of 97%. Reactions on the nitrogen atom Lewis acids easily add to the nitrogen atom of pyridine, forming pyridinium salts. The reaction with alkyl halides leads to alkylation of the nitrogen atom. This creates a positive charge in the ring that increases the reactivity of pyridine to both oxidation and reduction. The Zincke reaction is used for the selective introduction of radicals in pyridinium compounds (it has no relation to the chemical element zinc). Hydrogenation and reduction Piperidine is produced by hydrogenation of pyridine with a nickel-, cobalt-, or ruthenium-based catalyst at elevated temperatures. The hydrogenation of pyridine to piperidine releases 193.8 kJ/mol, which is slightly less than the energy of the hydrogenation of benzene (205.3 kJ/mol). Partially hydrogenated derivatives are obtained under milder conditions. For example, reduction with lithium aluminium hydride yields a mixture of 1,4-dihydropyridine, 1,2-dihydropyridine, and 2,5-dihydropyridine. Selective synthesis of 1,4-dihydropyridine is achieved in the presence of organometallic complexes of magnesium and zinc, and (Δ3,4)-tetrahydropyridine is obtained by electrochemical reduction of pyridine. Birch reduction converts pyridine to dihydropyridines. Lewis basicity and coordination compounds Pyridine is a Lewis base, donating its pair of electrons to a Lewis acid. Its Lewis base properties are discussed in the ECW model. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. One example is the sulfur trioxide pyridine complex (melting point 175 °C), which is a sulfation agent used to convert alcohols to sulfate esters. Pyridine–borane (C5H5NBH3, melting point 10–11 °C) is a mild reducing agent. Transition metal pyridine complexes are numerous. Typical octahedral complexes have the stoichiometry MCl2(py)4 and MCl3(py)3. Octahedral homoleptic complexes of the type [M(py)6]n+ are rare or tend to dissociate pyridine. Numerous square planar complexes are known, such as Crabtree's catalyst. The pyridine ligand replaced during the reaction is restored after its completion. The η6 coordination mode, as occurs in η6 benzene complexes, is observed only in sterically encumbered derivatives that block the nitrogen center.
Applications Pesticides and pharmaceuticals The main use of pyridine is as a precursor to the herbicides paraquat and diquat. The first synthesis step of the insecticide chlorpyrifos consists of the chlorination of pyridine. Pyridine is also the starting compound for the preparation of pyrithione-based fungicides. Cetylpyridinium and laurylpyridinium, which can be produced from pyridine with a Zincke reaction, are used as antiseptics in oral and dental care products. Pyridine is easily attacked by alkylating agents to give N-alkylpyridinium salts. One example is cetylpyridinium chloride. It is also used in the textile industry to improve network capacity of cotton. Laboratory use Pyridine is used as a polar, basic, low-reactivity solvent, for example in Knoevenagel condensations. It is especially suitable for dehalogenation, where it acts as the base for the elimination reaction. In esterifications and acylations, pyridine activates the carboxylic acid chlorides and anhydrides. Even more active in these reactions are the derivatives 4-dimethylaminopyridine (DMAP) and 4-(1-pyrrolidinyl)pyridine. Pyridine is also used as a base in some condensation reactions. Reagents As a base, pyridine can be used as the Karl Fischer reagent, but it is usually replaced by alternatives with a more pleasant odor, such as imidazole. Pyridinium chlorochromate, pyridinium dichromate, and the Collins reagent (the pyridine complex of chromium(VI) oxide) are used for the oxidation of alcohols. Hazards Pyridine is a toxic, flammable liquid with a strong and unpleasant fishy odour. Its odour threshold of 0.04 to 20 ppm is close to its threshold limit of 5 ppm for adverse effects, thus most (but not all) adults will be able to tell when it is present at harmful levels. Pyridine easily dissolves in water and harms both animals and plants in aquatic systems. Fire Pyridine has a flash point of 20 °C and is therefore highly flammable. Combustion produces toxic fumes which can include bipyridines, nitrogen oxides, and carbon monoxide. Short-term exposure Pyridine can cause chemical burns on contact with the skin, and its fumes may irritate the eyes or the respiratory tract upon inhalation. Pyridine depresses the nervous system, giving symptoms similar to intoxication, with vapor concentrations above 3600 ppm posing a greater health risk. The effects may have a delayed onset of several hours and include dizziness, headache, lack of coordination, nausea, salivation, and loss of appetite. They may progress into abdominal pain, pulmonary congestion, and unconsciousness. The lowest known lethal dose (LDLo) for the ingestion of pyridine in humans is 500 mg/kg. Long-term exposure Prolonged exposure to pyridine may result in liver, heart, and kidney damage. Evaluations as a possible carcinogenic agent showed that there is inadequate evidence in humans for the carcinogenicity of pyridine, although there is sufficient evidence in experimental animals. Therefore, the IARC considers pyridine possibly carcinogenic to humans (Group 2B). Metabolism Exposure to pyridine would normally lead to its inhalation and absorption in the lungs and gastrointestinal tract, where it either remains unchanged or is metabolized. The major products of pyridine metabolism are N-methylpyridinium hydroxide, which is formed by N-methyltransferases (e.g., pyridine N-methyltransferase), as well as pyridine N-oxide and 2-, 3-, and 4-hydroxypyridine, which are generated by the action of monooxygenases. In humans, pyridine is metabolized only into N-methylpyridinium hydroxide.
Environmental fate Pyridine is readily degraded by bacteria to ammonia and carbon dioxide. The unsubstituted pyridine ring degrades more rapidly than picoline, lutidine, chloropyridine, or aminopyridines, and a number of pyridine degraders have been shown to overproduce riboflavin in the presence of pyridine. Ionizable N-heterocyclic compounds, including pyridine, interact with environmental surfaces (such as soils and sediments) via multiple pH-dependent mechanisms, including partitioning to soil organic matter, cation exchange, and surface complexation. Such adsorption to surfaces reduces the bioavailability of pyridines for microbial degraders and other organisms, thus slowing degradation rates and reducing ecotoxicity. Nomenclature The systematic name of pyridine, within the Hantzsch–Widman nomenclature recommended by the IUPAC, is azine. However, systematic names for simple compounds are used very rarely; instead, heterocyclic nomenclature follows historically established common names. IUPAC discourages the use of azine in favor of pyridine. The numbering of the ring atoms in pyridine starts at the nitrogen (see infobox). An allocation of positions by letters of the Greek alphabet (α–γ) and the substitution pattern nomenclature common for homoaromatic systems (ortho, meta, para) are sometimes used. Here α (ortho), β (meta), and γ (para) refer to the 2, 3, and 4 positions, respectively. The systematic name for the pyridine derivatives is pyridinyl, wherein the position of the point of attachment is indicated by a preceding number. However, the historical name pyridyl is encouraged by the IUPAC and used instead of the systematic name. The cationic derivative formed by the addition of an electrophile to the nitrogen atom is called pyridinium.
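As a numerical footnote to the basicity discussed above: given the pyridinium pKa of 5.25, the Henderson–Hasselbalch relation fixes how much pyridine is protonated at any pH. A minimal sketch in Python; the function name and the sample pH values are illustrative, not from the source:

def protonated_fraction(ph: float, pka: float = 5.25) -> float:
    """Fraction of pyridine present as pyridinium at a given pH,
    from the Henderson-Hasselbalch relation [BH+]/[B] = 10**(pKa - pH)."""
    ratio = 10 ** (pka - ph)      # [pyridinium] / [pyridine]
    return ratio / (1 + ratio)

# Pyridine is half-protonated at pH = pKa, almost fully protonated in
# dilute acid, and mostly free base at physiological pH.
for ph in (3.0, 5.25, 7.4):
    print(f"pH {ph}: {protonated_fraction(ph):.1%} protonated")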
Physical sciences
Amides and amines
Chemistry
23872
https://en.wikipedia.org/wiki/Polymerization
Polymerization
In polymer chemistry, polymerization (American English), or polymerisation (British English), is a process of reacting monomer molecules together in a chemical reaction to form polymer chains or three-dimensional networks. There are many forms of polymerization, and different systems exist to categorize them. In chemical compounds, polymerization can occur via a variety of reaction mechanisms that vary in complexity due to the functional groups present in the reactants and their inherent steric effects. In more straightforward polymerizations, alkenes form polymers through relatively simple radical reactions; in contrast, reactions involving substitution at a carbonyl group require more complex synthesis due to the way in which reactants polymerize. As alkenes can polymerize in somewhat straightforward radical reactions, they form useful compounds such as polyethylene and polyvinyl chloride (PVC), which are produced in high tonnages each year due to their usefulness in manufacturing processes of commercial products, such as piping, insulation and packaging. In general, polymers such as PVC are referred to as "homopolymers", as they consist of repeated long chains or structures of the same monomer unit, whereas polymers that consist of more than one monomer unit are referred to as copolymers (or co-polymers). Other monomer units, such as formaldehyde hydrates or simple aldehydes, are able to polymerize at quite low temperatures (ca. −80 °C) to form trimers: molecules consisting of three monomer units, which can cyclize to form ring structures, or undergo further reactions to form tetramers, or four-monomer-unit compounds. Such small polymers are referred to as oligomers. Generally, because formaldehyde is an exceptionally reactive electrophile, it allows nucleophilic addition of hemiacetal intermediates, which are in general short-lived and relatively unstable "mid-stage" compounds that react with other non-polar molecules present to form more stable polymeric compounds. Polymerization that is not sufficiently moderated and proceeds at a fast rate can be very hazardous. This phenomenon is known as autoacceleration, and can cause fires and explosions. Step-growth vs. chain-growth polymerization Step-growth and chain-growth are the main classes of polymerization reaction mechanisms. The former is often easier to implement but requires precise control of stoichiometry. The latter more reliably affords high molecular-weight polymers, but only applies to certain monomers. Step-growth In step-growth (or step) polymerization, pairs of reactants, of any lengths, combine at each step to form a longer polymer molecule. The average molar mass increases slowly. Long chains form only late in the reaction. Step-growth polymers are formed by independent reaction steps between functional groups of monomer units, usually containing heteroatoms such as nitrogen or oxygen. Most step-growth polymers are also classified as condensation polymers, since a small molecule such as water is lost when the polymer chain is lengthened. For example, polyester chains grow by reaction of alcohol and carboxylic acid groups to form ester links with loss of water. However, there are exceptions; for example, polyurethanes are step-growth polymers formed from isocyanate and alcohol bifunctional monomers without loss of water or other volatile molecules, and are classified as addition polymers rather than condensation polymers.
Step-growth polymers increase in molecular weight at a very slow rate at lower conversions and reach moderately high molecular weights only at very high conversion (i.e., >95%). Solid state polymerization to afford polyamides (e.g., nylons) is an example of step-growth polymerization. Chain-growth In chain-growth (or chain) polymerization, the only chain-extension reaction step is the addition of a monomer to a growing chain with an active center such as a free radical, cation, or anion. Once the growth of a chain is initiated by formation of an active center, chain propagation is usually rapid by addition of a sequence of monomers. Long chains are formed from the beginning of the reaction. Chain-growth polymerization (or addition polymerization) involves the linking together of unsaturated monomers, especially containing carbon-carbon double bonds. The pi-bond is lost by formation of a new sigma bond. Chain-growth polymerization is involved in the manufacture of polymers such as polyethylene, polypropylene, polyvinyl chloride (PVC), and acrylate. In these cases, the alkenes RCH=CH2 are converted to high molecular weight alkanes (-RCHCH2-)n (R = H, CH3, Cl, CO2CH3). Other forms of chain growth polymerization include cationic addition polymerization and anionic addition polymerization. A special case of chain-growth polymerization leads to living polymerization. Ziegler–Natta polymerization allows considerable control of polymer branching. Diverse methods are employed to manipulate the initiation, propagation, and termination rates during chain polymerization. A related issue is temperature control, also called heat management, during these reactions, which are often highly exothermic. For example, for the polymerization of ethylene, 93.6 kJ of energy are released per mole of monomer. The manner in which polymerization is conducted is a highly evolved technology. Methods include emulsion polymerization, solution polymerization, suspension polymerization, and precipitation polymerization. Although the polymer dispersity and molecular weight may be improved, these methods may introduce additional processing requirements to isolate the product from a solvent. Photopolymerization Most photopolymerization reactions are chain-growth polymerizations which are initiated by the absorption of visible or ultraviolet light. Photopolymerization can also be a step-growth polymerization. The light may be absorbed either directly by the reactant monomer (direct photopolymerization), or else by a photosensitizer which absorbs the light and then transfers energy to the monomer. In general, only the initiation step differs from that of the ordinary thermal polymerization of the same monomer; subsequent propagation, termination, and chain-transfer steps are unchanged. In step-growth photopolymerization, absorption of light triggers an addition (or condensation) reaction between two comonomers that do not react without light. A propagation cycle is not initiated because each growth step requires the assistance of light. Photopolymerization can be used as a photographic or printing process because polymerization only occurs in regions which have been exposed to light. Unreacted monomer can be removed from unexposed regions, leaving a relief polymeric image. Several forms of 3D printing—including layer-by-layer stereolithography and two-photon absorption 3D photopolymerization—use photopolymerization. 
Multiphoton polymerization using single pulses has also been demonstrated for the fabrication of complex structures using a digital micromirror device.
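As a numerical footnote to the step-growth discussion above: the claim that such polymers reach high molecular weight only at very high conversion can be made quantitative with the Carothers equation, Xn = 1/(1 − p), which holds for a stoichiometric mixture of bifunctional monomers. A minimal sketch under that idealized assumption; the function name is illustrative:

def degree_of_polymerization(p: float) -> float:
    """Carothers equation for a stoichiometric A-B step-growth system:
    number-average degree of polymerization Xn = 1 / (1 - p), where p
    is the fractional conversion of functional groups."""
    if not 0 <= p < 1:
        raise ValueError("conversion p must lie in [0, 1)")
    return 1.0 / (1.0 - p)

# Chains stay short until conversion approaches unity:
for p in (0.50, 0.90, 0.95, 0.99, 0.999):
    print(f"p = {p:5.3f} -> Xn = {degree_of_polymerization(p):7.1f}")

At 95% conversion the chains average only 20 repeat units, while 99.9% conversion is needed to reach 1000, which is why step-growth molecular weights climb steeply only at the very end of the reaction.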
Physical sciences
Organic reactions
Chemistry
23878
https://en.wikipedia.org/wiki/Penguin
Penguin
Penguins are a group of aquatic flightless birds from the family Spheniscidae of the order Sphenisciformes. They live almost exclusively in the Southern Hemisphere: only one species, the Galápagos penguin, is found north of the Equator. Highly adapted for life in the ocean, penguins have countershaded dark and white plumage and flippers for swimming. Most penguins feed on krill, fish, squid and other forms of sea life, which they catch with their bills and swallow whole while swimming. A penguin has a spiny tongue and powerful jaws to grip slippery prey. They spend about half of their lives on land and the other half in the sea. The largest living species is the emperor penguin (Aptenodytes forsteri): on average, adults are about tall and weigh . The smallest penguin species is the little blue penguin (Eudyptula minor), also known as the fairy penguin, which stands around tall and weighs . Today, larger penguins generally inhabit colder regions, and smaller penguins inhabit regions with temperate or tropical climates. Some prehistoric penguin species were enormous: as tall or heavy as an adult human. There was a great diversity of species in subantarctic regions, and at least one giant species in a region around 2,000 km south of the equator 35 mya, during the Late Eocene, in a climate decidedly warmer than today's. Etymology The word penguin first appears in literature at the end of the 16th century as a synonym for the great auk. When European explorers discovered what are today known as penguins in the Southern Hemisphere, they noticed their similar appearance to the great auk of the Northern Hemisphere and named them after this bird, although they are not closely related. The etymology of the word penguin is still debated. The English word is not apparently of French, Breton or Spanish origin (the latter two are attributed to the French word pingouin), but first appears in English or Dutch. Some dictionaries suggest a derivation from Welsh pen, 'head', and gwyn, 'white', including the Oxford English Dictionary, the American Heritage Dictionary, the Century Dictionary and Merriam-Webster, on the basis that the name was originally applied to the great auk, either because it was found on White Head Island in Newfoundland, or because it had white circles around its eyes (though the head was black). However, the Welsh word pen can also be used to mean 'front' and, in a maritime context, means 'front end or part, bow (of a ship), prow'. An alternative etymology links the word to Latin pinguis, which means 'fat' or 'oil'. Support for this etymology can be found in the alternative Germanic word for penguin, Fettgans or 'fat-goose', and the related Dutch word vetgans. Adult male penguins are sometimes called cocks, females sometimes called hens; a group of penguins on land is a waddle, and a group of penguins in the water is a raft. Pinguinus Since 1871, the Latin word Pinguinus has been used in scientific classification to name the genus of the great auk (Pinguinus impennis, meaning "plump or fat without flight feathers"), which became extinct in the mid-19th century. As confirmed by a 2004 genetic study, the genus Pinguinus belongs in the family of the auks (Alcidae), within the order of the Charadriiformes. The birds currently known as penguins were discovered later and were so named by sailors because of their physical resemblance to the great auk. Despite this resemblance, however, they are not auks, and are not closely related to the great auk.
They do not belong in the genus Pinguinus, and are not classified in the same family and order as the great auk. They were classified in 1831 by Charles Lucien Bonaparte in several distinct genera within the family Spheniscidae and order Sphenisciformes. Systematics and evolution Taxonomy The family name Spheniscidae was given by Charles Lucien Bonaparte from the genus Spheniscus; the name of that genus comes from the Greek word sphēn, 'wedge', referring to the shape of an African penguin's swimming flippers. Some recent sources apply the phylogenetic taxon Spheniscidae to what is here referred to as Spheniscinae. Furthermore, they restrict the phylogenetic taxon Sphenisciformes to flightless taxa, and establish the phylogenetic taxon Pansphenisciformes as equivalent to the Linnean taxon Sphenisciformes, i.e., including any flying basal "proto-penguins" to be discovered eventually. Given that neither the relationships of the penguin subfamilies to each other nor the placement of the penguins in the avian phylogeny is presently resolved, this is confusing, so the established Linnean system is followed here. The number of penguin species is typically listed as between seventeen and nineteen. The International Ornithologists' Union recognizes six genera and eighteen species. Evolution Although the evolutionary and biogeographic history of Sphenisciformes is well-researched, many prehistoric forms are not fully described. Some seminal articles about the evolutionary history of penguins have been published since 2005. The basal penguins lived around the time of the Cretaceous–Paleogene extinction event in the general area of southern New Zealand and Byrd Land, Antarctica. Due to plate tectonics, these areas were at that time less than apart rather than . The most recent common ancestor of penguins and Procellariiformes can be roughly dated to the Campanian–Maastrichtian boundary, around 70–68 mya. Basal fossils The oldest known fossil penguin species is Waimanu manneringi, which lived 62 mya in New Zealand. While they were not as well-adapted to aquatic life as modern penguins, Waimanu were flightless, with short wings adapted for deep diving. They swam on the surface using mainly their feet, but the wings were – as opposed to most other diving birds (both living and extinct) – already adapting to underwater locomotion. Perudyptes from northern Peru was dated to 42 mya. An unnamed fossil from Argentina proves that, by the Bartonian (Middle Eocene), some 39–38 mya, primitive penguins had spread to South America and were in the process of expanding into Atlantic waters. Palaeeudyptines During the Late Eocene and the Early Oligocene (40–30 mya), some lineages of gigantic penguins existed. Nordenskjoeld's giant penguin was the tallest, growing nearly tall. The New Zealand giant penguin was probably the heaviest, weighing or more. Both were found in New Zealand, the former also in the Antarctic farther eastwards. Traditionally, most extinct species of penguins, giant or small, had been placed in the paraphyletic subfamily called Palaeeudyptinae. More recently, with new taxa being discovered and placed in the phylogeny if possible, it is becoming accepted that there were at least two major extinct lineages. One or two closely related ones occurred in Patagonia, and at least one other – which is or includes the palaeeudyptines as recognized today – occurred on most Antarctic and Subantarctic coasts.
Size plasticity was significant at this initial stage of radiation: on Seymour Island, Antarctica, for example, around 10 known species of penguins ranging in size from medium to large apparently coexisted some 35 mya during the Priabonian (Late Eocene). It is not known whether the palaeeudyptines constitute a monophyletic lineage, whether gigantism evolved independently in a restricted Palaeeudyptinae and the Anthropornithinae (if these are considered valid), or whether there was a wide size range present in the Palaeeudyptinae as delimited (i.e., including Anthropornis nordenskjoeldi). The oldest well-described giant penguin, Icadyptes salasi, existed as far north as northern Peru about 36 mya. Gigantic penguins had disappeared by the end of the Paleogene, around 25 mya. Their decline and disappearance coincided with the spread of the Squalodontidae and other primitive, fish-eating toothed whales, which competed with them for food and were ultimately more successful. A new lineage, the Paraptenodytes, which includes smaller and stout-legged forms, had already arisen in southernmost South America by that time. The early Neogene saw the emergence of another morphotype in the same area, the similarly sized but more gracile Palaeospheniscinae, as well as the radiation that gave rise to the current biodiversity of penguins. Origin and systematics of modern penguins Modern penguins constitute two undisputed clades and another two more basal genera with more ambiguous relationships. To help resolve the evolution of this order, 19 high-coverage genomes have been sequenced that, together with two previously published genomes, encompass all extant penguin species. The origin of the Spheniscinae lies probably in the latest Paleogene and, geographically, it must have been much the same as the general area in which the order evolved: the oceans between the Australia–New Zealand region and the Antarctic. Presumably diverging from other penguins around 40 mya, it seems that the Spheniscinae were for quite some time limited to their ancestral area, as the well-researched deposits of the Antarctic Peninsula and Patagonia have not yielded Paleogene fossils of the subfamily. Also, the earliest spheniscine lineages are those with the most southern distribution. The genus Aptenodytes appears to be the basalmost divergence among living penguins. They have bright yellow-orange neck, breast, and bill patches; incubate by placing their eggs on their feet; and when they hatch, the chicks are almost naked. This genus has a distribution centred on the Antarctic coasts and barely extends to some Subantarctic islands today. Pygoscelis contains species with a fairly simple black-and-white head pattern; their distribution is intermediate, centred on Antarctic coasts but extending somewhat northwards from there. In external morphology, these apparently still resemble the common ancestor of the Spheniscinae, as Aptenodytes autapomorphies are, in most cases, fairly pronounced adaptations related to that genus' extreme habitat conditions. Like the former genus, Pygoscelis seems to have diverged during the Bartonian, but the range expansion and radiation that led to the present-day diversity probably did not occur until much later, around the Burdigalian stage of the Early Miocene, roughly 20–15 mya. The genera Spheniscus and Eudyptula contain species with a mostly Subantarctic distribution centred on South America; some, however, range quite far northwards.
They all lack carotenoid colouration and the former genus has a conspicuous banded head pattern; they are unique among living penguins by nesting in burrows. This group probably radiated eastwards with the Antarctic Circumpolar Current out of the ancestral range of modern penguins throughout the Chattian (Late Oligocene), starting approximately 28 mya. While the two genera separated during this time, the present-day diversity is the result of a Pliocene radiation, taking place some 4–2 mya. The Megadyptes–Eudyptes clade occurs at similar latitudes (though not as far north as the Galápagos penguin), has its highest diversity in the New Zealand region, and represents a westward dispersal. They are characterized by hairy yellow ornamental head feathers; their bills are at least partly red. These two genera diverged apparently in the Middle Miocene (Langhian, roughly 15–14 mya), although the living species of Eudyptes are the product of a later radiation, stretching from about the late Tortonian (Late Miocene, 8 mya) to the end of the Pliocene. Geography The geographical and temporal pattern of spheniscine evolution corresponds closely to two episodes of global cooling documented in the paleoclimatic record. The emergence of the Subantarctic lineage at the end of the Bartonian corresponds with the onset of the slow period of cooling that eventually led to the ice ages some 35 million years later. With habitat on the Antarctic coasts declining, by the Priabonian more hospitable conditions for most penguins existed in the Subantarctic regions rather than in Antarctica itself. Notably, the cold Antarctic Circumpolar Current also started as a continuous circumpolar flow only around 30 mya, on the one hand forcing the Antarctic cooling, and on the other facilitating the eastward expansion of Spheniscus to South America and eventually beyond. Despite this, there is no fossil evidence to support the idea of crown radiation from the Antarctic continent in the Paleogene, although DNA study favors such a radiation. Later, an interspersed period of slight warming was ended by the Middle Miocene Climate Transition, a sharp drop in global average temperature from 14 to 12 mya, and similar abrupt cooling events followed at 8 mya and 4 mya; by the end of the Tortonian, the Antarctic ice sheet was already much like today in volume and extent. The emergence of most of today's Subantarctic penguin species almost certainly was caused by this sequence of Neogene climate shifts. Relationship to other bird orders Penguin ancestry beyond Waimanu remains unknown and not well-resolved by molecular or morphological analyses. The latter tend to be confounded by the strong adaptive autapomorphies of the Sphenisciformes; a sometimes perceived fairly close relationship between penguins and grebes is almost certainly an error based on both groups' strong diving adaptations, which are homoplasies. On the other hand, different DNA sequence datasets do not agree in detail with each other either. What seems clear is that penguins belong to a clade of Neoaves (living birds except for paleognaths and fowl) that comprises what is sometimes called "higher waterbirds" to distinguish them from the more ancient waterfowl. This group contains such birds as storks, rails, and the seabirds, with the possible exception of the Charadriiformes. Inside this group, penguin relationships are far less clear. Depending on the analysis and dataset, a close relationship to Ciconiiformes or to Procellariiformes has been suggested. 
Some think the penguin-like plotopterids (usually considered relatives of cormorants and anhingas) may actually be a sister group of the penguins, and that penguins may have ultimately shared a common ancestor with the Pelecaniformes and consequently would have to be included in that order, or that the plotopterids were not as close to other pelecaniforms as generally assumed, which would necessitate splitting the traditional Pelecaniformes into three. A 2014 analysis of whole genomes of 48 representative bird species has concluded that penguins are the sister group of Procellariiformes, from which they diverged about 60 million years ago (95% CI, 56.8–62.7). The distantly related puffins, which live in the North Pacific and North Atlantic, developed similar characteristics to survive in the Arctic and sub-Arctic environments. Like the penguins, puffins have a white chest, black back and short stubby wings providing excellent swimming ability in icy water. But, unlike penguins, puffins can fly, as flightless birds would not survive alongside land-based predators such as polar bears and foxes; there are no such predators in the Antarctic. Their similarities indicate that similar environments, although at great distances, can result in similar evolutionary developments, i.e. convergent evolution. Anatomy and physiology Penguins are superbly adapted to aquatic life. Their wings have evolved to become flippers, useless for flight in the air. In the water, however, penguins are astonishingly agile. Penguins' swimming looks very similar to birds' flight in the air. Within the smooth plumage a layer of air is preserved, ensuring buoyancy. The air layer also helps insulate the birds in cold waters. On land, penguins use their tails and wings to maintain balance for their upright stance. All penguins are countershaded for camouflage – that is, they have black backs and wings with white fronts. A predator looking up from below (such as an orca or a leopard seal) has difficulty distinguishing between a white penguin belly and the reflective water surface. The dark plumage on their backs camouflages them from above. Gentoo penguins are the fastest underwater birds in the world. They are capable of reaching speeds up to 36 km (about 22 miles) per hour while searching for food or escaping from predators. They are also able to dive to depths of 170–200 meters (about 560–660 feet). The small penguins do not usually dive deep; they catch their prey near the surface in dives that normally last only one or two minutes. Larger penguins can dive deep in case of need. Emperor penguins are the world's deepest-diving birds. They can dive to depths of approximately while searching for food. Penguins either waddle on their feet or slide on their bellies across the snow while using their feet to propel and steer themselves, a movement called "tobogganing", which conserves energy while moving quickly. They also jump with both feet together if they want to move more quickly or cross steep or rocky terrain. Penguins have an average sense of hearing for birds; this is used by parents and chicks to locate one another in crowded colonies. Their eyes are adapted for underwater vision and are their primary means of locating prey and avoiding predators; in air, it has been suggested, they are nearsighted, although research has not supported this hypothesis. Penguins have a thick layer of insulating feathers that keeps them warm in water (heat loss in water is much greater than in air).
The emperor penguin has a maximum feather density of about nine feathers per square centimeter, which is actually much lower than that of other birds that live in Antarctic environments. However, emperor penguins have been identified as having at least four different types of feather: in addition to the traditional feather, the emperor has afterfeathers, plumules, and filoplumes. The afterfeathers are downy plumes that attach directly to the main feathers and were once believed to account for the bird's ability to conserve heat when under water. The plumules are small down feathers that attach directly to the skin and are much denser in penguins than in other birds. Lastly, the filoplumes are small (less than 1 cm long) naked shafts that end in a splay of fibers. Filoplumes were believed to give flying birds a sense of where their plumage was and whether or not it needed preening, so their presence in penguins may seem inconsistent, but penguins also preen extensively. The emperor penguin has the largest body mass of all penguins, which further reduces relative surface area and heat loss. They also are able to control blood flow to their extremities, reducing the amount of blood that gets cold, but still keeping the extremities from freezing. In the extreme cold of the Antarctic winter, the females are at sea fishing for food, leaving the males to brave the weather by themselves. They often huddle together to keep warm and rotate positions to make sure that each penguin gets a turn in the centre of the heat pack. Calculations of the heat loss and retention ability of marine endotherms suggest that most extant penguins are too small to survive in such cold environments. In 2007, Thomas and Fordyce wrote about the "heterothermic loophole" that penguins utilize in order to survive in Antarctica. All extant penguins, even those that live in warmer climates, have a counter-current heat exchanger called the humeral plexus. The flippers of penguins have at least three branches of the axillary artery, which allows cold blood to be heated by blood that has already been warmed and limits heat loss from the flippers. This system allows penguins to efficiently use their body heat and explains why such small animals can survive in the extreme cold. They can drink salt water because their supraorbital gland filters excess salt from the bloodstream. The salt is excreted in a concentrated fluid from the nasal passages. The great auk of the Northern Hemisphere, now extinct, was superficially similar to penguins, and the word penguin was originally used for that bird centuries ago. They are only distantly related to the penguins, but are an example of convergent evolution. Around one in 50,000 penguins (of most species) is born with brown rather than black plumage. These are called isabelline penguins. Isabellinism is different from albinism. Isabelline penguins tend to live shorter lives than normal penguins, as they are not well-camouflaged against the deep and are often passed over as mates.
Agonistic displays are those intended to confront or drive off, or alternately appease and avoid conflict with, other individuals. Penguins form monogamous pairs for a breeding season, though the rate at which the same pair recouples varies drastically. Most penguins lay two eggs in a clutch, although the two largest species, the emperor and the king penguins, lay only one. With the exception of the emperor penguin, where the male does it all, all penguins share the incubation duties. These incubation shifts can last days and even weeks as one member of the pair feeds at sea. Penguins generally only lay one brood; the exception is the little penguin, which can raise two or three broods in a season. Penguin eggs are smaller, relative to the weight of the parent birds, than those of any other bird species: the little penguin egg is 4.7% of its mother's weight, and the emperor penguin egg is 2.3%. The relatively thick shell forms between 10 and 16% of the weight of a penguin egg, presumably to reduce the effects of dehydration and to minimize the risk of breakage in an adverse nesting environment. The yolk, too, is large and comprises 22–31% of the egg. Some yolk often remains when a chick is born, and is thought to help sustain the chick if the parents are delayed in returning with food. When emperor penguin mothers lose a chick, they sometimes attempt to "steal" another mother's chick, usually unsuccessfully as other females in the vicinity assist the defending mother in keeping her chick. In some species, such as emperor and king penguins, the chicks assemble in large groups called crèches. Distribution and habitat Although almost all penguin species are native to the Southern Hemisphere, they are not found only in cold climates, such as Antarctica. In fact, only a few species of penguin actually live so far south. Several species live in the temperate zone; one, the Galápagos penguin, lives as far north as the Galápagos Islands, but this is only made possible by the cold, rich waters of the Humboldt Current that flows around these islands. Also, though the climate of the Arctic and Antarctic regions is similar, there are no penguins found in the Arctic. Several authors have suggested that penguins are a good example of Bergmann's rule, in which larger-bodied populations live at higher latitudes than smaller-bodied populations. There is some disagreement about this, and several other authors have noted that there are fossil penguin species that contradict this hypothesis and that ocean currents and upwellings are likely to have had a greater effect on species diversity than latitude alone. Major populations of penguins are found in Angola, Antarctica, Argentina, Australia, Chile, Namibia, New Zealand, and South Africa. Satellite images and photos released in 2018 show the population of 2 million in France's remote Ile aux Cochons has collapsed, with barely 200,000 remaining, according to a study published in Antarctic Science. Conservation status The majority of living penguin species have declining populations. According to the IUCN Red List, their conservation statuses range from Least Concern through to Endangered. Penguins and humans Penguins have no special fear of humans and will often approach groups of people. This is probably because penguins have no land predators in Antarctica or the nearby offshore islands. They are preyed upon by other birds like skuas, especially as eggs and fledglings. Other birds like petrels, sheathbills, and gulls also eat the chicks.
Dogs preyed upon penguins while they were allowed in Antarctica as sled dogs during the age of early human exploration, but dogs have long since been banned from Antarctica. Instead, adult penguins are at risk at sea from predators such as sharks, orcas, and leopard seals. Typically, though, penguins do not approach humans closer than a few metres, at which point they appear to become nervous. In June 2011, an emperor penguin came ashore on New Zealand's Peka Peka Beach, off course on its journey to Antarctica. Nicknamed Happy Feet, after the film of the same name, it was suffering from heat exhaustion and had to undergo a number of operations to remove objects like driftwood and sand from its stomach. Happy Feet was a media sensation, with extensive coverage on TV and the web, including a live stream that had thousands of views and a visit from English actor Stephen Fry. Once he had recovered, Happy Feet was released back into the water south of New Zealand. In popular culture Penguins are widely considered endearing for their unusually upright, waddling gait, swimming ability and (compared to other birds) lack of fear of humans. Their black-and-white plumage is often likened to a white tie suit. Some writers and artists have penguins based at the North Pole, but there are no wild penguins in the Arctic. The cartoon series Chilly Willy helped perpetuate this myth, as the title penguin would interact with Arctic or sub-Arctic species, such as polar bears and walruses. Penguins have been the subject of many books and films, such as Happy Feet, Surf's Up and Penguins of Madagascar, all CGI films; March of the Penguins, a documentary based on the migration process of the emperor penguin; and Farce of the Penguins, a parody of the documentary. Mr. Popper's Penguins is a children's book written by Richard and Florence Atwater; it was named a Newbery Honor Book in 1939. Penguins have also appeared in a number of cartoons and television dramas, including Pingu, co-created by Otmar Gutmann and Erika Brueggemann in 1990 and covering more than 100 short episodes. At the end of 2009, Entertainment Weekly put it on its end-of-the-decade "best-of" list, saying, "Whether they were walking (March of the Penguins), dancing (Happy Feet), or hanging ten (Surf's Up), these oddly adorable birds took flight at the box office all decade long." A video game called Pengo was released by Sega in 1982. Set in Antarctica, the player controls a penguin character who must navigate mazes of ice cubes. The player is rewarded with cut-scenes of animated penguins marching, dancing, saluting and playing peekaboo. Several remakes and enhanced editions have followed, most recently in 2012. Penguins are also sometimes depicted in music. In 1941, DC Comics introduced the avian-themed character of the Penguin as a supervillain adversary of the superhero Batman (Detective Comics #58). He became one of the most enduring enemies in Batman's rogues gallery. In the 1960s Batman TV series, as played by Burgess Meredith, he was one of the most popular characters, and in Tim Burton's reimagining of the story, the character, played by Danny DeVito in the 1992 film Batman Returns, employed an actual army of penguins (mostly African penguins and king penguins). Several pro, minor, college and high school sport teams in the United States have named themselves after the species, including the Pittsburgh Penguins team in the National Hockey League and the Youngstown State Penguins in college athletics. Penguins featured regularly in the cartoons of U.K.
cartoonist Steve Bell in his strip in The Guardian newspaper, particularly during and following the Falklands War. Opus the Penguin, from the cartoons of Berkeley Breathed, is also described as hailing from the Falklands. Opus was a comical, "existentialist" penguin character in the cartoons Bloom County, Outland and Opus. He was also the star in the animated Christmas TV special A Wish for Wings That Work. In the mid-2000s, penguins became one of the most publicized species of animals that form lasting homosexual couples. A children's book, And Tango Makes Three, was written about one such penguin family in the New York Zoo.
Biology and health sciences
Sphenisciformes
null
23889
https://en.wikipedia.org/wiki/Pulley
Pulley
A pulley is a wheel on an axle or shaft enabling a taut cable or belt passing over the wheel to move and change direction, or transfer power between itself and a shaft. A sheave or pulley wheel is a pulley using an axle supported by a frame or shell (block) to guide a cable or exert force. A pulley may have a groove or grooves between flanges around its circumference to locate the cable or belt. The drive element of a pulley system can be a rope, cable, belt, or chain. The earliest evidence of pulleys dates back to Ancient Egypt in the Twelfth Dynasty (1991–1802 BC) and Mesopotamia in the early 2nd millennium BC. In Roman Egypt, Hero of Alexandria (c. 10–70 AD) identified the pulley as one of six simple machines used to lift weights. Pulleys are assembled to form a block and tackle in order to provide mechanical advantage to apply large forces. Pulleys are also assembled as part of belt and chain drives in order to transmit power from one rotating shaft to another. Plutarch's Parallel Lives recounts a scene where Archimedes proved the effectiveness of compound pulleys and the block-and-tackle system by using one to pull a fully laden ship towards him as if it was gliding through water. Block and tackle A block is a set of pulleys (wheels) assembled so that each pulley rotates independently from every other pulley. Two blocks with a rope attached to one of the blocks and threaded through the two sets of pulleys form a block and tackle. A block and tackle is assembled so one block is attached to the fixed mounting point and the other is attached to the moving load. The ideal mechanical advantage of the block and tackle is equal to the number of sections of the rope that support the moving block. In the diagram on the right, the ideal mechanical advantage of each of the block and tackle assemblies shown is as follows: Gun tackle: 2 Luff tackle: 3 Double tackle: 4 Gyn tackle: 5 Threefold purchase: 6 Rope and pulley systems A rope and pulley system—that is, a block and tackle—is characterised by the use of a single continuous rope to transmit a tension force around one or more pulleys to lift or move a load—the rope may be a light line or a strong cable. This system is included in the list of simple machines identified by Renaissance scientists. If the rope and pulley system does not dissipate or store energy, then its mechanical advantage is the number of parts of the rope that act on the load. This can be shown as follows. Consider the set of pulleys that form the moving block and the parts of the rope that support this block. If there are p of these parts of the rope supporting the load W, then a force balance on the moving block shows that the tension in each of the parts of the rope must be W/p. This means the input force on the rope is T=W/p. Thus, the block and tackle reduces the input force by the factor p. Method of operation The simplest theory of operation for a pulley system assumes that the pulleys and lines are weightless and that there is no energy loss due to friction. It is also assumed that the lines do not stretch. In equilibrium, the forces on the moving block must sum to zero. In addition the tension in the rope must be the same for each of its parts. This means that the two parts of the rope supporting the moving block must each support half the load. These are different types of pulley systems: Fixed: A fixed pulley has an axle mounted in bearings attached to a supporting structure. 
A fixed pulley changes the direction of the force on a rope or belt that moves along its circumference. Mechanical advantage is gained by combining a fixed pulley with a movable pulley or another fixed pulley of a different diameter. Movable: A movable pulley has an axle in a movable block. A single movable pulley is supported by two parts of the same rope and has a mechanical advantage of two. Compound: A combination of fixed and movable pulleys forms a block and tackle. A block and tackle can have several pulleys mounted on the fixed and moving axles, further increasing the mechanical advantage. The mechanical advantage of the gun tackle can be increased by interchanging the fixed and moving blocks so the rope is attached to the moving block and the rope is pulled in the direction of the lifted load. In this case the block and tackle is said to be "rove to advantage." Diagram 3 shows that now three rope parts support the load W, which means the tension in the rope is W/3. Thus, the mechanical advantage is three. By adding a pulley to the fixed block of a gun tackle the direction of the pulling force is reversed, though the mechanical advantage remains the same, Diagram 3a. This is an example of the Luff tackle. Free body diagrams The mechanical advantage of a pulley system can be analysed using free body diagrams which balance the tension force in the rope with the force of gravity on the load. In an ideal system, the massless and frictionless pulleys do not dissipate energy and allow for a change of direction of a rope that does not stretch or wear. In this case, a force balance on a free body that includes the load, W, and n supporting sections of a rope with tension T, yields nT = W. The ratio of the load to the input tension force is the mechanical advantage MA of the pulley system, MA = W/T = n. Thus, the mechanical advantage of the system is equal to the number of sections of rope supporting the load. Belt and pulley systems A belt and pulley system is characterized by two or more pulleys in common to a belt. This allows for mechanical power, torque, and speed to be transmitted across axles. If the pulleys are of differing diameters, a mechanical advantage is realized. A belt drive is analogous to a chain drive; however, a belt sheave may be smooth (devoid of discrete interlocking members as would be found on a chain sprocket, spur gear, or timing belt) so that the mechanical advantage is approximately given by the ratio of the pitch diameters of the sheaves only, not fixed exactly by the ratio of teeth as with gears and sprockets. In the case of a drum-style pulley, without a groove or flanges, the pulley often is slightly convex to keep the flat belt centered. It is sometimes referred to as a crowned pulley. Though once widely used on factory line shafts, this type of pulley is still found driving the rotating brush in upright vacuum cleaners, in belt sanders and bandsaws. Agricultural tractors built up to the early 1950s generally had a belt pulley for a flat belt (which is what Belt Pulley magazine was named after). It has been replaced by other mechanisms with more flexibility in methods of use, such as power take-off and hydraulics. Just as the diameters of gears (and, correspondingly, their number of teeth) determine a gear ratio and thus the speed increases or reductions and the mechanical advantage that they can deliver, the diameters of pulleys determine those same factors.
Cone pulleys and step pulleys (which operate on the same principle, although the names tend to be applied to flat belt versions and V-belt versions, respectively) are a way to provide multiple drive ratios in a belt-and-pulley system that can be shifted as needed, just as a transmission provides this function with a gear train that can be shifted. V-belt step pulleys are the most common way that drill presses deliver a range of spindle speeds. With belts and pulleys, friction is one of the most important forces. Some uses for belts and pulleys involve peculiar angles (leading to bad belt tracking and possibly slipping the belt off the pulley) or low belt-tension environments, causing unnecessary slippage of the belt and hence extra wear to the belt. To solve this, pulleys are sometimes lagged. Lagging is the term used to describe the application of a coating, cover or wearing surface with various textured patterns which is sometimes applied to pulley shells. Lagging is often applied in order to extend the life of the shell by providing a replaceable wearing surface or to improve the friction between the belt and the pulley. Notably drive pulleys are often rubber lagged (coated with a rubber friction layer) for exactly this reason. Applying powdered rosin to the belt may increase the friction temporarily, but may shorten the life of the belt.
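The ideal force balance nT = W given above translates directly into code. A minimal sketch, assuming frictionless, massless pulleys and an inextensible rope; the 600 N load is an arbitrary illustrative value, while the tackle names and their rope counts come from the text:

def rope_tension(load: float, supporting_parts: int) -> float:
    """Ideal rope tension in a block and tackle: the load W is shared
    equally by the p rope parts holding the moving block, so T = W / p
    and the mechanical advantage is p."""
    if supporting_parts < 1:
        raise ValueError("need at least one supporting rope part")
    return load / supporting_parts

tackles = {"gun tackle": 2, "luff tackle": 3, "double tackle": 4,
           "gyn tackle": 5, "threefold purchase": 6}
load = 600.0  # newtons
for name, parts in tackles.items():
    print(f"{name}: MA = {parts}, input force = {rope_tension(load, parts):.0f} N")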
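Likewise, because a smooth belt shares one surface speed between its two sheaves, the output speed follows approximately from the pitch diameters alone: n2 = n1 * d1 / d2. A minimal sketch for a hypothetical step-pulley drill press; the motor speed and step diameters are invented for illustration:

def driven_speed(drive_rpm: float, drive_dia: float, driven_dia: float) -> float:
    """Approximate no-slip belt drive: equal belt surface speed on both
    sheaves gives n2 = n1 * d1 / d2 (diameters in the same units)."""
    return drive_rpm * drive_dia / driven_dia

motor_rpm = 1425.0  # hypothetical induction-motor speed
# (motor-pulley diameter, spindle-pulley diameter) in mm for each belt step:
steps = [(50.0, 150.0), (100.0, 100.0), (150.0, 50.0)]
for d1, d2 in steps:
    print(f"{d1:.0f} mm -> {d2:.0f} mm: spindle = {driven_speed(motor_rpm, d1, d2):.0f} rpm")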
Technology
Basics_8
null
23905
https://en.wikipedia.org/wiki/Platonic%20solid
Platonic solid
In geometry, a Platonic solid is a convex, regular polyhedron in three-dimensional Euclidean space. Being a regular polyhedron means that the faces are congruent (identical in shape and size) regular polygons (all angles congruent and all edges congruent), and the same number of faces meet at each vertex. There are only five such polyhedra: the tetrahedron, the cube, the octahedron, the dodecahedron, and the icosahedron. Geometers have studied the Platonic solids for thousands of years. They are named for the ancient Greek philosopher Plato, who hypothesized in one of his dialogues, the Timaeus, that the classical elements were made of these regular solids. History The Platonic solids have been known since antiquity. It has been suggested that certain carved stone balls created by the late Neolithic people of Scotland represent these shapes; however, these balls have rounded knobs rather than being polyhedral, the numbers of knobs frequently differed from the numbers of vertices of the Platonic solids, there is no ball whose knobs match the 20 vertices of the dodecahedron, and the arrangement of the knobs was not always symmetrical. The ancient Greeks studied the Platonic solids extensively. Some sources (such as Proclus) credit Pythagoras with their discovery. Other evidence suggests that he may have only been familiar with the tetrahedron, cube, and dodecahedron and that the discovery of the octahedron and icosahedron belongs to Theaetetus, a contemporary of Plato. In any case, Theaetetus gave a mathematical description of all five and may have been responsible for the first known proof that no other convex regular polyhedra exist. The Platonic solids are prominent in the philosophy of Plato, their namesake. Plato wrote about them in the dialogue Timaeus (c. 360 BC), in which he associated each of the four classical elements (earth, air, water, and fire) with a regular solid. Earth was associated with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron. Of the fifth Platonic solid, the dodecahedron, Plato obscurely remarked, "...the god used [it] for arranging the constellations on the whole heaven". Aristotle added a fifth element, aither (aether in Latin, "ether" in English) and postulated that the heavens were made of this element, but he had no interest in matching it with Plato's fifth solid. Euclid gave a complete mathematical description of the Platonic solids in the Elements, the last book (Book XIII) of which is devoted to their properties. Propositions 13–17 in Book XIII describe the construction of the tetrahedron, octahedron, cube, icosahedron, and dodecahedron in that order. For each solid Euclid finds the ratio of the diameter of the circumscribed sphere to the edge length. In Proposition 18 he argues that there are no further convex regular polyhedra. Andreas Speiser has advocated the view that the construction of the five regular solids is the chief goal of the deductive system canonized in the Elements. Much of the information in Book XIII is probably derived from the work of Theaetetus. In the 16th century, the German astronomer Johannes Kepler attempted to relate the five extraterrestrial planets known at that time to the five Platonic solids. In Mysterium Cosmographicum, published in 1596, Kepler proposed a model of the Solar System in which the five solids were set inside one another and separated by a series of inscribed and circumscribed spheres.
Kepler proposed that the distance relationships between the six planets known at that time could be understood in terms of the five Platonic solids enclosed within a sphere that represented the orbit of Saturn. The six spheres each corresponded to one of the planets (Mercury, Venus, Earth, Mars, Jupiter, and Saturn). The solids were ordered with the innermost being the octahedron, followed by the icosahedron, dodecahedron, tetrahedron, and finally the cube, thereby dictating the structure of the solar system and the distance relationships between the planets by the Platonic solids. In the end, Kepler's original idea had to be abandoned, but out of his research came his three laws of orbital dynamics, the first of which was that the orbits of planets are ellipses rather than circles, changing the course of physics and astronomy. He also discovered the Kepler solids, which are two nonconvex regular polyhedra. Cartesian coordinates For Platonic solids centered at the origin, simple Cartesian coordinates of the vertices are given below. The Greek letter φ is used to represent the golden ratio (1 + √5)/2. The coordinates for the tetrahedron, dodecahedron, and icosahedron are given in two positions such that each can be deduced from the other: in the case of the tetrahedron, by changing the sign of all coordinates (central symmetry), or, in the other cases, by exchanging two coordinates (reflection with respect to any of the three diagonal planes). These coordinates reveal certain relationships between the Platonic solids: the vertices of the tetrahedron represent half of those of the cube {4,3}, namely one of the two sets of 4 vertices in dual positions, h{4,3}. Both tetrahedral positions make the compound stellated octahedron. The coordinates of the icosahedron are related to two alternated sets of coordinates of a nonuniform truncated octahedron, t{3,4}, also called a snub octahedron, s{3,4}, and seen in the compound of two icosahedra. Eight of the vertices of the dodecahedron are shared with the cube. Completing all orientations leads to the compound of five cubes. Combinatorial properties A convex polyhedron is a Platonic solid if and only if all three of the following requirements are met. All of its faces are congruent convex regular polygons. None of its faces intersect except at their edges. The same number of faces meet at each of its vertices. Each Platonic solid can therefore be assigned a pair {p, q} of integers, where p is the number of edges (or, equivalently, vertices) of each face, and q is the number of faces (or, equivalently, edges) that meet at each vertex. This pair {p, q}, called the Schläfli symbol, gives a combinatorial description of the polyhedron. The Schläfli symbols of the five Platonic solids are given in the table below. All other combinatorial information about these solids, such as total number of vertices (V), edges (E), and faces (F), can be determined from p and q. Since any edge joins two vertices and has two adjacent faces we must have: pF = 2E = qV. The other relationship between these values is given by Euler's formula: V − E + F = 2. This can be proved in many ways. Together these three relationships completely determine V, E, and F: V = 4p/(4 − (p − 2)(q − 2)), E = 2pq/(4 − (p − 2)(q − 2)), F = 4q/(4 − (p − 2)(q − 2)). Swapping p and q interchanges F and V while leaving E unchanged. For a geometric interpretation of this property, see the discussion of dual polyhedra below. As a configuration The elements of a polyhedron can be expressed in a configuration matrix. The rows and columns correspond to vertices, edges, and faces. The diagonal numbers say how many of each element occur in the whole polyhedron.
The nondiagonal numbers say how many of the column's element occur in or at the row's element. Dual pairs of polyhedra have their configuration matrices rotated 180 degrees from each other.

Classification

The classical result is that only five convex regular polyhedra exist. Two common arguments below demonstrate that no more than five Platonic solids can exist, but positively demonstrating the existence of any given solid is a separate question, one that requires an explicit construction.

Geometric proof

The following geometric argument is very similar to the one given by Euclid in the Elements:

Topological proof

A purely topological proof can be made using only combinatorial information about the solids. The key is Euler's observation that V − E + F = 2, and the fact that pF = 2E = qV, where p stands for the number of edges of each face and q for the number of edges meeting at each vertex. Combining these equations one obtains the equation

2E/q − E + 2E/p = 2.

Simple algebraic manipulation then gives

1/p + 1/q = 1/2 + 1/E.

Since E is strictly positive we must have

1/p + 1/q > 1/2.

Using the fact that p and q must both be at least 3, one can easily see that there are only five possibilities for {p, q}: {3, 3}, {4, 3}, {3, 4}, {5, 3}, and {3, 5}.

Geometric properties

Angles

There are a number of angles associated with each Platonic solid. The dihedral angle is the interior angle between any two face planes. The dihedral angle, θ, of the solid {p,q} is given by the formula

sin(θ/2) = cos(π/q) / sin(π/p).

This is sometimes more conveniently expressed in terms of the tangent by

tan(θ/2) = cos(π/q) / sin(π/h).

The quantity h (called the Coxeter number) is 4, 6, 6, 10, and 10 for the tetrahedron, cube, octahedron, dodecahedron, and icosahedron respectively.

The angular deficiency at the vertex of a polyhedron is the difference between the sum of the face-angles at that vertex and 2π. The defect, δ, at any vertex of the Platonic solids {p,q} is

δ = 2π − qπ(1 − 2/p).

By a theorem of Descartes, this is equal to 4π divided by the number of vertices (i.e. the total defect at all vertices is 4π).

The three-dimensional analog of a plane angle is a solid angle. The solid angle, Ω, at the vertex of a Platonic solid is given in terms of the dihedral angle by

Ω = qθ − (q − 2)π.

This follows from the spherical excess formula for a spherical polygon and the fact that the vertex figure of the polyhedron {p,q} is a regular q-gon. The solid angle of a face subtended from the center of a Platonic solid is equal to the solid angle of a full sphere (4π steradians) divided by the number of faces. This is equal to the angular deficiency of its dual.

The various angles associated with the Platonic solids are tabulated below. The numerical values of the solid angles are given in steradians. The constant φ = (1 + √5)/2 is the golden ratio.

Radii, area, and volume

Another virtue of regularity is that the Platonic solids all possess three concentric spheres: the circumscribed sphere that passes through all the vertices, the midsphere that is tangent to each edge at the midpoint of the edge, and the inscribed sphere that is tangent to each face at the center of the face. The radii of these spheres are called the circumradius, the midradius, and the inradius. These are the distances from the center of the polyhedron to the vertices, edge midpoints, and face centers respectively. The circumradius R and the inradius r of the solid {p, q} with edge length a are given by

R = (a/2) tan(π/q) tan(θ/2) and r = (a/2) cot(π/p) tan(θ/2),

where θ is the dihedral angle. The midradius ρ is given by

ρ = (a/2) cos(π/p) / sin(π/h),

where h is the quantity used above in the definition of the dihedral angle (h = 4, 6, 6, 10, or 10).
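Both the enumeration and the dihedral-angle formula are easy to verify numerically. The following Python sketch (illustrative, not from the source) scans small values of p and q; pairs with p or q above 5 already fail the inequality, since 1/p + 1/q only decreases as either value grows:

import math

for p in range(3, 7):
    for q in range(3, 7):
        if 1 / p + 1 / q > 1 / 2:  # the condition from the topological proof
            # sin(theta/2) = cos(pi/q) / sin(pi/p)
            theta = 2 * math.asin(math.cos(math.pi / q) / math.sin(math.pi / p))
            print(f"{{{p},{q}}} dihedral angle = {math.degrees(theta):.2f} degrees")
# Exactly five pairs pass: {3,3} 70.53, {3,4} 109.47, {3,5} 138.19,
# {4,3} 90.00, {5,3} 116.57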
The ratio of the circumradius to the inradius is symmetric in p and q:

R/r = tan(π/p) tan(π/q).

The surface area, A, of a Platonic solid {p, q} is easily computed as the area of a regular p-gon times the number of faces F. This is:

A = (a/2)² Fp cot(π/p).

The volume is computed as F times the volume of the pyramid whose base is a regular p-gon and whose height is the inradius r. That is,

V = (1/3) rA.

The following table lists the various radii of the Platonic solids together with their surface area and volume. The overall size is fixed by taking the edge length, a, to be equal to 2. The constants φ and ξ in the above are given by φ = 2 cos(π/5) = (1 + √5)/2 and ξ = 2 sin(π/5) = √((5 − √5)/2).

Among the Platonic solids, either the dodecahedron or the icosahedron may be seen as the best approximation to the sphere. The icosahedron has the largest number of faces and the largest dihedral angle, it hugs its inscribed sphere the most tightly, and its surface area to volume ratio is closest to that of a sphere of the same size (i.e. either the same surface area or the same volume). The dodecahedron, on the other hand, has the smallest angular defect, the largest vertex solid angle, and it fills out its circumscribed sphere the most.

Point in space

For an arbitrary point in the space of a Platonic solid with circumradius R, whose distances to the centroid of the Platonic solid and its n vertices are L and di respectively, and , we have

For all five Platonic solids, we have

If di are the distances from the n vertices of the Platonic solid to any point on its circumscribed sphere, then

Rupert property

A polyhedron P is said to have the Rupert property if a polyhedron of the same or larger size and the same shape as P can pass through a hole in P. All five Platonic solids have this property.

Symmetry

Dual polyhedra

Every polyhedron has a dual (or "polar") polyhedron with faces and vertices interchanged. The dual of every Platonic solid is another Platonic solid, so that we can arrange the five solids into dual pairs. The tetrahedron is self-dual (i.e. its dual is another tetrahedron). The cube and the octahedron form a dual pair. The dodecahedron and the icosahedron form a dual pair. If a polyhedron has Schläfli symbol {p, q}, then its dual has the symbol {q, p}. Indeed, every combinatorial property of one Platonic solid can be interpreted as another combinatorial property of the dual.

One can construct the dual polyhedron by taking the vertices of the dual to be the centers of the faces of the original figure. Connecting the centers of adjacent faces in the original forms the edges of the dual and thereby interchanges the number of faces and vertices while maintaining the number of edges.

More generally, one can dualize a Platonic solid with respect to a sphere of radius d concentric with the solid. The radii (R, ρ, r) of a solid and those of its dual (R*, ρ*, r*) are related by

R* = d²/r, ρ* = d²/ρ, r* = d²/R.

Dualizing with respect to the midsphere (d = ρ) is often convenient because the midsphere has the same relationship to both polyhedra. Taking d² = Rr yields a dual solid with the same circumradius and inradius (i.e. R* = R and r* = r).

Symmetry groups

In mathematics, the concept of symmetry is studied with the notion of a mathematical group. Every polyhedron has an associated symmetry group, which is the set of all transformations (Euclidean isometries) which leave the polyhedron invariant. The order of the symmetry group is the number of symmetries of the polyhedron. One often distinguishes between the full symmetry group, which includes reflections, and the proper symmetry group, which includes only rotations.
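The radius, area, and volume formulas chain together neatly; the sketch below (hedged, illustrative Python, with an invented function name) computes all four quantities for a given {p, q} and edge length a:

import math

def metrics(p: int, q: int, a: float = 1.0):
    # F from the combinatorial formulas; theta from sin(theta/2) = cos(pi/q)/sin(pi/p)
    F = 4 * q // (4 - (p - 2) * (q - 2))
    theta = 2 * math.asin(math.cos(math.pi / q) / math.sin(math.pi / p))
    R = (a / 2) * math.tan(math.pi / q) * math.tan(theta / 2)   # circumradius
    r = (a / 2) * math.tan(theta / 2) / math.tan(math.pi / p)   # inradius
    A = F * p * (a / 2) ** 2 / math.tan(math.pi / p)            # F regular p-gons
    V = r * A / 3                                               # F pyramids of height r
    return R, r, A, V

# Unit cube as a sanity check: R = sqrt(3)/2, r = 1/2, A = 6, V = 1
print([round(x, 4) for x in metrics(4, 3)])  # [0.866, 0.5, 6.0, 1.0]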
The symmetry groups of the Platonic solids are a special class of three-dimensional point groups known as polyhedral groups. The high degree of symmetry of the Platonic solids can be interpreted in a number of ways. Most importantly, the vertices of each solid are all equivalent under the action of the symmetry group, as are the edges and faces. One says the action of the symmetry group is transitive on the vertices, edges, and faces. In fact, this is another way of defining regularity of a polyhedron: a polyhedron is regular if and only if it is vertex-uniform, edge-uniform, and face-uniform.

There are only three symmetry groups associated with the Platonic solids rather than five, since the symmetry group of any polyhedron coincides with that of its dual. This is easily seen by examining the construction of the dual polyhedron. Any symmetry of the original must be a symmetry of the dual and vice versa. The three polyhedral groups are: the tetrahedral group T, the octahedral group O (which is also the symmetry group of the cube), and the icosahedral group I (which is also the symmetry group of the dodecahedron). The orders of the proper (rotation) groups are 12, 24, and 60 respectively – precisely twice the number of edges in the respective polyhedra. The orders of the full symmetry groups are twice as much again (24, 48, and 120). See (Coxeter 1973) for a derivation of these facts. All Platonic solids except the tetrahedron are centrally symmetric, meaning they are preserved under reflection through the origin.

The following table lists the various symmetry properties of the Platonic solids. The symmetry groups listed are the full groups with the rotation subgroups given in parentheses (likewise for the number of symmetries). Wythoff's kaleidoscope construction is a method for constructing polyhedra directly from their symmetry groups. The Wythoff symbol for each of the Platonic solids is listed for reference. (A numerical cross-check of these group orders appears at the end of this section.)

In nature and technology

The tetrahedron, cube, and octahedron all occur naturally in crystal structures. These by no means exhaust the possible forms of crystals. However, neither the regular icosahedron nor the regular dodecahedron is amongst them. One of the forms, called the pyritohedron (named for the group of minerals of which it is typical), has twelve pentagonal faces, arranged in the same pattern as the faces of the regular dodecahedron. The faces of the pyritohedron are, however, not regular, so the pyritohedron is also not regular. Allotropes of boron and many boron compounds, such as boron carbide, include discrete B12 icosahedra within their crystal structures. Carborane acids also have molecular structures approximating regular icosahedra.

In the early 20th century, Ernst Haeckel described (Haeckel, 1904) a number of species of Radiolaria, some of whose skeletons are shaped like various regular polyhedra. Examples include Circoporus octahedrus, Circogonia icosahedra, Lithocubus geometricus and Circorrhegma dodecahedra. The shapes of these creatures should be obvious from their names.

Many viruses, such as the herpes virus, have the shape of a regular icosahedron. Viral structures are built of repeated identical protein subunits and the icosahedron is the easiest shape to assemble using these subunits. A regular polyhedron is used because it can be built from a single basic unit protein used over and over again; this saves space in the viral genome.
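Returning to the symmetry groups discussed above: the order of the proper (rotation) group is exactly twice the number of edges, and the full group doubles that again. A minimal Python cross-check (illustrative names, assuming the edge-count formula from the combinatorics section):

def group_orders(p: int, q: int):
    E = 2 * p * q // (4 - (p - 2) * (q - 2))  # edge count of {p, q}
    return 2 * E, 4 * E  # (rotation group order, full group order)

print(group_orders(3, 3))  # tetrahedron: (12, 24)
print(group_orders(4, 3))  # cube and octahedron: (24, 48)
print(group_orders(5, 3))  # dodecahedron and icosahedron: (60, 120)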
In meteorology and climatology, there is increasing interest in global numerical models of atmospheric flow that employ geodesic grids based on an icosahedron (refined by triangulation) instead of the more commonly used longitude/latitude grid. This has the advantage of evenly distributed spatial resolution without singularities (i.e. the poles) at the expense of somewhat greater numerical difficulty.

The geometry of space frames is often based on Platonic solids. In the MERO system, Platonic solids are used in the naming convention for various space frame configurations. For example, O+T refers to a configuration made of one half of an octahedron and a tetrahedron.

Several Platonic hydrocarbons have been synthesised, including cubane and dodecahedrane, but not tetrahedrane.

Platonic solids are often used to make dice, because dice of these shapes can be made fair. 6-sided dice are very common, but the other shapes are commonly used in role-playing games. Such dice are commonly referred to as dn, where n is the number of faces (d8, d20, etc.); see dice notation for more details. These shapes frequently show up in other games or puzzles. Puzzles similar to a Rubik's Cube come in all five shapes – see magic polyhedra.

Liquid crystals with symmetries of Platonic solids

For the intermediate material phase called liquid crystals, the existence of such symmetries was first proposed in 1981 by H. Kleinert and K. Maki. In aluminum, the icosahedral structure was discovered three years later by Dan Shechtman, which earned him the Nobel Prize in Chemistry in 2011.

In architecture

Architects liked the idea of Plato's timeless forms, which can be seen by the soul in the objects of the material world, but turned these shapes into ones more suitable for construction: the sphere, cylinder, cone, and square pyramid. In particular, one of the leaders of neoclassicism, Étienne-Louis Boullée, was preoccupied with the architects' version of "Platonic solids".

Related polyhedra and polytopes

Uniform polyhedra

There exist four regular polyhedra that are not convex, called Kepler–Poinsot polyhedra. These all have icosahedral symmetry and may be obtained as stellations of the dodecahedron and the icosahedron.

The next most regular convex polyhedra after the Platonic solids are the cuboctahedron, which is a rectification of the cube and the octahedron, and the icosidodecahedron, which is a rectification of the dodecahedron and the icosahedron (the rectification of the self-dual tetrahedron is a regular octahedron). These are both quasi-regular, meaning that they are vertex- and edge-uniform and have regular faces, but the faces are not all congruent (coming in two different classes). They form two of the thirteen Archimedean solids, which are the convex uniform polyhedra with polyhedral symmetry. Their duals, the rhombic dodecahedron and rhombic triacontahedron, are edge- and face-transitive, but their faces are not regular and their vertices come in two types each; they are two of the thirteen Catalan solids.

The uniform polyhedra form a much broader class of polyhedra. These figures are vertex-uniform and have one or more types of regular or star polygons for faces. These include all the polyhedra mentioned above together with an infinite set of prisms, an infinite set of antiprisms, and 53 other non-convex forms.

The Johnson solids are convex polyhedra which have regular faces but are not uniform.
Among them are five of the eight convex deltahedra, which have identical, regular faces (all equilateral triangles) but are not uniform. (The other three convex deltahedra are the Platonic tetrahedron, octahedron, and icosahedron.)

Regular tessellations

The three regular tessellations of the plane are closely related to the Platonic solids. Indeed, one can view the Platonic solids as regular tessellations of the sphere. This is done by projecting each solid onto a concentric sphere. The faces project onto regular spherical polygons which exactly cover the sphere. Spherical tilings provide two additional infinite sets of regular tilings, the hosohedra, {2,n}, with 2 vertices at the poles and lune faces, and the dual dihedra, {n,2}, with 2 hemispherical faces and regularly spaced vertices on the equator. Such tessellations would be degenerate in true 3D space as polyhedra.

Every regular tessellation of the sphere is characterized by a pair of integers {p, q} with 1/p + 1/q > 1/2. Likewise, a regular tessellation of the plane is characterized by the condition 1/p + 1/q = 1/2. There are three possibilities: the square tiling {4,4}, the triangular tiling {3,6}, and the hexagonal tiling {6,3}. In a similar manner, one can consider regular tessellations of the hyperbolic plane. These are characterized by the condition 1/p + 1/q < 1/2. There is an infinite family of such tessellations. (A small computational sketch of this three-way classification follows at the end of this section.)

Higher dimensions

In more than three dimensions, polyhedra generalize to polytopes, with higher-dimensional convex regular polytopes being the equivalents of the three-dimensional Platonic solids. In the mid-19th century the Swiss mathematician Ludwig Schläfli discovered the four-dimensional analogues of the Platonic solids, called convex regular 4-polytopes. There are exactly six of these figures; five are analogous to the Platonic solids: the 5-cell as {3,3,3}, the 16-cell as {3,3,4}, the 600-cell as {3,3,5}, the tesseract as {4,3,3}, and the 120-cell as {5,3,3}, and a sixth one, the self-dual 24-cell, {3,4,3}.

In all dimensions higher than four, there are only three convex regular polytopes: the simplex as {3,3,...,3}, the hypercube as {4,3,...,3}, and the cross-polytope as {3,3,...,4}. In three dimensions, these coincide with the tetrahedron as {3,3}, the cube as {4,3}, and the octahedron as {3,4}.
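The three-way split referenced above comes down to comparing 1/p + 1/q with 1/2; exact rational arithmetic avoids floating-point edge cases at the boundary. A hedged Python sketch (the function name is invented for this example):

from fractions import Fraction

def classify(p: int, q: int) -> str:
    s = Fraction(1, p) + Fraction(1, q)
    if s > Fraction(1, 2):
        return "sphere (Platonic solid)"
    if s == Fraction(1, 2):
        return "Euclidean plane tiling"
    return "hyperbolic plane tiling"

print(classify(3, 5))  # sphere (icosahedron)
print(classify(4, 4))  # Euclidean plane (square tiling)
print(classify(3, 7))  # hyperbolic plane tiling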
Mathematics
Three-dimensional space
null
23924
https://en.wikipedia.org/wiki/Passenger%20pigeon
Passenger pigeon
The passenger pigeon or wild pigeon (Ectopistes migratorius) is an extinct species of pigeon that was endemic to North America. Its common name is derived from the French word passager, meaning "passing by", due to the migratory habits of the species. The scientific name also refers to its migratory characteristics. The morphologically similar mourning dove (Zenaida macroura) was long thought to be its closest relative, and the two were at times confused, but genetic analysis has shown that the genus Patagioenas is more closely related to it than the Zenaida doves.

The passenger pigeon was sexually dimorphic in size and coloration. The male was in length, mainly gray on the upperparts, lighter on the underparts, with iridescent bronze feathers on the neck, and black spots on the wings. The female was , and was duller and browner than the male overall. The juvenile was similar to the female, but without iridescence. It mainly inhabited the deciduous forests of eastern North America and was also recorded elsewhere, but bred primarily around the Great Lakes. The pigeon migrated in enormous flocks, constantly searching for food, shelter, and breeding grounds, and was once the most abundant bird in North America, numbering around 3 billion, and possibly up to 5 billion. A very fast flyer, the passenger pigeon could reach a speed of . The bird fed mainly on mast, and also fruits and invertebrates. It practiced communal roosting and communal breeding, and its extreme gregariousness may have been linked with searching for food and predator satiation.

Passenger pigeons were hunted by Native Americans, but hunting intensified after the arrival of Europeans, particularly in the 19th century. Pigeon meat was commercialized as cheap food, resulting in hunting on a massive scale for many decades. There were several other factors contributing to the decline and subsequent extinction of the species, including shrinking of the large breeding populations necessary for preservation of the species and widespread deforestation, which destroyed its habitat. A slow decline between about 1800 and 1870 was followed by a rapid decline between 1870 and 1890. In 1900, the last confirmed wild bird was shot in southern Ohio. The last captive birds were divided into three groups around the turn of the 20th century, some of which were photographed alive. Martha, thought to be the last passenger pigeon, died on September 1, 1914, at the Cincinnati Zoo. The eradication of the species is a notable example of anthropogenic extinction.

Taxonomy

Swedish naturalist Carl Linnaeus coined the binomial name Columba macroura for both the mourning dove and the passenger pigeon in the 1758 edition of his work Systema Naturae (the starting point of biological nomenclature), wherein he appears to have considered the two identical. This composite description cited accounts of these birds in two pre-Linnean books. One of these was Mark Catesby's description of the passenger pigeon, which was published in his 1731 to 1743 work Natural History of Carolina, Florida and the Bahama Islands, which referred to this bird as Palumbus migratorius, and was accompanied by the earliest published illustration of the species. Catesby's description was combined with the 1743 description of the mourning dove by George Edwards, who used the name C. macroura for that bird. There is nothing to suggest Linnaeus ever saw specimens of these birds himself, and his description is thought to be fully derivative of these earlier accounts and their illustrations.
In his 1766 edition of Systema Naturae, Linnaeus dropped the name C. macroura, and instead used the name C. migratoria for the passenger pigeon, and C. carolinensis for the mourning dove. In the same edition, Linnaeus also named C. canadensis, based on Turtur canadensis, as used by Mathurin Jacques Brisson in 1760. Brisson's description was later shown to have been based on a female passenger pigeon. In 1827, William Swainson moved the passenger pigeon from the genus Columba to the new monotypic genus Ectopistes, due in part to the length of the wings and the wedge shape of the tail. In 1906 Outram Bangs suggested that because Linnaeus had wholly copied Catesby's text when coining C. macroura, this name should apply to the passenger pigeon, as E. macroura. In 1918 Harry C. Oberholser suggested that C. canadensis should take precedence over C. migratoria (as E. canadensis), as it appeared on an earlier page in Linnaeus' book. In 1952 Francis Hemming proposed that the International Commission on Zoological Nomenclature (ICZN) secure the specific name macroura for the mourning dove, and the name migratorius for the passenger pigeon, since this was the intended use by the authors on whose work Linnaeus had based his description. This was accepted by the ICZN, which used its plenary powers to designate the species for the respective names in 1955.

Evolution

The passenger pigeon was a member of the pigeon and dove family, Columbidae. The oldest known fossil of the genus is an isolated humerus (USNM 430960) known from the Lee Creek Mine in North Carolina in sediments belonging to the Yorktown Formation, dating to the Zanclean stage of the Pliocene, between 5.3 and 3.6 million years ago. Its closest living relatives were long thought to be the Zenaida doves, based on morphological grounds, particularly the physically similar mourning dove (now Z. macroura). It was even suggested that the mourning dove belonged to the genus Ectopistes and was listed as E. carolinensis by some authors, including Thomas Mayo Brewer. The passenger pigeon was supposedly descended from Zenaida pigeons that had adapted to the woodlands on the plains of central North America. The passenger pigeon differed from the species in the genus Zenaida in being larger, lacking a facial stripe, being sexually dimorphic, and having iridescent neck feathers and a smaller clutch.

In a 2002 study by American geneticist Beth Shapiro et al., museum specimens of the passenger pigeon were included in an ancient DNA analysis for the first time (in a paper focusing mainly on the dodo), and it was found to be the sister taxon of the cuckoo-dove genus Macropygia. The Zenaida doves were instead shown to be related to the quail-doves of the genus Geotrygon and the Leptotila doves. A more extensive 2010 study instead showed that the passenger pigeon was most closely related to the New World Patagioenas pigeons, including the band-tailed pigeon (P. fasciata) of western North America, which are related to the Southeast Asian species in the genera Turacoena, Macropygia, and Reinwardtoena. This clade is also related to the Columba and Streptopelia doves of the Old World (collectively termed the "typical pigeons and doves"). The authors of the study suggested that the ancestors of the passenger pigeon may have colonized the New World from Southeast Asia by flying across the Pacific Ocean, or perhaps across Beringia in the north.
In a 2012 study, the nuclear DNA of the passenger pigeon was analyzed for the first time, and its relationship with the Patagioenas pigeons was confirmed. In contrast to the 2010 study, these authors suggested that their results could indicate that the ancestors of the passenger pigeon and its Old World relatives may have originated in the Neotropical region of the New World. The cladogram below follows the 2012 DNA study showing the position of the passenger pigeon among its closest relatives:

DNA in old museum specimens is often degraded and fragmentary, and passenger pigeon specimens have been used in various studies to discover improved methods of analyzing and assembling genomes from such material. DNA samples are often taken from the toe pads of bird skins in museums, as this can be done without causing significant damage to valuable specimens. The passenger pigeon had no known subspecies. Hybridization occurred between the passenger pigeon and the Barbary dove (Streptopelia risoria) in the aviary of Charles Otis Whitman (who owned many of the last captive birds around the turn of the 20th century, and kept them with other pigeon species), but the offspring were infertile.

Etymology

The genus name, Ectopistes, translates as "moving about" or "wandering", while the specific name, migratorius, indicates its migratory habits. The full binomial can thus be translated as "migratory wanderer". The English common name "passenger pigeon" derives from the French word passager, which means "to pass by" in a fleeting manner. While the pigeon was extant, the name "passenger pigeon" was used interchangeably with "wild pigeon". The bird also gained some less-frequently used names, including blue pigeon, merne rouck pigeon, wandering long-tailed dove, and wood pigeon. In the 18th century, the passenger pigeon was known as tourte in New France (in modern Canada), but to the French in Europe it was known as tourtre. In modern French, the bird is known as tourte voyageuse or pigeon migrateur, among other names.

In the Native American Algonquian languages, the pigeon was called amimi by the Lenape, by the Ojibwe, and by the Kaskaskia Illinois. Other names in indigenous American languages include in Mohawk, and , or "lost dove", in Choctaw. The Seneca people called the pigeon , meaning "big bread", as it was a source of food for their tribes. Chief Simon Pokagon of the Potawatomi stated that his people called the pigeon , and that the Europeans did not adopt native names for the bird, as it reminded them of their domesticated pigeons, instead calling them "wild" pigeons, as they called the native peoples "wild" men.

Description

The passenger pigeon was sexually dimorphic in size and coloration. It weighed between . The adult male was about in length. It had a bluish-gray head, nape, and hindneck. On the sides of the neck and the upper mantle were iridescent display feathers that have variously been described as being a bright bronze, violet, or golden-green, depending on the angle of the light. The upper back and wings were a pale or slate gray tinged with olive brown, that turned into grayish-brown on the lower wings. The lower back and rump were a dark blue-gray that became grayish-brown on the upper tail-covert feathers. The greater and median wing-covert feathers were pale gray, with a small number of irregular black spots near the end. The primary and secondary feathers of the wing were a blackish-brown with a narrow white edge on the outer side of the secondaries.
The two central tail feathers were brownish gray, and the rest were white. The tail pattern was distinctive as it had white outer edges with blackish spots that were prominently displayed in flight. The lower throat and breast were richly pinkish-rufous, grading into a paler pink further down, and into white on the abdomen and undertail covert feathers. The undertail coverts also had a few black spots. The bill was black, while the feet and legs were a bright coral red. It had a carmine-red iris surrounded by a narrow purplish-red eye-ring. The wing of the male measured , the tail , the bill , and the tarsus was .

The adult female passenger pigeon was slightly smaller than the male at in length. It was duller than the male overall, and was a grayish-brown on the forehead, crown, and nape down to the scapulars, and the feathers on the sides of the neck had less iridescence than those of the male. The lower throat and breast were a buff-gray that developed into white on the belly and undertail-coverts. It was browner on the upperparts and paler buff brown and less rufous on the underparts than the male. The wings, back, and tail were similar in appearance to those of the male except that the outer edges of the primary feathers were edged in buff or rufous buff. The wings had more spotting than those of the male. The tail was shorter than that of the male, and the legs and feet were a paler red. The iris was orange red, with a grayish blue, naked orbital ring. The wing of the female was , the tail , the bill , and the tarsus was .

The juvenile passenger pigeon was similar in plumage to the adult female, but lacked the spotting on the wings, and was a darker brownish-gray on the head, neck, and breast. The feathers on the wings had pale gray fringes (also described as white tips), giving it a scaled look. The secondaries were brownish-black with pale edges, and the tertial feathers had a rufous wash. The primaries were also edged with a rufous-brown color. The neck feathers had no iridescence. The legs and feet were dull red, and the iris was brownish, and surrounded by a narrow carmine ring. The plumage of the sexes was similar during their first year.

Of the hundreds of surviving skins, only one appears to be aberrant in color: an adult female from the collection of Walter Rothschild, Natural History Museum at Tring. It is a washed brown on the upper parts, wing coverts, secondary feathers, and tail (where it would otherwise have been gray), and white on the primary feathers and underparts. The normally black spots are brown, and it is pale gray on the head, lower back, and upper-tail covert feathers, yet the iridescence is unaffected. The brown mutation is a result of a reduction in eumelanin, due to incomplete synthesis (oxidation) of this pigment. This sex-linked mutation is common in female wild birds, but it is thought the white feathers of this specimen are instead the result of bleaching due to exposure to sunlight.

The passenger pigeon was physically adapted for speed, endurance, and maneuverability in flight, and has been described as having a streamlined version of the typical pigeon shape, such as that of the generalized rock dove (Columba livia). The wings were very long and pointed, and measured from the wing-chord to the primary feathers, and to the secondaries. The tail, which accounted for much of its overall length, was long and wedge-shaped (or graduated), with two central feathers longer than the rest. The body was slender and narrow, and the head and neck were small.
The internal anatomy of the passenger pigeon has rarely been described. Robert W. Shufeldt found little to differentiate the bird's osteology from that of other pigeons when examining a male skeleton in 1914, but Julian P. Hume noted several distinct features in a more detailed 2015 description. The pigeon's particularly large breast muscles indicated powerful flight (musculus pectoralis major for downstroke and the smaller musculus supracoracoideus for upstroke). The coracoid bone (which connects the scapula, furcula, and sternum) was large relative to the size of the bird, with straighter shafts and more robust articular ends than in other pigeons. The furcula had a sharper V-shape and was more robust, with expanded articular ends. The scapula was long, straight, and robust, and its distal end was enlarged. The sternum was very large and robust compared to that of other pigeons; its keel was deep. The overlapping uncinate processes, which stiffen the ribcage, were very well developed. The wing bones (humerus, radius, ulna, and carpometacarpus) were short but robust compared to other pigeons. The leg bones were similar to those of other pigeons.

Vocalizations

The noise produced by flocks of passenger pigeons was described as deafening, audible for miles away, and the bird's voice as loud, harsh, and unmusical. It was also described by some as clucks, twittering, and cooing, and as a series of low notes, instead of an actual song. The birds apparently made croaking noises when building nests, and bell-like sounds when mating. During feeding, some individuals would give alarm calls when facing a threat, and the rest of the flock would join the sound while taking off.

In 1911, American behavioral scientist Wallace Craig published an account of the gestures and sounds of this species as a series of descriptions and musical notations, based on observation of C. O. Whitman's captive passenger pigeons in 1903. Craig compiled these records to assist in identifying potential survivors in the wild (as the physically similar mourning doves could otherwise be mistaken for passenger pigeons), while noting this "meager information" was likely all that would be left on the subject. According to Craig, one call was a simple harsh "keck" that could be given twice in succession with a pause in between. This was said to be used to attract the attention of another pigeon. Another call was a more frequent and variable scolding. This sound was described as "kee-kee-kee-kee" or "tete! tete! tete!", and was used to call either to its mate or towards other creatures it considered to be enemies. One variant of this call, described as a long, drawn-out "tweet", could be used to call down a flock of passenger pigeons passing overhead, which would then land in a nearby tree. "Keeho" was a soft cooing that, while followed by louder "keck" notes or scolding, was directed at the bird's mate. A nesting passenger pigeon would also give off a stream of at least eight mixed notes that were both high and low in tone and ended with "keeho". Overall, female passenger pigeons were quieter and called infrequently. Craig suggested that the loud, strident voice and "degenerated" musicality was the result of living in populous colonies where only the loudest sounds could be heard.
Distribution and habitat

The passenger pigeon was found across most of North America east of the Rocky Mountains, from the Great Plains to the Atlantic coast in the east, to the south of Canada in the north, and the north of Mississippi in the southern United States, coinciding with its primary habitat, the eastern deciduous forests. Within this range, it constantly migrated in search of food and shelter. It is unclear if the birds favored particular trees and terrain, but they were possibly not restricted to one type, as long as their numbers could be supported.

It originally bred from the southern parts of eastern and central Canada south to eastern Kansas, Oklahoma, Mississippi, and Georgia in the United States, but the primary breeding range was in southern Ontario and the Great Lakes states south through states north of the Appalachian Mountains. Though the western forests were ecologically similar to those in the east, these were occupied by band-tailed pigeons, which may have kept out the passenger pigeons through competitive exclusion. The passenger pigeon wintered from Arkansas, Tennessee, and North Carolina south to Texas, the Gulf Coast, and northern Florida, though flocks occasionally wintered as far north as southern Pennsylvania and Connecticut. It preferred to winter in large swamps, particularly those with alder trees; if swamps were not available, forested areas, particularly with pine trees, were favored roosting sites.

There were also sightings of passenger pigeons outside of its normal range, including in several Western states, Bermuda, Cuba, and Mexico, particularly during severe winters. It has been suggested that some of these extralimital records may have been due to the paucity of observers rather than the actual extent of passenger pigeons; North America was then unsettled country, and the bird may have appeared anywhere on the continent except for the far west. There were also records of stragglers in Scotland, Ireland, and France, although these birds may have been escaped captives, or the records incorrect. More than 130 passenger pigeon fossils have been found scattered across 25 US states, including in the La Brea Tar Pits of California. These records date as far back as 100,000 years ago in the Pleistocene era, during which the pigeon's range extended to several western states that were not a part of its modern range. The abundance of the species in these regions and during this time is unknown.

Ecology and behavior

The passenger pigeon was nomadic, constantly migrating in search of food, shelter, or nesting grounds. In his 1831 Ornithological Biography, American naturalist and artist John James Audubon described a migration he observed in 1813 as follows:

These flocks were frequently described as being so dense that they blackened the sky and as having no sign of subdivisions. The flocks ranged from only above the ground in windy conditions to as high as . These migrating flocks were typically in narrow columns that twisted and undulated, and they were reported as being in nearly every conceivable shape. A skilled flyer, the passenger pigeon is estimated to have averaged during migration. It flew with quick, repeated flaps that increased the bird's velocity the closer the wings got to the body. It was equally adept and quick flying through a forest as through open space. A flock was also adept at following the lead of the pigeon in front of it, and flocks swerved together to avoid a predator.
When landing, the pigeon flapped its wings repeatedly before raising them at the moment of landing. The pigeon was awkward when on the ground, and moved around with jerky, alert steps.

The passenger pigeon was one of the most social of all land birds. Estimated to have numbered three to five billion at the height of its population, it may have been the most numerous bird on Earth; researcher Arlie W. Schorger believed that it accounted for between 25 and 40 percent of the total land bird population in the United States. The passenger pigeon's historic population is roughly the equivalent of the number of birds that overwinter in the United States every year in the early 21st century. Even within their range, the size of individual flocks could vary greatly. In November 1859, Henry David Thoreau, writing in Concord, Massachusetts, noted that "quite a little flock of [passenger] pigeons bred here last summer," while only seven years later, in 1866, one flock in southern Ontario was described as being wide and long, took 14 hours to pass, and held in excess of 3.5 billion birds. Such a number would likely represent a large fraction of the entire population at the time, or perhaps all of it. Most estimations of numbers were based on single migrating colonies, and it is unknown how many of these existed at a given time. American writer Christopher Cokinos has suggested that if the birds flew single file, they would have stretched around the Earth 22 times.

A 2014 genetic study (based on coalescent theory and on "sequences from most of the genome" of three individual passenger pigeons) suggested that the passenger pigeon population experienced dramatic fluctuations across the last million years, due to their dependence on availability of mast (which itself fluctuates). The study suggested the bird was not always abundant, mainly persisting at around 1/10,000 of the several billions estimated in the 1800s, with vastly larger numbers present during outbreak phases. Some early accounts also suggest that the appearance of flocks in great numbers was an irregular occurrence. These large fluctuations in population may have been the result of a disrupted ecosystem and have consisted of outbreak populations much larger than those common in pre-European times. The authors of the 2014 genetic study note that a similar analysis of the human population size arrives at an "effective population size" of between 9,000 and 17,000 individuals (or approximately 1/550,000th of the peak total human population size of 7 billion cited in the study).

For a 2017 genetic study, the authors sequenced the genomes of two additional passenger pigeons, as well as analyzing the mitochondrial DNA of 41 individuals. This study found evidence that the passenger-pigeon population had been stable for at least the previous 20,000 years. The study also found that the size of the passenger pigeon population over that time period was larger than that found in the 2014 genetic study. However, the 2017 study's "conservative" estimate of an "effective population size" of 13 million birds is still only about 1/300th of the bird's estimated historic population of approximately 3–5 billion before their "19th century decline and eventual extinction."
A similar study inferring human population size from genetics (published in 2008, and using human mitochondrial DNA and Bayesian coalescent inference methods) showed considerable accuracy in reflecting overall patterns of human population growth as compared to data deduced by other means, though the study arrived at a human effective population size (as of 1600 AD, for Africa, Eurasia, and the Americas combined) that was roughly 1/1000 of the census population estimate for the same time and area based on anthropological and historical evidence.

The 2017 passenger-pigeon genetic study also found that, in spite of its large population size, the genetic diversity was very low in the species. The authors suggested that this was a side-effect of natural selection, which theory and previous empirical studies suggested could have a particularly great impact on species with very large and cohesive populations. Natural selection can reduce genetic diversity over extended regions of a genome through 'selective sweeps' or 'background selection'. The authors found evidence of a faster rate of adaptive evolution and faster removal of harmful mutations in passenger pigeons compared to band-tailed pigeons, which are some of passenger pigeons' closest living relatives. They also found evidence of lower genetic diversity in regions of the passenger pigeon genome that have lower rates of genetic recombination. This is expected if natural selection, via selective sweeps or background selection, reduced their genetic diversity, but not if population instability did. The study concluded that the earlier suggestion that population instability contributed to the extinction of the species was invalid. Evolutionary biologist A. Townsend Peterson said of the two passenger-pigeon genetic studies (published in 2014 and 2017) that, though the idea of extreme fluctuations in the passenger-pigeon population was "deeply entrenched," he was persuaded by the 2017 study's argument, due to its "in-depth analysis" and "massive data resources."

A communally roosting species, the passenger pigeon chose roosting sites that could provide shelter and enough food to sustain their large numbers for an indefinite period. The time spent at one roosting site may have depended on the extent of human persecution, weather conditions, or other, unknown factors. Roosts ranged in size and extent, from a few acres to or greater. Some roosting areas would be reused for subsequent years, others would only be used once. The passenger pigeon roosted in such numbers that even thick tree branches would break under the strain. The birds frequently piled on top of each other's backs to roost. They rested in a slumped position that hid their feet. They slept with their bills concealed by the feathers in the middle of the breast while holding their tail at a 45-degree angle. Dung could accumulate under a roosting site to a depth of over .

If the pigeon became alert, it would often stretch out its head and neck in line with its body and tail, then nod its head in a circular pattern. When aggravated by another pigeon, it raised its wings threateningly, but passenger pigeons almost never actually fought. The pigeon bathed in shallow water, and afterwards lay on each side in turn and raised the opposite wing to dry it. The passenger pigeon drank at least once a day, typically at dawn, by fully inserting its bill into lakes, small ponds, and streams.
Pigeons were seen perching on top of each other to access water, and if necessary, the species could alight on open water to drink. One of the primary causes of natural mortality was the weather, and every spring many individuals froze to death after migrating north too early. In captivity, a passenger pigeon was capable of living at least 15 years; Martha, the last known living passenger pigeon, was at least 17 and possibly as old as 29 when she died. It is undocumented how long wild pigeons lived.

The bird is believed to have played a significant ecological role in the composition of pre-Columbian forests of eastern North America. For instance, while the passenger pigeon was extant, forests were dominated by white oaks. This species germinated in the fall, therefore making its seeds almost useless as a food source during the spring breeding season, while red oaks produced acorns during the spring, which the pigeons devoured. The absence of the passenger pigeon's seed consumption may have contributed to the modern dominance of red oaks. Due to the immense amount of dung present at roosting sites, few plants grew for years after the pigeons left. Also, the accumulation of flammable debris (such as limbs broken from trees and foliage killed by excrement) at these sites may have increased both the frequency and intensity of forest fires, which would have favored fire-tolerant species, such as bur oaks, black oaks, and white oaks over less fire-tolerant species, such as red oaks, thus helping to explain the change in the composition of eastern forests since the passenger pigeon's extinction (from white oaks, bur oaks, and black oaks predominating in presettlement forests, to the "dramatic expansion" of red oaks today).

A study released in 2018 concluded that the "vast numbers" of passenger pigeons present for "tens of thousands of years" would have influenced the evolution of the tree species whose seeds they ate. Those masting trees that produced seeds during the spring nesting season (such as red oaks) evolved so that some portion of their seeds would be too large for passenger pigeons to swallow (thus allowing some of their seeds to escape predation and grow new trees). White oak, in contrast, with its seeds sized consistently in the edible range, evolved an irregular masting pattern that took place in the fall, when fewer passenger pigeons would have been present. The study further concluded that this allowed white oaks to be the dominant tree species in regions where passenger pigeons were commonly present in the spring.

With the large numbers in passenger pigeon flocks, the excrement they produced was enough to destroy surface-level vegetation at long-term roosting sites, while adding high quantities of nutrients to the ecosystem. Because of this, along with the breaking of tree limbs under their collective weight and the great amount of mast they consumed, passenger pigeons are thought to have influenced both the structure of eastern forests and the composition of the species present there. Due to these influences, some ecologists have considered the passenger pigeon a keystone species, with the disappearance of their vast flocks leaving a major gap in the ecosystem. Their role in creating forest disturbances has been linked to greater vertebrate diversity in forests by creating more niches for animals to fill.
To help fill that ecological gap, it has been proposed that modern land managers attempt to replicate some of their effects on the ecosystem by creating openings in forest canopies to provide more understory light. The American chestnut, whose trees provided much of the mast on which the passenger pigeon fed, was itself almost driven to extinction by an imported Asian fungus (chestnut blight) around 1905. As many as thirty billion trees are thought to have died as a result in the following decades, but this did not affect the passenger pigeon, which was already extinct in the wild at the time. After the disappearance of the passenger pigeon, the population of another acorn-feeding species, the white-footed mouse, grew exponentially because of the increased availability of the seeds of the oak, beech, and chestnut trees. It has been speculated that the extinction of passenger pigeons may have increased the prevalence of tick-borne Lyme disease in modern times, as white-footed mice are the reservoir hosts of Borrelia burgdorferi.

Diet

Beeches and oaks produced the mast needed to support nesting and roosting flocks. The passenger pigeon changed its diet depending on the season. In the fall, winter, and spring, it mainly ate beechnuts, acorns, and chestnuts. During the summer, berries and softer fruits, such as blueberries, grapes, cherries, mulberries, pokeberries, and bunchberry, became the main objects of its consumption. It also ate worms, caterpillars, snails, and other invertebrates, particularly while breeding. It took advantage of cultivated grains, particularly buckwheat, when it found them. It was especially fond of salt, which it ingested either from brackish springs or salty soil.

Mast occurs in large quantities in different places at different times, and rarely in consecutive years, which is one of the reasons why the large flocks were constantly on the move. As mast is produced during autumn, there would have to be a large amount of it left by the summer, when the young were reared. It is unknown how they located this fluctuating food source, but their eyesight and flight powers helped them survey large areas for places that could provide food enough for a temporary stay.

The passenger pigeon foraged in flocks of tens or hundreds of thousands of individuals that overturned leaves, dirt, and snow with their bills in search of food. One observer described the motion of such a flock in search of mast as having a rolling appearance, as birds in the back of the flock flew overhead to the front of the flock, dropping leaves and grass in flight. The flocks had wide leading edges to better scan the landscape for food sources. When nuts on a tree loosened from their caps, a pigeon would land on a branch and, while flapping vigorously to stay balanced, grab the nut, pull it loose from its cap, and swallow it whole. Collectively, a foraging flock was capable of removing nearly all fruits and nuts from their path. Birds in the back of the flock flew to the front in order to pick over unsearched ground; however, birds never ventured far from the flock and hurried back if they became isolated. It is believed that the pigeons used social cues to identify abundant sources of food, and a flock of pigeons that saw others feeding on the ground often joined them. During the day, the birds left the roosting forest to forage on more open land.
They regularly flew away from their roost daily in search of food, and some pigeons reportedly traveled as far as , leaving the roosting area early and returning at night. The passenger pigeon's very elastic mouth and throat and a joint in the lower bill enabled it to swallow acorns whole. It could store large quantities of food in its crop, which could expand to about the size of an orange, causing the neck to bulge and allowing a bird quickly to grab any food it discovered. The crop was described as being capable of holding at least 17 acorns or 28 beechnuts, 11 grains of corn, 100 maple seeds, plus other material; it was estimated that a passenger pigeon needed to eat about of food a day to survive. If shot, a pigeon with a crop full of nuts would fall to the ground with a sound described as like the rattle of a bag of marbles. After feeding, the pigeons perched on branches and digested the food stored in their crop overnight with the aid of a muscular gizzard, which often contained gravel. The pigeon could eat and digest of acorns per day. At the historic population of three billion passenger pigeons, this amounted to of food a day. The pigeon could regurgitate food from its crop when more desirable food became available.

A 2018 study found that the dietary range of the passenger pigeon was restricted to certain sizes of seed, due to the size of its gape. This would have prevented it from eating some of the seeds of trees such as red oaks, black oaks, and the American chestnut. Specifically, the study found that between 13% and 69% of red oak seeds were too large for passenger pigeons to have swallowed, that only a "small proportion" of the seeds of black oaks and American chestnuts were too large for the birds to consume, and that all white oak seeds were sized within an edible range. They also found that seeds would be completely destroyed during digestion, which therefore hindered dispersal of seeds this way. Instead, passenger pigeons may have spread seeds by regurgitation, or after dying.

Reproduction

Other than finding roosting sites, the migrations of the passenger pigeon were connected with finding places appropriate for this communally breeding bird to nest and raise its young. It is not certain how many times a year the birds bred; once seems most likely, but some accounts suggest more. The nesting period lasted around four to six weeks. The flock arrived at a nesting ground around March in southern latitudes, and some time later in more northern areas. The pigeon had no site fidelity, often choosing to nest in a different location each year. The formation of a nesting colony did not necessarily take place until several months after the pigeons arrived on their breeding grounds, typically during late March, April, or May.

The colonies, which were known as "cities", were immense, ranging from to thousands of hectares in size, and were often long and narrow in shape (L-shaped), with a few areas untouched for unknown reasons. Due to the topography, they were rarely continuous. Since no accurate data was recorded, it is not possible to give more than estimates on the size and population of these nesting areas, but most accounts mention colonies containing millions of birds. The largest nesting area ever recorded was in central Wisconsin in 1871; it was reported as covering , with the number of birds nesting there estimated to be around 136,000,000. As well as these "cities", there were regular reports of much smaller flocks or even individual pairs setting up a nesting site.
The birds do not seem to have formed as vast breeding colonies at the periphery of their range.

Courtship took place at the nesting colony. Unlike other pigeons, courtship took place on a branch or perch. The male, with a flourish of the wings, made a "keck" call while near a female. The male then gripped tightly to the branch and vigorously flapped his wings up and down. When the male was close to the female, he then pressed against her on the perch with his head held high and pointing at her. If receptive, the female pressed back against the male. When ready to mate, the pair preened each other. This was followed by the birds billing, in which the female inserted its bill into and clasped the male's bill, shook for a second, and separated quickly while standing next to each other. The male then scrambled onto the female's back and copulated, which was then followed by soft clucking and occasionally more preening. John James Audubon described the courtship of the passenger pigeon as follows:

After observing captive birds, Wallace Craig found that this species did less charging and strutting than other pigeons (as it was awkward on the ground), and thought it probable that no food was transferred during their brief billing (unlike in other pigeons), and he therefore considered Audubon's description partially based on analogy with other pigeons as well as imagination.

Nests were built immediately after pair formation and took two to four days to construct; this process was highly synchronized within a colony. The female chose the nesting site by sitting on it and flicking her wings. The male then carefully selected nesting materials, typically twigs, and handed them to the female over her back. The male then went in search of more nesting material while the female constructed the nest beneath herself. Nests were built between above the ground, though typically above , and were made of 70 to 110 twigs woven together to create a loose, shallow bowl through which the egg could easily be seen. This bowl was then typically lined with finer twigs. The nests were about wide, high, and deep. Though the nest has been described as crude and flimsy compared to those of many other birds, remains of nests could be found at sites where nesting had taken place several years prior. Nearly every tree capable of supporting nests had them, often more than 50 per tree; one hemlock was recorded as holding 317 nests. The nests were placed on strong branches close to the tree trunks. Some accounts state that ground under the nesting area looked as if it had been swept clean, due to all the twigs being collected at the same time, yet this area would also have been covered in dung. As both sexes took care of the nest, the pairs were monogamous for the duration of the nesting.

Generally, the eggs were laid during the first two weeks of April across the pigeon's range. Each female laid its egg immediately or almost immediately after the nest was completed; sometimes the pigeon was forced to lay it on the ground if the nest was not complete. The normal clutch size appears to have been a single egg, but there is some uncertainty about this, as two have also been reported from the same nests. Occasionally, a second female laid its egg in another female's nest, resulting in two eggs being present. The egg was white and oval shaped and averaged in size. If the egg was lost, it was possible for the pigeon to lay a replacement egg within a week.
A whole colony was known to re-nest after a snowstorm forced the birds to abandon their original nesting site. Both parents incubated the egg for 12 to 14 days, with the male incubating it from midmorning to midafternoon and the female incubating it the rest of the time. Upon hatching, the nestling (or squab) was blind and sparsely covered with yellow, hairlike down. The nestling developed quickly and within 14 days weighed as much as its parents. During this brooding period both parents took care of the nestling, with the male attending in the middle of the day and the female at other times. The nestlings were fed crop milk (a substance similar to curd, produced in the crops of the parent birds) exclusively for the first days after hatching. Adult food was gradually introduced after three to six days. After 13 to 15 days, the parents fed the nestling for a last time and then abandoned it, leaving the nesting area en masse. The nestling begged in the nest for a day or two, before climbing from the nest and fluttering to the ground, whereafter it moved around, avoided obstacles, and begged for food from nearby adults. It was another three or four days before it fledged. The entire nesting cycle lasted about 30 days. It is unknown whether colonies re-nested after a successful nesting. The passenger pigeon sexually matured during its first year and bred the following spring. Alfred Russel Wallace, in his historic 1858 paper On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection, used the passenger pigeon as an example of an immensely successful species despite laying fewer eggs than most other birds. Predators and parasites Nesting colonies attracted large numbers of predators, including American minks (Neogale vison), long-tailed weasels (Neogale frenata), American martens (Martes americana), and raccoons (Procyon lotor), which preyed on eggs and nestlings; birds of prey, such as owls, hawks, and eagles, which preyed on nestlings and adults; and wolves (Canis lupus), foxes (Urocyon cinereoargenteus and Vulpes vulpes), bobcats (Lynx rufus), American black bears (Ursus americanus), and cougars (Puma concolor), which preyed on injured adults and fallen nestlings. Hawks of the genus Accipiter and falcons pursued and preyed upon pigeons in flight, which in turn executed complex aerial maneuvers to avoid them; Cooper's hawk (Accipiter cooperii) was known as the "great pigeon hawk" due to its successes, and these hawks allegedly followed migrating passenger pigeons. While many predators were drawn to the flocks, individual pigeons were largely protected due to the sheer size of the flock, and overall little damage could be inflicted on the flock by predation. Despite the number of predators, nesting colonies were so large that they were estimated to have a 90% success rate if not disturbed. After being abandoned and leaving the nest, the very fat juveniles were vulnerable to predators until they were able to fly. The sheer number of juveniles on the ground meant that only a small percentage of them were killed; predator satiation may therefore be one of the reasons for the extremely social habits and communal breeding of the species. Two parasites have been recorded on passenger pigeons. One species of philopterid louse, Columbicola extinctus, was originally thought to have lived on just passenger pigeons and to have become coextinct with them. This was proven inaccurate in 1999 when C. extinctus was rediscovered living on band-tailed pigeons.
This, and the fact that the related louse C. angustus is mainly found on cuckoo-doves, further supports the relation between these pigeons, as the phylogeny of lice broadly mirrors that of their hosts. Another louse, Campanulotes defectus, was thought to have been unique to the passenger pigeon, but is now believed to have been a case of a contaminated specimen, as the species is considered to be the still-extant Campanulotes flavus of Australia. There is no record of a wild pigeon dying of either disease or parasites. Relationship with humans For fifteen thousand years or more before the arrival of Europeans in the Americas, passenger pigeons and Native Americans coexisted in the forests of what would later become the eastern part of the continental United States. A study published in 2008 found that, throughout most of the Holocene, Native American land-use practices greatly influenced forest composition. The regular use of prescribed burn, the girdling of unwanted trees, and the planting and tending of favored trees suppressed the populations of a number of tree species that did not produce nuts, acorns, or fruit, while increasing the populations of numerous tree species that did. In addition, the burning away of forest-floor litter made these foods easier to find, once they had fallen from the trees. Some have argued that such Native American land-use practices increased the populations of various animal species, including the passenger pigeon, by increasing the food available to them, while elsewhere it has been claimed that, by hunting passenger pigeons and competing with them for some kinds of nuts and acorns, Native Americans suppressed their population size. Genetic research may shed some light on this question. A 2017 study of passenger-pigeon DNA found that the passenger-pigeon population size was stable for 20,000 years prior to its 19th-century decline and subsequent extinction, while a 2016 study of ancient Native American DNA found that the Native American population went through a period of rapid expansion, increasing 60-fold, starting about 13–16 thousand years ago. If both of these studies are correct, then a great change in the size of the Native American population had no apparent impact on the size of the passenger-pigeon population. This suggests that the net effect of Native American activities on passenger-pigeon population size was neutral. The passenger pigeon played a religious role for some northern Native American tribes. The Wyandot people (or Huron) believed that every twelve years during the Feast of the Dead, the souls of the dead changed into passenger pigeons, which were then hunted and eaten. Before hunting the juvenile pigeons, the Seneca people made an offering of wampum and brooches to the old passenger pigeons; these were placed in a small kettle or other receptacle by a smoky fire. The Ho-Chunk people considered the passenger pigeon to be the bird of the chief, as they were served whenever the chieftain gave a feast. The Seneca people believed that a white pigeon was the chief of the passenger pigeon colony, and that a Council of Birds decided that the pigeons had to give their bodies to the Seneca because they were the only birds that nested in colonies. The Seneca developed a pigeon dance as a way of showing their gratitude. French explorer Jacques Cartier was the first European to report on passenger pigeons, during his voyage in 1534. 
The bird was subsequently observed and noted by historical figures such as Samuel de Champlain and Cotton Mather. Most early accounts dwell on the vast number of pigeons, the resulting darkened skies, and the enormous numbers of birds hunted (50,000 birds were reportedly sold at a Boston market in 1771). The early colonists thought that large flights of pigeons would be followed by ill fortune or sickness. When the pigeons wintered outside of their normal range, some believed that they would have "a sickly summer and autumn." In the 18th and 19th centuries, various parts of the pigeon were thought to have medicinal properties. The blood was supposed to be good for eye disorders, the powdered stomach lining was used to treat dysentery, and the dung was used to treat a variety of ailments, including headaches, stomach pains, and lethargy. Though they did not last as long as the feathers of a goose, the feathers of the passenger pigeon were frequently used for bedding. Pigeon feather beds were so popular that for a time in Saint-Jérôme, Quebec, every dowry included a bed and pillows made of pigeon feathers. In 1822, one family in Chautauqua County, New York, killed 4,000 pigeons in a day solely for this purpose. The passenger pigeon was featured in the writings of many significant early naturalists, as well as accompanying illustrations. Mark Catesby's 1731 illustration, the first published depiction of this bird, is somewhat crude, according to some later commentators. The original watercolor that the engraving is based on was bought by the British royal family in 1768, along with the rest of Catesby's watercolors. The naturalists Alexander Wilson and John James Audubon both witnessed large pigeon migrations first hand, and published detailed accounts wherein both attempted to deduce the total number of birds involved. The most famous and often reproduced depiction of the passenger pigeon is Audubon's illustration (hand-colored aquatint) in his book The Birds of America, published between 1827 and 1838. Audubon's image has been praised for its artistic qualities, but criticized for its supposed scientific inaccuracies. As Wallace Craig and R. W. Shufeldt (among others) pointed out, the birds are shown perched and billing one above the other, whereas they would instead have done this side by side, the male would be the one passing food to the female, and the male's tail would not be spread. Craig and Shufeldt instead cited illustrations by American artist Louis Agassiz Fuertes and Japanese artist K. Hayashi as more accurate depictions of the bird. Illustrations of the passenger pigeon were often drawn after stuffed birds, and Charles R. Knight is the only "serious" artist known to have drawn the species from life. He did so on at least two occasions; in 1903 he drew a bird possibly in one of the three aviaries with surviving birds, and some time before 1914, he drew Martha, the last individual, in the Cincinnati Zoo. The bird has been written about (including in poems, songs, and fiction) and illustrated by many notable writers and artists, and is depicted in art to this day, for example in Walton Ford's 2002 painting Falling Bough, and National Medal of Arts winner John A. Ruthven's 2014 mural in Cincinnati, which commemorates the 100th anniversary of Martha's death. The "Project Passenger Pigeon" outreach group used the centennial of its extinction to spread awareness about human-induced extinction, and to recognize its relevance in the 21st century.
It has been suggested that the passenger pigeon could be used as a "flagship" species to spread awareness of other threatened but less well-known North American birds. Hunting The passenger pigeon was an important source of food for the people of North America. Native Americans ate pigeons, and tribes near nesting colonies would sometimes move to live closer to them and eat the juveniles, killing them at night with long poles. Many Native Americans were careful not to disturb the adult pigeons, and instead ate only the juveniles as they were afraid that the adults might desert their nesting grounds; in some tribes, disturbing the adult pigeons was considered a crime. Away from the nests, large nets were used to capture adult pigeons, sometimes up to 800 at a time. Low-flying pigeons could be killed by throwing sticks or stones. At one site in Oklahoma, the pigeons leaving their roost every morning flew low enough that the Cherokee could throw clubs into their midst, which caused the lead pigeons to try to turn aside and in the process created a blockade that resulted in a large mass of flying, easily hit pigeons. Among the game birds, passenger pigeons were second only to the wild turkey (Meleagris gallopavo) in terms of importance for the Native Americans living in the southeastern United States. The bird's fat was stored, often in large quantities, and used as butter. Archaeological evidence supports the idea that Native Americans ate the pigeons frequently prior to colonization. What may be the earliest account of Europeans hunting passenger pigeons dates to January 1565, when the French explorer René Laudonnière wrote of killing close to 10,000 of them around Fort Caroline in a matter of weeks; this amounted to about one passenger pigeon per day for each person in the fort. After European colonization, the passenger pigeon was hunted with more intensive methods than the more sustainable ones practiced by the natives. Yet it has also been suggested that the species was rare prior to 1492, and that the subsequent increase in their numbers may be due to the decrease in the Native American population (who, as well as hunting the birds, competed with them for mast) caused by European immigration, and the supplementary food (agricultural crops) the immigrants imported (a theory for which Joel Greenberg offered a detailed rebuttal in his book, A Feathered River Across the Sky). The passenger pigeon was of particular value on the frontier, and some settlements counted on its meat to support their population. The flavor of the flesh of passenger pigeons varied depending on how they were prepared. In general, juveniles were thought to taste the best, followed by birds fattened in captivity and birds caught in September and October. It was common practice to fatten trapped pigeons before eating them or storing their bodies for winter. Dead pigeons were commonly stored by salting or pickling the bodies; other times, only the breasts of the pigeons were kept, in which case they were typically smoked. In the early 19th century, commercial hunters began netting and shooting the birds to sell as food in city markets, and even as pig fodder. Once pigeon meat became popular, commercial hunting started on a prodigious scale. Passenger pigeons were shot with such ease that many did not consider them to be a game bird, as an amateur hunter could easily bring down six with one shotgun blast; a particularly good shot with both barrels of a shotgun at a roost could kill 61 birds.
The birds were frequently shot either in flight during migration or immediately after, when they commonly perched in dead, exposed trees. Hunters only had to shoot toward the sky without aiming, and many pigeons would be brought down. The pigeons proved difficult to shoot head-on, so hunters typically waited for the flocks to pass overhead before shooting them. Trenches were sometimes dug and filled with grain so that a hunter could shoot the pigeons along this trench. Hunters largely outnumbered trappers, and hunting passenger pigeons was a popular sport for young boys. In 1871, a single seller of ammunition provided three tons of powder and 16 tons (32,000 lb) of shot during a nesting. In the latter half of the 19th century, thousands of passenger pigeons were captured for use in the sports shooting industry. The pigeons were used as living targets in shooting tournaments, such as "trap-shooting", the controlled release of birds from special traps. Competitions could also consist of people standing regularly spaced while trying to shoot down as many birds as possible in a passing flock. The pigeon was considered so numerous that 30,000 birds had to be killed to claim the prize in one competition. Humans used a wide variety of other methods to capture and kill passenger pigeons. Nets were propped up to allow passenger pigeons entry, and then closed by knocking loose the stick that supported the opening, trapping twenty or more pigeons inside. Tunnel nets were also used to great effect, and one particularly large net was capable of catching 3,500 pigeons at a time. These nets were used by many farmers on their own property as well as by professional trappers. Food would be placed on the ground near the nets to attract the pigeons. Decoy or "stool pigeons" (sometimes blinded by having their eyelids sewn together) were tied to a stool. When a flock of pigeons passed by, a cord would be pulled that made the stool pigeon flutter to the ground, making it seem as if it had found food, and the flock would be lured into the trap. Salt was also frequently used as bait, and many trappers set up near salt springs. At least one trapper used alcohol-soaked grain as bait to intoxicate the birds and make them easier to kill. Another method of capture was to hunt at a nesting colony, particularly during the period of a few days after the adult pigeons abandoned their nestlings, but before the nestlings could fly. Some hunters used sticks to poke the nestlings out of the nest, while others shot the bottom of a nest with a blunt arrow to dislodge the pigeon. Others cut down a nesting tree in such a way that when it fell, it would also hit a second nesting tree and dislodge the pigeons within. In one case, of large trees were speedily cut down to get birds, and such methods were common. A severe method was to set fire to the base of a tree nested with pigeons; the adults would flee and the juveniles would fall to the ground. Sulfur was sometimes burned beneath the nesting tree to suffocate the birds, which fell out of the tree in a weakened state. By the mid-19th century, railroads had opened new opportunities for pigeon hunters. While it was once extremely difficult to ship masses of pigeons to eastern cities, railway access permitted pigeon hunting to become commercialized. An extensive telegraph system was introduced in the 1860s, which improved communication across the United States, making it easier to spread information about the whereabouts of pigeon flocks. 
After being opened up to the railroads, the town of Plattsburgh, New York, is estimated to have shipped 1.8 million pigeons to larger cities in 1851 alone at a price of 31 to 56 cents a dozen. By the late 19th century, the trade of passenger pigeons had become commercialized. Large commission houses employed trappers (known as "pigeoners") to follow the flocks of pigeons year-round. A single hunter is reported to have sent three million birds to eastern cities during his career. In 1874, at least 600 people were employed as pigeon trappers, a number which grew to 1,200 by 1881. Pigeons were caught in such numbers that by 1876, shipments of dead pigeons were unable to recoup the costs of the barrels and ice needed to ship them. The price of a barrel full of pigeons dropped to below fifty cents, due to overstocked markets. Passenger pigeons were instead kept alive so their meat would be fresh when killed, and sold once their market value had increased. Thousands of birds were kept in large pens, though the bad conditions led many to die from lack of food and water, and by fretting (gnawing) themselves; many rotted away before they could be sold. Hunting of passenger pigeons was documented and depicted in contemporaneous newspapers, wherein various trapping methods and uses were featured. The most often reproduced of these illustrations was captioned "Winter sports in northern Louisiana: shooting wild pigeons", and published in 1875. Passenger pigeons were also seen as agricultural pests, because feeding flocks could destroy entire crops. The bird was described as a "perfect scourge" by some farming communities, and hunters were employed to "wage warfare" on the birds to save grain, as shown in another newspaper illustration from 1867 captioned "Shooting wild pigeons in Iowa". When these "pests" were compared to the bison of the Great Plains, the valuable resource at stake was not the animals themselves but the agricultural crops they consumed. The crops that were eaten were seen as marketable calories, proteins, and nutrients all grown for the wrong species. Decline and conservation attempts The notion that the species could be driven to extinction was alien to the early colonists, because the number of birds did not appear to diminish, and also because the concept of extinction was yet to be defined. The bird seems to have been slowly pushed westward after the arrival of Europeans, becoming scarce or absent in the east, though there were still millions of birds in the 1850s. The population must have been decreasing for many years, though this went unnoticed due to the apparent vast number of birds, which clouded their decline. In 1856, Bénédict Henry Révoil may have been one of the first writers to voice concern about the fate of the passenger pigeon, after witnessing a hunt in 1847. By the 1870s, the decrease in birds was noticeable, especially after the last large-scale nestings and subsequent slaughters of millions of birds in 1874 and 1878. By this time, large nestings only took place in the north, around the Great Lakes. The last large nesting was in Petoskey, Michigan, in 1878 (following one in Pennsylvania a few days earlier), where 50,000 birds were killed each day for nearly five months. The surviving adults attempted a second nesting at new sites, but were killed by professional hunters before they had a chance to raise any young. Scattered nestings were reported into the 1880s, but the birds were now wary, and commonly abandoned their nests if persecuted.
By the time of these last nestings, laws had already been enacted to protect the passenger pigeon, but these proved ineffective, as they were unclearly framed and hard to enforce. H. B. Roney, who witnessed the Petoskey slaughter, led campaigns to protect the pigeon, but was met with resistance, and accusations that he was exaggerating the severity of the situation. Few offenders were prosecuted, mainly some poor trappers, but the large enterprises were not affected. In 1857, a bill was brought before the Ohio State Legislature seeking protection for the passenger pigeon, yet a Select Committee of the Senate filed a report stating that the bird did not need protection, being "wonderfully prolific", and dismissing the suggestion that the species could be destroyed. Public protests against trap-shooting erupted in the 1870s, as the birds were badly treated before and after such contests. Conservationists were ineffective in stopping the slaughter. A bill was passed in the Michigan legislature making it illegal to net pigeons within of a nesting area. In 1897, a bill was introduced in the Michigan legislature asking for a 10-year closed season on passenger pigeons. Similar legal measures were passed and then disregarded in Pennsylvania. The gestures proved futile and, by the mid-1890s, the passenger pigeon had almost completely disappeared, and was probably extinct as a breeding bird in the wild. Small flocks are known to have existed at this point, since large numbers of birds were still being sold at markets. Thereafter, only small groups or individual birds were reported, many of which were shot on sight. Last survivors The last recorded nest and egg in the wild were collected in 1895 near Minneapolis. The last wild individual in Louisiana was discovered among a flock of mourning doves in 1896, and subsequently shot. Many late sightings are thought to be false or due to confusion with mourning doves. The last fully authenticated record of a wild passenger pigeon was near Oakford, Illinois, on March 12, 1901, when a male bird was killed, stuffed, and placed in Millikin University in Decatur, Illinois, where it remains today. This was not discovered until 2014, when writer Joel Greenberg found out the date of the bird's shooting while doing research for his book A Feathered River Across the Sky. Greenberg also pointed out a record of a male shot near Laurel, Indiana, on April 3, 1902, that was stuffed but later destroyed. For many years, the last confirmed wild passenger pigeon was thought to have been shot near Sargents, Pike County, Ohio, on March 24, 1900, when a boy named Press Clay Southworth killed a female bird with a BB gun. The boy did not recognize the bird as a passenger pigeon, but his parents identified it, and sent it to a taxidermist. The specimen, nicknamed "Buttons" due to the buttons used instead of glass eyes, was donated to the Ohio Historical Society by the family in 1915. The reliability of accounts after the Ohio, Illinois, and Indiana birds is in question. Ornithologist Alexander Wetmore claimed that he saw a pair flying near Independence, Kansas, in April 1905. On May 18, 1907, U.S.
President Theodore Roosevelt claimed to have seen a "flock of about a dozen two or three times on the wing" while on retreat at his cabin in Pine Knot, Virginia, and that they lit on a dead tree "in such a characteristically pigeon-like attitude"; this sighting was corroborated by a local gentleman whom he had "rambled around with in the woods a good deal" and whom he found to be "a singularly close observer." In 1910, the American Ornithologists' Union offered a reward of $3,000 for discovering a nest. Most captive passenger pigeons were kept for exploitative purposes, but some were housed in zoos and aviaries. Audubon himself claimed to have brought 350 birds to England in 1830, distributing them among various noblemen, and the species is also known to have been kept at London Zoo. Being common birds, these attracted little interest, until the species became rare in the 1890s. By the turn of the 20th century, the last known captive passenger pigeons were divided into three groups: one in Milwaukee, one in Chicago, and one in Cincinnati. There are claims of a few further individuals having been kept in various places, but these accounts are not considered reliable today. The Milwaukee group was kept by David Whittaker, who began his collection in 1888, and possessed fifteen birds some years later, all descended from a single pair. The Chicago group was kept by Charles Otis Whitman, whose collection began with passenger pigeons bought from Whittaker beginning in 1896. He had an interest in studying pigeons, and kept his passenger pigeons with other pigeon species. Whitman brought his pigeons with him from Chicago to Massachusetts by railcar each summer. By 1897, Whitman had bought all of Whittaker's birds, and upon reaching a maximum of 19 individuals, he gave seven back to Whittaker in 1898. Around this time, a series of photographs were taken of these birds; 24 of the photos survive. Some of these images have been reproduced in various media, copies of which are now kept at the Wisconsin Historical Society. It is unclear exactly where, when, and by whom these photos were taken, but some appear to have been taken in Chicago in 1896, others in Massachusetts in 1898, the latter by a J. G. Hubbard. By 1902, Whitman owned sixteen birds. His pigeons laid many eggs, but few hatched, and many hatchlings died. A newspaper inquiry was published requesting "fresh blood" for the flock, which had now ceased breeding. By 1907, he was down to two female passenger pigeons that died that winter, and was left with two infertile male hybrids, whose subsequent fate is unknown. By this time, only four (all males) of the birds Whitman returned to Whittaker were alive, and these died between November 1908 and February 1909. The Cincinnati Zoo, one of the oldest zoos in the United States, kept passenger pigeons from its beginning in 1875. The zoo kept more than twenty individuals in a cage. Passenger pigeons do not appear to have been kept at the zoo due to their rarity, but to enable guests to have a closer look at a native species. Recognizing the decline of the wild populations, Whitman and the Cincinnati Zoo consistently strove to breed the surviving birds, including attempts at making a rock dove foster passenger pigeon eggs. In 1902, Whitman gave a female passenger pigeon to the zoo; this was possibly the individual later known as Martha, which would become the last living member of the species.
Other sources argue that Martha was hatched at the Cincinnati Zoo, lived there for 25 years, and was the descendant of three pairs of passenger pigeons purchased by the zoo in 1877. It is thought this individual was named Martha because her last cage mate was named George, thereby honoring George Washington and his wife Martha, though it has also been claimed she was named after the mother of a zookeeper's friends. In 1909, Martha and her two male companions at the Cincinnati Zoo became the only known surviving passenger pigeons. One of these males died around April that year, followed by George, the remaining male, on July 10, 1910. It is unknown whether the remains of George were preserved. Martha soon became a celebrity due to her status as an endling, and offers of a $1,000 reward for finding a mate for her brought even more visitors to see her. During her last four years in solitude (her cage was ), Martha became steadily slower and more immobile; visitors would throw sand at her to make her move, and her cage was roped off in response. Martha died of old age on September 1, 1914, and was found lifeless on the floor of her cage. It was claimed that she died at 1 p.m., but other sources suggest she died some hours later. Depending on the source, Martha was between 17 and 29 years old at the time of her death, although 29 is the generally accepted figure. At the time, it was suggested that Martha might have died from an apoplectic stroke, as she had suffered one a few weeks before dying. Her body was frozen into a block of ice and sent to the Smithsonian Institution in Washington, where it was skinned, dissected, photographed, and mounted. As she was molting when she died, she proved difficult to stuff, and previously shed feathers were added to the skin. Martha was on display for many years, but after a period in the museum vaults, she was put back on display at the Smithsonian's National Museum of Natural History in 2015. A memorial statue of Martha stands on the grounds of the Cincinnati Zoo, in front of the "Passenger Pigeon Memorial Hut", formerly the aviary wherein Martha lived, now a National Historic Landmark. Incidentally, the last specimen of the extinct Carolina parakeet, named "Incus," died in Martha's cage in 1918; the stuffed remains of that bird are exhibited in the "Memorial Hut". Extinction causes The main reasons for the extinction of the passenger pigeon were the massive scale of hunting, the rapid loss of habitat, and the extremely social lifestyle of the bird, which made it highly vulnerable to the first two factors. Deforestation was driven by the need to free land for agriculture and expanding towns, but also by the demand for lumber and fuel. About 728,000 km2 (180 million acres) were cleared for farming between 1850 and 1910. Though there are still large woodland areas in eastern North America, which support a variety of wildlife, these were not enough to support the vast number of passenger pigeons needed to sustain the population. In contrast, very small populations of nearly extinct birds, such as the kākāpō (Strigops habroptilus) and the takahē (Porphyrio hochstetteri), have been enough to keep those species extant to the present. The combined effects of intense hunting and deforestation have been referred to as a "Blitzkrieg" against the passenger pigeon, and it has been labeled one of the greatest and most senseless human-induced extinctions in history.
As the flocks dwindled in size, the passenger pigeon population decreased below the threshold necessary to propagate the species, an example of the Allee effect. The 2014 genetic study that found natural fluctuations in population numbers prior to human arrival also concluded that the species routinely recovered from lows in the population, and suggested that one of these lows may have coincided with the intensified hunting by humans in the 1800s, a combination which would have led to the rapid extinction of the species. A similar scenario may also explain the rapid extinction of the Rocky Mountain locust (Melanoplus spretus) during the same period. It has also been suggested that after the population was thinned out, it would be harder for the few remaining or solitary birds to locate suitable feeding areas. In addition to the birds killed or driven away by hunting during breeding seasons, many nestlings were also orphaned before being able to fend for themselves. Other, less convincing contributing factors have been suggested at times, including mass drownings, Newcastle disease, and migrations to areas outside their original range. The extinction of the passenger pigeon aroused public interest in the conservation movement, and resulted in new laws and practices which prevented many other species from becoming extinct. The rapid decline of the passenger pigeon has influenced later assessment methods of the extinction risk of endangered animal populations. The International Union for Conservation of Nature (IUCN) has used the passenger pigeon as an example in cases where a species was declared "at risk" for extinction even though population numbers are high. Naturalist Aldo Leopold paid tribute to the vanished species in a monument dedication held by the Wisconsin Society for Ornithology at Wyalusing State Park, Wisconsin, which had been one of the species' social roost sites; Leopold delivered his remarks on May 11, 1947. Potential resurrection of the species Today, at least 1,532 passenger pigeon skins (along with 16 skeletons) still exist, spread across many institutions all over the world. It has been suggested that the passenger pigeon should be revived when available technology allows it (a concept which has been termed "de-extinction"), using genetic material from such specimens. In 2003, the Pyrenean ibex (Capra pyrenaica pyrenaica, a subspecies of the Spanish ibex) was the first extinct animal to be cloned back to life; the clone lived for only seven minutes before dying of lung defects. A hindrance to cloning the passenger pigeon is the fact that the DNA of museum specimens has been contaminated and fragmented, due to exposure to heat and oxygen. American geneticist George M. Church has proposed that the passenger pigeon genome can be reconstructed by piecing together DNA fragments from different specimens. The next step would be to splice these genes into the stem cells of rock pigeons (or band-tailed pigeons), which would then be transformed into egg and sperm cells, and placed into the eggs of rock pigeons, resulting in rock pigeons bearing passenger pigeon sperm and eggs. The offspring of these would have passenger pigeon traits, and would be further bred to favor unique features of the extinct species. The American non-profit organization Revive & Restore is currently pursuing the idea.
The general idea of re-creating extinct species has been criticized, since the large funds needed could be spent on conserving currently threatened species and habitats, and because conservation efforts might be viewed as less urgent. In the case of the passenger pigeon, since it was very social, it is unlikely that enough birds could be created for revival to be successful, and it is unclear whether there is enough appropriate habitat left for its reintroduction. Furthermore, the parent pigeons that would raise the cloned passenger pigeons would belong to a different species, with a different way of rearing young.
https://en.wikipedia.org/wiki/Perl
Perl
Perl is a high-level, general-purpose, interpreted, dynamic programming language. Though Perl is not officially an acronym, there are various backronyms in use, including "Practical Extraction and Reporting Language". Perl was developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions. The name was originally not capitalized; it had become capitalized by the time Perl 4 was released. The most recent major version is Perl 5, first released in 1994. From 2000 to October 2019, a sixth version of Perl was in development; the sixth version's name was changed to Raku. Both languages continue to be developed independently by different development teams which liberally borrow ideas from each other. Perl borrows features from other programming languages including C, sh, AWK, and sed. It provides text processing facilities without the arbitrary data-length limits of many contemporary Unix command line tools. Perl is a highly expressive programming language: source code for a given algorithm can be short and highly compressible. Perl gained widespread popularity in the mid-1990s as a CGI scripting language, in part due to its powerful regular expression and string parsing abilities. In addition to CGI, Perl 5 is used for system administration, network programming, finance, bioinformatics, and other applications, such as for graphical user interfaces (GUIs). It has been nicknamed "the Swiss Army chainsaw of scripting languages" because of its flexibility and power. In 1998, it was also referred to as the "duct tape that holds the Internet together", in reference to both its ubiquitous use as a glue language and its perceived inelegance. Name and logos Perl was originally named "Pearl". Wall wanted to give the language a short name with positive connotations. The name is also a Christian reference to the Parable of the Pearl from the Gospel of Matthew. However, Wall discovered the existing PEARL language before Perl's official release and dropped the "a" from the name. The name is occasionally expanded as a backronym: Practical Extraction and Report Language, and Wall's own Pathologically Eclectic Rubbish Lister, which is in the manual page for perl. Programming Perl, published by O'Reilly Media, features a picture of a dromedary camel on the cover and is commonly called the "Camel Book". This image has become an unofficial symbol of Perl. O'Reilly owns the image as a trademark but licenses it for non-commercial use, requiring only an acknowledgement and a link to www.perl.com. Licensing for commercial use is decided on a case-by-case basis. O'Reilly also provides "Programming Republic of Perl" logos for non-commercial sites and "Powered by Perl" buttons for any site that uses Perl. The Perl Foundation owns an alternative symbol, an onion, which it licenses to its subsidiaries, Perl Mongers, PerlMonks, Perl.org, and others. The symbol is a visual pun on pearl onion. History Early versions Larry Wall began work on Perl in 1987, while employed as a programmer at Unisys; he released version 1.0 on December 18, 1987. Wall based early Perl on some methods existing languages used for text manipulation. Perl 2, released in June 1988, featured a better regular expression engine. Perl 3, released in October 1989, added support for binary data streams. 1990s Originally, the only documentation for Perl was a single lengthy man page.
In 1991, Programming Perl, known to many Perl programmers as the "Camel Book" because of its cover, was published and became the de facto reference for the language. At the same time, the Perl version number was bumped to 4, not to mark a major change in the language but to identify the version that was well documented by the book. Perl 4 was released in March 1991. Perl 4 went through a series of maintenance releases, culminating in Perl 4.036 in 1993, whereupon Wall abandoned Perl 4 to begin work on Perl 5. Initial design of Perl 5 continued into 1994. The perl5-porters mailing list was established in May 1994 to coordinate work on porting Perl 5 to different platforms. It remains the primary forum for development, maintenance, and porting of Perl 5. Perl 5.000 was released on October 17, 1994. It was a nearly complete rewrite of the interpreter, and it added many new features to the language, including objects, references, lexical (my) variables, and modules. Importantly, modules provided a mechanism for extending the language without modifying the interpreter. This allowed the core interpreter to stabilize, even as it enabled ordinary Perl programmers to add new language features. Perl 5 has been in active development since then. Perl 5.001 was released on March 13, 1995. Perl 5.002 was released on February 29, 1996, with the new prototypes feature. This allowed module authors to make subroutines that behaved like Perl builtins. Perl 5.003 was released on June 25, 1996, as a security release. One of the most important events in Perl 5 history took place outside of the language proper and was a consequence of its module support. On October 26, 1995, the Comprehensive Perl Archive Network (CPAN) was established as a repository for the Perl language and Perl modules; it has since grown to carry over 211,850 modules in 43,865 distributions, written by more than 14,324 authors, and is mirrored worldwide at more than 245 locations. Perl 5.004 was released on May 15, 1997, and included, among other things, the UNIVERSAL package, giving Perl a base object from which all classes were automatically derived, and the ability to require versions of modules. Another significant development was the inclusion of the CGI.pm module, which contributed to Perl's popularity as a CGI scripting language. Perl 5.004 added support for Microsoft Windows, Plan 9, QNX, and AmigaOS. Perl 5.005 was released on July 22, 1998. This release included several enhancements to the regex engine, new hooks into the backend through the B::* modules, the qr// regex quote operator, a large selection of other new core modules, and added support for several more operating systems, including BeOS. 2000–2020 Perl 5.6 was released on March 22, 2000. Major changes included 64-bit support, Unicode string representation, support for files over 2 GiB, and the "our" keyword. When developing Perl 5.6, the decision was made to switch the versioning scheme to one more similar to other open source projects; after 5.005_63, the next version became 5.5.640, with plans for development versions to have odd numbers and stable versions to have even numbers. In 2000, Wall put forth a call for suggestions for a new version of Perl from the community. The process resulted in 361 RFC (Request for Comments) documents that were to be used in guiding development of Perl 6. In 2001, work began on the "Apocalypses" for Perl 6, a series of documents meant to summarize the change requests and present the design of the next generation of Perl.
They were presented as a digest of the RFCs, rather than a formal document. At this time, Perl 6 existed only as a description of a language. Perl 5.8 was first released on July 18, 2002, and further 5.X versions have been released approximately yearly since then. Perl 5.8 improved Unicode support, added a new I/O implementation, added a new thread implementation, improved numeric accuracy, and added several new modules. As of 2013, this version was still the most popular Perl version and was used by Red Hat Linux 5, SUSE Linux 10, Solaris 10, HP-UX 11.31, and AIX 5. In 2004, work began on the "Synopses" – documents that originally summarized the Apocalypses, but which became the specification for the Perl 6 language. In February 2005, Audrey Tang began work on Pugs, a Perl 6 interpreter written in Haskell. This was the first concerted effort toward making Perl 6 a reality. This effort stalled in 2006. The Perl On New Internal Engine (PONIE) project existed from 2003 until 2006. It was to be a bridge between Perl 5 and 6, and an effort to rewrite the Perl 5 interpreter to run on the Perl 6 Parrot virtual machine. The goal was to ensure the future of the millions of lines of Perl 5 code at thousands of companies around the world. The PONIE project ended in 2006 and is no longer being actively developed. Some of the improvements made to the Perl 5 interpreter as part of PONIE were folded back into Perl 5. On December 18, 2007, the 20th anniversary of Perl 1.0, Perl 5.10.0 was released. Perl 5.10.0 included notable new features, which brought it closer to Perl 6. These included a switch statement (called "given"/"when"), regular expression updates, and the smart match operator (~~). Around this same time, development began in earnest on another implementation of Perl 6 known as Rakudo Perl, developed in tandem with the Parrot virtual machine. As of November 2009, Rakudo Perl has had regular monthly releases and now is the most complete implementation of Perl 6. A major change in the development process of Perl 5 occurred with Perl 5.11; the development community switched to a monthly release cycle of development releases, with a yearly schedule of stable releases. By that plan, bugfix point releases will follow the stable releases every three months. On April 12, 2010, Perl 5.12.0 was released. Notable core enhancements include new package NAME VERSION syntax, the yada yada operator (intended to mark placeholder code that is not yet implemented), implicit strictures, full Y2038 compliance, regex conversion overloading, DTrace support, and Unicode 5.2. On May 14, 2011, Perl 5.14 was released with JSON support built-in. On May 20, 2012, Perl 5.16 was released. Notable new features include the ability to specify a given version of Perl that one wishes to emulate, allowing users to upgrade their version of Perl, but still run old scripts that would normally be incompatible. Perl 5.16 also updates the core to support Unicode 6.1. On May 18, 2013, Perl 5.18 was released. Notable new features include the new dtrace hooks, lexical subs, more CORE:: subs, an overhaul of the hash for security reasons, and support for Unicode 6.2. On May 27, 2014, Perl 5.20 was released. Notable new features include subroutine signatures, hash slices/new slice syntax, postfix dereferencing (experimental), Unicode 6.3, and a function using a consistent random number generator. Some observers credit the release of Perl 5.10 with the start of the Modern Perl movement.
In particular, this phrase describes a style of development that embraces the use of the CPAN, takes advantage of recent developments in the language, and is rigorous about creating high quality code. While the book Modern Perl may be the most visible standard-bearer of this idea, other groups such as the Enlightened Perl Organization have taken up the cause. In late 2012 and 2013, several projects for alternative implementations for Perl 5 started: Perl5 in Perl6 by the Rakudo Perl team, moe by Stevan Little and friends, p2 by the Perl11 team under Reini Urban, gperl by goccy, and rperl, a Kickstarter project led by Will Braswell and affiliated with the Perl11 project. Perl 6 and Raku At the 2000 Perl Conference, Jon Orwant made a case for a major new language initiative. This led to a decision to begin work on a redesign of the language, to be called Perl 6. Proposals for new language features were solicited from the Perl community at large, which submitted more than 300 RFCs. Wall spent the next few years digesting the RFCs and synthesizing them into a coherent framework for Perl 6. He presented his design for Perl 6 in a series of documents called "apocalypses" – numbered to correspond to chapters in Programming Perl. Thereafter, the developing specification of Perl 6 was encapsulated in design documents called Synopses – numbered to correspond to Apocalypses. Thesis work by Bradley M. Kuhn, overseen by Wall, considered the possible use of the Java virtual machine as a runtime for Perl. Kuhn's thesis showed this approach to be problematic. In 2001, it was decided that Perl 6 would run on a cross-language virtual machine called Parrot. In 2005, Audrey Tang created the Pugs project, an implementation of Perl 6 in Haskell. This acted as, and continues to act as, a test platform for the Perl 6 language (separate from the development of the actual implementation), allowing the language designers to explore. The Pugs project spawned an active Perl/Haskell cross-language community centered around the Libera Chat #raku IRC channel. Many functional programming influences were absorbed by the Perl 6 design team. In 2012, Perl 6 development was centered primarily on two compilers: Rakudo, an implementation running on the Parrot virtual machine and the Java virtual machine, and Niecza, which targets the Common Language Runtime. In 2013, MoarVM ("Metamodel On A Runtime"), a C language-based virtual machine designed primarily for Rakudo, was announced. In October 2019, Perl 6 was renamed to Raku. Today, only the Rakudo implementation and MoarVM are under active development, while other virtual machines, such as the Java Virtual Machine and JavaScript, are supported. Perl 7 In June 2020, Perl 7 was announced as the successor to Perl 5. Perl 7 was initially to be based on Perl 5.32, with a release expected in the first half of 2021, and release candidates sooner. This plan was revised in May 2021, without specifying any release timeframe or baseline version of Perl 5. Once Perl 7 was released, Perl 5 would go into long-term maintenance; supported Perl 5 versions, however, would continue to receive important security and bug fixes. The announcement was made on 24 June 2020 at "The Perl Conference in the Cloud". Based on Perl 5.32, Perl 7 was planned to be backward compatible with modern Perl 5 code; older Perl 5 code without a boilerplate (pragma) header would need to add use compat::perl5; to stay compatible, while modern code could drop some of the boilerplate.
The plan to go to Perl 7 brought up more discussion, however, and the Perl Steering Committee canceled it to avoid issues with backward compatibility for scripts that were not written to the pragmas and modules that would become the default in Perl 7. Perl 7 will only come out when the developers add enough features to warrant a major release upgrade. Design Philosophy According to Wall, Perl has two slogans. The first is "There's more than one way to do it," commonly known as TMTOWTDI (pronounced "Tim Toady"). As proponents of this motto argue, this philosophy makes it easy to write concise statements. The second slogan is "Easy things should be easy and hard things should be possible". The design of Perl can be understood as a response to three broad trends in the computer industry: falling hardware costs, rising labor costs, and improvements in compiler technology. Many earlier computer languages, such as Fortran and C, aimed to make efficient use of expensive computer hardware. In contrast, Perl was designed so that computer programmers could write programs more quickly and easily. Perl has many features that ease the task of the programmer at the expense of greater CPU and memory requirements. These include automatic memory management; dynamic typing; strings, lists, and hashes; regular expressions; introspection; and an eval() function. Perl follows the theory of "no built-in limits", an idea similar to the Zero One Infinity rule. Wall was trained as a linguist, and the design of Perl is very much informed by linguistic principles. Examples include Huffman coding (common constructions should be short), good end-weighting (the important information should come first), and a large collection of language primitives. Perl favors language constructs that are concise and natural for humans to write, even where they complicate the Perl interpreter. Perl's syntax reflects the idea that "things that are different should look different." For example, scalars, arrays, and hashes have different leading sigils. Array indices and hash keys use different kinds of braces. Strings and regular expressions have different standard delimiters. There is a broad practical bent to both the Perl language and the community and culture that surround it. The preface to Programming Perl begins: "Perl is a language for getting your job done." One consequence of this is that Perl is not a tidy language. It includes many features, tolerates exceptions to its rules, and employs heuristics to resolve syntactical ambiguities. Because of the forgiving nature of the compiler, bugs can sometimes be hard to find. Perl's function documentation remarks on the variant behavior of built-in functions in list and scalar contexts by saying, "In general, they do what you want, unless you want consistency." Features The overall structure of Perl derives broadly from C. Perl is procedural in nature, with variables, expressions, assignment statements, brace-delimited blocks, control structures, and subroutines. Perl also takes features from shell programming. All variables are marked with leading sigils, which allow variables to be interpolated directly into strings. However, unlike the shell, Perl uses sigils on all accesses to variables, and unlike most other programming languages that use sigils, the sigil does not denote the type of the variable but the type of the expression.
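A minimal sketch of this sigil behavior (the variable names are illustrative):

use strict;
use warnings;

my @primes = (2, 3, 5, 7);    # a whole array takes the "@" sigil
my $third  = $primes[2];      # one element is a scalar expression, so "$"
my @pair   = @primes[0, 1];   # a slice yields a list, so "@" again
my %age    = (alice => 30);   # a whole hash takes the "%" sigil
my $years  = $age{alice};     # one hash value is a scalar expression, so "$"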
So for example, while an array is denoted by the sigil "@" (for example @arrayname), an individual member of the array is denoted by the scalar sigil "$" (for example $arrayname[3]). Perl also has many built-in functions that provide tools often used in shell programming (although many of these tools are implemented by programs external to the shell) such as sorting and calling operating system facilities. Perl takes hashes ("associative arrays") from AWK and regular expressions from sed. These simplify many parsing, text-handling, and data-management tasks. Shared with Lisp is the implicit return of the last value in a block, and all statements are also expressions which can be used in larger expressions themselves. Perl 5 added features that support complex data structures, first-class functions (that is, closures as values), and an object-oriented programming model. These include references, packages, class-based method dispatch, and lexically scoped variables, along with compiler directives (for example, the strict pragma). A major additional feature introduced with Perl 5 was the ability to package code as reusable modules. Wall later stated that "The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core." All versions of Perl do automatic data-typing and automatic memory management. The interpreter knows the type and storage requirements of every data object in the program; it allocates and frees storage for them as necessary using reference counting (so it cannot deallocate circular data structures without manual intervention). Legal type conversions – for example, conversions from number to string – are done automatically at run time; illegal type conversions are fatal errors. Syntax Perl has been referred to as "line noise" and a "write-only language" by its critics. Randal L. Schwartz, in the first chapter of the first edition of Learning Perl, states: "Yes, sometimes Perl looks like line noise to the uninitiated, but to the seasoned Perl programmer, it looks like checksummed line noise with a mission in life." He also stated that the accusation that Perl is a write-only language could be avoided by coding with "proper care". The Perl overview document states that the names of built-in "magic" scalar variables "look like punctuation or line noise". However, the English module provides both long and short English alternatives. Perl's documentation also notes that line noise in regular expressions can be mitigated using the /x modifier to add whitespace. According to the Perl 6 FAQ, Perl 6 was designed to mitigate "the usual suspects" that elicit the "line noise" claim from Perl 5 critics, including the removal of "the majority of the punctuation variables" and the sanitization of the regex syntax. The Perl 6 FAQ also states that what is sometimes referred to as Perl's line noise is "the actual syntax of the language" just as gerunds and prepositions are a part of the English language. In a December 2012 blog posting, despite claiming that "Rakudo Perl 6 has failed and will continue to fail unless it gets some adult supervision", chromatic stated that the design of Perl 6 has a "well-defined grammar", an "improved type system, a unified object system with an intelligent metamodel, metaoperators, and a clearer system of context that provides for such niceties as pervasive laziness". He also stated that "Perl 6 has a coherence and a consistency that Perl 5 lacks." In Perl, one could write the "Hello, World!"
program as: print "Hello, World!\n"; Here is a more complex Perl program, that counts down seconds from a given starting value: #!/usr/bin/perl use strict; use warnings; my ( $remaining, $total ); $remaining=$total=shift(@ARGV); STDOUT->autoflush(1); while ( $remaining ) { printf ( "Remaining %s/%s \r", $remaining--, $total ); sleep 1; } print "\n"; The Perl interpreter can also be used for one-off scripts on the command line. The following example (as invoked from an sh-compatible shell, such as Bash) translates the string "Bob" in all files ending with .txt in the current directory to "Robert": $ perl -i.bak -lp -e 's/Bob/Robert/g' *.txt Implementation No written specification or standard for the Perl language exists for Perl versions through Perl 5, and there are no plans to create one for the current version of Perl. There has been only one implementation of the interpreter, and the language has evolved along with it. That interpreter, together with its functional tests, stands as a de facto specification of the language. Perl 6, however, started with a specification, and several projects aim to implement some or all of the specification. Perl is implemented as a core interpreter, written in C, together with a large collection of modules, written in Perl and C. , the interpreter is 150,000 lines of C code and compiles to a 1 MB executable on typical machine architectures. Alternatively, the interpreter can be compiled to a link library and embedded in other programs. There are nearly 500 modules in the distribution, comprising 200,000 lines of Perl and an additional 350,000 lines of C code (much of the C code in the modules consists of character encoding tables). The interpreter has an object-oriented architecture. All of the elements of the Perl language—scalars, arrays, hashes, coderefs, file handles—are represented in the interpreter by C structs. Operations on these structs are defined by a large collection of macros, typedefs, and functions; these constitute the Perl C API. The Perl API can be bewildering to the uninitiated, but its entry points follow a consistent naming scheme, which provides guidance to those who use it. The life of a Perl interpreter divides broadly into a compile phase and a run phase. In Perl, the phases are the major stages in the interpreter's life-cycle. Each interpreter goes through each phase only once, and the phases follow in a fixed sequence. Most of what happens in Perl's compile phase is compilation, and most of what happens in Perl's run phase is execution, but there are significant exceptions. Perl makes important use of its capability to execute Perl code during the compile phase. Perl will also delay compilation into the run phase. The terms that indicate the kind of processing that is actually occurring at any moment are compile time and run time. Perl is in compile time at most points during the compile phase, but compile time may also be entered during the run phase. The compile time for code in a string argument passed to the eval built-in occurs during the run phase. Perl is often in run time during the compile phase and spends most of the run phase in run time. Code in BEGIN blocks executes at run time but in the compile phase. At compile time, the interpreter parses Perl code into a syntax tree. At run time, it executes the program by walking the tree. Text is parsed only once, and the syntax tree is subject to optimization before it is executed, so that execution is relatively efficient. 
Compile-time optimizations on the syntax tree include constant folding and context propagation, and peephole optimization is also performed.

Perl has a Turing-complete grammar because parsing can be affected by run-time code executed during the compile phase. Therefore, Perl cannot be parsed by a straight Lex/Yacc lexer/parser combination. Instead, the interpreter implements its own lexer, which coordinates with a modified GNU bison parser to resolve ambiguities in the language. It is often said that "Only perl can parse Perl", meaning that only the Perl interpreter (perl) can parse the Perl language (Perl), but even this is not, in general, true. Because the Perl interpreter can simulate a Turing machine during its compile phase, it would need to decide the halting problem in order to complete parsing in every case. It is a longstanding result that the halting problem is undecidable, and therefore not even Perl can always parse Perl. Perl makes the unusual choice of giving the user access to its full programming power in its own compile phase. The cost in terms of theoretical purity is high, but practical inconvenience seems to be rare. Other programs that undertake to parse Perl, such as source-code analyzers and auto-indenters, have to contend not only with ambiguous syntactic constructs but also with the undecidability of Perl parsing in the general case. Adam Kennedy's PPI project focused on parsing Perl code as a document (retaining its integrity as a document), instead of parsing Perl as executable code (which not even Perl itself can always do). It was Kennedy who first conjectured that "parsing Perl suffers from the 'halting problem'," which was later proved.
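A sketch of the kind of construct behind this result, adapted from a well-known demonstration (the subroutine name and message are arbitrary): how the last line parses depends on which declaration a BEGIN block happened to install at compile time.

    #!/usr/bin/perl
    BEGIN {
        # Run-time code executed during compilation: one of two
        # declarations is installed before the last line is parsed.
        if ( rand() < 0.5 ) { eval "sub whatever() { 42 }" }   # nullary prototype
        else                { eval "sub whatever  { 42 }" }    # list operator
    }
    # With the nullary prototype, the line below is division
    # followed by a comment: whatever() / 25. Otherwise it parses
    # as whatever(/ 25 ; # /), a regex match passed as an argument,
    # and the die statement is reached.
    whatever / 25 ; # / ; die "parsed as a match";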
Perl is distributed with over 250,000 functional tests for the core Perl language and over 250,000 functional tests for core modules. These run as part of the normal build process and extensively exercise the interpreter and its core modules. Perl developers rely on the functional tests to ensure that changes to the interpreter do not introduce software bugs; further, Perl users who see that the interpreter passes its functional tests on their system can have a high degree of confidence that it is working properly.

Ports

Perl is dual licensed under both the Artistic License 1.0 and the GNU General Public License. Distributions are available for most operating systems. It is particularly prevalent on Unix and Unix-like systems, but it has been ported to most modern (and many obsolete) platforms. With only six reported exceptions, Perl can be compiled from source code on all POSIX-compliant, or otherwise-Unix-compatible, platforms. Because of unusual changes required for the classic Mac OS environment, a special port called MacPerl was shipped independently. The Comprehensive Perl Archive Network (CPAN) carries a complete list of supported platforms with links to the distributions available on each. CPAN is also the source for publicly available Perl modules that are not part of the core Perl distribution.

ActivePerl is a closed-source distribution from ActiveState that has regular releases that track the core Perl releases. The distribution previously included the Perl package manager (PPM), a popular tool for installing, removing, upgrading, and managing the use of common Perl modules; this tool was discontinued as of ActivePerl 5.28. Also included is PerlScript, a Windows Script Host (WSH) engine implementing the Perl language. Visual Perl is an ActiveState tool that adds Perl to the Visual Studio .NET development suite. ActiveState has also produced a VBScript-to-Perl converter, a Perl compiler for Windows, and converters of AWK and sed to Perl; these were included on the ActiveState CD for Windows, which contained all of the company's distributions plus the Komodo IDE, and, from 2002 onward, all but the VBScript converter on the Unix–Linux–POSIX variant of that CD.

Performance

The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages. The submitted Perl implementations typically perform toward the high end of the memory-usage spectrum and give varied speed results. Perl's performance in the benchmarks game is typical for interpreted languages. Large Perl programs start more slowly than similar programs in compiled languages because Perl has to compile the source every time it runs. In a talk at the YAPC::Europe 2005 conference, and in the subsequent article "A Timely Start", Jean-Louis Leroy found that his Perl programs took much longer to run than expected because the perl interpreter spent significant time finding modules within his over-large include path. Unlike Java, Python, and Ruby, Perl has only experimental support for pre-compiling. Therefore, Perl programs pay this overhead penalty on every execution. The run phase of typical programs is long enough that amortized startup time is not substantial, but benchmarks that measure very short execution times are likely to be skewed due to this overhead.

A number of tools have been introduced to improve this situation. The first such tool was Apache's mod_perl, which sought to address one of the most common reasons that small Perl programs were invoked rapidly: CGI Web development. ActivePerl, via Microsoft ISAPI, provides similar performance improvements. Once Perl code is compiled, there is additional overhead during the execution phase that typically is not present for programs written in compiled languages such as C or C++. Examples of such overhead include bytecode interpretation, reference-counting memory management, and dynamic type-checking. The most critical routines can be written in other languages (such as C), which can be connected to Perl via simple Inline modules or the more complex, but flexible, XS mechanism.

Applications

Perl has many and varied applications, compounded by the availability of many standard and third-party modules. Perl has chiefly been used to write CGI scripts: large projects written in Perl include cPanel, Slash, Bugzilla, RT, TWiki, and Movable Type; high-traffic websites that use Perl extensively include Priceline.com, Craigslist, IMDb, LiveJournal, DuckDuckGo, Slashdot, and Ticketmaster. It is also an optional component of the popular LAMP technology stack for Web development, in lieu of PHP or Python. Perl is used extensively as a system programming language in the Debian Linux distribution. Perl is often used as a glue language, tying together systems and interfaces that were not specifically designed to interoperate, and for "data munging", that is, converting or processing large amounts of data for tasks such as creating reports. These strengths are linked intimately. The combination makes Perl a popular all-purpose language for system administrators, particularly because short programs, often called "one-liner programs", can be entered and run on a single command line.
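For example, a hypothetical one-liner that sums the third whitespace-separated column of a file (data.txt is a placeholder name): the -n flag loops over input lines, -a splits each line into the @F array, and -l handles line endings.

    $ perl -lane '$sum += $F[2]; END { print $sum }' data.txt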
Perl code can be made portable across Windows and Unix; such code is often used by suppliers of software (both commercial off-the-shelf (COTS) and bespoke) to simplify packaging and maintenance of software build and deployment scripts. Perl/Tk and wxPerl are commonly used to add graphical user interfaces to Perl scripts.

Perl's text-handling capabilities can be used for generating SQL queries; arrays, hashes, and automatic memory management make it easy to collect and process the returned data. For example, in Tim Bunce's Perl DBI application programming interface (API), the arguments to the API can be the text of SQL queries; thus it is possible to program in multiple languages at the same time (e.g., for generating a Web page using HTML, JavaScript, and SQL in a here document). The use of Perl variable interpolation to programmatically customize each of the SQL queries, and the specification of Perl arrays or hashes as the structures to programmatically hold the resulting data sets from each SQL query, allows a high-level mechanism for handling large amounts of data for post-processing by a Perl subprogram.

In early versions of Perl, database interfaces were created by relinking the interpreter with a client-side database library. This was sufficiently difficult that it was done for only a few of the most important and most widely used databases, and it restricted the resulting perl executable to using just one database interface at a time. In Perl 5, database interfaces are implemented by Perl DBI modules. The DBI (Database Interface) module presents a single, database-independent interface to Perl applications, while the DBD (Database Driver) modules handle the details of accessing some 50 different databases; there are DBD drivers for most ANSI SQL databases. DBI provides caching for database handles and queries, which can greatly improve performance in long-lived execution environments such as mod_perl, helping high-volume systems avert load spikes as in the Slashdot effect. In modern Perl applications, especially those written using web frameworks such as Catalyst, the DBI module is often used indirectly via object-relational mappers such as DBIx::Class, Class::DBI, or Rose::DB::Object that generate SQL queries and handle data transparently to the application author.
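A minimal sketch of the DBI pattern described above (the DSN, table, and column names are placeholders, and the DBD::SQLite driver is assumed to be installed):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Connect through a DBD driver (here SQLite; any supported DSN works).
    my $dbh = DBI->connect("dbi:SQLite:dbname=example.db", "", "",
                           { RaiseError => 1 });

    # Placeholders keep the data out of the SQL text itself.
    my $sth = $dbh->prepare("SELECT name, price FROM products WHERE price < ?");
    $sth->execute(10);

    # Fetch each row into a hash keyed by column name.
    while (my $row = $sth->fetchrow_hashref) {
        print "$row->{name}: $row->{price}\n";
    }
    $dbh->disconnect;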
Community

Perl's culture and community have developed alongside the language itself. Usenet was the first public venue in which Perl was introduced, but over the course of its evolution, Perl's community was shaped by the growth of broader Internet-based services, including the introduction of the World Wide Web. The community that surrounds Perl was, in fact, the topic of Wall's first "State of the Onion" talk. State of the Onion is the name for Wall's yearly keynote-style summaries on the progress of Perl and its community. They are characterized by his hallmark humor, employing references to Perl's culture, the wider hacker culture, Wall's linguistic background, sometimes his family life, and occasionally even his Christian background. Each talk is first given at various Perl conferences and is eventually also published online.

In email, Usenet, and message board postings, "Just another Perl hacker" (JAPH) programs are a common trend, originated by Randal L. Schwartz, one of the earliest professional Perl trainers. In the parlance of Perl culture, Perl programmers are known as Perl hackers, and from this derives the practice of writing short programs to print out the phrase "Just another Perl hacker". In the spirit of the original concept, these programs are moderately obfuscated and short enough to fit into the signature of an email or Usenet message. The "canonical" JAPH as developed by Schwartz includes the comma at the end, although this is often omitted.

Perl "golf" is the pastime of reducing the number of characters (key "strokes") used in a Perl program to the bare minimum, much in the same way that golf players seek to take as few shots as possible in a round. The phrase's first use emphasized the difference between pedestrian code meant to teach a newcomer and terse hacks likely to amuse experienced Perl programmers, an example of the latter being the JAPHs that were already used in signatures in Usenet postings and elsewhere. Similar stunts had been an unnamed pastime in the language APL in previous decades. The use of Perl to write a program that performed RSA encryption prompted a widespread and practical interest in this pastime. In subsequent years, the term "code golf" has been applied to the pastime in other languages. A Perl Golf Apocalypse was held at Perl Conference 4.0 in Monterey, California, in July 2000.

As with C, obfuscated code competitions were a well-known pastime in the late 1990s. The Obfuscated Perl Contest was a competition held by The Perl Journal from 1996 to 2000 that made an arch virtue of Perl's syntactic flexibility. Awards were given for categories such as "most powerful"—programs that made efficient use of space—and "best four-line signature" for programs that fit into four lines of 76 characters in the style of a Usenet signature block.

Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as "Black Perl". Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks.
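As a small illustration of the idea (this fragment is composed for this article, not taken from "Black Perl"), the following lines read as English yet compile as Perl, because study, sleep, wait, and exit are all built-in functions:

    study;                # "study" examines $_
    sleep 1 and wait;     # sleep returns the seconds slept, so wait runs too
    exit if $regrets;     # an unset variable is simply false, so we stay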
Phylogenetics
In biology, phylogenetics is the study of the evolutionary history of life using genetics; inferring such histories is known as phylogenetic inference. It establishes the relationships between organisms using empirical data and observed heritable traits of DNA sequences, protein amino acid sequences, and morphology. The result is a phylogenetic tree—a diagram depicting the hypothetical relationships between organisms and their evolutionary history. The tips of a phylogenetic tree can be living taxa or fossils, which represent the present time or the "end" of an evolutionary lineage, respectively. A phylogenetic diagram can be rooted or unrooted. A rooted tree diagram indicates the hypothetical common ancestor of the tree. An unrooted tree diagram (a network) makes no assumption about the ancestral line, and does not show the origin or "root" of the taxa in question or the direction of inferred evolutionary transformations. In addition to their use for inferring phylogenetic patterns among taxa, phylogenetic analyses are often employed to represent relationships among genes or individual organisms. Such uses have become central to understanding biodiversity, evolution, ecology, and genomes.

Phylogenetics is a component of systematics that uses similarities and differences of the characteristics of species to interpret their evolutionary relationships and origins. Phylogenetics focuses on whether the characteristics of a species reinforce a phylogenetic inference that it diverged from the most recent common ancestor of a taxonomic group.

In the field of cancer research, phylogenetics can be used to study the clonal evolution of tumors and molecular chronology, predicting and showing how cell populations vary throughout the progression of the disease and during treatment, using whole genome sequencing techniques. The evolutionary processes behind cancer progression are quite different from those in most species and are important to phylogenetic inference; these differences manifest in several areas: the types of aberrations that occur, the rates of mutation, the high heterogeneity (variability) of tumor cell subclones, and the absence of genetic recombination.

Phylogenetics can also aid in drug design and discovery. Phylogenetics allows scientists to organize species and can show which species are likely to have inherited particular traits that are medically useful, such as producing biologically active compounds, those that have effects on the human body. For example, in drug discovery, venom-producing animals are particularly useful. Venoms from these animals produce several important drugs, e.g., ACE inhibitors and Prialt (ziconotide). To find new venoms, scientists turn to phylogenetics to screen for closely related species that may have the same useful traits. A phylogenetic tree can show which species of fish have an origin of venom and which related fish may carry the trait; using this approach, biologists are able to identify fish species that may be venomous. Biologists have used this approach in many groups, such as snakes and lizards.

In forensic science, phylogenetic tools are useful to assess DNA evidence for court cases. A simple phylogenetic tree of viruses A–E, for example, shows the relationships between the viruses, e.g., that all of the viruses are descendants of virus A. HIV forensics uses phylogenetic analysis to track the differences in HIV genes and determine the relatedness of two samples. Phylogenetic analysis has been used in criminal trials to exonerate individuals or to hold them accountable.
HIV forensics does have its limitations: it cannot be the sole proof of transmission between individuals, and phylogenetic analysis that shows transmission relatedness does not indicate the direction of transmission.

Taxonomy and classification

Taxonomy is the identification, naming, and classification of organisms. Compared to systemization, classification emphasizes whether a species has the characteristics of a taxonomic group. The Linnaean classification system, developed in the 1700s by Carolus Linnaeus, is the foundation for modern classification methods. Linnaean classification relies on an organism's phenotype, or physical characteristics, to group and organize species. With the emergence of biochemistry, organism classifications are now usually based on phylogenetic data, and many systematists contend that only monophyletic taxa should be recognized as named groups. The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters (synapomorphies); evolutionary taxonomy tries to take into account both the branching pattern and the "degree of difference" to find a compromise between them.

Inference of a phylogenetic tree

Usual methods of phylogenetic inference involve computational approaches implementing the optimality criteria and methods of parsimony, maximum likelihood (ML), and MCMC-based Bayesian inference. All these depend upon an implicit or explicit mathematical model describing the evolution of the characters observed. Phenetics, popular in the mid-20th century but now largely obsolete, used distance matrix-based methods to construct trees based on overall similarity in morphology or similar observable traits (i.e., in the phenotype or the overall similarity of DNA, not the DNA sequence), which was often assumed to approximate phylogenetic relationships. Prior to 1950, phylogenetic inferences were generally presented as narrative scenarios. Such methods are often ambiguous and lack explicit criteria for evaluating alternative hypotheses.

Impacts of taxon sampling

In phylogenetic analysis, taxon sampling selects a small group of taxa to represent the evolutionary history of its broader population. This process is also known as stratified sampling or clade-based sampling. The practice occurs given limited resources to compare and analyze every species within a target population. Based on the representative group selected, the construction and accuracy of phylogenetic trees vary, which impacts derived phylogenetic inferences. Unavailable datasets, such as an organism's incomplete DNA and protein amino acid sequences in genomic databases, directly restrict taxon sampling. Consequently, a significant source of error within phylogenetic analysis occurs due to inadequate taxon samples. Accuracy may be improved by increasing the number of genetic samples within a monophyletic group. Conversely, increasing sampling from outgroups extraneous to the target stratified population may decrease accuracy. Long branch attraction is one proposed explanation for this occurrence, whereby unrelated branches are incorrectly classified together, falsely suggesting a shared evolutionary history.
There is debate over whether increasing the number of taxa sampled improves phylogenetic accuracy more than increasing the number of genes sampled per taxon. Differences in each method's sampling impact the number of nucleotide sites utilized in a sequence alignment, which may contribute to disagreements. For example, phylogenetic trees constructed utilizing a larger total number of nucleotides are generally more accurate, as supported by the bootstrapping replicability of phylogenetic trees built from random resampling. The graphic presented in Taxon Sampling, Bioinformatics, and Phylogenomics compares the correctness of phylogenetic trees generated using fewer taxa and more sites per taxon on the x-axis with more taxa and fewer sites per taxon on the y-axis. With fewer taxa, more genes are sampled amongst the taxonomic group; in comparison, with more taxa added to the taxonomic sampling group, fewer genes are sampled. Each method has the same total number of nucleotide sites sampled. Furthermore, the dotted line represents a 1:1 accuracy between the two sampling methods. As seen in the graphic, most of the plotted points are located below the dotted line, which indicates gravitation toward increased accuracy when sampling fewer taxa with more sites per taxon. The research performed utilizes four different phylogenetic tree construction models to verify the theory: neighbor-joining (NJ), minimum evolution (ME), unweighted maximum parsimony (MP), and maximum likelihood (ML). In the majority of models, sampling fewer taxa with more sites per taxon demonstrated higher accuracy. Generally, with the alignment of a relatively equal number of total nucleotide sites, sampling more genes per taxon has higher bootstrapping replicability than sampling more taxa. However, unbalanced datasets within genomic databases make increasing the gene comparison per taxon in uncommonly sampled organisms increasingly difficult.

History

Overview

The term "phylogeny" derives from the German Phylogenie, introduced by Haeckel in 1866, and the Darwinian approach to classification became known as the "phyletic" approach. It can be traced back to Aristotle, who wrote in his Posterior Analytics, "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses."

Ernst Haeckel's recapitulation theory

The modern concept of phylogenetics evolved primarily as a disproof of a previously widely accepted theory. During the late 19th century, Ernst Haeckel's recapitulation theory, or "biogenetic fundamental law", was widely popular. It was often expressed as "ontogeny recapitulates phylogeny", i.e. the development of a single organism during its lifetime, from germ to adult, successively mirrors the adult stages of successive ancestors of the species to which it belongs. But this theory has long been rejected. Instead, ontogeny evolves – the phylogenetic history of a species cannot be read directly from its ontogeny, as Haeckel thought would be possible, but characters from ontogeny can be (and have been) used as data for phylogenetic analyses; the more closely related two species are, the more apomorphies their embryos share.

Timeline of key points
14th century, lex parsimoniae (parsimony principle), William of Ockham, English philosopher, theologian, and Franciscan friar; a precursor concept, though the idea actually goes back to Aristotle. Ockham introduced the concept of Occam's razor, the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. Though he did not use these exact words, the principle can be summarized as "Entities must not be multiplied beyond necessity." The principle advocates that when presented with competing hypotheses about the same prediction, one should prefer the one that requires the fewest assumptions.

1763, Bayesian probability, Rev. Thomas Bayes; a precursor concept. Bayesian probability began a resurgence in the 1950s, allowing scientists in the computing field to pair traditional Bayesian statistics with other more modern techniques. It is now used as a blanket term for several related interpretations of probability as an amount of epistemic confidence.

18th century, Pierre Simon (Marquis de Laplace), perhaps the first to use ML (maximum likelihood); a precursor concept. His work gave way to the Laplace distribution, which can be directly linked to least absolute deviations.

1809, evolutionary theory, Philosophie Zoologique, Jean-Baptiste de Lamarck; a precursor concept, foreshadowed in the 17th and 18th centuries by Voltaire, Descartes, and Leibniz, with Leibniz even proposing evolutionary changes to account for observed gaps, suggesting that many species had become extinct, others transformed, and different species that share common traits may have at one time been a single race; also foreshadowed by some early Greek philosophers such as Anaximander in the 6th century BC and the atomists of the 5th century BC, who proposed rudimentary theories of evolution.

1837, Darwin's notebooks show an evolutionary tree.

1840, American geologist Edward Hitchcock published what is considered to be the first paleontological "Tree of Life". Many critiques, modifications, and explanations would follow.

1843, distinction between homology and analogy (the latter now referred to as homoplasy), Richard Owen; a precursor concept. Homology is the term used to characterize the similarity of features that can be parsimoniously explained by common ancestry. Homoplasy is the term used to describe a feature that has been gained or lost independently in separate lineages over the course of evolution.

1858, paleontologist Heinrich Georg Bronn (1800–1862) published a hypothetical tree illustrating the paleontological "arrival" of new, similar species following the extinction of an older species. Bronn did not propose a mechanism responsible for such phenomena; a precursor concept.

1858, elaboration of evolutionary theory, Darwin and Wallace, also in Origin of Species by Darwin the following year; a precursor concept.

1866, Ernst Haeckel first publishes his phylogeny-based evolutionary tree; a precursor concept. Haeckel introduces the now-disproved recapitulation theory. He introduced the term "Cladus" as a taxonomic category just below subphylum.

1893, Dollo's Law of Character State Irreversibility; a precursor concept. Dollo's Law of Irreversibility states that "an organism never comes back exactly to its previous state due to the indestructible nature of the past; it always retains some trace of the transitional stages through which it has passed."
1912, ML (maximum likelihood) recommended, analyzed, and popularized by Ronald Fisher; a precursor concept. Fisher is one of the main contributors to the early 20th-century revival of Darwinism, and has been called the "greatest of Darwin's successors" for his contributions to the revision of the theory of evolution and his use of mathematics to combine Mendelian genetics and natural selection in the 20th-century "modern synthesis".

1921, Tillyard uses the term "phylogenetic" and distinguishes between archaic and specialized characters in his classification system.

1940, Lucien Cuénot coined the term "clade": "terme nouveau de clade (du grec κλάδος, branche) [a new term, clade (from the Greek word klados, meaning branch)]". He used it for evolutionary branching.

1947, Bernhard Rensch introduced the term Kladogenesis in his German book Neuere Probleme der Abstammungslehre: Die transspezifische Evolution, translated into English in 1959 as Evolution Above the Species Level (still using the same spelling).

1949, jackknife resampling, Maurice Quenouille (foreshadowed in 1946 by Mahalanobis and extended in 1958 by Tukey); a precursor concept.

1950, Willi Hennig's classic formalization. Hennig is considered the founder of phylogenetic systematics and published his first works in German in this year. He also asserted a version of the parsimony principle, stating that the presence of apomorphous characters in different species "is always reason for suspecting kinship, and that their origin by convergence should not be presumed a priori". This has been considered a foundational view of phylogenetic inference.

1952, William Wagner's ground plan divergence method.

1957, Julian Huxley adopted Rensch's terminology as "cladogenesis" with a full definition: "Cladogenesis I have taken over directly from Rensch, to denote all splitting, from subspeciation through adaptive radiation to the divergence of phyla and kingdoms." With it he introduced the word "clades", defining it as: "Cladogenesis results in the formation of delimitable monophyletic units, which may be called clades."

1960, Arthur Cain and Geoffrey Ainsworth Harrison coined "cladistic" to mean evolutionary relationship.

1963, first attempt to use ML (maximum likelihood) for phylogenetics, Edwards and Cavalli-Sforza.

1965, Camin-Sokal parsimony, the first parsimony (optimization) criterion, and the first computer program/algorithm for cladistic analysis, both by Camin and Sokal. Character compatibility method, also called clique analysis, introduced independently by Camin and Sokal (loc. cit.) and E. O. Wilson.

1966, English translation of Hennig. "Cladistics" and "cladogram" coined (Webster's, loc. cit.).

1969, dynamic and successive weighting, James Farris. Wagner parsimony, Kluge and Farris. CI (consistency index), Kluge and Farris. Introduction of pairwise compatibility for clique analysis, Le Quesne.

1970, Wagner parsimony generalized by Farris.

1971, first successful application of ML (maximum likelihood) to phylogenetics (for protein sequences), Neyman. Fitch parsimony, Walter M. Fitch. These gave way to the most basic ideas of maximum parsimony. Fitch is known for his work on reconstructing phylogenetic trees from protein and DNA sequences; his definition of orthologous sequences has been referenced in many research publications. NNI (nearest neighbour interchange), the first branch-swapping search strategy, developed independently by Robinson and by Moore et al. ME (minimum evolution), Kidd and Sgaramella-Zonta (it is unclear if this is the pairwise distance method or related to ML, as Edwards and Cavalli-Sforza call ML "minimum evolution").
1972, Adams consensus, Adams.

1976, prefix system for ranks, Farris.

1977, Dollo parsimony, Farris.

1979, Nelson consensus, Nelson. MAST (maximum agreement subtree), also called GAS (greatest agreement subtree), a consensus method, Gordon. Bootstrap, Bradley Efron; a precursor concept.

1980, PHYLIP, the first software package for phylogenetic analysis, Joseph Felsenstein; a free computational phylogenetics package of programs for inferring evolutionary trees (phylogenies). One type of tree drawn by PHYLIP, called a "drawgram", is a rooted tree.

1981, majority consensus, Margush and McMorris. Strict consensus, Sokal and Rohlf. First computationally efficient ML (maximum likelihood) algorithm, Felsenstein. Felsenstein created the Felsenstein maximum likelihood method, used for the inference of phylogeny, which evaluates a hypothesis about evolutionary history in terms of the probability that the proposed model and the hypothesized history would give rise to the observed data set.

1982, PHYSYS, Mickevich and Farris. Branch and bound, Hendy and Penny.

1985, first cladistic analysis of eukaryotes based on combined phenotypic and genotypic evidence, Diana Lipscomb. First issue of Cladistics. First phylogenetic application of the bootstrap, Felsenstein. First phylogenetic application of the jackknife, Scott Lanyon.

1986, MacClade, Maddison and Maddison.

1987, neighbor-joining method, Saitou and Nei.

1988, Hennig86 (version 1.5), Farris. Bremer support (decay index), Bremer.

1989, RI (retention index) and RCI (rescaled consistency index), Farris. HER (homoplasy excess ratio), Archie.

1990, combinable components (semi-strict) consensus, Bremer. SPR (subtree pruning and regrafting) and TBR (tree bisection and reconnection), Swofford and Olsen.

1991, DDI (data decisiveness index), Goloboff. First cladistic analysis of eukaryotes based only on phenotypic evidence, Lipscomb.

1993, implied weighting, Goloboff.

1994, reduced consensus: RCC (reduced cladistic consensus) for rooted trees, Wilkinson.

1995, reduced consensus: RPC (reduced partition consensus) for unrooted trees, Wilkinson.

1996, first working methods for BI (Bayesian inference), independently developed by Li, by Mau, and by Rannala and Yang, all using MCMC (Markov chain Monte Carlo).

1998, TNT (Tree Analysis Using New Technology), Goloboff, Farris, and Nixon.

1999, Winclada, Nixon.

2003, symmetrical resampling, Goloboff.

2004–2005, similarity metric (using an approximation to Kolmogorov complexity), or NCD (normalized compression distance), Li et al., Cilibrasi and Vitanyi.

Uses of phylogenetic analysis

Pharmacology

One use of phylogenetic analysis involves the pharmacological examination of closely related groups of organisms. Advances in cladistic analysis through faster computer programs and improved molecular techniques have increased the precision of phylogenetic determination, allowing for the identification of species with pharmacological potential. Historically, phylogenetic screens for pharmacological purposes were used in a basic manner, such as studying the Apocynaceae family of plants, which includes alkaloid-producing species like Catharanthus, known for producing vincristine, an antileukemia drug.
Modern techniques now enable researchers to study close relatives of a species to uncover either a higher abundance of important bioactive compounds (e.g., species of Taxus for taxol) or natural variants of known pharmaceuticals (e.g., species of Catharanthus for different forms of vincristine or vinblastine).

Biodiversity

Phylogenetic analysis has also been applied to biodiversity studies of fungi. Phylogenetic analysis helps understand the evolutionary history of various groups of organisms, identify relationships between different species, and predict future evolutionary changes. Emerging imagery systems and new analysis techniques allow for the discovery of more genetic relationships in biodiverse fields, which can aid in conservation efforts by identifying rare species that could benefit ecosystems globally.

Infectious disease epidemiology

Whole-genome sequence data from outbreaks or epidemics of infectious diseases can provide important insights into transmission dynamics and inform public health strategies. Traditionally, studies have combined genomic and epidemiological data to reconstruct transmission events. However, recent research has explored deducing transmission patterns solely from genomic data using phylodynamics, which involves analyzing the properties of pathogen phylogenies. Phylodynamics uses theoretical models to compare predicted branch lengths with actual branch lengths in phylogenies to infer transmission patterns. Additionally, coalescent theory, which describes probability distributions on trees based on population size, has been adapted for epidemiological purposes. Another source of information within phylogenies that has been explored is "tree shape". These approaches, while computationally intensive, have the potential to provide valuable insights into pathogen transmission dynamics.

The structure of the host contact network significantly impacts the dynamics of outbreaks, and management strategies rely on understanding these transmission patterns. Pathogen genomes spreading through different contact network structures, such as chains, homogeneous networks, or networks with super-spreaders, accumulate mutations in distinct patterns, resulting in noticeable differences in the shape of phylogenetic trees. Researchers have analyzed the structural characteristics of phylogenetic trees generated from simulated bacterial genome evolution across multiple types of contact networks. By examining simple topological properties of these trees, researchers can classify them into chain-like, homogeneous, or super-spreading dynamics, revealing transmission patterns. These properties form the basis of a computational classifier used to analyze real-world outbreaks. Computational predictions of transmission dynamics for each outbreak often align with known epidemiological data.

Different transmission networks result in quantitatively different tree shapes. To determine whether tree shapes captured information about underlying disease transmission patterns, researchers simulated the evolution of a bacterial genome over three types of outbreak contact networks—homogeneous, super-spreading, and chain-like—and summarized the resulting phylogenies with five metrics describing tree shape. The distributions of these metrics across the three types of outbreaks reveal clear differences in tree topology depending on the underlying host contact network.
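One of the tree-shape statistics used in such work is the Colless imbalance, which sums, over the internal nodes of a rooted binary tree, the absolute difference between the leaf counts of the two subtrees. A minimal sketch (the nested-array tree encoding is an assumption of this example, not of the studies described here):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A rooted binary tree: internal nodes are [left, right] pairs,
    # leaves are plain strings.
    sub leaves  { my ($n) = @_; ref $n ? leaves($n->[0]) + leaves($n->[1]) : 1 }
    sub colless {
        my ($n) = @_;
        return 0 unless ref $n;    # a leaf contributes nothing
        my ($l, $r) = (leaves($n->[0]), leaves($n->[1]));
        return abs($l - $r) + colless($n->[0]) + colless($n->[1]);
    }

    # A ladder-like ("chain") tree is maximally imbalanced ...
    print colless([ [ [ 'A', 'B' ], 'C' ], 'D' ]), "\n";   # 2 + 1 + 0 = 3
    # ... while a fully balanced tree scores zero.
    print colless([ [ 'A', 'B' ], [ 'C', 'D' ] ]), "\n";   # 0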
Super-spreader networks give rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw, and deeper trees than those from homogeneous contact networks. Trees from chain-like networks are less variable, deeper, more imbalanced, and narrower than those from other networks. Scatter plots can be used to visualize the relationship between two variables in pathogen transmission analysis, such as the number of infected individuals and the time since infection. These plots can help identify trends and patterns, such as whether the spread of the pathogen is increasing or decreasing over time, and can highlight potential transmission routes or super-spreader events. Box plots displaying the range, median, quartiles, and potential outliers of datasets can also be valuable for analyzing pathogen transmission data, helping to identify important features in the data distribution. They may be used to quickly identify differences or similarities in the transmission data.

Disciplines other than biology

Phylogenetic tools and representations (trees and networks) can also be applied to philology, the study of the evolution of oral languages and written texts and manuscripts, such as in the field of quantitative comparative linguistics. Computational phylogenetics can be used to investigate a language as an evolutionary system. The evolution of human language closely corresponds with humans' biological evolution, which allows phylogenetic methods to be applied. The concept of a "tree" serves as an efficient way to represent relationships between languages and language splits. It also serves as a way of testing hypotheses about the connections and ages of language families. For example, relationships among languages can be shown by using cognates as characters. The phylogenetic tree of Indo-European languages shows the relationships between several of the languages in a timeline, as well as the similarity between words and word order.

There are three types of criticism of the use of phylogenetics in philology: the first argues that languages and species are different entities, so the same methods cannot be used to study both; the second concerns how phylogenetic methods are applied to linguistic data; and the third concerns the types of data used to construct the trees.

Bayesian phylogenetic methods, which are sensitive to how treelike the data are, allow for the reconstruction of relationships among languages, locally and globally. The two main reasons for the use of Bayesian phylogenetics are that (1) diverse scenarios can be included in calculations and (2) the output is a sample of trees rather than a single tree claimed to be true. The same process can be applied to texts and manuscripts. In paleography, the study of historical writings and manuscripts, texts were replicated by scribes who copied from their source, and alterations, i.e., "mutations", occurred when a scribe did not precisely copy the source.

Phylogenetics has been applied to archaeological artefacts such as early hominin hand-axes, late Palaeolithic figurines, Neolithic stone arrowheads, Bronze Age ceramics, and historical-period houses. Bayesian methods have also been employed by archaeologists in an attempt to quantify uncertainty in the tree topology and divergence times of stone projectile point shapes in the European Final Palaeolithic and earliest Mesolithic.
Pilus
A pilus (Latin for 'hair'; plural: pili) is a hair-like cell-surface appendage found on many bacteria and archaea. The terms pilus and fimbria (Latin for 'fringe'; plural: fimbriae) can be used interchangeably, although some researchers reserve the term pilus for the appendage required for bacterial conjugation. All conjugative pili are primarily composed of pilin – fibrous proteins, which are oligomeric. Dozens of these structures can exist on the bacterial and archaeal surface. Some bacterial viruses, or bacteriophages, attach to receptors on pili at the start of their reproductive cycle.

Pili are antigenic. They are also fragile and constantly replaced, sometimes with pili of different composition, resulting in altered antigenicity. Specific host responses to old pili structures are not effective on the new structure. Recombination between the genes of some (but not all) pili codes for variable (V) and constant (C) regions of the pili (similar to immunoglobulin diversity). As the primary antigenic determinants, virulence factors, and immunity factors on the cell surface of a number of species of gram-negative and some gram-positive bacteria, including Enterobacteriaceae, Pseudomonadaceae, and Neisseriaceae, there has been much interest in the study of pili as organelles of adhesion and as vaccine components. The first detailed study of pili was done by Brinton and co-workers, who demonstrated the existence of two distinct phases within one bacterial strain: piliated (p+) and non-piliated (p−).

Types by function

A few names are given to different types of pili by their function. The classification does not always overlap with the structural or evolutionary-based types, as convergent evolution occurs.

Conjugative pili

Conjugative pili allow for the transfer of DNA between bacteria, in the process of bacterial conjugation. They are sometimes called "sex pili", in analogy to sexual reproduction, because they allow for the exchange of genes via the formation of "mating pairs". Perhaps the most well-studied is the F-pilus of Escherichia coli, encoded by the F sex factor. A sex pilus is typically 6 to 7 nm in diameter. During conjugation, a pilus emerging from the donor bacterium ensnares the recipient bacterium, draws it in close, and eventually triggers the formation of a mating bridge, which establishes direct contact and the formation of a controlled pore that allows transfer of DNA from the donor to the recipient. Typically, the DNA transferred consists of the genes required to make and transfer pili (often encoded on a plasmid), and so is a kind of selfish DNA; however, other pieces of DNA are often co-transferred, and this can result in the dissemination of genetic traits throughout a bacterial population, such as antibiotic resistance. The connection established by the F-pilus is extremely mechanically and thermochemically resistant thanks to the robust properties of the F-pilus, which ensures successful gene transfer in a variety of environments. Not all bacteria can make conjugative pili, but conjugation can occur between bacteria of different species. Hyperthermophilic archaea encode pili structurally similar to the bacterial conjugative pili.
However, unlike in bacteria, where the conjugation apparatus typically mediates the transfer of mobile genetic elements such as plasmids or transposons, the conjugative machinery of hyperthermophilic archaea, called Ced (Crenarchaeal system for exchange of DNA) and Ted (Thermoproteales system for exchange of DNA), appears to be responsible for the transfer of cellular DNA between members of the same species. It has been suggested that in these archaea the conjugation machinery has been fully domesticated for promoting DNA repair through homologous recombination rather than the spread of mobile genetic elements.

Fimbriae

Fimbria (Latin for 'fringe'; plural: fimbriae) is a term used for a short pilus, an appendage that is used to attach the bacterium to a surface, sometimes also called an "attachment pilus" or adhesive pilus. The term "fimbria" can refer to many different (structural) types of pilus. Indeed, many different types of pili have been used for adhesion, a case of convergent evolution. The Gene Ontology system does not treat fimbriae as a distinct type of appendage, using the generic pilus (GO:0009289) type instead. This appendage ranges from 3–10 nanometers in diameter and can be as much as several micrometers long. Fimbriae are used by bacteria to adhere to one another and to adhere to animal cells and some inanimate objects. A bacterium can have as many as 1,000 fimbriae. Fimbriae are only visible with the use of an electron microscope. They may be straight or flexible. Fimbriae possess adhesins which attach them to some sort of substratum so that the bacteria can withstand shear forces and obtain nutrients. For example, E. coli uses them to attach to mannose receptors.

Some aerobic bacteria form a very thin layer at the surface of a broth culture. This layer, called a pellicle, consists of many aerobic bacteria that adhere to the surface by their fimbriae. Thus, fimbriae allow the aerobic bacteria to remain both on the broth, from which they take nutrients, and near the air. Fimbriae are required for the formation of biofilm, as they attach bacteria to host surfaces for colonization during infection. Fimbriae are either located at the poles of a cell or are evenly spread over its entire surface. This term was also used in a lax sense to refer to all pili, by those who use "pilus" to refer specifically to sex pili.

Types by assembling system or structure

Transfer

The Tra (transfer) family includes all known sex pili (as of 2010). They are related to the type IV secretion system (T4SS). They can be classified into the F-like type (after the F-pilus) and the P-like type. Like their secretion counterparts, the pilus injects material, DNA in this case, into another cell.

Type IV pili

Some pili, called type IV pili (T4P), generate motile forces. The external ends of the pili adhere to a solid substrate, either the surface to which the bacterium is attached or to other bacteria. Then, when the pili contract, they pull the bacterium forward, like a grappling hook. Movement produced by type IV pili is typically jerky, so it is called twitching motility, as opposed to other forms of bacterial motility, such as that produced by flagella. However, some bacteria, for example Myxococcus xanthus, exhibit gliding motility. Bacterial type IV pili are similar in structure to the component proteins of archaella (archaeal flagella), and both are related to the type II secretion system (T2SS); they are unified under the group of type IV filament systems.
Besides archaella, many archaea produce adhesive type IV pili, which enable archaeal cells to adhere to different substrates. The N-terminal alpha-helical portions of the archaeal type IV pilins and archaellins are homologous to the corresponding regions of bacterial T4P; however, the C-terminal beta-strand-rich domains appear to be unrelated in bacterial and archaeal pilins.

Genetic transformation is the process by which a recipient bacterial cell takes up DNA from a neighboring cell and integrates this DNA into its genome by homologous recombination. In Neisseria meningitidis (also called meningococcus), DNA transformation requires the presence of short DNA uptake sequences (DUSs), which are 9–10 monomers residing in coding regions of the donor DNA. Specific recognition of DUSs is mediated by a type IV pilin. Meningococcal type IV pili bind DNA through the minor pilin ComP via an electropositive stripe that is predicted to be exposed on the filament's surface. ComP displays an exquisite binding preference for selective DUSs. The distribution of DUSs within the N. meningitidis genome favors certain genes, suggesting that there is a bias for genes involved in genomic maintenance and repair.

This family was originally identified as "type IV fimbriae" by their appearance under the microscope. This classification survived as it happens to correspond to a clade. It has been shown that some archaeal type IV pilins can exist in four different conformations, yielding two pili with dramatically different structures. Remarkably, the two pili were produced by the same secretion machinery. However, which of the two pili is formed appears to depend on the growth conditions, suggesting that the two pili are functionally distinct.

Type 1 fimbriae

Another type are called type 1 fimbriae. They contain FimH adhesins at the "tips". The chaperone-usher pathway is responsible for moving many types of fimbriae out of the cell, including type 1 fimbriae and the P fimbriae.

Curli

"Gram-negative bacteria assemble functional amyloid surface fibers called curli." Curli are a type of fimbriae. Curli are composed of proteins called curlins. Some of the genes involved are CsgA, CsgB, CsgC, CsgD, CsgE, CsgF, and CsgG.

Virulence

Pili are responsible for virulence in the pathogenic strains of many bacteria, including E. coli, Vibrio cholerae, and many strains of Streptococcus. This is because the presence of pili greatly enhances bacteria's ability to bind to body tissues, which then increases replication rates and the ability to interact with the host organism. If a species of bacteria has multiple strains but only some are pathogenic, it is likely that the pathogenic strains will have pili while the nonpathogenic strains do not. The development of attachment pili may then result in the development of further virulence traits. Fimbriae are one of the primary mechanisms of virulence for E. coli, Bordetella pertussis, Staphylococcus, and Streptococcus bacteria. Their presence greatly enhances the bacteria's ability to attach to the host and cause disease. Nonpathogenic strains of V. cholerae first evolved pili, allowing them to bind to human tissues and form microcolonies. These pili then served as binding sites for the lysogenic bacteriophage that carries the disease-causing toxin. The gene for this toxin, once incorporated into the bacterium's genome, is expressed when the gene coding for the pilus is expressed (hence the name "toxin co-regulated pilus").
Plasmid
A plasmid is a small, extrachromosomal DNA molecule within a cell that is physically separated from chromosomal DNA and can replicate independently. Plasmids are most commonly found as small, circular, double-stranded DNA molecules in bacteria; however, they are sometimes present in archaea and eukaryotic organisms. Plasmids often carry useful genes, such as those for antibiotic resistance and virulence. While chromosomes are large and contain all the essential genetic information for living under normal conditions, plasmids are usually very small and contain additional genes for special circumstances.

Artificial plasmids are widely used as vectors in molecular cloning, serving to drive the replication of recombinant DNA sequences within host organisms. In the laboratory, plasmids may be introduced into a cell via transformation. Synthetic plasmids can be procured over the Internet from various vendors, built from submitted sequences that are typically designed with software; if a design does not work, the vendor may make additional edits to the submission.

Plasmids are considered replicons, units of DNA capable of replicating autonomously within a suitable host. However, plasmids, like viruses, are not generally classified as life. Plasmids are transmitted from one bacterium to another (even of another species) mostly through conjugation. This host-to-host transfer of genetic material is one mechanism of horizontal gene transfer, and plasmids are considered part of the mobilome. Unlike viruses, which encase their genetic material in a protective protein coat called a capsid, plasmids are "naked" DNA and do not encode the genes necessary to encase the genetic material for transfer to a new host; however, some classes of plasmids encode the conjugative "sex" pilus necessary for their own transfer. Plasmids vary in size from 1 to over 400 kbp, and the number of identical plasmids in a single cell can range from one up to thousands.

History

The term plasmid was coined in 1952 by the American molecular biologist Joshua Lederberg to refer to "any extrachromosomal hereditary determinant." The term's early usage included any bacterial genetic material that exists extrachromosomally for at least part of its replication cycle, but because that description includes bacterial viruses, the notion of plasmid was refined over time to refer to genetic elements that reproduce autonomously. Later, in 1968, it was decided that the term plasmid should be adopted as the term for extrachromosomal genetic elements, and to distinguish it from viruses, the definition was narrowed to genetic elements that exist exclusively or predominantly outside of the chromosome, can replicate autonomously, and contribute to transferring mobile elements between unrelated bacteria.

Properties and characteristics

In order for plasmids to replicate independently within a cell, they must possess a stretch of DNA that can act as an origin of replication. The self-replicating unit, in this case the plasmid, is called a replicon. A typical bacterial replicon may consist of a number of elements, such as the gene for a plasmid-specific replication initiation protein (Rep), repeating units called iterons, DnaA boxes, and an adjacent AT-rich region. Smaller plasmids make use of the host's replicative enzymes to make copies of themselves, while larger plasmids may carry genes specific for the replication of those plasmids.
A few types of plasmids can also insert into the host chromosome, and these integrative plasmids are sometimes referred to as episomes in prokaryotes.

Plasmids almost always carry at least one gene. Many of the genes carried by a plasmid are beneficial for the host cells, for example, enabling the host cell to survive in an environment that would otherwise be lethal or restrictive for growth. Some of these genes encode traits for antibiotic resistance or resistance to heavy metals, while others may produce virulence factors that enable a bacterium to colonize a host and overcome its defences, or may have specific metabolic functions that allow the bacterium to utilize a particular nutrient, including the ability to degrade recalcitrant or toxic organic compounds. Plasmids can also provide bacteria with the ability to fix nitrogen. Some plasmids, called cryptic plasmids, do not appear to provide any clear advantage to their host, yet still persist in bacterial populations. However, recent studies show that they may play a role in antibiotic resistance by contributing to heteroresistance within bacterial populations.

Naturally occurring plasmids vary greatly in their physical properties. Their size can range from very small mini-plasmids of less than 1 kilobase pair (kbp) to very large megaplasmids of several megabase pairs (Mbp). At the upper end, little differs between a megaplasmid and a minichromosome. Plasmids are generally circular, but examples of linear plasmids are also known; these linear plasmids require specialized mechanisms to replicate their ends. Plasmids may be present in an individual cell in varying number, ranging from one to several hundred. The normal number of copies of a plasmid that may be found in a single cell is called the plasmid copy number, and it is determined by how the replication initiation is regulated and by the size of the molecule. Larger plasmids tend to have lower copy numbers. Low-copy-number plasmids that exist only as one or a few copies in each bacterium are, upon cell division, in danger of being lost in one of the segregating bacteria. Such single-copy plasmids have systems that attempt to actively distribute a copy to both daughter cells. These systems, which include the parABS system and the parMRC system, are often referred to as the partition system or partition function of a plasmid. Plasmids of linear form are unknown among phytopathogens, with one exception, Rhodococcus fascians.

Classifications and types

Plasmids may be classified in a number of ways. Plasmids can be broadly classified into conjugative plasmids and non-conjugative plasmids. Conjugative plasmids contain a set of transfer genes which promote sexual conjugation between different cells. In the complex process of conjugation, plasmids may be transferred from one bacterium to another via the sex pili encoded by some of the transfer genes. Non-conjugative plasmids are incapable of initiating conjugation; hence, they can be transferred only with the assistance of conjugative plasmids. An intermediate class of plasmids is mobilizable, carrying only a subset of the genes required for transfer. These plasmids can parasitize a conjugative plasmid, transferring at high frequency only in its presence.

Plasmids can also be classified into incompatibility groups. A microbe can harbour different types of plasmids, but different plasmids can only exist in a single bacterial cell if they are compatible. If two plasmids are not compatible, one or the other will be rapidly lost from the cell.
Different plasmids may therefore be assigned to different incompatibility groups depending on whether they can coexist. Incompatible plasmids (belonging to the same incompatibility group) normally share the same replication or partition mechanisms and can thus not be kept together in a single cell. Another way to classify plasmids is by function. There are five main classes: Fertility (F) plasmids, which contain tra genes. They are capable of conjugation and result in the expression of sex pili. Cells are designated F(+) or F(-) according to whether they carry an F-plasmid, which determines whether they act as a donor or a recipient during conjugation. Resistance (R) plasmids, which contain genes that provide resistance against antibiotics or antibacterial agents; the first were discovered in 1959. R-factors were seen as a major contributing factor to the spread of multidrug resistance in bacteria, and some R-plasmids assist in the transmission of other R-factors that are not themselves self-transmissible. They were historically known as R-factors, before the nature of plasmids was understood. Col plasmids, which contain genes that code for bacteriocins, proteins that can kill other bacteria. Degradative plasmids, which enable the digestion of unusual substances, e.g. toluene and salicylic acid. Virulence plasmids, which turn the bacterium into a pathogen, e.g. the Ti plasmid in Agrobacterium tumefaciens. Bacteria under selective pressure tend to retain plasmids carrying virulence factors, because the benefit to survival outweighs the cost; once the selective pressure is removed, the plasmid may be lost, as the energy expended on maintaining it is no longer justified. Plasmids can belong to more than one of these functional groups. RNA plasmids Although most plasmids are double-stranded DNA molecules, some consist of single-stranded DNA, or predominantly double-stranded RNA. RNA plasmids are non-infectious extrachromosomal linear RNA replicons, both encapsidated and unencapsidated, which have been found in fungi and various plants, from algae to land plants. In many cases, however, it may be difficult or impossible to clearly distinguish RNA plasmids from RNA viruses and other infectious RNAs. Chromids Chromids are elements that exist at the boundary between a chromosome and a plasmid, found in about 10% of bacterial species sequenced by 2009. These elements carry core genes and have codon usage similar to the chromosome, yet use a plasmid-type replication mechanism such as the low-copy-number RepABC system. As a result, they have been variously classified as minichromosomes or megaplasmids in the past. In Vibrio, the bacterium synchronizes the replication of the chromosome and chromid by a conserved genome size ratio. Vectors Artificially constructed plasmids may be used as vectors in genetic engineering. These plasmids serve as important tools in genetics and biotechnology labs, where they are commonly used to clone and amplify (make many copies of) or express particular genes. A wide variety of plasmids are commercially available for such uses. The gene to be replicated is normally inserted into a plasmid that typically contains a number of features for its use. These include a gene that confers resistance to particular antibiotics (ampicillin is most frequently used for bacterial strains), an origin of replication to allow the bacterial cells to replicate the plasmid DNA, and a suitable site for cloning (referred to as a multiple cloning site). 
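The vector features just listed (a selectable antibiotic-resistance marker, an origin of replication, and a multiple cloning site) are exactly the kind of annotations that plasmid-map software keeps track of. The following Python sketch is only an illustration of such a feature map; the plasmid name, coordinates, and feature positions are made up for the example and do not describe any real vector.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str   # e.g. "AmpR", "ori", "MCS"
    start: int  # position on the circular plasmid (bp)
    end: int
    kind: str   # "marker", "origin" or "cloning_site"

@dataclass
class Plasmid:
    name: str
    length_bp: int
    features: list = field(default_factory=list)

    def has_selectable_marker(self) -> bool:
        # Selection on antibiotic plates only works if a marker is present.
        return any(f.kind == "marker" for f in self.features)

# Hypothetical cloning vector patterned loosely on the features described above.
vector = Plasmid(
    name="pExample",  # made-up name
    length_bp=3000,
    features=[
        Feature("AmpR", 1000, 1860, "marker"),    # ampicillin-resistance gene
        Feature("ori", 2300, 2900, "origin"),     # origin of replication
        Feature("MCS", 50, 110, "cloning_site"),  # multiple cloning site
    ],
)

print(vector.has_selectable_marker())  # True: transformants can be selected on ampicillin
```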
DNA structural instability can be defined as a series of spontaneous events that culminate in an unforeseen rearrangement, loss, or gain of genetic material. Such events are frequently triggered by the transposition of mobile elements or by the presence of unstable elements such as non-canonical (non-B) structures. Accessory regions pertaining to the bacterial backbone may engage in a wide range of structural instability phenomena. Well-known catalysts of genetic instability include direct, inverted, and tandem repeats, which are known to be conspicuous in a large number of commercially available cloning and expression vectors. Insertion sequences can also severely impact plasmid function and yield, by leading to deletions and rearrangements, or to activation, down-regulation or inactivation of neighboring gene expression. Therefore, the reduction or complete elimination of extraneous noncoding backbone sequences would markedly reduce the propensity for such events to take place, and consequently the overall recombinogenic potential of the plasmid. Cloning Plasmids are the most commonly used bacterial cloning vectors. These cloning vectors contain a site that allows DNA fragments to be inserted, for example a multiple cloning site or polylinker, which has several commonly used restriction sites to which DNA fragments may be ligated. After the gene of interest is inserted, the plasmids are introduced into bacteria by a process called transformation. These plasmids contain a selectable marker, usually an antibiotic resistance gene, which confers on the bacteria an ability to survive and proliferate in a selective growth medium containing the particular antibiotic. After transformation, the cells are exposed to the selective medium, and only cells containing the plasmid may survive. In this way, the antibiotic acts as a filter to select only the bacteria containing the plasmid DNA. The vector may also contain other marker genes or reporter genes to facilitate selection of plasmids with cloned inserts. Bacteria containing the plasmid can then be grown in large amounts, harvested, and the plasmid of interest isolated using various methods of plasmid preparation. A plasmid cloning vector is typically used to clone DNA fragments of up to 15 kbp. To clone longer lengths of DNA, lambda phage with lysogeny genes deleted, cosmids, bacterial artificial chromosomes, or yeast artificial chromosomes are used. Protein Production Another major use of plasmids is to make large amounts of proteins. In this case, researchers grow bacteria containing a plasmid harboring the gene of interest. Just as the bacterium produces proteins to confer its antibiotic resistance, it can also be induced to produce large amounts of proteins from the inserted gene. This is a cheap and easy way of mass-producing a protein; for example, exploiting the rapid reproduction of E. coli carrying a plasmid that contains the insulin gene yields large amounts of insulin. Gene therapy Plasmids may also be used for gene transfer as a potential treatment in gene therapy, allowing cells to express a protein that they lack. Some forms of gene therapy require the insertion of therapeutic genes at pre-selected chromosomal target sites within the human genome. Plasmid vectors are one of many approaches that could be used for this purpose. Zinc finger nucleases (ZFNs) offer a way to cause a site-specific double-strand break in the genome and trigger homologous recombination. 
Plasmids encoding ZFNs could help deliver a therapeutic gene to a specific site so that cell damage, cancer-causing mutations, or an immune response is avoided. Disease models Plasmids were historically used to genetically engineer the embryonic stem cells of rats to create rat genetic disease models. The limited efficiency of plasmid-based techniques precluded their use in the creation of more accurate human cell models. However, developments in adeno-associated virus recombination techniques, and zinc finger nucleases, have enabled the creation of a new generation of isogenic human disease models. Biosynthetic Gene Cluster (BGC) Plasmids assist in transporting biosynthetic gene clusters (BGCs), sets of genes that encode all the enzymes needed to produce a specialized metabolite (formerly known as a secondary metabolite). A benefit of using plasmids to transfer BGCs is that a suitable host can be chosen to mass-produce the specialized metabolites, some of which are able to control microbial populations. Plasmids can contain and express several BGCs, and a few plasmids are known to be dedicated exclusively to transferring BGCs. BGCs can also be transferred to the host organism's chromosome using a plasmid vector, which allows for gene knockout studies. By taking up BGCs on plasmids, microorganisms can gain an advantage, as production is not limited to antibiotic biosynthesis genes but also includes toxins and antitoxins. Episomes The term episome was introduced by François Jacob and Élie Wollman in 1958 to refer to extra-chromosomal genetic material that may replicate autonomously or become integrated into the chromosome. Since the term was introduced, however, its use has changed, as plasmid has become the preferred term for autonomously replicating extrachromosomal DNA. At a 1968 symposium in London some participants suggested that the term episome be abandoned, although others continued to use the term with a shift in meaning. Today, some authors use episome in the context of prokaryotes to refer to a plasmid that is capable of integrating into the chromosome. The integrative plasmids may be replicated and stably maintained in a cell through multiple generations, but at some stage they will exist as an independent plasmid molecule. In the context of eukaryotes, the term episome is used to mean a non-integrated extrachromosomal closed circular DNA molecule that may be replicated in the nucleus. Viruses are the most common examples of this, such as herpesviruses, adenoviruses, and polyomaviruses, but some are plasmids. Other examples include aberrant chromosomal fragments, such as double minute chromosomes, that can arise during artificial gene amplifications or in pathologic processes (e.g., cancer cell transformation). Episomes in eukaryotes behave similarly to plasmids in prokaryotes in that the DNA is stably maintained and replicated with the host cell. Cytoplasmic viral episomes (as in poxvirus infections) can also occur. Some episomes, such as herpesviruses, replicate by a rolling circle mechanism, similar to bacteriophages (bacterial viruses). Others replicate through a bidirectional replication mechanism, as in theta-type plasmids. In either case, episomes remain physically separate from host cell chromosomes. 
Several cancer viruses, including Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus, are maintained as latent, chromosomally distinct episomes in cancer cells, where the viruses express oncogenes that promote cancer cell proliferation. In cancers, these episomes passively replicate together with host chromosomes when the cell divides. When these viral episomes initiate lytic replication to generate multiple virus particles, they generally activate cellular innate immunity defense mechanisms that kill the host cell. Plasmid maintenance Some plasmids or microbial hosts include an addiction system or postsegregational killing system (PSK), such as the hok/sok (host killing/suppressor of killing) system of plasmid R1 in Escherichia coli. This system produces both a long-lived poison and a short-lived antidote. Several types of plasmid addiction systems (toxin/antitoxin, metabolism-based, and ORT systems) have been described in the literature and used in biotechnical (fermentation) or biomedical (vaccine therapy) applications. Daughter cells that retain a copy of the plasmid survive, while a daughter cell that fails to inherit the plasmid dies or suffers a reduced growth rate because of the lingering poison from the parent cell. In such applications, the overall productivity can thereby be enhanced, because plasmid-free cells are eliminated from the culture. In contrast, plasmids used in biotechnology, such as pUC18, pBR322 and derived vectors, hardly ever contain toxin-antitoxin addiction systems, and therefore need to be kept under antibiotic pressure to avoid plasmid loss. Plasmids in nature Yeast plasmids Yeasts naturally harbour various plasmids. Notable among them are 2 μm plasmids—small circular plasmids often used for genetic engineering of yeast—and linear pGKL plasmids from Kluyveromyces lactis, which are responsible for killer phenotypes. Other types of plasmids are often related to yeast cloning vectors that include: Yeast integrative plasmid (YIp), yeast vectors that rely on integration into the host chromosome for survival and replication, and are usually used when studying the functionality of a solo gene or when the gene is toxic. They are also often associated with the gene URA3, which encodes an enzyme related to the biosynthesis of pyrimidine nucleotides (T, C); Yeast replicative plasmid (YRp), which transports a sequence of chromosomal DNA that includes an origin of replication. These plasmids are less stable, as they can be lost during budding. Plant mitochondrial plasmids The mitochondria of many higher plants contain self-replicating, extra-chromosomal linear or circular DNA molecules which have been considered to be plasmids. These can range from 0.7 kb to 20 kb in size. The plasmids have generally been classified into two categories: circular and linear. Circular plasmids have been isolated and found in many different plants, with those in Vicia faba and Chenopodium album being the most studied; their mechanisms of replication are known. The circular plasmids can replicate using the θ model of replication (as in Vicia faba) or through rolling circle replication (as in C. album). Linear plasmids have been identified in some plant species such as Beta vulgaris, Brassica napus and Zea mays, but are rarer than their circular counterparts. The function and origin of these plasmids remain largely unknown. It has been suggested that the circular plasmids share a common ancestor, and some genes in the mitochondrial plasmids have counterparts in the nuclear DNA, suggesting inter-compartment exchange. 
Meanwhile, the linear plasmids share structural similarities, such as invertrons, with viral DNA and fungal plasmids, and like fungal plasmids they also have a low GC content. These observations have led some to hypothesize that these linear plasmids have viral origins, or that they ended up in plant mitochondria through horizontal gene transfer from pathogenic fungi. Study of plasmids Plasmid DNA extraction Plasmids are often used to purify a specific sequence, since they can easily be purified away from the rest of the genome. For their use as vectors, and for molecular cloning, plasmids often need to be isolated. There are several methods to isolate plasmid DNA from bacteria, including alkaline lysis, enzymatic lysis, and mechanical lysis; plasmid extraction kits are available at scales ranging from the miniprep to the maxiprep or bulkprep. A miniprep can be used to quickly find out whether the plasmid is correct in any of several bacterial clones. The yield is a small amount of impure plasmid DNA, which is sufficient for analysis by restriction digest and for some cloning techniques. For a maxiprep, much larger volumes of bacterial suspension are grown; in essence, this is a scaled-up miniprep followed by additional purification. It results in relatively large amounts (several hundred micrograms) of very pure plasmid DNA. Many commercial kits have been created to perform plasmid extraction at various scales, purities, and levels of automation. Conformations Plasmid DNA may appear in one of five conformations, which (for a given size) run at different speeds in a gel during electrophoresis. The conformations are listed below in order of electrophoretic mobility (speed for a given applied voltage) from slowest to fastest: Nicked open-circular DNA has one strand cut. Relaxed circular DNA is fully intact with both strands uncut but has been enzymatically relaxed (supercoils removed). This can be modeled by letting a twisted extension cord unwind and relax and then plugging it into itself. Linear DNA has free ends, either because both strands have been cut or because the DNA was linear in vivo. This can be modeled with an electrical extension cord that is not plugged into itself. Supercoiled (or covalently closed-circular) DNA is fully intact with both strands uncut, and with an integral twist, resulting in a compact form. This can be modeled by twisting an extension cord and then plugging it into itself. Supercoiled denatured DNA is similar to supercoiled DNA, but has unpaired regions that make it slightly less compact; this can result from excessive alkalinity during plasmid preparation. The rate of migration for small linear fragments is directly proportional to the voltage applied at low voltages. At higher voltages, larger fragments migrate at continuously increasing yet different rates. Thus, the resolution of a gel decreases with increased voltage. At a specified, low voltage, the migration rate of small linear DNA fragments is a function of their length. Large linear fragments (over 20 kb or so) migrate at a certain fixed rate regardless of length. This is because the molecules reptate, with the bulk of the molecule following the leading end through the gel matrix. Restriction digests are frequently used to analyse purified plasmids. Restriction enzymes break the DNA specifically at certain short sequences. The resulting linear fragments form 'bands' after gel electrophoresis. 
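As a rough in silico illustration of the calculation behind such a digest, the sketch below locates the recognition sites of a single cutter in a circular plasmid sequence and derives the fragment lengths that would appear as bands on a gel. The sequence is a short, made-up toy example, and GAATTC (the EcoRI recognition sequence) is used purely for illustration; real analyses would be done with dedicated software on the actual vector sequence.

```python
def find_sites(seq: str, site: str) -> list[int]:
    """Return the start positions of `site` in the circular sequence `seq`."""
    extended = seq + seq[:len(site) - 1]  # allow matches that wrap around the circle
    return [i for i in range(len(seq)) if extended[i:i + len(site)] == site]

def circular_fragments(seq_len: int, cuts: list[int]) -> list[int]:
    """Fragment lengths produced by cutting a circular molecule at the given positions."""
    if not cuts:
        return [seq_len]  # uncut: the molecule stays circular
    cuts = sorted(cuts)
    return [(cuts[(i + 1) % len(cuts)] - cuts[i]) % seq_len or seq_len
            for i in range(len(cuts))]

# Toy circular "plasmid" (not a real vector) containing two GAATTC sites.
plasmid = "ATGAATTCGGCCTTAACCGGAATTCTTAAGGCC"
sites = find_sites(plasmid, "GAATTC")
print(sites)                                    # [2, 19]
print(circular_fragments(len(plasmid), sites))  # [17, 16] -- the expected band sizes
```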
It is possible to purify certain fragments by cutting the bands out of the gel and dissolving the gel to release the DNA fragments. Because of its tight conformation, supercoiled DNA migrates faster through a gel than linear or open-circular DNA. Software for bioinformatics and design The use of plasmids as a technique in molecular biology is supported by bioinformatics software. These programs record the DNA sequence of plasmid vectors, help to predict cut sites of restriction enzymes, and to plan manipulations. Examples of software packages that handle plasmid maps are ApE, Clone Manager, GeneConstructionKit, Geneious, Genome Compiler, LabGenius, Lasergene, MacVector, pDraw32, Serial Cloner, UGENE, VectorFriends, Vector NTI, and WebDSV. These pieces of software help conduct entire experiments in silico before doing wet experiments. Plasmid collections Many plasmids have been created over the years and researchers have given out plasmids to plasmid databases such as the non-profit organisations Addgene and BCCM/GeneCorner. One can find and request plasmids from those databases for research. Researchers also often upload plasmid sequences to the NCBI database, from which sequences of specific plasmids can be retrieved.
Biology and health sciences
Cellular division
null
23977
https://en.wikipedia.org/wiki/Plant%20cell
Plant cell
Plant cells are the cells present in green plants, photosynthetic eukaryotes of the kingdom Plantae. Their distinctive features include primary cell walls containing cellulose, hemicelluloses and pectin, the presence of plastids with the capability to perform photosynthesis and store starch, a large vacuole that regulates turgor pressure, the absence of flagella or centrioles, except in the gametes, and a unique method of cell division involving the formation of a cell plate or phragmoplast that separates the new daughter cells. Characteristics of plant cells Plant cells have cell walls composed of cellulose, hemicelluloses, and pectin and constructed outside the cell membrane. Their composition contrasts with the cell walls of fungi, which are made of chitin, of bacteria, which are made of peptidoglycan and of archaea, which are made of pseudopeptidoglycan. In many cases lignin or suberin are secreted by the protoplast as secondary wall layers inside the primary cell wall. Cutin is secreted outside the primary cell wall and into the outer layers of the secondary cell wall of the epidermal cells of leaves, stems and other above-ground organs to form the plant cuticle. Cell walls perform many essential functions. They provide shape to form the tissue and organs of the plant, and play an important role in intercellular communication and plant-microbe interactions. The cell wall is flexible during growth and has small pores called plasmodesmata that allow the exchange of nutrients and hormones between cells. Many types of plant cells contain a large central vacuole, a water-filled volume enclosed by a membrane known as the tonoplast that maintains the cell's turgor, controls movement of molecules between the cytosol and sap, stores useful material such as phosphorus and nitrogen and digests waste proteins and organelles. Specialized cell-to-cell communication pathways known as plasmodesmata, occur in the form of pores in the primary cell wall through which the plasmalemma and endoplasmic reticulum of adjacent cells are continuous. Plant cells contain plastids, the most notable being chloroplasts, which contain the green-colored pigment chlorophyll that converts the energy of sunlight into chemical energy that the plant uses to make its own food from water and carbon dioxide in the process known as photosynthesis. Other types of plastids are the amyloplasts, specialized for starch storage, elaioplasts specialized for fat storage, and chromoplasts specialized for synthesis and storage of pigments. As in mitochondria, which have a genome encoding 37 genes, plastids have their own genomes of about 100–120 unique genes and are interpreted as having arisen as prokaryotic endosymbionts living in the cells of an early eukaryotic ancestor of the land plants and algae. Cell division in land plants and a few groups of algae, notably the Charophytes and the Chlorophyte Order Trentepohliales, takes place by construction of a phragmoplast as a template for building a cell plate late in cytokinesis. The motile, free-swimming sperm of bryophytes and pteridophytes, cycads and Ginkgo are the only cells of land plants to have flagella similar to those in animal cells. The conifers and flowering plants do not have motile sperm and lack both flagella and centrioles. 
Types of plant cells and tissues Plant cells differentiate from undifferentiated meristematic cells (analogous to the stem cells of animals) to form the major classes of cells and tissues of roots, stems, leaves, flowers, and reproductive structures, each of which may be composed of several cell types. Parenchyma Parenchyma cells are living cells that have functions ranging from storage and support to photosynthesis (mesophyll cells) and phloem loading (transfer cells). Apart from the xylem and phloem in their vascular bundles, leaves are composed mainly of parenchyma cells. Some parenchyma cells, as in the epidermis, are specialized for light penetration and focusing or regulation of gas exchange, but others are among the least specialized cells in plant tissue, and may remain totipotent, capable of dividing to produce new populations of undifferentiated cells, throughout their lives. Parenchyma cells have thin, permeable primary walls enabling the transport of small molecules between them, and their cytoplasm is responsible for a wide range of biochemical functions such as nectar secretion, or the manufacture of secondary products that discourage herbivory. Parenchyma cells that contain many chloroplasts and are concerned primarily with photosynthesis are called chlorenchyma cells. Chlorenchyma cells are parenchyma cells involved in photosynthesis. Others, such as the majority of the parenchyma cells in potato tubers and the seed cotyledons of legumes, have a storage function. Collenchyma Collenchyma cells are alive at maturity and have thickened cellulose cell walls. These cells mature from meristem derivatives that initially resemble parenchyma, but differences quickly become apparent. Plastids do not develop, and the secretory apparatus (ER and Golgi) proliferates to secrete additional primary wall. The wall is most commonly thickest at the corners, where three or more cells come in contact, and thinnest where only two cells come in contact, though other arrangements of the wall thickening are possible. Pectin and hemicellulose are the dominant constituents of collenchyma cell walls of dicotyledon angiosperms, which may contain as little as 20% of cellulose in Petasites. Collenchyma cells are typically quite elongated, and may divide transversely to give a septate appearance. The role of this cell type is to support the plant in axes still growing in length, and to confer flexibility and tensile strength on tissues. The primary wall lacks lignin that would make it tough and rigid, so this cell type provides what could be called plastic support – support that can hold a young stem or petiole into the air, but in cells that can be stretched as the cells around them elongate. Stretchable support (without elastic snap-back) is a good way to describe what collenchyma does. Parts of the strings in celery are collenchyma. Sclerenchyma Sclerenchyma is a tissue composed of two types of cells, sclereids and fibres that have thickened, lignified secondary walls laid down inside of the primary cell wall. The secondary walls harden the cells and make them impermeable to water. Consequently, sclereids and fibres are typically dead at functional maturity, and the cytoplasm is missing, leaving an empty central cavity. Sclereids or stone cells, (from the Greek skleros, hard) are hard, tough cells that give leaves or fruits a gritty texture. They may discourage herbivory by damaging digestive passages in small insect larval stages. 
Sclereids form the hard pit wall of peaches and many other fruits, providing physical protection to the developing kernel. Fibres are elongated cells with lignified secondary walls that provide load-bearing support and tensile strength to the leaves and stems of herbaceous plants. Sclerenchyma fibres are not involved in conduction, either of water and nutrients (as in the xylem) or of carbon compounds (as in the phloem), but it is likely that they evolved as modifications of xylem and phloem initials in early land plants. Xylem Xylem is a complex vascular tissue composed of water-conducting tracheids or vessel elements, together with fibres and parenchyma cells. Tracheids are elongated cells with lignified secondary thickening of the cell walls, specialised for conduction of water, and first appeared in plants during their transition to land in the Silurian period more than 425 million years ago (see Cooksonia). The possession of xylem tracheids defines the vascular plants or Tracheophytes. Tracheids are pointed, elongated xylem cells, the simplest of which have continuous primary cell walls and lignified secondary wall thickenings in the form of rings, hoops, or reticulate networks. More complex tracheids with valve-like perforations called bordered pits characterise the gymnosperms. The ferns and other pteridophytes and the gymnosperms have only xylem tracheids, while the flowering plants also have xylem vessels. Vessel elements are hollow xylem cells without end walls that are aligned end-to-end so as to form long continuous tubes. The bryophytes lack true xylem tissue, but their sporophytes have a water-conducting tissue known as the hydrome that is composed of elongated cells of simpler construction. Phloem Phloem is a specialised tissue for food transport in higher plants, mainly transporting sucrose along pressure gradients generated by osmosis, a process called translocation. Phloem is a complex tissue, consisting of two main cell types, the sieve tubes and the intimately associated companion cells, together with parenchyma cells, phloem fibres and sclereids. Sieve tubes are joined end-to-end with perforated end-plates between known as sieve plates, which allow transport of photosynthate between the sieve elements. The sieve tube elements lack nuclei and ribosomes, and their metabolism and functions are regulated by the adjacent nucleate companion cells. The companion cells, connected to the sieve tubes via plasmodesmata, are responsible for loading the phloem with sugars. The bryophytes lack phloem, but moss sporophytes have a simpler tissue with analogous function known as the leptome. Epidermis The plant epidermis is specialised tissue, composed of parenchyma cells, that covers the external surfaces of leaves, stems and roots. Several cell types may be present in the epidermis. Notable among these are the stomatal guard cells that control the rate of gas exchange between the plant and the atmosphere, glandular and clothing hairs or trichomes, and the root hairs of primary roots. In the shoot epidermis of most plants, only the guard cells have chloroplasts. Chloroplasts contain the green pigment chlorophyll which is needed for photosynthesis. The epidermal cells of aerial organs arise from the superficial layer of cells known as the tunica (L1 and L2 layers) that covers the plant shoot apex, whereas the cortex and vascular tissues arise from innermost layer of the shoot apex known as the corpus (L3 layer). 
The epidermis of roots originates from the layer of cells immediately beneath the root cap. The epidermis of all aerial organs, but not roots, is covered with a cuticle made of polyester cutin or polymer cutan (or both), with a superficial layer of epicuticular waxes. The epidermal cells of the primary shoot are thought to be the only plant cells with the biochemical capacity to synthesize cutin.
Biology and health sciences
Plant cells
null
23978
https://en.wikipedia.org/wiki/Polysaccharide
Polysaccharide
Polysaccharides, or polycarbohydrates, are the most abundant carbohydrates found in food. They are long-chain polymeric carbohydrates composed of monosaccharide units bound together by glycosidic linkages. These carbohydrates can react with water (hydrolysis), using amylase enzymes as catalysts, which produces the constituent sugars (monosaccharides or oligosaccharides). They range in structure from linear to highly branched. Examples include storage polysaccharides such as starch, glycogen and galactogen, and structural polysaccharides such as hemicellulose and chitin. Polysaccharides are often quite heterogeneous, containing slight modifications of the repeating unit. Depending on the structure, these macromolecules can have distinct properties from their monosaccharide building blocks. They may be amorphous or even insoluble in water. When all the monosaccharides in a polysaccharide are the same type, the polysaccharide is called a homopolysaccharide or homoglycan, but when more than one type of monosaccharide is present, it is called a heteropolysaccharide or heteroglycan. Natural saccharides are generally composed of simple carbohydrates called monosaccharides with general formula (CH2O)n where n is three or more. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. Polysaccharides, meanwhile, have a general formula of Cx(H2O)y where x and y are usually large numbers between 200 and 2500. When the repeating units in the polymer backbone are six-carbon monosaccharides, as is often the case, the general formula simplifies to (C6H10O5)n. As a rule of thumb, polysaccharides contain more than ten monosaccharide units, whereas oligosaccharides contain three to ten monosaccharide units, but the precise cutoff varies somewhat according to the convention. Polysaccharides are an important class of biological polymers. Their function in living organisms is usually either structure- or storage-related. Starch (a polymer of glucose) is used as a storage polysaccharide in plants, being found in the form of both amylose and the branched amylopectin. In animals, the structurally similar glucose polymer is the more densely branched glycogen, sometimes called "animal starch". Glycogen's properties allow it to be metabolized more quickly, which suits the active lives of moving animals. In bacteria, polysaccharides play an important role in multicellularity. Cellulose and chitin are examples of structural polysaccharides. Cellulose is used in the cell walls of plants and other organisms and is said to be the most abundant organic molecule on Earth. It has many uses, playing a significant role in the paper and textile industries, and is used as a feedstock for the production of rayon (via the viscose process), cellulose acetate, celluloid, and nitrocellulose. Chitin has a similar structure but has nitrogen-containing side branches, increasing its strength. It is found in arthropod exoskeletons and in the cell walls of some fungi. It also has multiple uses, including surgical threads. Polysaccharides also include callose or laminarin, chrysolaminarin, xylan, arabinoxylan, mannan, fucoidan, and galactomannan. Function Structure Nutrition Polysaccharides are common sources of energy. Many organisms can easily break down starches into glucose; however, most organisms cannot metabolize other polysaccharides such as cellulose, chitin, and arabinoxylans. Some bacteria and protists can metabolize these carbohydrate types. 
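As a small numerical illustration of the general formulas above: each monosaccharide joined into a polysaccharide loses one molecule of water at the glycosidic bond, which is why a glucose polymer has the repeating unit C6H10O5 rather than C6H12O6, and why hydrolysis adds that water back. The sketch below is only a back-of-the-envelope mass balance; the chain length is chosen arbitrarily for the example.

```python
# Average atomic masses in g/mol.
C, H, O = 12.011, 1.008, 15.999

glucose = 6 * C + 12 * H + 6 * O  # C6H12O6, about 180.2 g/mol
residue = 6 * C + 10 * H + 5 * O  # C6H10O5, glucose minus one water
water   = 2 * H + O

n = 1000                     # arbitrary chain length for illustration
chain = n * residue + water  # a linear chain retains one terminal water

# Hydrolysis: (C6H10O5)n + (n - 1) H2O -> n C6H12O6, so mass is conserved.
assert abs(chain + (n - 1) * water - n * glucose) < 1e-6
print(f"{chain / 1000:.1f} kg/mol for a {n}-residue glucose polymer")
```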
Ruminants and termites, for example, use microorganisms to process cellulose. Even though these complex polysaccharides are not very digestible, they provide important dietary elements for humans. Called dietary fiber, these carbohydrates enhance digestion. The main action of dietary fiber is to change the nature of the contents of the gastrointestinal tract and how other nutrients and chemicals are absorbed. Soluble fiber binds to bile acids in the small intestine, making them less likely to enter the body; this, in turn, lowers cholesterol levels in the blood. Soluble fiber also attenuates the absorption of sugar, reduces sugar response after eating, normalizes blood lipid levels and, once fermented in the colon, produces short-chain fatty acids as byproducts with wide-ranging physiological activities (discussion below). Although insoluble fiber is associated with reduced diabetes risk, the mechanism by which this occurs is unknown. Not yet formally proposed as an essential macronutrient (as of 2005), dietary fiber is nevertheless regarded as important for the diet, with regulatory authorities in many developed countries recommending increases in fiber intake. Storage polysaccharides Starch Starch is a glucose polymer in which glucopyranose units are bonded by alpha-linkages. It is made up of a mixture of amylose (15–20%) and amylopectin (80–85%). Amylose consists of a linear chain of several hundred glucose molecules, and Amylopectin is a branched molecule made of several thousand glucose units (every chain of 24–30 glucose units is one unit of Amylopectin). Starches are insoluble in water. They can be digested by breaking the alpha-linkages (glycosidic bonds). Both humans and other animals have amylases so that they can digest starches. Potato, rice, wheat, and maize are major sources of starch in the human diet. The formations of starches are the ways that plants store glucose. Glycogen Glycogen serves as the secondary long-term energy storage in animal and fungal cells, with the primary energy stores being held in adipose tissue. Glycogen is made primarily by the liver and the muscles, but can also be made by glycogenesis within the brain and stomach. Glycogen is analogous to starch, a glucose polymer in plants, and is sometimes referred to as animal starch, having a similar structure to amylopectin but more extensively branched and compact than starch. Glycogen is a polymer of α(1→4) glycosidic bonds linked with α(1→6)-linked branches. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact and more immediately available as an energy reserve than triglycerides (lipids). In the liver hepatocytes, glycogen can compose up to 8 percent (100–120 grams in an adult) of the fresh weight soon after a meal. Only the glycogen stored in the liver can be made accessible to other organs. In the muscles, glycogen is found in a low concentration of one to two percent of the muscle mass. The amount of glycogen stored in the body—especially within the muscles, liver, and red blood cells—varies with physical activity, basal metabolic rate, and eating habits such as intermittent fasting. Small amounts of glycogen are found in the kidneys and even smaller amounts in certain glial cells in the brain and white blood cells. The uterus also stores glycogen during pregnancy to nourish the embryo. 
Glycogen is composed of a branched chain of glucose residues. It is primarily stored in the liver and muscles. It is an energy reserve for animals. It is the chief form of carbohydrate stored in animal organisms. It is insoluble in water. It turns brown-red when mixed with iodine. It also yields glucose on hydrolysis. Galactogen Galactogen is a polysaccharide of galactose that functions as energy storage in pulmonate snails and some Caenogastropoda. This polysaccharide is exclusively associated with reproduction and is found only in the albumen gland of the female snail reproductive system and in the perivitelline fluid of eggs. Galactogen serves as an energy reserve for developing embryos and hatchlings, and is later replaced by glycogen in juveniles and adults. Galactogens also have applications in hydrogel structures formed by crosslinking polysaccharide-based nanoparticles and functional polymers. These hydrogel structures can be designed to release particular nanoparticle pharmaceuticals and/or encapsulated therapeutics over time or in response to environmental stimuli. Galactogens are polysaccharides with binding affinity for bioanalytes. By end-point attachment of galactogens to other polysaccharides constituting the surface of medical devices, they can therefore be used as a method of capturing bioanalytes (e.g., CTCs), of releasing the captured bioanalytes, and of analysing them. Inulin Inulin is a naturally occurring polysaccharide, a complex carbohydrate composed of fructose; it is a plant-derived food component that human digestive enzymes cannot completely break down. The inulins belong to a class of dietary fibers known as fructans. Inulin is used by some plants as a means of storing energy and is typically found in roots or rhizomes. Most plants that synthesize and store inulin do not store other forms of carbohydrates such as starch. In the United States in 2018, the Food and Drug Administration approved inulin as a dietary fiber ingredient used to improve the nutritional value of manufactured food products. Structural polysaccharides Arabinoxylans Arabinoxylans are found in both the primary and secondary cell walls of plants and are copolymers of two sugars: arabinose and xylose. They may also have beneficial effects on human health. Cellulose The structural components of plants are formed primarily from cellulose. Wood is largely cellulose and lignin, while paper and cotton are nearly pure cellulose. Cellulose is a polymer made with repeated glucose units bonded together by beta-linkages. Humans and many animals lack an enzyme to break the beta-linkages, so they do not digest cellulose. Certain animals, such as termites, can digest cellulose, because bacteria possessing the enzyme are present in their gut. Cellulose is insoluble in water. It does not change color when mixed with iodine. On hydrolysis, it yields glucose. It is the most abundant carbohydrate in nature. Chitin Chitin is one of many naturally occurring polymers. It forms a structural component of many animals, for example in exoskeletons. It is biodegradable in the natural environment over time. Its breakdown may be catalyzed by enzymes called chitinases, secreted by microorganisms such as bacteria and fungi and produced by some plants. Some of these microorganisms have receptors for the simple sugars released by the decomposition of chitin. If chitin is detected, they then produce enzymes to digest it by cleaving the glycosidic bonds in order to convert it to simple sugars and ammonia. 
Chemically, chitin is closely related to chitosan (a more water-soluble derivative of chitin). It is also closely related to cellulose in that it is a long unbranched chain of glucose derivatives. Both materials contribute structure and strength, protecting the organism. Pectins Pectins are a family of complex polysaccharides that contain 1,4-linked α-D-galacturonic acid residues. They are present in most primary cell walls and in the nonwoody parts of terrestrial plants. Acidic polysaccharides Acidic polysaccharides are polysaccharides that contain carboxyl groups, phosphate groups and/or sulfuric ester groups. Polysaccharides containing sulfate groups can be isolated from algae or obtained by chemical modification. Polysaccharides are a major class of biomolecules. They are long chains of carbohydrate molecules, composed of several smaller monosaccharides. These complex bio-macromolecules function as an important source of energy in animal cells and form a structural component of plant cells. A polysaccharide can be a homopolysaccharide or a heteropolysaccharide depending upon the types of monosaccharide present. Polysaccharides can be straight chains of monosaccharides, known as linear polysaccharides, or they can be branched, known as branched polysaccharides. Bacterial polysaccharides Pathogenic bacteria commonly produce a bacterial capsule, a thick, mucus-like layer of polysaccharide. The capsule cloaks antigenic proteins on the bacterial surface that would otherwise provoke an immune response and thereby lead to the destruction of the bacteria. Capsular polysaccharides are water-soluble, commonly acidic, and have molecular weights on the order of 100,000 to 2,000,000 daltons. They are linear and consist of regularly repeating subunits of one to six monosaccharides. There is enormous structural diversity; nearly two hundred different polysaccharides are produced by E. coli alone. Mixtures of capsular polysaccharides, either conjugated or native, are used as vaccines. Bacteria and many other microbes, including fungi and algae, often secrete polysaccharides to help them adhere to surfaces and to prevent them from drying out. Humans have developed some of these polysaccharides into useful products, including xanthan gum, dextran, welan gum, gellan gum, diutan gum and pullulan. Most of these polysaccharides exhibit useful visco-elastic properties when dissolved in water at very low levels. This makes various liquids used in everyday life, such as some foods, lotions, cleaners, and paints, viscous when stationary, but much more free-flowing when even slight shear is applied by stirring or shaking, pouring, wiping, or brushing. This property is named pseudoplasticity or shear thinning; the study of such matters is called rheology. Aqueous solutions of the polysaccharide alone have a curious behavior when stirred: after stirring ceases, the solution initially continues to swirl due to momentum, then slows to a standstill due to viscosity and reverses direction briefly before stopping. This recoil is due to the elastic effect of the polysaccharide chains, previously stretched in solution, returning to their relaxed state. Cell-surface polysaccharides play diverse roles in bacterial ecology and physiology. They serve as a barrier between the cell wall and the environment and mediate host-pathogen interactions. Polysaccharides also play an important role in the formation of biofilms and the structuring of complex life forms in bacteria like Myxococcus xanthus. 
These polysaccharides are synthesized from nucleotide-activated precursors (called nucleotide sugars) and, in most cases, all the enzymes necessary for biosynthesis, assembly and transport of the completed polymer are encoded by genes organized in dedicated clusters within the genome of the organism. Lipopolysaccharide is one of the most important cell-surface polysaccharides, as it plays a key structural role in outer membrane integrity, as well as being an important mediator of host-pathogen interactions. The enzymes that make the A-band (homopolymeric) and B-band (heteropolymeric) O-antigens have been identified and the metabolic pathways defined. The exopolysaccharide alginate is a linear copolymer of β-1,4-linked D-mannuronic acid and L-guluronic acid residues, and is responsible for the mucoid phenotype of late-stage cystic fibrosis disease. The pel and psl loci are two recently discovered gene clusters that also encode exopolysaccharides found to be important for biofilm formation. Rhamnolipid is a biosurfactant whose production is tightly regulated at the transcriptional level, but the precise role that it plays in disease is not well understood at present. Protein glycosylation, particularly of pilin and flagellin, became a focus of research by several groups from about 2007, and has been shown to be important for adhesion and invasion during bacterial infection. Chemical identification tests for polysaccharides Periodic acid-Schiff stain (PAS) Polysaccharides with unprotected vicinal diols or amino sugars (where some hydroxyl groups are replaced with amines) give a positive periodic acid-Schiff stain (PAS). The list of polysaccharides that stain with PAS is long. Although mucins of epithelial origins stain with PAS, mucins of connective tissue origin have so many acidic substitutions that they do not have enough glycol or amino-alcohol groups left to react with PAS. Derivatives By chemical modifications certain properties of polysaccharides can be improved. Various ligands can be covalently attached to their hydroxyl groups. Due to the covalent attachment of methyl-, hydroxyethyl- or carboxymethyl- groups on cellulose, for instance, high swelling properties in aqueous media can be introduced. Another example is thiolated polysaccharides. (See thiomers.) Thiol groups are covalently attached to polysaccharides such as hyaluronic acid or chitosan. As thiolated polysaccharides can crosslink via disulfide bond formation, they form stable three-dimensional networks. Furthermore, they can bind to cysteine subunits of proteins via disulfide bonds. Because of these bonds, polysaccharides can be covalently attached to endogenous proteins such as mucins or keratins.
Biology and health sciences
Carbohydrates
Biology
23984
https://en.wikipedia.org/wiki/Pyxis
Pyxis
Pyxis is a small and faint constellation in the southern sky. Abbreviated from Pyxis Nautica, its name is Latin for a mariner's compass (contrasting with Circinus, which represents a draftsman's compasses). Pyxis was introduced by Nicolas-Louis de Lacaille in the 18th century, and is counted among the 88 modern constellations. The plane of the Milky Way passes through Pyxis. A faint constellation, its three brightest stars—Alpha, Beta and Gamma Pyxidis—are in a rough line. At magnitude 3.68, Alpha is the constellation's brightest star. It is a blue-white star approximately distant and around 22,000 times as luminous as the Sun. Pyxis is located close to the stars that formed the old constellation Argo Navis, the ship of Jason and the Argonauts. Parts of Argo Navis were the Carina (the keel or hull), the Puppis (the stern), and the Vela (the sails). These eventually became their own constellations. In the 19th century, John Herschel suggested renaming Pyxis to Malus (meaning the mast) but the suggestion was not followed. T Pyxidis, located about 4 degrees northeast of Alpha Pyxidis, is a recurrent nova that has flared up to magnitude 7 every few decades. Also, three star systems in Pyxis have confirmed exoplanets. The Pyxis globular cluster is situated about 130,000 light-years away in the galactic halo. This region was not thought to contain globular clusters. The possibility has been raised that this object might have escaped from the Large Magellanic Cloud. History In ancient Chinese astronomy, Alpha, Beta, and Gamma Pyxidis formed part of Tianmiao, a celestial temple honouring the ancestors of the emperor, along with stars from neighbouring Antlia. The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as la Boussole (the Marine Compass) in 1752, after he had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Lacaille Latinised the name to Pixis [sic] Nautica on his 1763 chart. The Ancient Greeks identified the four main stars of Pyxis as the mast of the mythological Jason's ship, Argo Navis. German astronomer Johann Bode defined the constellation Lochium Funis, the Log and Line—a nautical device once used for measuring speed and distance travelled at sea—around Pyxis in his 1801 star atlas, but the depiction did not survive. In 1844 John Herschel attempted to resurrect the classical configuration of Argo Navis by renaming it Malus the Mast, a suggestion followed by Francis Baily, but Benjamin Gould restored Lacaille's nomenclature. For instance, Alpha Pyxidis is referenced as α Mali in an old catalog of the United States Naval Observatory (star 3766, page 97). Characteristics Covering 220.8 square degrees and hence 0.535% of the sky, Pyxis ranks 65th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 52°N. It is most visible in the evening sky in February and March. A small constellation, it is bordered by Hydra to the north, Puppis to the west, Vela to the south, and Antlia to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Pyx". 
The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight sides (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −17.41° and −37.29°. Features Stars Lacaille gave Bayer designations to ten stars now named Alpha to Lambda Pyxidis, skipping the Greek letters iota and kappa. Although a nautical element, the constellation was not an integral part of the old Argo Navis and hence did not share in the original Bayer designations of that constellation, which were split between Carina, Vela and Puppis. Pyxis is a faint constellation, its three brightest stars—Alpha, Beta and Gamma Pyxidis—forming a rough line. Overall, there are 41 stars within the constellation's borders with apparent magnitudes brighter than or equal to 6.5. With an apparent magnitude of 3.68, Alpha Pyxidis is the brightest star in the constellation. Located 880 ± 30 light-years distant from Earth, it is a blue-white giant star of spectral type B1.5III that is around 22,000 times as luminous as the Sun and has 9.4 ± 0.7 times its diameter. It began life with a mass 12.1 ± 0.6 times that of the Sun, almost 15 million years ago. Its light is dimmed by 30% due to interstellar dust, so would have a brighter magnitude of 3.31 if not for this. The second brightest star at magnitude 3.97 is Beta Pyxidis, a yellow bright giant or supergiant of spectral type G7Ib-II that is around 435 times as luminous as the Sun, lying 420 ± 10 light-years distant away from Earth. It has a companion star of magnitude 12.5 separated by 9 arcseconds. Gamma Pyxidis is a star of magnitude 4.02 that lies 207 ± 2 light-years distant. It is an orange giant of spectral type K3III that has cooled and swollen to 3.7 times the diameter of the Sun after exhausting its core hydrogen. Kappa Pyxidis was catalogued but not given a Bayer designation by Lacaille, but Gould felt the star was bright enough to warrant a letter. Kappa has a magnitude of 4.62 and is 560 ± 50 light-years distant. An orange giant of spectral type K4/K5III, Kappa has a luminosity approximately 965 times that of the Sun. It is separated by 2.1 arcseconds from a magnitude 10 star. Theta Pyxidis is a red giant of spectral type M1III and semi-regular variable with two measured periods of 13 and 98.3 days, and an average magnitude of 4.71, and is 500 ± 30 light-years distant from Earth. It has expanded to approximately 54 times the diameter of the Sun. Located around 4 degrees northeast of Alpha is T Pyxidis, a binary star system composed of a white dwarf with around 0.8 times the Sun's mass and a red dwarf that orbit each other every 1.8 hours. This system is located around 15,500 light-years away from Earth. A recurrent nova, it has brightened to the 7th magnitude in the years 1890, 1902, 1920, 1944, 1966 and 2011 from a baseline of around 14th magnitude. These outbursts are thought to be due to the white dwarf accreting material from its companion and ejecting periodically. TY Pyxidis is an eclipsing binary star whose apparent magnitude ranges from 6.85 to 7.5 over 3.2 days. The two components are both of spectral type G5IV with a diameter 2.2 times, and mass 1.2 times that of the Sun, and revolve around each other every 3.2 days. The system is classified as a RS Canum Venaticorum variable, a binary system with prominent starspot activity, and lies 184 ± 5 light-years away. 
The system emits X-rays, and analysing the emission curve over time led researchers to conclude that there was a loop of material arcing between the two stars. RZ Pyxidis is another eclipsing binary system, made up of two young stars less than 200,000 years old. Both are hot blue-white stars of spectral type B7V and are around 2.5 times the size of the Sun. One is around five times as luminous as the Sun and the other around four times as luminous. The system is classified as a Beta Lyrae variable, the apparent magnitude varying from 8.83 to 9.72 over 0.66 days. XX Pyxidis is one of the more-studied members of a class of stars known as Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. Astronomers made more sense of its pulsations when it became clear that it is also a binary star system. The main star is a white main sequence star of spectral type A4V that is around 1.85 ± 0.05 times as massive as the Sun. Its companion is most likely a red dwarf of spectral type M3V, around 0.3 times as massive as the Sun. The two are very close—possibly only 3 times the diameter of the Sun between them—and orbit each other every 1.15 days. The brighter star is deformed into an egg shape. AK Pyxidis is a red giant of spectral type M5III and semi-regular variable that varies between magnitudes 6.09 and 6.51. Its pulsations take place over multiple periods simultaneously of 55.5, 57.9, 86.7, 162.9 and 232.6 days. UZ Pyxidis is another semi-regular variable red giant, this time a carbon star, that is around 3560 times as luminous as the Sun with a surface temperature of 3482 K, located 2116 light-years away from Earth. It varies between magnitudes 6.99 and 7.83 over 159 days. VY Pyxidis is a BL Herculis variable (type II Cepheid), ranging between apparent magnitudes 7.13 and 7.40 over a period of 1.24 days. Located around 650 light-years distant, it shines with a luminosity approximately 45 times that of the Sun. The closest star to Earth in the constellation is Gliese 318, a white dwarf of spectral class DA5 and magnitude 11.85. Its distance has been calculated to be 26 light-years, or 28.7 ± 0.5 light-years distant from Earth. It has around 45% of the Sun's mass, yet only 0.15% of its luminosity. WISEPC J083641.12-185947.2 is a brown dwarf of spectral type T8p located around 72 light-years from Earth. Discovered by infrared astronomy in 2011, it has a magnitude of 18.79. Planetary systems Pyxis is home to three stars with confirmed planetary systems—all discovered by Doppler spectroscopy. A hot Jupiter, HD 73256 b, that orbits HD 73256 every 2.55 days, was discovered using the CORALIE spectrograph in 2003. The host star is a yellow star of spectral type G9V that has 69% of our Sun's luminosity, 89% of its diameter and 105% of its mass. Around 119 light-years away, it shines with an apparent magnitude of 8.08 and is around a billion years old. HD 73267 b was discovered with the High Accuracy Radial Velocity Planet Searcher (HARPS) in 2008. It orbits HD 73267 every 1260 days, a 7 billion-year-old star of spectral type G5V that is around 89% as massive as the Sun. A red dwarf of spectral type M2.5V that has around 42% the Sun's mass, Gliese 317 is orbited by two gas giant planets. Around 50 light-years distant from Earth, it is a good candidate for future searches for more terrestrial rocky planets. 
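The magnitudes, distances, and extinction figures quoted in this section are tied together by the standard astronomical magnitude scale, in which a flux ratio f corresponds to a magnitude difference of −2.5 log10(f). The short sketch below is only a consistency check on the numbers already given above for Alpha Pyxidis, not an independent measurement.

```python
import math

apparent_mag = 3.68  # observed apparent magnitude of Alpha Pyxidis (quoted above)
transmitted = 0.70   # 30% of its light is lost to interstellar dust (quoted above)

# Dimming to 70% corresponds to about 0.39 magnitudes of extinction,
# so the undimmed magnitude comes out close to the 3.31 quoted above.
extinction = -2.5 * math.log10(transmitted)
print(round(apparent_mag - extinction, 2))  # ~3.29

# Distance modulus for the quoted distance of roughly 880 light-years (1 pc = 3.2616 ly):
distance_pc = 880 / 3.2616
print(round(5 * math.log10(distance_pc) - 5, 2))  # ~7.2 mag between apparent and absolute magnitude
```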
Deep sky objects Pyxis lies in the plane of the Milky Way, although part of the eastern edge is dark, with interstellar material obscuring the galactic arm there. NGC 2818 is a planetary nebula that lies within a dim open cluster of magnitude 8.2. NGC 2818A is an open cluster that lies along the same line of sight. K 1-2 is a planetary nebula whose central star is a spectroscopic binary composed of two stars in close orbit, with jets emanating from the system. The surface temperature of one component has been estimated to be as high as 85,000 K. NGC 2627 is an open cluster of magnitude 8.4 that is visible in binoculars. Discovered in 1995, the Pyxis globular cluster is a 13.3 ± 1.3 billion-year-old globular cluster situated around 130,000 light-years distant from Earth and around 133,000 light-years distant from the centre of the Milky Way—a region not previously thought to contain globular clusters. Located in the galactic halo, it was noted to lie on the same plane as the Large Magellanic Cloud, and the possibility has been raised that it might be an escaped object from that galaxy. NGC 2613 is a spiral galaxy of magnitude 10.5 which appears spindle-shaped as it is almost edge-on to observers on Earth. Henize 2-10 is a dwarf galaxy which lies 30 million light-years away. It has a black hole of around a million solar masses at its centre. Known as a starburst galaxy due to its very high rate of star formation, it has a bluish colour due to the huge numbers of young stars within it.
Physical sciences
Other
Astronomy
23990
https://en.wikipedia.org/wiki/Puppis
Puppis
Puppis ("stern") is a constellation in the southern sky. It was originally part of the traditional constellation of Argo Navis (the ship of Jason and the Argonauts), which was divided into three parts, the other two being Carina (the keel and hull) and Vela (the sails). Puppis is the largest of the three constellations in square degrees. It is one of the 88 modern constellations recognized by the International Astronomical Union. History The constellation of Argo Navis is recorded in Greek texts, derived from ancient Egypt around 1000 BC. According to Plutarch, its equivalent in Egyptian astronomy was the "Boat of Osiris". As Argo Navis was roughly 28% larger than the next largest constellation, Hydra, it was subdivided into three sections in 1752 by the French astronomer Nicolas Louis de Lacaille, including Puppis, which he referred to as "Argûs in puppi". Despite the division, Lacaille kept a single set of Bayer designations for the whole constellation, Argo. Thus Carina has α, β and ε; Vela has γ and δ; Puppis has ζ; and so on. In 1844, John Herschel proposed the complete division of Argo Navis in accordance with Lacaille's divisions. However, the old constellation continued to be used into the 20th century, and officially received a three-letter designation alongside its divisions in 1922. Puppis, along with Carina and Vela, was included in the list of modern IAU constellations in 1930. Features Named stars Planetary systems Several extrasolar planet systems have been found around stars in the constellation Puppis, including:
- On July 1, 2003, a planet was found orbiting the star HD 70642. The planet is much like Jupiter, with a wide, circular orbit and a long period.
- On May 17, 2006, HD 69830 (the nearest star of this constellation) was discovered to have three Neptune-mass planets, the first multi-planetary system without any Jupiter-like or Saturn-like planets. The star also hosts an asteroid belt in the region between the middle and outer planets.
- On June 21, 2007, the first extrasolar planet found in the open cluster NGC 2423 was discovered around the red giant star NGC 2423-3. The planet is at least 10.6 times the mass of Jupiter and orbits at a distance of 2.1 AU.
- On September 22, 2008, two Jupiter-like planets were discovered around HD 60532: HD 60532 b orbits at 0.759 AU, taking 201.3 days to complete an orbit, while HD 60532 c orbits at 1.58 AU in 604 days.
- In 2023, astronomers detected two ice-giant exoplanets (each with a mass of tens of Earths) undergoing a collision event around the 300-million-year-old star designated 2MASS J08152329-3859234.
Deep-sky objects As the Milky Way runs through Puppis, there are many open clusters in the constellation. M46 and M47 are two open clusters in the same binocular field. M47 can be seen with the naked eye under dark skies, and its brightest stars are 6th magnitude. Messier 93 (M93) is another open cluster somewhat to the south. NGC 2451 is a very bright open cluster containing the star c Puppis, and the nearby NGC 2477 is a good target for small telescopes. The star Pi Puppis is the main component of a bright group of stars known as Collinder 135. M46 is a circular open cluster with an overall magnitude of 6.1 at a distance of approximately 5400 light-years from Earth. The planetary nebula NGC 2438 is superimposed on it; the nebula is approximately 2900 light-years from Earth. M46 is classified as a Shapley class f and a Trumpler class III 2 m cluster.
This means that it is a rich cluster that appears detached from the surrounding star field, with no noticeable concentration of stars toward its center. The cluster's stars, numbering between 50 and 100, have a moderate range in brightness.
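The orbital distances and periods quoted above for the HD 60532 planets are consistent with Kepler's third law. As a quick check in solar units (the stellar mass of about 1.44 solar masses is an assumed value here, not one given in the text):

```latex
P = \sqrt{\frac{a^{3}}{M_\star}}
  = \sqrt{\frac{(0.759)^{3}}{1.44}}\ \mathrm{yr}
  \approx 0.551\ \mathrm{yr} \approx 201\ \mathrm{days}
```

matching the quoted 201.3-day period for HD 60532 b; the same formula with a = 1.58 AU gives about 604 days for HD 60532 c.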
Physical sciences
Other
Astronomy
23992
https://en.wikipedia.org/wiki/Piscis%20Austrinus
Piscis Austrinus
Piscis Austrinus is a constellation in the southern celestial hemisphere. The name is Latin for "the southern fish", in contrast with the larger constellation Pisces, which represents a pair of fish. Before the 20th century, it was also known as Piscis Notius. Piscis Austrinus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The stars of the modern constellation Grus once formed the "tail" of Piscis Austrinus. In 1597 (or 1598), Petrus Plancius carved out a separate constellation and named it after the crane. It is a faint constellation, containing only one star brighter than 4th magnitude: Fomalhaut, which is 1st magnitude and the 18th-brightest star in the night sky. Fomalhaut is surrounded by a circumstellar disk, and possibly hosts a planet. Other objects contained within the boundaries of the constellation include Lacaille 9352, one of the brightest red dwarf stars in the night sky (though still too faint to see with the naked eye); and PKS 2155-304, a BL Lacertae object that is one of the optically brightest blazars in the sky. Origins Piscis Austrinus originated with the Babylonian constellation simply known as the Fish (MUL.KU). Professor of astronomy Bradley Schaefer has proposed that ancient observers must have been able to see as far south as Mu Piscis Austrini to define a pattern that looked like a fish. Mu Piscis Austrini is indeed explicitly mentioned in the Almagest, and the constellation itself was inherited from ancient Babylon. Along with the eagle Aquila, the crow Corvus and the water snake Hydra, Piscis Austrinus was introduced to the Ancient Greeks around 500 BCE; the constellations marked the summer and winter solstices. In Greek mythology, this constellation is known as the Great Fish, and it is portrayed as swallowing the water being poured out by Aquarius, the water-bearer constellation. The two fish of the constellation Pisces are said to be the offspring of the Great Fish. In Egyptian mythology, this fish saved the life of the Egyptian goddess Isis, so she placed this fish and its descendants into the heavens as constellations of stars. In the 5th century BC, the Greek historian Ctesias wrote that the fish was said to have lived in a lake near Bambyce in Syria and had saved Derceto, daughter of Aphrodite, and for this deed was placed in the heavens. For this reason, fish were sacred and not eaten by many Syrians. Characteristics Piscis Austrinus is a constellation bordered by Capricornus to the northwest, Microscopium to the southwest, Grus to the south, Sculptor to the east, and Aquarius to the north. Its recommended three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "PsA". Ptolemy called the constellation Ichthus Notios ("Southern Fish") in his Almagest; this was Latinised to Piscis Notius and used by the German celestial cartographers Johann Bayer and Johann Elert Bode. Bayer also called it Piscis Meridianus and Piscis Austrinus, while the French astronomer Nicolas-Louis de Lacaille called it Piscis Australis. The English Astronomer Royal John Flamsteed adopted Piscis Austrinus, a form most subsequent authors followed. The official constellation boundaries, as set by the Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments.
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −24.83° and −36.46°. The whole constellation is visible to observers south of latitude 53°N. Features Stars Ancient astronomers counted twelve stars as belonging to Piscis Austrinus, though one was later incorporated into nearby Grus as Gamma Gruis. Other stars became part of Microscopium. Bayer used the Greek letters alpha through mu to label the most prominent stars in the constellation. Ptolemy had catalogued Fomalhaut (Alpha Piscis Austrini) as belonging to both this constellation and Aquarius. Lacaille redrew the constellation, as it was poorly visible from Europe, adding pi and relabelling gamma, delta and epsilon as epsilon, eta and gamma, respectively. However, Baily and Gould did not uphold these changes, as Bayer's original chart was fairly accurate. Bode added tau and upsilon. Flamsteed gave 24 stars Flamsteed designations, though the first four numbered became part of Microscopium. Within the constellation's borders, there are 47 stars brighter than or equal to apparent magnitude 6.5. Traditionally representing the mouth of the fish, Fomalhaut is the brightest star in the constellation and the 18th-brightest star in the night sky, with an apparent magnitude of 1.16. Located 25.13 ± 0.09 light-years away, it is a white main-sequence star that is 1.92 ± 0.02 times as massive and 16.63 ± 0.48 times as luminous as the Sun. Its companion Fomalhaut b was thought to be the first extrasolar planet ever detected in a visible-light image, thanks to the Hubble Space Telescope, but infrared observations have since cast doubt on this claim: it is instead thought to be a dispersing cloud of dust. TW Piscis Austrini can be seen close by and is possibly associated with Fomalhaut, as it lies within a light-year of it. Of magnitude 6.5, it is a BY Draconis variable. The second-brightest star in the constellation, Epsilon Piscis Austrini, is a blue-white main-sequence star of magnitude +4.17. Located 400 ± 20 light-years distant, it is 4.10 ± 0.19 times as massive as the Sun and around 661 times as luminous. Beta, Delta and Zeta constitute the Tien Kang ("heavenly rope") in China. Beta is a white main-sequence star of apparent magnitude 4.29 that is of similar size and luminosity to Fomalhaut but five times as remote, at around 143 ± 1 light-years distant from Earth. Delta Piscis Austrini is a double star with components of magnitude 4.2 and 9.2. The brighter component is a yellow giant of spectral type G8III; it is a red clump star that is fusing helium in its core. It is 172 ± 2 light-years distant from Earth. Zeta Piscis Austrini is an orange giant star of spectral type K1III located 413 ± 2 light-years distant from Earth. It is a suspected variable star. S Piscis Austrini is a long-period Mira-type variable red giant which ranges between magnitudes 8.0 and 14.5 over a period of 271.7 days, and V Piscis Austrini is a semi-regular variable ranging between magnitudes 8.0 and 9.0 over 148 days. Lacaille 9352 is a faint red dwarf star of spectral type M0.5V that is just under half the Sun's diameter and mass. A mere 10.74 light-years away, it is too dim to be seen with the naked eye at magnitude 7.34. In June 2020, two super-Earth planets were discovered orbiting it via the radial velocity method. Exoplanets have been discovered in five other star systems in the constellation.
HD 205739 is a yellow-white main-sequence star of spectral type F7V that has a planet around 1.37 times as massive as Jupiter orbiting it with a period of 279 days, with a suggestion of a second planet. HD 216770 is an orange dwarf that is orbited every 118 days by a Jupiter-like planet. HD 207832 is a star of spectral type G5V with a diameter and mass about 90% of those of the Sun, and around 77% of its luminosity. Two gas giant planets with masses around 56% and 73% that of Jupiter were discovered in 2012 via the radial velocity method. With orbits of 162 and 1156 days, they average around 0.57 and 2.11 astronomical units away from their star. WASP-112 and WASP-124 are two Sun-like stars that have planets discovered by the transit method. Deep sky objects NGC 7172, NGC 7174 and NGC 7314 are three galaxies of magnitudes 11.9, 12.5 and 10.9, respectively. NGC 7259 is a spiral galaxy that hosted a supernova, SN 2009ip, in 2009. At redshift z = 0.116, the BL Lacertae object PKS 2155-304 is one of the brightest blazars in the sky.
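The distances, apparent magnitudes and luminosities quoted in this section are linked by the distance modulus. As a rough consistency check on the Fomalhaut figures (taking the Sun's absolute visual magnitude as 4.83, a standard value not stated in the text, and neglecting bolometric corrections):

```latex
M = m - 5\log_{10}\frac{d}{10\,\mathrm{pc}}
  = 1.16 - 5\log_{10}\frac{7.70}{10} \approx 1.73,
\qquad
\frac{L}{L_\odot} = 10^{(4.83 - M)/2.5} \approx 17
```

where 25.13 light-years is 7.70 parsecs; the result is in line with the quoted 16.63 ± 0.48 solar luminosities.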
Physical sciences
Other
Astronomy
24006
https://en.wikipedia.org/wiki/Paraffin%20wax
Paraffin wax
Paraffin wax (or petroleum wax) is a soft colorless solid derived from petroleum, coal, or oil shale that consists of a mixture of hydrocarbon molecules containing between 20 and 40 carbon atoms. It is solid at room temperature and begins to melt above approximately , and its boiling point is above . Common applications for paraffin wax include lubrication, electrical insulation, and candles; dyed paraffin wax can be made into crayons. Un-dyed, unscented paraffin candles are odorless and bluish-white. Paraffin wax was first created by Carl Reichenbach in Germany in 1830 and marked a major advancement in candlemaking technology, as it burned more cleanly and reliably than tallow candles and was cheaper to produce. In chemistry, paraffin is used synonymously with alkane, indicating hydrocarbons with the general formula CnH2n+2. The name is derived from Latin parum ("very little") + affinis, meaning "lacking affinity" or "lacking reactivity", referring to paraffin's unreactive nature. Properties Paraffin wax is mostly found as a white, odorless, flavorless, waxy solid, with a typical melting point between about , and a density of around 900 kg/m3. It is insoluble in water, but soluble in ether, benzene, and certain esters. Paraffin is unaffected by most common chemical reagents but burns readily. Its heat of combustion is 42 MJ/kg. Paraffin wax is an excellent electrical insulator, with a resistivity of between 10^13 and 10^17 ohm-metres. This is better than nearly all other materials except some plastics (notably PTFE). It is an effective neutron moderator and was used in James Chadwick's 1932 experiments to identify the neutron. Paraffin wax is an excellent material for storing heat, with a specific heat capacity of 2.14–2.9 J⋅g−1⋅K−1 (joules per gram per kelvin) and a heat of fusion of 200–220 J⋅g−1. Paraffin wax phase-change cooling coupled with retractable radiators was used to cool the electronics of the Lunar Roving Vehicle during the crewed missions to the Moon in the early 1970s. Wax expands considerably when it melts, and so it is used in wax element thermostats for industrial, domestic and, particularly, automobile purposes. If pure paraffin wax is heated to near its flash point in a half-open glass vessel that is then suddenly cooled, its vapors may autoignite. History Paraffin wax was first created in 1830 by the German chemist Carl Reichenbach when he attempted to develop a method to efficiently separate and refine waxy substances naturally occurring in petroleum. Paraffin represented a major advance in the candle-making industry because it burned cleanly and was cheaper to manufacture than other candle fuels such as beeswax and tallow. Paraffin wax initially suffered from a low melting point; this was remedied by adding stearic acid. The production of paraffin wax enjoyed a boom in the early 20th century due to the growth of the oil and meatpacking industries, which created paraffin and stearic acid as byproducts. Manufacturing The feedstock for paraffin is slack wax, which is a mixture of oil and wax, a byproduct of the refining of lubricating oil. The first step in making paraffin wax is to remove the oil (de-oiling or de-waxing) from the slack wax. The oil is separated by crystallization. Most commonly, the slack wax is heated, mixed with one or more solvents such as a ketone, and then cooled. As it cools, wax crystallizes out of the solution, leaving only oil.
This mixture is filtered into two streams: solid (wax plus some solvent) and liquid (oil and solvent). After the solvent is recovered by distillation, the resulting products are called "product wax" (or "press wax") and "foots oil". The lower the percentage of oil in the wax, the more refined it is considered to be (semi-refined versus fully refined). The product wax may be further processed to remove colors and odors. The wax may finally be blended together to give certain desired properties such as melt point and penetration. Paraffin wax is sold in either liquid or solid form. Applications In industrial applications, it is often useful to modify the crystal properties of the paraffin wax, typically by adding branching to the existing carbon backbone chain. The modification is usually done with additives, such as EVA copolymers, microcrystalline wax, or forms of polyethylene. The branched properties result in a modified paraffin with a higher viscosity, smaller crystalline structure, and modified functional properties. Pure paraffin wax is rarely used for carving original models for casting metal and other materials in the lost wax process, as it is relatively brittle at room temperature and presents the risks of chipping and breakage when worked. Soft and pliable waxes, like beeswax, may be preferred for such sculpture, but "investment casting waxes", often paraffin-based, are expressly formulated for the purpose. In a histology or pathology laboratory, paraffin wax is used to impregnate tissue prior to sectioning thin samples. Water is removed from the tissue through ascending strengths of alcohol (75% to absolute), and the alcohol is then cleared from the tissue in an organic solvent such as xylene. The tissue is then placed in paraffin wax for several hours, then set in a mold with wax to cool and solidify. Sections are then cut on a microtome. Other uses
- Agent for preparation of specimens for histology
- Anti-caking agent, moisture repellent, and dust-binding coating for fertilizers
- Antiozonant agent: blends of paraffin and micro waxes are used in rubber compounds to prevent cracking of the rubber; the admixture of wax migrates to the surface of the product and forms a protective layer. The layer can also act as a release agent, helping the product separate from its mould.
- Bicycle chain lubrication
- Bullet lubricant, with other ingredients such as olive oil and beeswax
- Candle-making
- Coatings for waxed paper or waxed cotton
- Component of surfboard wax, ski wax, and skateboard wax
- Crayons
- Food-grade paraffin wax:
  - Shiny coating used in candy-making; although edible, it is nondigestible, passing through the body without being broken down
  - Coating for many kinds of hard cheese, such as Edam cheese
  - Sealant for jars, cans, and bottles
  - Chewing gum additive
- Forensic investigations: the nitrate test uses paraffin wax to detect nitrates and nitrites on the hand of a shooting suspect
- Fuel for fire breathing
- Investment casting
- Lava lamps
- Manufacture of boiled leather armor and books
- Mechanical thermostats and actuators, as an expansion medium for activating such devices
- Microwax: food additive, a glazing agent with E number E905
- Moisturiser in toiletries and cosmetics such as Vaseline
- Neutron radiation shielding
- Phase change material for thermal energy storage; used by MESSENGER (the Mercury spacecraft) when the spacecraft was unable to radiate excessive heat
- Phlegmatizing agent, commonly used to stabilise/desensitize high explosives such as RDX
- Potting material to encapsulate electronic components such as guitar pickups, transformers, and inductors, to prevent moisture ingress and to reduce electromagnetically induced acoustic noise and microphonic effects
- Prevention of oxidation on the surface of polished steel and iron
- Solid ink color blocks of wax for thermal printers; the wax is melted and then sprayed on the paper, producing images with a shiny surface
- Solid propellant for hybrid rocket motors
- Textile manufacturing processes, such as that used for Eisengarn thread
- Thickening agent in many paintballs
- Wax baths for occupational and physical therapies and cosmetic treatments
- Wax carving
- Wood finishing
Occupational safety People can be exposed to paraffin in the workplace by breathing it in, and through skin and eye contact. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) for paraffin wax fume exposure of 2 mg/m3 over an 8-hour workday.
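As a back-of-the-envelope illustration of the heat-storage figures given above, the following sketch estimates how much energy a kilogram of paraffin absorbs when warmed through its melting point. The specific heat and heat of fusion are the ranges quoted in the Properties section; the 50 °C melting point and the 25–70 °C cycle are assumed round values, since the exact grade-dependent melting range is not specified here.

```python
# Rough energy-storage estimate for paraffin wax as a phase-change material.
# Specific heat (2.14-2.9 J/g/K) and heat of fusion (200-220 J/g) are from
# the text above; the 50 C melting point and 25->70 C cycle are assumed.

SPECIFIC_HEAT = 2.5      # J/(g*K), midpoint of the quoted 2.14-2.9 range
HEAT_OF_FUSION = 210.0   # J/g, midpoint of the quoted 200-220 range
T_MELT = 50.0            # deg C (assumed typical value)

def stored_energy(mass_g, t_start, t_end):
    """Energy (J) absorbed heating paraffin from t_start to t_end deg C,
    including the latent heat if the melting point is crossed."""
    sensible = mass_g * SPECIFIC_HEAT * (t_end - t_start)
    latent = mass_g * HEAT_OF_FUSION if t_start < T_MELT <= t_end else 0.0
    return sensible + latent

# One kilogram cycled from room temperature to 70 deg C:
e = stored_energy(1000.0, 25.0, 70.0)
print(f"{e / 1000:.0f} kJ stored")  # ~322 kJ, most of it latent heat
```

The dominance of the latent term is what makes wax attractive for applications such as the Lunar Roving Vehicle electronics cooling mentioned above.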
Physical sciences
Hydrocarbons
Chemistry
24007
https://en.wikipedia.org/wiki/Pearl
Pearl
A pearl is a hard, glistening object produced within the soft tissue (specifically the mantle) of a living shelled mollusk or another animal, such as fossil conulariids. Just like the shell of a mollusk, a pearl is composed of calcium carbonate (mainly aragonite or a mixture of aragonite and calcite) in minute crystalline form, which has been deposited in concentric layers. The ideal pearl is perfectly round and smooth, but many other shapes, known as baroque pearls, can occur. The finest quality natural pearls have been highly valued as gemstones and objects of beauty for many centuries. Because of this, pearl has become a metaphor for something rare, fine, admirable and valuable. The most valuable pearls occur spontaneously in the wild, but they are extremely rare. These wild pearls are referred to as natural pearls. Cultured or farmed pearls from pearl oysters and freshwater mussels make up the majority of those currently sold. Imitation pearls are also widely sold in inexpensive jewelry. Pearls have been harvested and cultivated primarily for use in jewelry, but in the past they were also used to adorn clothing. They have also been crushed and used in cosmetics, medicines and paint formulations. Whether wild or cultured, gem-quality pearls are almost always nacreous and iridescent, like the interior of the shell that produces them. However, almost all species of shelled mollusks are capable of producing pearls (technically "calcareous concretions") of lesser shine or less spherical shape. Although these may also be legitimately referred to as "pearls" by gemological labs and also under U.S. Federal Trade Commission rules, and are formed in the same way, most of them have no value except as curiosities. Etymology The English word pearl comes from French, ultimately from a Latin word for the ham- or mutton-leg-shaped bivalve. The scientific name for the family of pearl-bearing oysters, Margaritiferidae, comes from the Old Persian word for pearl, which is also the source of the English name Margaret. Definition All shelled mollusks can, by natural processes, produce some kind of "pearl" when an irritating microscopic object becomes trapped within the mantle folds, but the great majority of these "pearls" are not valued as gemstones. Nacreous pearls, the best-known and most commercially significant, are primarily produced by two groups of molluskan bivalves or clams. A nacreous pearl is made from layers of nacre, by the same living process as is used in the secretion of the mother of pearl which lines the shell. Natural (or wild) pearls, formed without human intervention, are very rare. Many hundreds of pearl oysters or mussels must be gathered and opened, and thus killed, to find even one wild pearl; for many centuries, this was the only way pearls were obtained, and it is why pearls fetched such extraordinary prices in the past. Cultured pearls are formed in pearl farms, using human intervention as well as natural processes. One family of nacreous pearl bivalves – the pearl oyster – lives in the sea, while the other – a very different group of bivalves – lives in freshwater; these are the river mussels, such as the freshwater pearl mussel. Saltwater pearls can grow in several species of marine pearl oysters in the family Pteriidae. Freshwater pearls grow within certain (but by no means all) species of freshwater mussels in the order Unionida, the families Unionidae and Margaritiferidae.
Physical properties The unique luster of pearls depends upon the reflection, refraction, and diffraction of light from the translucent layers. The thinner and more numerous the layers in the pearl, the finer the luster. The iridescence that pearls display is caused by the overlapping of successive layers, which breaks up light falling on the surface. In addition, pearls (especially cultured freshwater pearls) can be dyed yellow, green, blue, brown, pink, purple, or black. The most valuable pearls have a metallic, highly reflective luster. Because pearls are made primarily of calcium carbonate, they can be dissolved in vinegar. Calcium carbonate is susceptible to even a weak acid solution because the crystals react with the acetic acid in the vinegar to form calcium acetate and carbon dioxide. Freshwater and saltwater pearls Freshwater and saltwater pearls may sometimes look quite similar, but they come from different sources. Freshwater pearls form in various species of freshwater mussels, family Unionidae, which live in lakes, rivers, ponds and other bodies of fresh water. These freshwater pearl mussels occur not only in hotter climates, but also in colder, more temperate areas such as Scotland (where they are protected under law). Most freshwater cultured pearls sold today come from China. Saltwater pearls grow within pearl oysters, family Pteriidae, which live in oceans. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls. Formation The mollusk's mantle (protective membrane) deposits layers of calcium carbonate (CaCO3) in the form of the mineral aragonite or a mixture of aragonite and calcite (polymorphs with the same chemical formula, but different crystal structures) held together by an organic horn-like compound called conchiolin. The combination of aragonite and conchiolin is called nacre, which makes up mother-of-pearl. The commonly held belief that a grain of sand acts as the irritant is in fact rarely the case. Typical stimuli include organic material, parasites, or even damage that displaces mantle tissue to another part of the mollusk's body. These small particles or organisms gain entry when the shell valves are open for feeding or respiration. In cultured pearls, the irritant is typically an introduced piece of the mantle epithelium, with or without a spherical bead (beaded or beadless cultured pearls). Natural pearls Natural pearls are nearly 100% calcium carbonate and conchiolin. It is thought that natural pearls form under a set of accidental conditions when a microscopic intruder or parasite enters a bivalve mollusk and settles inside the shell. The mollusk, irritated by the intruder, forms a pearl sac of external mantle tissue cells and secretes the calcium carbonate and conchiolin to cover the irritant. This secretion process is repeated many times, thus producing a pearl. Natural pearls come in many shapes, with perfectly round ones being comparatively rare. Typically, the build-up of a natural pearl consists of a brown central zone formed by columnar calcium carbonate (usually calcite, sometimes columnar aragonite) and a yellowish to white outer zone consisting of nacre (tabular aragonite). In a pearl cross-section such as the diagram, these two different materials can be seen. The presence of columnar calcium carbonate rich in organic material indicates juvenile mantle tissue that formed during the early stage of pearl development. 
Displaced living cells with a well-defined task may continue to perform their function in their new location, often resulting in a cyst. Such displacement may occur via an injury. The fragile rim of the shell is exposed and is prone to damage and injury. Crabs, other predators and parasites such as worm larvae may mount traumatic attacks and cause injuries in which some external mantle tissue cells are disconnected from their layer. Embedded in the conjunctive tissue of the mantle, these cells may survive and form a small pocket in which they continue to secrete calcium carbonate, their natural product. The pocket is called a pearl sac, and it grows with time by cell division. The juvenile mantle tissue cells, according to their stage of growth, secrete columnar calcium carbonate from the pearl sac's inner surface. In time, the pearl sac's external mantle cells proceed to the formation of tabular aragonite. When the transition to nacre secretion occurs, the brown pebble becomes covered with a nacreous coating. During this process, the pearl sac seems to travel into the shell; however, the sac actually stays in its original relative position within the mantle tissue while the shell itself grows. After a couple of years, a pearl forms and the shell may be found by a lucky pearl fisher. Cultured pearls Cultured pearls are the response of the shell to a tissue implant. A tiny piece of mantle tissue (called a graft) from a donor shell is transplanted into a recipient shell, causing a pearl sac to form into which the tissue precipitates calcium carbonate. There are a number of methods for producing cultured pearls: using freshwater or seawater shells, transplanting the graft into the mantle or into the gonad, and adding a spherical bead as a nucleus. Most saltwater cultured pearls are grown with beads. Trade names of cultured pearls are akoya, white or golden South Sea, and black Tahitian. Most beadless cultured pearls are mantle-grown in freshwater shells in China, and are known as freshwater cultured pearls. Cultured pearls can be distinguished from natural pearls by X-ray examination. Nucleated cultured pearls are often 'preformed' as they tend to follow the shape of the implanted shell bead nucleus. After a bead is inserted into the oyster, it secretes a few layers of nacre around the bead; the resulting cultured pearl can then be harvested in as few as twelve to eighteen months. When a cultured pearl with a bead nucleus is X-rayed, it reveals a different structure to that of a natural pearl. A beaded cultured pearl shows a solid center with no concentric growth rings, whereas a natural pearl shows a series of concentric growth rings. A beadless cultured pearl (whether of freshwater or saltwater origin) may show growth rings, but also a complex central cavity, evidence of the first precipitation of the young pearl sac. Imitation pearls Some imitation pearls (also called shell pearls) are simply made of mother-of-pearl, coral or conch shell, while others are made from glass and are coated with a solution containing fish scales called essence d'Orient. Gemological identification A well-equipped gem testing laboratory can distinguish natural pearls from cultured pearls by using gemological X-ray equipment to examine the center of a pearl. With X-rays it is possible to see the growth rings of the pearl, where the layers of calcium carbonate are separated by thin layers of conchiolin.
The differentiation of natural pearls from non-beaded cultured pearls can be very difficult without the use of this X-ray technique. Natural and cultured pearls can be distinguished from imitation pearls using a microscope. Another method of testing for imitations is to rub two pearls against each other. Imitation pearls are completely smooth, but natural and cultured pearls are composed of nacre platelets, making both feel slightly gritty. Value of a natural pearl Fine quality natural pearls are very rare jewels. Their values are determined similarly to those of other precious gems, according to size, shape, color, quality of surface, orient and luster. Single natural pearls are often sold as collectors' items, or set as centerpieces in unique jewelry. Very few matched strands of natural pearls exist, and those that do often sell for hundreds of thousands of dollars. (In 1917, jeweler Pierre Cartier purchased the Fifth Avenue mansion that is now the New York Cartier store in exchange for a matched double strand of natural pearls Cartier had been collecting for years; at the time, it was valued at US$1 million.) The introduction and advance of the cultured pearl hit the pearl industry hard. Pearl dealers publicly disputed the authenticity of these new cultured products, which left many consumers uneasy and confused about their much lower prices. Essentially, the controversy damaged the images of both natural and cultured pearls. By the 1950s, when a significant number of women in developed countries could afford their own cultured pearl necklace, natural pearls were reduced to a small, exclusive niche in the pearl industry. Origin of a natural pearl Previously, natural pearls were found in many parts of the world. Present-day natural pearling is confined mostly to the Persian Gulf, in seas off Bahrain. Australia also has one of the world's last remaining fleets of pearl diving ships. Australian pearl divers dive for south sea pearl oysters to be used in the cultured south sea pearl industry. The catch of pearl oysters is similar to the numbers of oysters taken during the natural pearl days. Hence significant numbers of natural pearls are still found in Australian Indian Ocean waters from wild oysters. X-ray examination is required to positively verify natural pearls found today. Types of cultured pearls A keshi pearl is a pearl composed entirely of nacre that results from mishaps in the culturing process. Most are quite small, typically only a few millimeters in diameter, and are often irregular in shape. In seeding a cultured pearl, a piece of mantle muscle from a sacrificed oyster is placed with a bead of mother of pearl within a host oyster. If the piece of mantle slips off the bead, a keshi pearl of baroque shape forms about the mantle piece. Therefore, while a keshi pearl could be considered superior to cultured pearls with a mother-of-pearl bead center, in the cultured pearl industry the oyster resources used to create a mistaken all-nacre baroque pearl are a drain on the production of the intended round cultured pearl. As a result, the pearl industry is making ongoing attempts to improve culturing technique so that keshi pearls do not occur. All-nacre pearls may one day be limited to naturally found pearls. Today many "keshi" pearls are actually intentional, with post-harvest shells returned to the water to regenerate a pearl in the existing pearl sac.
Tahitian pearls, frequently referred to as black pearls, are highly valued because of their rarity; the culturing process for them dictates a smaller volume output, and they can never be mass-produced because, in common with most sea pearls, the oyster can only be nucleated with one pearl at a time, while freshwater mussels are capable of multiple pearl implants. Before the days of cultured pearls, black pearls were rare and highly valued, for the simple reason that white pearl oysters rarely produced naturally black pearls, and black pearl oysters rarely produced any natural pearls at all. Since the development of pearl culture technology, the black pearl oyster Pinctada margaritifera, found in Tahiti and many other Pacific islands including the Cook Islands and Fiji, has been extensively used for producing cultured pearls. The rarity of the black cultured pearl is now a "comparative" issue. The black cultured pearl is rare when compared to Chinese freshwater cultured pearls and Japanese and Chinese akoya cultured pearls, and is more valuable than these pearls. However, it is more abundant than the South Sea pearl, which is more valuable than the black cultured pearl. This is simply because the black pearl oyster Pinctada margaritifera is far more abundant than the elusive, rare, and larger South Sea pearl oyster Pinctada maxima, which cannot be found in lagoons, but must be dived for in a small number of deep ocean habitats or grown in hatcheries. Natural black pearls are rare, with black pearls having a body color that may be assessed as silver, silver blue, gold, brown-black, green-black, or black. Black cultured pearls from the black pearl oyster – Pinctada margaritifera – are not South Sea pearls, although they are often mistakenly described as black South Sea pearls. In the absence of an official definition for the pearl from the black pearl oyster, these pearls are usually referred to as "black pearls". The correct definition of a South Sea pearl – as described by CIBJO and GIA – is a pearl produced by the Pinctada maxima pearl oyster. South Sea pearls are the color of their host Pinctada maxima oyster – and can be white, silver, pink, gold, cream, and any combination of these basic colors, including overtones of the various colors of the rainbow displayed in the pearl nacre of the oyster shell itself. South Sea pearls are the largest and rarest of the cultured pearls – making them the most valuable. Prized for their exquisitely beautiful 'orient' or lustre, South Sea pearls are now farmed in various parts of the world where the Pinctada maxima oysters can be found, with the finest South Sea pearls being produced by Paspaley along the remote coastline of North-Western Australia. White and silver colored South Sea pearls tend to come from the Broome area of Australia, while golden colored ones are more prevalent in the Philippines and Indonesia. A farm in the Gulf of California, Mexico, is culturing pearls from the black lipped Pinctada mazatlanica oysters and the rainbow lipped Pteria sterna oysters. Also called Concha Nácar, the pearls from these rainbow lipped oysters fluoresce red under ultraviolet light. From other species Biologically speaking, under the right set of circumstances, almost any shelled mollusk can produce some kind of pearl. However, most of these molluskan pearls have no luster or iridescence. The great majority of mollusk species produce pearls which are not attractive, and are sometimes not even very durable.
Such pearls usually have no value at all, except perhaps to a scientist or collector, or as a curiosity. These objects used to be referred to as "calcareous concretions" by some gemologists, even though a malacologist would still consider them to be pearls. Valueless pearls of this type are sometimes found in edible mussels, edible oysters, escargot snails, and so on. The GIA and CIBJO now simply use the term 'pearl' (or, where appropriate, the more descriptive term 'non-nacreous pearl') when referring to such items and, under Federal Trade Commission rules, various mollusk pearls may be referred to as 'pearls', without qualification. A few species produce pearls that can be of interest as gemstones. These species include the bailer shell Melo, the giant clam Tridacna, various scallop species, Pen shells Pinna, and the Haliotis iris species of abalone. Pearls of abalone are cultured pearls, or blister pearls, unique to New Zealand waters, and are commonly referred to as 'blue pearls'. They are admired for their luster and naturally bright vibrant colors that are often compared to opal. Another example is the conch pearl (sometimes referred to simply as the 'pink pearl'), which is found very rarely growing between the mantle and the shell of the queen conch or pink conch, Strombus gigas, a large sea snail or marine gastropod from the Caribbean Sea. These pearls, which are often pink in color, are a by-product of the conch fishing industry, and the best of them display a shimmering optical effect related to chatoyance known as 'flame structure'. Somewhat similar gastropod pearls, this time more orange in hue, are (again very rarely) found in the horse conch Triplofusus papillosus. The second largest pearl known was found in the Philippines in 1934 and is known as the Pearl of Lao Tzu. It is a naturally occurring, non-nacreous, calcareous concretion (pearl) from a giant clam. Because it did not grow in a pearl oyster it is not pearly; instead the surface is glossy like porcelain. Other pearls from giant clams are known to exist, but this is a particularly large one weighing 14 lb (6.4 kg). The largest known pearl (also from a giant clam) is the Pearl of Puerto, also found in the Philippines by a fisherman from Puerto Princesa, Palawan Island. The enormous pearl is 30 cm wide (1 ft), 67 cm long (2.2 ft) and weighs 75 lb (34 kg). History Pearl hunting The ancient chronicle Mahavamsa mentions the thriving pearl industry in the port of Oruwella in the Gulf of Mannar in Sri Lanka. It also records that eight varieties of pearls accompanied Prince Vijaya's embassy to the Pandyan king as well as king Devanampiya Tissa's embassy to Emperor Ashoka. Pliny the Elder (23–79 AD) praised the pearl fishery of the Gulf as most productive in the world. For thousands of years, seawater pearls were retrieved by divers in the Indian Ocean in areas such as the Persian Gulf, the Red Sea and the Gulf of Mannar. Evidence also suggest a prehistoric origin to pearl diving in these regions. Starting in the Han dynasty (206 BC–220 AD), the Chinese hunted extensively for seawater pearls in the South China Sea, particularly in what is now Tolo Harbour in Hong Kong. Tanka pearl divers of twelfth century China attached ropes to their waists in order to be safely brought back up to the surface. When Spanish conquistadors arrived in the Western Hemisphere, they discovered that around the islands of Cubagua and Margarita, some 200 km north of the Venezuelan coast, was an extensive pearl bed (a bed of pearl oysters). 
One discovered and named pearl, La Peregrina pearl, was offered to Philip II of Spain, who intended to give it as a gift to his daughter on the occasion of her marriage, but the King found it so beautiful that he kept it for himself. Later, he elevated it to be part of the Spanish Crown Jewels. From then on, the pearl was recorded in every royal inventory for more than 200 years. According to Garcilasso de la Vega, who says that he saw La Peregrina at Seville in 1607, it was found at Panama in 1560 by a slave worker who was rewarded with his liberty, and his owner with the office of alcalde of Panama. Margarita pearls are extremely difficult to find today and are known for their unique yellowish color. Before the beginning of the 20th century, pearl hunting was the most common way of harvesting pearls. Divers manually pulled oysters from ocean floors and river bottoms and checked them individually for pearls. Not all mussels and oysters produce pearls. In a haul of three tons, only three or four oysters will produce perfect pearls. British Isles Pearls were one of the attractions which drew Julius Caesar to Britain. They are, for the most part, freshwater pearls from mussels. Pearling was banned in the U.K. in 1998 due to the endangered status of river mussels. The discovery of the Abernethy pearl in the River Tay, and publicity about its sale for a substantial sum, had resulted in heavy exploitation of mussel colonies by casual weekend pearl hunters during the 1970s and 80s. When it was permitted, pearling was carried on mainly by Scottish Travellers, who found that pearls varied from river to river, with the River Oykel in the Highlands being noted for the finest rose-pink pearls. There are two firms in Scotland that are licensed to sell pre-1998 freshwater pearls. Pearl farming Today, the cultured pearls on the market can be divided into two categories. The first category covers the beaded cultured pearls, including akoya, South Sea and Tahitian. These pearls are gonad-grown, and usually one pearl is grown at a time. This limits the number of pearls at a harvest period. The pearls are usually harvested after one year for akoya, 2–4 years for Tahitian and South Sea, and 2–7 years for freshwater. This perliculture process was first developed by the British biologist William Saville-Kent, who passed the information along to Tatsuhei Mise and Tokichi Nishikawa of Japan. The second category includes the non-beaded freshwater cultured pearls, like the Biwa or Chinese pearls. As they grow in the mantle, where up to 25 grafts can be implanted on each wing, these pearls are much more frequent and saturate the market completely. An impressive improvement in quality has taken place over the last ten years, as can be seen when the former rice-grain-shaped pebbles are compared with the near-round pearls of today. More recently, large, near-perfectly round, bead-nucleated pearls up to 15 mm in diameter have been produced with metallic luster. The nucleus bead in a beaded cultured pearl is generally a polished sphere made from freshwater mussel shell. Along with a small piece of mantle tissue from another mollusk (the donor shell) to serve as a catalyst for the pearl sac, it is surgically implanted into the gonad (reproductive organ) of a saltwater mollusk. In freshwater perliculture, only the piece of tissue is used in most cases, and it is inserted into the fleshy mantle of the host mussel.
South Sea and Tahitian pearl oysters, also known as Pinctada maxima and Pinctada margaritifera, which survive the subsequent surgery to remove the finished pearl, are often implanted with a new, larger bead as part of the same procedure and then returned to the water for another 2–3 years of growth. Despite the common misperception, Mikimoto did not discover the process of pearl culture. The accepted process of pearl culture was developed by the British biologist William Saville-Kent in Australia and brought to Japan by Tokichi Nishikawa and Tatsuhei Mise. Nishikawa was granted the patent in 1916, and married the daughter of Mikimoto. Mikimoto was able to use Nishikawa's technology. After the patent was granted in 1916, the technology was immediately applied commercially to akoya pearl oysters in Japan. Mise's brother was the first to produce a commercial crop of pearls in the akoya oyster. Mitsubishi's Baron Iwasaki immediately applied the technology to the South Sea pearl oyster in 1917 in the Philippines, and later in Buton and Palau. Mitsubishi was the first to produce a cultured South Sea pearl – although it was not until 1928 that the first small commercial crop of pearls was successfully produced. The original Japanese cultured pearls, known as akoya pearls, are produced by a species of small pearl oyster, Pinctada fucata martensii; because the oyster itself is small, akoya pearls larger than 10 mm in diameter are extremely rare and highly priced. Today, a hybrid mollusk is used in both Japan and China in the production of akoya pearls. Cultured pearls were sold in cans for the export market. These were packed in Japan by the I.C.P. Canning Factory (International Pearl Company Ltd.) in Nagasaki Prefecture, Japan. Timeline of pearl production Mitsubishi commenced pearl culture with the South Sea pearl oyster in 1916, as soon as the technology patent was commercialized. By 1931 this project was showing signs of success, but was upset by the death of Tatsuhei Mise. Although the project was recommenced after Tatsuhei's death, it was discontinued at the beginning of WWII, before significant production of pearls had been achieved. After WWII, new South Sea pearl projects were commenced in the early 1950s at Kuri Bay and Port Essington in Australia, and in Burma. Japanese companies were involved in all projects, using technicians from the original Mitsubishi pre-war South Sea projects. Kuri Bay is now the location of one of the largest and most well-known pearl farms, owned by Paspaley, the biggest producer of South Sea pearls in the world. In 2010, China overtook Japan in akoya pearl production. Japan has all but ceased its production of akoya pearls smaller than 8 mm. Japan maintains its status as a pearl processing center, however, and imports the majority of Chinese akoya pearl production. These pearls are then processed (often simply matched and sorted), relabeled as product of Japan, and exported. In the past two decades, cultured pearls have been produced using larger oysters in the south Pacific and Indian Ocean. The largest pearl oyster is the Pinctada maxima, which is roughly the size of a dinner plate. South Sea pearls are characterized by their large size and warm luster. Sizes up to 14 mm in diameter are not uncommon. In 2013, Indonesia supplied 43 percent of the international South Sea pearl market. The other significant producers are Australia, the Philippines, Myanmar and Malaysia.
Freshwater pearl farming In 1914, pearl farmers began growing cultured freshwater pearls using the pearl mussels native to Lake Biwa. This lake, the largest and most ancient in Japan, lies near the city of Kyoto. The extensive and successful use of the Biwa pearl mussel is reflected in the name Biwa pearls, a phrase which was at one time nearly synonymous with freshwater pearls in general. Since the time of peak production in 1971, when Biwa pearl farmers produced six tons of cultured pearls, pollution has caused the virtual extinction of the industry. Japanese pearl farmers more recently cultured a hybrid pearl mussel – a cross between Biwa pearl mussels and a closely related species from China, Hyriopsis cumingii – in Lake Kasumigaura. This industry has also nearly ceased production, due to pollution. Currently, the Belpearl company, based in Kobe, Japan, continues to purchase the remaining Kasumigaura pearls. Japanese pearl producers also invested in producing cultured pearls with freshwater mussels in the region of Shanghai, China. China has since become the world's largest producer of freshwater pearls, producing more than 1,500 metric tons per year (in addition to metric measurements, Japanese units of measurement such as the kan and momme are sometimes encountered in the pearl industry). Led by pearl pioneer John Latendresse and his wife Chessy, the United States began farming cultured freshwater pearls in the mid-1960s. National Geographic magazine introduced the American cultured pearl as a commercial product in its August 1985 issue. The Tennessee pearl farm has emerged as a tourist destination in recent years, but commercial production of freshwater pearls has ceased. Momme weight For many cultured pearl dealers and wholesalers, the preferred weight measure used for loose pearls and pearl strands is the momme. The momme is a weight measure used by the Japanese for centuries. Today, momme weight is still the standard unit of measure used by most pearl dealers to communicate with pearl producers and wholesalers. One momme corresponds to 1/1000 kan. Reluctant to give up tradition, the Japanese government formalized the kan measure in 1891 as being exactly 3.75 kilograms or 8.28 pounds. Hence, 1 momme = 3.75 grams or 3750 milligrams. In the United States during the 19th and 20th centuries, through the silk trade with Japan, the momme also became a unit indicating the quality of silk cloth. Though millimeter size range is typically the first factor in determining a cultured pearl necklace's value, the momme weight of a pearl necklace allows the buyer to quickly determine whether the necklace is properly proportioned. This is especially true when comparing the larger South Sea and Tahitian pearl necklaces. In jewelry The value of pearls in jewelry is determined by a combination of the luster, color, size, lack of surface flaw and symmetry that are appropriate for the type of pearl under consideration. Among those attributes, luster is the most important differentiator of pearl quality, according to jewelers. All factors being equal, however, the larger the pearl, the more valuable it is. Large, perfectly round pearls are rare and highly valued. Teardrop-shaped pearls are often used in pendants. Shapes Pearls occur in a range of shapes. Perfectly round pearls are the rarest and most valuable shape. Semi-rounds are also used in necklaces or in pieces where the shape of the pearl can be disguised to look like it is a perfectly round pearl.
Button pearls are like a slightly flattened round pearl and can also make a necklace, but are more often used in single pendants or earrings where the back half of the pearl is covered, making it look like a larger, rounder pearl. Pear-shaped pearls sometimes look like teardrop pearls and are most often seen in earrings, pendants, or as a center pearl in a necklace. Baroque pearls have a different appeal; they are often highly irregular, with unique and interesting shapes. They are also commonly seen in necklaces. Circled pearls are characterized by concentric ridges, or rings, around the body of the pearl. In general, cultured pearls are less valuable than natural pearls, whereas imitation pearls have almost no value. One way that jewelers can determine whether a pearl is cultured or natural is to have a gem lab perform an X-ray examination of the pearl. If the X-ray reveals a nucleus, the pearl is likely a bead-nucleated saltwater pearl. If no nucleus is present, but irregular and small dark inner spots indicating a cavity are visible, combined with concentric rings of organic substance, the pearl is likely a cultured freshwater pearl. Cultured freshwater pearls can often be confused with natural pearls, which present as homogeneous images that darken continuously toward the surface of the pearl. Natural pearls will often show larger cavities where organic matter has dried out and decomposed. Lengths of pearl necklaces There is a special vocabulary used to describe the length of pearl necklaces. While most other necklaces are simply referred to by their physical measurement, pearl necklaces are named by how low they hang when worn around the neck. A collar, measuring 10 to 13 inches or 25 to 33 cm in length, sits directly against the throat and does not hang down the neck at all; collars are often made up of multiple strands of pearls. Pearl chokers, measuring 14 to 16 inches or 35 to 41 cm in length, nestle just at the base of the neck. A strand called a princess length, measuring 17 to 19 inches or 43 to 48 cm in length, comes down to or just below the collarbone. A matinee length, measuring 20 to 24 inches or 50 to 60 cm in length, falls just above the breasts. An opera length, measuring 28 to 35 inches or 70 to 90 cm in length, will be long enough to reach the breastbone or sternum of the wearer; and longer still, a pearl rope, measuring more than 45 inches or 115 cm in length, is any length that falls lower than an opera. Necklaces can also be classified as uniform or graduated. In a uniform strand of pearls, all pearls are classified as the same size, but actually fall within a range. A uniform strand of akoya pearls, for example, will measure within 0.5 mm. So a strand will never be 7 mm, but will be 6.5–7 mm. Freshwater pearls, Tahitian pearls, and South Sea pearls all measure to a full millimeter when considered uniform. A graduated strand of pearls most often has at least 3 mm of differentiation from the ends to the center of the necklace. Popularized in the United States during the 1950s by GIs bringing strands of cultured akoya pearls home from Japan, a 3.5 momme, 3 mm to 7 mm graduated strand was much more affordable than a uniform strand because most of the pearls were small. Colors Earrings and necklaces can also be classified by the color grade of the pearl: saltwater and freshwater pearls come in many different colors. While white, and more recently black, saltwater pearls are by far the most popular, other color tints can be found on pearls from the oceans.
Pink, blue, champagne, green, and even purple saltwater pearls can be encountered, but to collect enough of these rare colors to form a complete string of the same size and same shade can take years. The vast majority of inexpensive colored pearls have been subjected to some form of dye, often a fabric dye. This dye tends to penetrate only the first layer or two of nacre, but this is enough to impart vivid and sometimes garish color to otherwise white pearls. Truly valuable pearls are never dyed, and this process is not believed to add value; in most cases it would only subtract from their market value. Religious references Hindu scriptures The Hindu tradition describes the sacred Nine Pearls, which were first documented in the Garuda Purana, one of the books of the Hindu scriptures. Ayurveda contains references to pearl powder as a stimulant of digestion and as a treatment for mental ailments. According to Marco Polo, the kings of Malabar wore a necklace of 108 rubies and pearls which was given from one generation of kings to the next. The reason was that every king had to say 108 prayers every morning and every evening. At least until the beginning of the 20th century, it was a Hindu custom to present a completely new, undrilled pearl and pierce it during the wedding ceremony. The pearl, transliterated from Sanskrit as "Moti", a type of "Mani", is also associated with many Hindu deities, the most famous being the Kaustubha that Lord Vishnu wears on his chest. Hebrew scriptures The Hebrew word פְּנִינִים 'pearl(s)' appears in several places in the Hebrew Bible (Job 28:18; Proverbs 3:15; 8:11; 20:15; 31:10; Lamentations 4:7), although its etymology is unclear. New Testament scriptures In a Christian New Testament parable (Matthew 13:45–46), Jesus compared the Kingdom of Heaven to a "pearl of great price": "Again, the kingdom of heaven is like unto a merchant man, seeking goodly (fine) pearls: Who, when he had found one pearl of great price, went and sold all that he had, and bought it." The twelve gates of the New Jerusalem are reportedly each made of a single pearl in Revelation 21:21, that is, the Pearly Gates: "And the twelve gates were twelve pearls: every several gate was of one pearl: and the street of the city was pure gold, as it were transparent glass." Holy things are compared to pearls in Matthew 7:6: "Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you." Pearls are also found in numerous references showing the wickedness and pride of a people, as in Revelation 18:16: "And saying, Alas, alas, that great city, that was clothed in fine linen, in purple, and scarlet, and decked with gold, and precious stones, and pearls!" Islamic scriptures The Quran often mentions that the dwellers of paradise will be adorned with pearls: 22:23 God will admit those who believe and work righteous deeds, to Gardens beneath which rivers flow: they shall be adorned therein with bracelets of gold and pearls; and their garments there will be of silk. 35:33 Gardens of Eternity will they enter: therein will they be adorned with bracelets of gold, silver and pearls; and their garments there will be of silk. 52:24 And they will be waited on by their youthful servants like spotless pearls. Additional references The metaphor of a pearl appears in the longer Hymn of the Pearl, a poem respected for its high literary quality and use of layered theological metaphor, found within one of the texts of Gnosticism.
The Pearl of Great Price is a book of scripture in the Church of Jesus Christ of Latter-day Saints (LDS Church) and some other Latter Day Saint denominations. Pearl is a Middle English religious poem.
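The named lengths above map cleanly onto measurement ranges, so the vocabulary can be captured in a small lookup. Below is a minimal, illustrative Python sketch; the function name is invented here, the boundaries follow the inch figures quoted above, and lengths falling in the unassigned gaps (for example 25 to 27 inches) are simply reported as unnamed, since the text does not name them.

```python
def necklace_style(length_in: float) -> str:
    """Classify a pearl necklace by the named-length vocabulary above.

    Ranges in inches, per the article: collar 10-13, choker 14-16,
    princess 17-19, matinee 20-24, opera 28-35, rope over 45.
    """
    named_ranges = [
        (10, 13, "collar"),
        (14, 16, "choker"),
        (17, 19, "princess"),
        (20, 24, "matinee"),
        (28, 35, "opera"),
    ]
    for low, high, name in named_ranges:
        if low <= length_in <= high:
            return name
    if length_in > 45:
        return "rope"
    return "unnamed length"

print(necklace_style(18))  # princess: falls at or just below the collarbone
print(necklace_style(50))  # rope: longer than an opera length
```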
Physical sciences
Organic gemstones
null
24022
https://en.wikipedia.org/wiki/Physical%20therapy
Physical therapy
Physical therapy (PT), also known as physiotherapy, is a healthcare profession, as well as the care provided by physical therapists, who promote, maintain, or restore health through patient education, physical intervention, disease prevention, and health promotion. Physical therapist is the term used for such professionals in the United States, and physiotherapist is the term used in many other countries. The career has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. PTs practice in many settings, both public and private. In addition to clinical practice, other aspects of physical therapy practice include research, education, consultation, and health administration. Physical therapy is provided as a primary care treatment or alongside, or in conjunction with, other medical services. In some jurisdictions, such as the United Kingdom, physical therapists may have the authority to prescribe medication.
Overview
Physical therapy addresses the illnesses or injuries that limit a person's abilities to move and perform functional activities in their daily lives. PTs use an individual's history and physical examination to arrive at a diagnosis and establish a management plan and, when necessary, incorporate the results of laboratory and imaging studies such as X-ray, CT, or MRI findings. Physical therapists can use sonography to diagnose and manage common musculoskeletal, nerve, and pulmonary conditions. Electrodiagnostic testing (e.g., electromyograms and nerve conduction velocity testing) may also be used. PT management commonly includes prescription of or assistance with specific exercises, manual therapy and manipulation, mechanical devices such as traction, education, electrophysical modalities including heat, cold, electricity, sound waves, and radiation, assistive devices, prostheses, orthoses, and other interventions. In addition, PTs work with individuals to prevent the loss of mobility before it occurs by developing fitness- and wellness-oriented programs for healthier and more active lifestyles, providing services to individuals and populations to develop, maintain, and restore maximum movement and functional ability throughout the lifespan. This includes providing treatment in circumstances where movement and function are threatened by aging, injury, disease, or environmental factors. Functional movement is central to what it means to be healthy.
Physical therapy is a professional career with the many specialties listed above; neurological rehabilitation, in particular, is a rapidly emerging field. PTs practice in many settings, such as privately owned physical therapy clinics, outpatient clinics or offices, health and wellness clinics, rehabilitation hospital facilities, skilled nursing facilities, extended care facilities, private homes, education and research centers, schools, hospices, industrial workplaces or other occupational environments, and fitness centers and sports training facilities. Physical therapists also practice in non-patient care roles such as health policy, health insurance, health care administration and as health care executives.
Physical therapists are involved in the medical-legal field, serving as experts and performing peer review and independent medical examinations. Education varies greatly by country, ranging from little formal education in some countries to doctoral degrees and post-doctoral residencies and fellowships in others. Regarding its relationship to other healthcare professions, physiotherapy is one of the allied health professions. World Physiotherapy has signed a memorandum of understanding with the four other members of the World Health Professions Alliance "to enhance their joint collaboration on protecting and investing in the health workforce to provide safe, quality and equitable care in all settings".
History
Physicians like Hippocrates and later Galen are believed to have been the first practitioners of physical therapy, advocating massage, manual therapy techniques and hydrotherapy to treat people as early as 460 BC. After the development of orthopedics in the eighteenth century, machines like the Gymnasticon were developed to treat gout and similar diseases by systematic exercise of the joints, similar to later developments in physical therapy. The earliest documented origins of actual physical therapy as a professional group date back to Per Henrik Ling, "Father of Swedish Gymnastics," who founded the Royal Central Institute of Gymnastics (RCIG) in 1813 for manipulation and exercise. Until 2014, the Swedish word for a physical therapist was sjukgymnast, literally someone involved in gymnastics for those who are ill, but the title was then changed to fysioterapeut (physiotherapist), the word used in the other Scandinavian countries. In 1887, PTs were given official registration by Sweden's National Board of Health and Welfare. Other countries soon followed. In 1894, four nurses in Great Britain formed the Chartered Society of Physiotherapy. The School of Physiotherapy at the University of Otago in New Zealand was founded in 1913, and in 1914 Reed College in Portland, Oregon, in the United States began graduating "reconstruction aides." Since the profession's inception, spinal manipulative therapy has been a component of physical therapist practice.
Modern physical therapy was established towards the end of the 19th century due to events that had an effect on a global scale, which called for rapid advances in physical therapy. Following this, American orthopedic surgeons began treating children with disabilities and employed women trained in physical education and remedial exercise. These treatments were further applied and promoted during the polio outbreak of 1916. During the First World War, women were recruited to work with and restore physical function to injured soldiers, and the field of physical therapy was institutionalized. In 1918 the term "Reconstruction Aide" was used to refer to individuals practicing physical therapy. The first school of physical therapy was established at Walter Reed Army Hospital in Washington, D.C., following the outbreak of World War I.
Research catalyzed the physical therapy movement. The first physical therapy research was published in the United States in March 1921 in "The PT Review." In the same year, Mary McMillan organized the American Women's Physical Therapeutic Association (now called the American Physical Therapy Association (APTA)). In 1924, the Georgia Warm Springs Foundation promoted the field by touting physical therapy as a treatment for polio. Treatment through the 1940s primarily consisted of exercise, massage, and traction.
Manipulative procedures to the spine and extremity joints began to be practiced, especially in the British Commonwealth countries, in the early 1950s. Around the time polio vaccines were developed, physical therapists became a common presence in hospitals throughout North America and Europe. In the late 1950s, physical therapists started to move beyond hospital-based practice to outpatient orthopedic clinics, public schools, college and university health centres, geriatric settings (skilled nursing facilities), rehabilitation centers and medical centers. Specialization in physical therapy in the U.S. occurred in 1974, with the Orthopaedic Section of the APTA being formed for those physical therapists specializing in orthopedics. In the same year, the International Federation of Orthopaedic Manipulative Physical Therapists was formed, which has ever since played an important role in advancing manual therapy worldwide. An international organization for the profession is the World Confederation for Physical Therapy (WCPT). It was founded in 1951 and has operated under the brand name World Physiotherapy since 2020.
Education
Educational criteria for physical therapy providers vary from state to state, country to country, and among various levels of professional responsibility. Most U.S. states have physical therapy practice acts that recognize both physical therapists (PT) and physical therapist assistants (PTA), and some jurisdictions also recognize physical therapy technicians (PT techs) or aides. Most countries have licensing bodies that require physical therapists to be members before they can start practicing as independent professionals.
Canada
The Canadian Alliance of Physiotherapy Regulators (CAPR) allows eligible program graduates to apply for the national Physiotherapy Competency Examination (PCE). Passing the PCE is one of the requirements in most provinces and territories to work as a licensed physiotherapist in Canada. CAPR's members are the physiotherapy regulatory organizations recognized in their respective provinces and territories:
Government of Yukon, Consumer Services
College of Physical Therapists of British Columbia
College of Physiotherapists of Alberta
Saskatchewan College of Physical Therapists
College of Physiotherapists of Manitoba
College of Physiotherapists of Ontario
Ordre professionnel de la physiothérapie du Québec
College of Physiotherapists of New Brunswick/Collège des physiothérapeutes du Nouveau-Brunswick
Nova Scotia College of Physiotherapists
Prince Edward Island College of Physiotherapists
Newfoundland & Labrador College of Physiotherapists
Physiotherapy programs are offered at fifteen universities, often through the university's respective college of medicine. Each of Canada's physical therapy schools has transitioned from three-year Bachelor of Science in Physical Therapy (BScPT) programs that required two years of prerequisite university courses (a five-year bachelor's degree) to two-year Master of Physical Therapy (MPT) programs that require prerequisite bachelor's degrees. The last Canadian university to follow suit was the University of Manitoba, which transitioned to the MPT program in 2012, making the MPT credential the new entry-to-practice standard across Canada. Existing practitioners with BScPT credentials are not required to upgrade their qualifications.
In the province of Quebec, prospective physiotherapists are required to have completed a college diploma in either health sciences, which lasts on average two years, or physical rehabilitation technology, which lasts at least three years, to apply to a physiotherapy program at a university. Following admission, physical therapy students work on a bachelor of science with a major in physical therapy and rehabilitation. The B.Sc. usually requires three years to complete. Students must then enter graduate school to complete a master's degree in physical therapy, which normally requires one and a half to two years of study. Graduates who obtain their M.Sc. must successfully pass the membership examination to become members of the Ordre professionnel de la physiothérapie du Québec (OPPQ). Physiotherapists can pursue their education in such fields as rehabilitation sciences, sports medicine, kinesiology, and physiology.
In the province of Quebec, physical rehabilitation therapists are health care professionals who are required to complete a three-year college diploma program in physical rehabilitation therapy and be members of the Ordre professionnel de la physiothérapie du Québec (OPPQ) to practice legally in the province. Most physical rehabilitation therapists complete their college diploma at Collège Montmorency, Dawson College, or Cégep Marie-Victorin, all situated in and around the Montreal area. After completing their technical college diploma, graduates can pursue their studies at the university level to obtain a bachelor's degree in physiotherapy, kinesiology, exercise science, or occupational therapy. The Université de Montréal, the Université Laval and the Université de Sherbrooke are among the Québécois universities that admit physical rehabilitation therapists to their programs of study related to health sciences and rehabilitation, crediting courses that were completed in college.
To date, there are no bridging programs available to facilitate upgrading from the BScPT to the MPT credential. However, research Master of Science (MSc) and Doctor of Philosophy (Ph.D.) programs are available at every university. Aside from academic research, practitioners can upgrade their skills and qualifications through continuing education courses and curricula. Continuing education is a requirement of the provincial regulatory bodies. The Canadian Physiotherapy Association offers a curriculum of continuing education courses in orthopedics and manual therapy. The program consists of 5 levels (7 courses) of training with ongoing mentorship and evaluation at each level. The orthopedic curriculum and examinations take a minimum of 4 years to complete. However, upon completion of level 2, physiotherapists can apply to a unique 1-year course-based master's program in advanced orthopedics and manipulation at the University of Western Ontario to complete their training. Since 2007, this program has accepted only 16 physiotherapists annually. Successful completion of either of these education streams and their respective examinations allows physiotherapists the opportunity to apply to the Canadian Academy of Manipulative Physiotherapy (CAMPT) for fellowship. Fellows of the Canadian Academy of Manipulative Physiotherapists (FCAMPT) are considered leaders in the field, having extensive post-graduate education in orthopedics and manual therapy.
FCAMPT is an internationally recognized credential, as CAMPT is a member of the International Federation of Orthopaedic Manipulative Physical Therapists (IFOMPT), a branch of World Physiotherapy (formerly the World Confederation for Physical Therapy (WCPT)) and the World Health Organization (WHO).
Scotland
Physiotherapy degrees are offered at four universities: Edinburgh Napier University in Edinburgh, Robert Gordon University in Aberdeen, Glasgow Caledonian University in Glasgow, and Queen Margaret University in Edinburgh. Students can qualify as physiotherapists by completing a four-year Bachelor of Science degree or a two-year master's degree (if they already have an undergraduate degree in a related field). To use the title 'Physiotherapist', a student must register with the Health and Care Professions Council, a UK-wide regulatory body, on qualifying. Many physiotherapists are also members of the Chartered Society of Physiotherapy (CSP), which provides insurance and professional support.
United States
The primary physical therapy practitioner is the physical therapist (PT), who is trained and licensed to examine, evaluate, diagnose and treat impairments, functional limitations, and disabilities in patients or clients. Physical therapist education curricula in the United States culminate in a Doctor of Physical Therapy (DPT) degree, with some practicing PTs holding a Master of Physical Therapy degree and some a bachelor's degree. The Master of Physical Therapy and Master of Science in Physical Therapy degrees are no longer offered, and the entry-level degree is the Doctor of Physical Therapy degree, which typically takes three years to complete after a bachelor's degree. PTs who hold a master's or bachelor's degree in physical therapy are encouraged to earn a DPT, because the APTA's goal is for all PTs to practice at the doctoral level. WCPT recommends that physical therapist entry-level educational programs be based on university or university-level studies of a minimum of four years, independently validated and accredited. Curricula in the United States are accredited by the Commission on Accreditation in Physical Therapy Education (CAPTE). According to CAPTE, 37,306 students are currently enrolled in 294 accredited PT programs in the United States, while 10,096 PTA students are currently enrolled in 396 PTA programs.
The physical therapist professional curriculum includes content in the clinical sciences (e.g., content about the cardiovascular, pulmonary, endocrine, metabolic, gastrointestinal, genitourinary, integumentary, musculoskeletal, and neuromuscular systems and the medical and surgical conditions frequently seen by physical therapists). Current training specifically aims to enable physical therapists to recognize presentations caused by systems not appropriate for physical therapy intervention, even when they mimic musculoskeletal diagnoses, and to refer such patients appropriately; this has supported direct access to physical therapists in many states. Post-doctoral residency and fellowship education prevalence is increasing steadily, with 219 residency and 42 fellowship programs accredited in 2016. Residencies aim to train physical therapists in a specialty such as acute care, cardiovascular & pulmonary, clinical electrophysiology, faculty, geriatrics, neurology, orthopaedics, pediatrics, sports, women's health, and wound care, whereas fellowships train specialists in a subspecialty (e.g. critical care, hand therapy, and division 1 sports), similar to the medical model.
Residency programs offer eligibility to sit for the specialist certification in their respective area of practice. For example, completion of an orthopedic physical therapy residency allows its graduates to apply and sit for the clinical specialist examination in orthopedics, achieving the OCS designation upon passing the examination. Board certification of physical therapy specialists aims to recognize individuals with advanced clinical knowledge and skill training in their respective area of practice, and exemplifies the trend toward greater education to optimally treat individuals with movement dysfunction.
Physical therapist assistants may deliver treatment and physical interventions for patients and clients under a care plan established by, and under the supervision of, a physical therapist. Physical therapist assistants in the United States are currently trained under associate of applied sciences curricula specific to the profession, as outlined and accredited by CAPTE. As of December 2022, there were 396 accredited two-year (associate degree) programs for physical therapist assistants in the United States. Curricula for the physical therapist assistant associate degree include:
Anatomy & physiology
Exercise physiology
Human biology
Physics
Biomechanics
Kinesiology
Neuroscience
Clinical pathology
Behavioral sciences
Communication
Ethics
Research
Other coursework as required by individual programs
Job duties and education requirements for physical therapy technicians or aides may vary depending on the employer, but education requirements range from a high school diploma or equivalent to completion of a 2-year degree program. O-Net reports that 64% of PT aides/techs have a high school diploma or equivalent, 21% have completed some college but do not hold a degree, and 10% hold an associate degree. Some jurisdictions allow physical therapists to employ technicians, aides, or therapy assistants to perform designated routine tasks related to physical therapy under the direct supervision of a physical therapist. Some jurisdictions require physical therapy technicians or aides to be certified, and education and certification requirements vary among jurisdictions.
Employment
Physical therapy–related jobs in North America have shown rapid growth in recent years, but employment rates and average wages may vary significantly between different countries, states, provinces, or regions. A study from 2013 found that 56.4% of physical therapists were globally satisfied with their jobs. Salary, interest in work, and fulfillment in a job are important predictors of job satisfaction. In a Polish study, job burnout among physical therapists was manifested by increased emotional exhaustion and a decreased sense of personal achievement. Emotional exhaustion was significantly higher among physical therapists working with adults and those employed in hospitals; seniority of 15 to 19 years was also associated with increased burnout.
United States
According to the United States Department of Labor's Bureau of Labor Statistics, there were approximately 210,900 physical therapists employed in the United States in 2014, earning an average of $84,020 annually in 2015, or $40.40 per hour, with 34% growth in employment projected by 2024.
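The annual and hourly figures above convert one into the other under the standard full-time assumption of 2,080 work hours per year (52 weeks at 40 hours); that assumption is this sketch's, not a figure stated in the text. A minimal check in Python:

```python
# Annual-to-hourly wage conversion, assuming a standard 2,080-hour
# work year (52 weeks x 40 hours/week) - an assumption of this sketch.
FULL_TIME_HOURS = 52 * 40  # 2,080 hours

def hourly_wage(annual_salary: float) -> float:
    return annual_salary / FULL_TIME_HOURS

print(round(hourly_wage(84_020), 2))  # 40.39, consistent with the quoted ~$40.40/hour
```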
The Bureau of Labor Statistics also reports that there were approximately 128,700 physical therapist assistants and aides employed in the United States in 2014, earning an average of $42,980 annually, or $20.66 per hour, with 40% growth in employment projected by 2024. To meet their needs, many healthcare and physical therapy facilities hire "travel physical therapists", who work temporary assignments of between 8 and 26 weeks for much higher wages, about $113,500 a year. Bureau of Labor Statistics data on PTAs and techs can be difficult to decipher, due to their tendency to report data on these job fields collectively rather than separately. O-Net reports that in 2015, PTAs in the United States earned a median wage of $55,170 annually or $26.52 hourly and that aides/techs earned a median wage of $25,120 annually or $12.08 hourly.
The American Physical Therapy Association reports vacancy rates for physical therapists of 11.2% in outpatient private practice, 10% in acute care settings, and 12.1% in skilled nursing facilities. The APTA also reports turnover rates for physical therapists of 10.7% in outpatient private practice, 11.9% in acute care settings, and 27.6% in skilled nursing facilities. Definitions and licensing requirements in the United States vary among jurisdictions, as each state has enacted its own physical therapy practice act defining the profession within its jurisdiction, but the Federation of State Boards of Physical Therapy has also drafted a model definition to limit this variation. The Commission on Accreditation in Physical Therapy Education (CAPTE) is responsible for accrediting physical therapy education curricula throughout the United States.
United Kingdom
The title of physiotherapist is a protected professional title in the United Kingdom. Anyone using this title must be registered with the Health & Care Professions Council (HCPC). Physiotherapists must complete the necessary qualifications, usually an undergraduate physiotherapy degree (at university or as an intern), a master's degree in rehabilitation, or a doctoral degree in physiotherapy. This is typically followed by supervised professional experience lasting two to three years. All professionals on the HCPC register must comply with continuing professional development (CPD) requirements and can be audited for this evidence at intervals.
Specialty areas
The body of knowledge of physical therapy is large, and therefore physical therapists may specialize in a specific clinical area. While there are many different types of physical therapy, the American Board of Physical Therapy Specialties lists ten current specialist certifications. Most physical therapists practicing in a specialty will have undergone further training, such as an accredited residency program, although individuals are currently able to sit for their specialist examination after 2,000 hours of focused practice in their respective specialty population, in addition to requirements set by each respective specialty board.
Cardiovascular and pulmonary
Cardiovascular and pulmonary rehabilitation respiratory practitioners and physical therapists offer therapy for a wide variety of cardiopulmonary disorders, as well as before and after cardiac or pulmonary surgery. An example of cardiac surgery is coronary bypass surgery. The primary goals of this specialty include increasing endurance and functional independence. Manual therapy is used in this field to assist in clearing lung secretions experienced with cystic fibrosis.
Treatment of pulmonary disorders, heart attacks, recovery from coronary bypass surgery, chronic obstructive pulmonary disease, and pulmonary fibrosis can benefit from cardiovascular and pulmonary specialized physical therapists.
Clinical electrophysiology
This specialty area includes electrotherapy/physical agents, electrophysiological evaluation (EMG/NCV), and wound management.
Geriatric
Geriatric physical therapy covers a wide area of issues concerning people as they go through normal adult aging but is usually focused on the older adult. Many conditions affect people as they grow older, including but not limited to arthritis, osteoporosis, cancer, Alzheimer's disease, hip and joint replacement, balance disorders, and incontinence. Geriatric physical therapists specialize in providing therapy for such conditions in older adults. Physical rehabilitation can prevent deterioration in health and activities of daily living among care home residents. The current evidence suggests benefits to physical health from participating in different types of physical rehabilitation to improve daily living, strength, flexibility, balance, mood, memory, and exercise tolerance, and to reduce fear of falling, injuries, and deaths. Physical rehabilitation may be both safe and effective for long-term care residents in improving physical and possibly mental state and in reducing disability, with few adverse events; however, there is insufficient evidence to conclude whether the beneficial effects are sustainable and cost-effective. These findings are based on moderate-quality evidence.
Wound management
Wound management physical therapy includes the treatment of conditions involving the skin and all its related organs. Common conditions managed include wounds and burns. Physical therapists may use surgical instruments, wound irrigation, dressings, and topical agents to remove damaged or contaminated tissue and promote tissue healing. Other commonly used interventions include exercise, edema control, splinting, and compression garments. Physical therapists in the integumentary specialty do work similar to that of medical doctors or nurses in the emergency room or triage.
Neurology
Neurological physical therapy is a field focused on working with individuals who have a neurological disorder or disease. These can include stroke, chronic back pain, Alzheimer's disease, Charcot-Marie-Tooth disease (CMT), ALS, brain injury, cerebral palsy, multiple sclerosis, Parkinson's disease, facial palsy and spinal cord injury. Common impairments associated with neurologic conditions include impairments of vision, balance, ambulation, activities of daily living, movement, and muscle strength, and loss of functional independence. The techniques involved in neurological physical therapy are wide-ranging and often require specialized training. Neurological physiotherapy is also called neurophysiotherapy or neurological rehabilitation. It is recommended that neurophysiotherapists collaborate with psychologists when providing physical treatment of movement disorders. This is especially important because combining physical therapy and psychotherapy can improve the neurological status of patients.
Orthopaedics
Orthopedic physical therapists diagnose, manage, and treat disorders and injuries of the musculoskeletal system, including rehabilitation after orthopedic surgery, acute trauma such as sprains and strains, injuries of insidious onset such as tendinopathy and bursitis, and deformities such as scoliosis. This specialty of physical therapy is most often found in the outpatient clinical setting. Orthopedic therapists are trained in the treatment of post-operative orthopedic procedures, fractures, acute sports injuries, arthritis, sprains, strains, back and neck pain, spinal conditions, and amputations. Joint and spine mobilization/manipulation, dry needling (similar to acupuncture), therapeutic exercise, neuromuscular techniques, muscle reeducation, hot/cold packs, and electrical muscle stimulation (e.g., cryotherapy, iontophoresis, electrotherapy) are modalities employed to expedite recovery in the orthopedic setting. Additionally, an emerging adjunct to diagnosis and treatment is the use of sonography for diagnosis and to guide treatments such as muscle retraining. Those with injury or disease affecting the muscles, bones, ligaments, or tendons may benefit from assessment by a physical therapist specialized in orthopedics.
Pediatrics
Pediatric physical therapy assists in the early detection of health problems and uses a variety of modalities to provide physical therapy for disorders in the pediatric population. These therapists are specialized in the diagnosis, treatment, and management of infants, children, and adolescents with a variety of congenital, developmental, neuromuscular, skeletal, or acquired disorders/diseases. Treatments focus mainly on improving gross and fine motor skills, balance and coordination, and strength and endurance, as well as cognitive and sensory processing/integration.
Sports
Physical therapists are closely involved in the care and wellbeing of athletes, including recreational, semi-professional (paid), and professional (full-time employment) participants. This area of practice encompasses athletic injury management under five main categories:
acute care – assessment and diagnosis of an initial injury;
treatment – application of specialist advice and techniques to encourage healing;
rehabilitation – progressive management for full return to sport;
prevention – identification and addressing of deficiencies known to directly result in, or act as precursors to, injury, such as through movement assessment;
education – sharing of specialist knowledge with individual athletes, teams, or clubs to assist in the prevention or management of injury.
Physical therapists who work for professional sports teams often have a specialized sports certification issued through their national registering organization. Most physical therapists who practice in a sporting environment are also active in collaborative sports medicine programs.
Biology and health sciences
Medical procedures
null
24026
https://en.wikipedia.org/wiki/Proteome
Proteome
A proteome is the entire set of proteins that is, or can be, expressed by a genome, cell, tissue, or organism at a certain time. It is the set of expressed proteins in a given type of cell or organism, at a given time, under defined conditions. Proteomics is the study of the proteome.
Types of proteomes
While proteome generally refers to the proteome of an organism, multicellular organisms may have very different proteomes in different cells, so it is important to distinguish between the proteomes of cells and organisms. A cellular proteome is the collection of proteins found in a particular cell type under a particular set of environmental conditions, such as exposure to hormone stimulation. It can also be useful to consider an organism's complete proteome, which can be conceptualized as the complete set of proteins from all of the various cellular proteomes. This is very roughly the protein equivalent of the genome. The term proteome has also been used to refer to the collection of proteins in certain sub-cellular systems, such as organelles. For instance, the mitochondrial proteome may consist of more than 3000 distinct proteins. The proteins in a virus can be called a viral proteome. Usually viral proteomes are predicted from the viral genome, but some attempts have been made to determine all the proteins expressed from a virus genome, i.e. the viral proteome. More often, however, virus proteomics analyzes the changes in host proteins upon virus infection, so that in effect two proteomes (of the virus and its host) are studied.
Importance in cancer
The proteome can be used to comparatively analyze different cancer cell lines. Proteomic studies have been used to assess the likelihood of metastasis in the bladder cancer cell lines KK47 and YTS1, which were found to differ in 36 upregulated and 74 downregulated proteins. The differences in protein expression can help identify novel cancer signaling mechanisms. Biomarkers of cancer have been found by mass spectrometry–based proteomic analyses. The use of proteomics, or the study of the proteome, is a step forward in personalized medicine for tailoring drug cocktails to the patient's specific proteomic and genomic profile. The analysis of ovarian cancer cell lines showed that putative biomarkers for ovarian cancer include "α-enolase (ENOA), elongation factor Tu, mitochondrial (EFTU), glyceraldehyde-3-phosphate dehydrogenase (G3P), stress-70 protein, mitochondrial (GRP75), apolipoprotein A-1 (APOA1), peroxiredoxin (PRDX2) and annexin A (ANXA)". Comparative proteomic analyses of 11 cell lines demonstrated the similarity between the metabolic processes of each cell line; 11,731 proteins were completely identified from this study. Housekeeping proteins tend to show greater variability between cell lines. Resistance to certain cancer drugs is still not well understood. Proteomic analysis has been used to identify proteins that may have anti-cancer drug properties, specifically for the colon cancer drug irinotecan. Studies of the adenocarcinoma cell line LoVo demonstrated that 8 proteins were upregulated and 7 proteins were downregulated. Proteins that showed differential expression were involved in processes such as transcription, apoptosis, and cell proliferation/differentiation.
The proteome in bacterial systems
Proteomic analyses have been performed in different kinds of bacteria to assess their metabolic reactions to different conditions.
For example, in bacteria such as Clostridium and Bacillus, proteomic analyses were used to investigate how different proteins help the spores of each of these bacteria germinate after a prolonged period of dormancy. Proteomic analysis also contributes to understanding how such spores can be properly eliminated.
History
Marc Wilkins coined the term proteome in 1994 at a symposium on "2D Electrophoresis: from protein maps to genomes" held in Siena, Italy. It appeared in print in 1995, with the publication of part of his PhD thesis. Wilkins used the term to describe the entire complement of proteins expressed by a genome, cell, tissue or organism.
Size and contents
The genomes of viruses and prokaryotes encode a relatively well-defined proteome, as each protein can be predicted with high confidence based on its open reading frame (in viruses ranging from ~3 to ~1000 proteins, in bacteria from about 500 to about 10,000). However, most protein prediction algorithms use certain cut-offs, such as 50 or 100 amino acids, so small proteins are often missed by such predictions. In eukaryotes this becomes much more complicated, as more than one protein can be produced from most genes due to alternative splicing (e.g. the human genome encodes about 20,000 proteins, but some estimates predicted 92,179 proteins, of which 71,173 are splicing variants).
Association of proteome size with DNA repair capability
The concept of "proteomic constraint" is that DNA repair capacity is positively correlated with the information content of a genome, which, in turn, is approximately related to the size of the proteome. In bacteria, archaea and DNA viruses, DNA repair capability is positively related to genome information content and to genome size. "Proteomic constraint" proposes that modulators of mutation rates, such as DNA repair genes, are subject to selection pressure proportional to the amount of information in a genome.
Proteoforms
There are different factors that can add variability to proteins. SAPs (single amino acid polymorphisms) and non-synonymous single nucleotide polymorphisms (nsSNPs) can lead to different "proteoforms" or "proteomorphs". Recent estimates have found ~135,000 validated nonsynonymous cSNPs currently housed within SwissProt. In dbSNP, there are 4.7 million candidate cSNPs, yet only ~670,000 cSNPs have been validated in the 1000 Genomes set as nonsynonymous cSNPs that change the identity of an amino acid in a protein.
Dark proteome
The term dark proteome, coined by Perdigão and colleagues, defines regions of proteins that have no detectable sequence homology to other proteins of known three-dimensional structure and therefore cannot be modeled by homology. For 546,000 Swiss-Prot proteins, 44–54% of the proteome in eukaryotes and viruses was found to be "dark", compared with only ~14% in archaea and bacteria.
Human proteome
Currently, several projects aim to map the human proteome, including the Human Proteome Map, ProteomicsDB, isoform.io, and the Human Proteome Project (HPP). Much like the Human Genome Project, these projects seek to find and collect evidence for all predicted protein-coding genes in the human genome. The Human Proteome Map currently (October 2020) claims 17,294 proteins and ProteomicsDB 15,479, using different criteria. On October 16, 2020, the HPP published a high-stringency blueprint covering more than 90% of the predicted protein-coding genes.
Proteins are identified from a wide range of fetal and adult tissues and cell types, including hematopoietic cells.
Methods to study the proteome
Analyzing proteins proves to be more difficult than analyzing nucleic acid sequences. While there are only 4 nucleotides that make up DNA, there are at least 20 different amino acids that can make up a protein. Additionally, there is currently no known high-throughput technology to make copies of a single protein. Numerous methods are available to study proteins, sets of proteins, or the whole proteome. In fact, proteins are often studied indirectly, e.g. using computational methods and analyses of genomes. Only a few examples are given below.
Separation techniques and electrophoresis
Proteomics, the study of the proteome, has largely been practiced through the separation of proteins by two-dimensional gel electrophoresis. In the first dimension, the proteins are separated by isoelectric focusing, which resolves proteins on the basis of charge. In the second dimension, proteins are separated by molecular weight using SDS-PAGE. The gel is stained with Coomassie brilliant blue or silver to visualize the proteins. Spots on the gel are proteins that have migrated to specific locations.
Mass spectrometry
Mass spectrometry is one of the key methods to study the proteome. Some important mass spectrometry methods include Orbitrap mass spectrometry, MALDI (matrix-assisted laser desorption/ionization), and ESI (electrospray ionization). Peptide mass fingerprinting identifies a protein by cleaving it into short peptides and then deducing the protein's identity by matching the observed peptide masses against a sequence database (a toy sketch of this matching is given at the end of this entry). Tandem mass spectrometry, on the other hand, can get sequence information from individual peptides by isolating them, colliding them with a non-reactive gas, and then cataloguing the fragment ions produced.
In May 2014, a draft map of the human proteome was published in Nature. This map was generated using high-resolution Fourier-transform mass spectrometry. This study profiled 30 histologically normal human samples, resulting in the identification of proteins coded by 17,294 genes. This accounts for around 84% of the total annotated protein-coding genes.
Chromatography
Liquid chromatography is an important tool in the study of the proteome. It allows for very sensitive separation of different kinds of proteins based on their affinity for a matrix. Some newer methods for the separation and identification of proteins include the use of monolithic capillary columns, high-temperature chromatography and capillary electrochromatography.
Blotting
Western blotting can be used to quantify the abundance of certain proteins. By using antibodies specific to the protein of interest, it is possible to probe for the presence of specific proteins in a mixture of proteins.
Protein complementation assays and interaction screens
Protein-fragment complementation assays are often used to detect protein–protein interactions. The yeast two-hybrid assay is the most popular of them, but there are numerous variations, used both in vitro and in vivo. Pull-down assays are a method for determining the protein binding partners of a given protein.
Protein structure prediction
Protein structure prediction can be used to provide three-dimensional protein structure predictions of whole proteomes. In 2022, a large-scale collaboration between EMBL-EBI and DeepMind provided predicted structures for over 200 million proteins from across the tree of life.
Smaller projects have also used protein structure prediction to help map the proteome of individual organisms; for example, isoform.io provides coverage of multiple protein isoforms for over 20,000 genes in the human genome.
Protein databases
The Human Protein Atlas contains information about the human proteins in cells, tissues, and organs. All the data in the knowledge resource is open access, allowing scientists in both academia and industry to freely explore the human proteome. The organization ELIXIR has selected the Protein Atlas as a core resource due to its fundamental importance for the wider life science community. The Plasma Proteome database contains information on 10,500 blood plasma proteins. Because the range of protein concentrations in plasma is very large, it is difficult to detect proteins that are scarce compared to abundant proteins, an analytical limit that may be a barrier to the detection of proteins at ultra-low concentrations. Databases such as neXtProt and UniProt are central resources for human proteomic data.
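The peptide mass fingerprinting approach described under Mass spectrometry above can be made concrete with a toy sketch. Everything below is illustrative rather than a real search engine: the protein sequences are hypothetical, the residue masses are rounded average values, and the digestion rule is the simplified "cut after K or R, except before P" approximation of trypsin.

```python
# Toy peptide mass fingerprinting: digest candidate sequences in silico,
# then score each by how many observed peptide masses it explains.

# Rounded average residue masses in daltons; a residue is an amino acid
# minus the water lost when the peptide bond forms.
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # each free peptide carries one extra water (its termini)

def tryptic_digest(sequence: str) -> list[str]:
    """Cut after K or R unless the next residue is P (simplified trypsin)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mass(peptide: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def score(observed: list[float], sequence: str, tol: float = 0.5) -> int:
    """Count observed masses explained by the in-silico digest, within tol Da."""
    theoretical = [peptide_mass(p) for p in tryptic_digest(sequence)]
    return sum(any(abs(m - t) <= tol for t in theoretical) for m in observed)

# Hypothetical two-entry "sequence database" and two measured masses.
database = {"protein_A": "MKWVTFISLLK", "protein_B": "GASPVTR"}
observed = [277.4, 1106.4]  # peptide masses (Da) from the mass spectrometer
best = max(database, key=lambda name: score(observed, database[name]))
print(best)  # protein_A: both observed masses match its tryptic peptides
```

A real identification pipeline works the same way in outline, but searches millions of database peptides with monoisotopic masses, allowances for missed cleavages, and probabilistic scoring.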
Biology and health sciences
Proteins
Biology
24029
https://en.wikipedia.org/wiki/Peptide
Peptide
Peptides are short chains of amino acids linked by peptide bonds. A polypeptide is a longer, continuous, unbranched peptide chain. Polypeptides that have a molecular mass of 10,000 Da or more are called proteins. Chains of fewer than twenty amino acids are called oligopeptides, and include dipeptides, tripeptides, and tetrapeptides. Peptides fall under the broad chemical classes of biological polymers and oligomers, alongside nucleic acids, oligosaccharides, polysaccharides, and others. Proteins consist of one or more polypeptides arranged in a biologically functional way, often bound to ligands such as coenzymes and cofactors, to another protein or other macromolecule such as DNA or RNA, or to complex macromolecular assemblies. Amino acids that have been incorporated into peptides are termed residues. A water molecule is released during formation of each amide bond. All peptides except cyclic peptides have an N-terminal (amine group) and C-terminal (carboxyl group) residue at the end of the peptide.
Classification
There are numerous types of peptides, classified according to their sources and functions. According to the Handbook of Biologically Active Peptides, some groups of peptides include plant peptides, bacterial/antibiotic peptides, fungal peptides, invertebrate peptides, amphibian/skin peptides, venom peptides, cancer/anticancer peptides, vaccine peptides, immune/inflammatory peptides, brain peptides, endocrine peptides, ingestive peptides, gastrointestinal peptides, cardiovascular peptides, renal peptides, respiratory peptides, opioid peptides, neurotrophic peptides, and blood–brain peptides.
Some ribosomal peptides are subject to proteolysis. These function, typically in higher organisms, as hormones and signaling molecules. Some microbes produce peptides as antibiotics, such as microcins and bacteriocins. Peptides frequently have post-translational modifications such as phosphorylation, hydroxylation, sulfonation, palmitoylation, glycosylation, and disulfide formation. In general, peptides are linear, although lariat structures have been observed. More exotic manipulations do occur, such as racemization of L-amino acids to D-amino acids in platypus venom.
Nonribosomal peptides are assembled by enzymes, not the ribosome. A common non-ribosomal peptide is glutathione, a component of the antioxidant defenses of most aerobic organisms. Other nonribosomal peptides are most common in unicellular organisms, plants, and fungi and are synthesized by modular enzyme complexes called nonribosomal peptide synthetases. These complexes are often laid out in a similar fashion, and they can contain many different modules to perform a diverse set of chemical manipulations on the developing product. These peptides are often cyclic and can have highly complex cyclic structures, although linear nonribosomal peptides are also common. Since the system is closely related to the machinery for building fatty acids and polyketides, hybrid compounds are often found. The presence of oxazoles or thiazoles often indicates that the compound was synthesized in this fashion.
Peptones are derived from animal milk or meat digested by proteolysis. In addition to containing small peptides, the resulting material includes fats, metals, salts, vitamins, and many other biological compounds. Peptones are used in nutrient media for growing bacteria and fungi.
Peptide fragments refer to fragments of proteins that are used to identify or quantify the source protein.
Often these are the products of enzymatic degradation performed in the laboratory on a controlled sample, but they can also be forensic or paleontological samples that have been degraded by natural effects.
Chemical synthesis
Protein-peptide interactions
Peptides can engage in interactions with proteins and other macromolecules. They are responsible for numerous important functions in human cells, such as cell signaling, and act as immune modulators. Indeed, studies have reported that 15–40% of all protein-protein interactions in human cells are mediated by peptides. Additionally, it is estimated that at least 10% of the pharmaceutical market is based on peptide products.
Example families
The peptide families in this section are ribosomal peptides, usually with hormonal activity. All of these peptides are synthesized by cells as longer "propeptides" or "proproteins" and truncated prior to exiting the cell. They are released into the bloodstream where they perform their signaling functions.
Antimicrobial peptides
Magainin family
Cecropin family
Cathelicidin family
Defensin family
Tachykinin peptides
Substance P
Kassinin
Neurokinin A
Eledoisin
Neurokinin B
Vasoactive intestinal peptides
VIP (Vasoactive Intestinal Peptide; PHM27)
PACAP (Pituitary Adenylate Cyclase Activating Peptide)
Peptide PHI 27 (Peptide Histidine Isoleucine 27)
GHRH 1-24 (Growth Hormone Releasing Hormone 1-24)
Glucagon
Secretin
Pancreatic polypeptide-related peptides
NPY (NeuroPeptide Y)
PYY (Peptide YY)
APP (Avian Pancreatic Polypeptide)
PPY (Pancreatic PolYpeptide)
Opioid peptides
Proopiomelanocortin (POMC) peptides
Enkephalin pentapeptides
Prodynorphin peptides
Calcitonin peptides
Calcitonin
Amylin
AGG01
Self-assembling peptides
Aromatic short peptides
Biomimetic peptides
Peptide amphiphiles
Peptide dendrimers
Other peptides
B-type Natriuretic Peptide (BNP) – produced in the myocardium and useful in medical diagnosis
Lactotripeptides – lactotripeptides might reduce blood pressure, although the evidence is mixed
Peptidic components of the traditional Chinese medicine Colla Corii Asini, studied in hematopoiesis
Jelleine – produced from the royal jelly of honey bees
Terminology
Length
Several terms related to peptides have no strict length definitions, and there is often overlap in their usage:
A polypeptide is a single linear chain of many amino acids (any length), held together by amide bonds.
A protein consists of one or more polypeptides (more than about 50 amino acids long).
An oligopeptide consists of only a few amino acids (between two and twenty).
Number of amino acids
Peptides and proteins are often described by the number of amino acids in their chain, e.g. a protein with 158 amino acids may be described as a "158 amino-acid-long protein". Peptides of specific shorter lengths are named using IUPAC numerical multiplier prefixes:
A monopeptide has one amino acid.
A dipeptide has two amino acids.
A tripeptide has three amino acids.
A tetrapeptide has four amino acids.
A pentapeptide has five amino acids (e.g., enkephalin).
A hexapeptide has six amino acids (e.g., angiotensin IV).
A heptapeptide has seven amino acids (e.g., spinorphin).
An octapeptide has eight amino acids (e.g., angiotensin II).
A nonapeptide has nine amino acids (e.g., oxytocin).
A decapeptide has ten amino acids (e.g., gonadotropin-releasing hormone and angiotensin I).
An undecapeptide has eleven amino acids (e.g., substance P).
The same words are also used to describe a group of residues in a larger polypeptide (e.g., RGD motif).
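The prefix scheme above is mechanical enough to express as a lookup table. A minimal sketch follows; the function name and the plain "N-peptide" fallback for chains longer than eleven residues are illustrative choices, not part of the nomenclature:

```python
# IUPAC numerical multiplier prefixes for peptide lengths, per the list above.
PREFIX_NAMES = {
    1: "monopeptide", 2: "dipeptide", 3: "tripeptide", 4: "tetrapeptide",
    5: "pentapeptide", 6: "hexapeptide", 7: "heptapeptide", 8: "octapeptide",
    9: "nonapeptide", 10: "decapeptide", 11: "undecapeptide",
}

def peptide_name(n_residues: int) -> str:
    """Name a peptide by residue count; fall back to an 'N-peptide' label."""
    return PREFIX_NAMES.get(n_residues, f"{n_residues}-peptide")

print(peptide_name(9))   # nonapeptide, e.g. oxytocin
print(peptide_name(11))  # undecapeptide, e.g. substance P
```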
Function
A neuropeptide is a peptide that is active in association with neural tissue.
A lipopeptide is a peptide that has a lipid connected to it; pepducins are lipopeptides that interact with GPCRs.
A peptide hormone is a peptide that acts as a hormone.
A proteose is a mixture of peptides produced by the hydrolysis of proteins. The term is somewhat archaic.
A peptidergic agent (or drug) is a chemical which functions to directly modulate the peptide systems in the body or brain. An example is opioidergics, which are neuropeptidergics.
A cell-penetrating peptide is a peptide able to penetrate the cell membrane.
Biology and health sciences
Proteins
Biology
24032
https://en.wikipedia.org/wiki/Positron%20emission%20tomography
Positron emission tomography
Positron emission tomography (PET) is a functional imaging technique that uses radioactive substances known as radiotracers to visualize and measure changes in metabolic processes and in other physiological activities including blood flow, regional chemical composition, and absorption. Different tracers are used for various imaging purposes, depending on the target process within the body. For example, fluorodeoxyglucose ([18F]FDG or FDG) is commonly used to detect cancer; [18F]sodium fluoride (Na18F) is widely used for detecting bone formation; and oxygen-15 (15O) is sometimes used to measure blood flow.
PET is a common imaging technique in nuclear medicine, a form of medical scintillography. A radiopharmaceutical – a radioisotope attached to a drug – is injected into the body as a tracer. When the radiopharmaceutical undergoes beta plus decay, a positron is emitted, and when the positron interacts with an ordinary electron, the two particles annihilate and two gamma rays are emitted in opposite directions. These gamma rays are detected by two gamma cameras to form a three-dimensional image. PET scanners can incorporate a computed tomography (CT) scanner and are known as PET-CT scanners. PET scan images can be reconstructed using a CT scan performed using one scanner during the same session. One of the disadvantages of a PET scanner is its high initial cost and ongoing operating costs.
Uses
PET is both a medical and research tool used in pre-clinical and clinical settings. It is used heavily in the imaging of tumors and the search for metastases within the field of clinical oncology, and for the clinical diagnosis of certain diffuse brain diseases such as those causing various types of dementias. PET is a valuable research tool for learning about and enhancing our knowledge of the normal human brain and heart function, and for supporting drug development. PET is also used in pre-clinical studies using animals. It allows repeated investigations into the same subjects over time, where subjects can act as their own controls, and substantially reduces the number of animals required for a given study. This approach allows research studies to reduce the sample size needed while increasing the statistical quality of their results.
Physiological processes lead to anatomical changes in the body. Since PET is capable of detecting biochemical processes as well as the expression of some proteins, it can provide molecular-level information long before any anatomic changes are visible. PET scanning does this by using radiolabelled molecular probes that have different rates of uptake depending on the type and function of the tissue involved. Regional tracer uptake in various anatomic structures can be visualized and relatively quantified in terms of injected positron emitter within a PET scan.
PET imaging is best performed using a dedicated PET scanner. It is also possible to acquire PET images using a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET imaging is lower, and the scans take longer to acquire. However, this method allows a low-cost on-site solution for institutions with low PET scanning demand. An alternative would be to refer these patients to another center or to rely on a visit by a mobile scanner.
Alternative methods of medical imaging include single-photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), and ultrasound.
SPECT is an imaging technique similar to PET that uses radioligands to detect molecules in the body. SPECT is less expensive than PET but provides inferior image quality.
Oncology
PET scanning with the radiotracer [18F]fluorodeoxyglucose (FDG) is widely used in clinical oncology. FDG is a glucose analog that is taken up by glucose-using cells and phosphorylated by hexokinase (whose mitochondrial form is significantly elevated in rapidly growing malignant tumors). Metabolic trapping of the radioactive glucose molecule allows the PET scan to be utilized. The concentration of imaged FDG tracer indicates tissue metabolic activity, as it corresponds to regional glucose uptake. FDG is used to explore the possibility of cancer spreading to other body sites (cancer metastasis). These FDG PET scans for detecting cancer metastasis are the most common in standard medical care (representing 90% of current scans). The same tracer may also be used for the diagnosis of types of dementia. Less often, other radioactive tracers, usually but not always labelled with fluorine-18 (18F), are used to image the tissue concentration of different kinds of molecules of interest inside the body. A typical dose of FDG used in an oncological scan has an effective radiation dose of 7.6 mSv. Because the hydroxy group that is replaced by fluorine-18 to generate FDG is required for the next step in glucose metabolism in all cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exceptions of liver and kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell that takes it up until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit the cell. This results in intense radiolabeling of tissues with high glucose uptake, such as the normal brain, liver, kidneys, and most cancers, which have higher glucose uptake than most normal tissue due to the Warburg effect. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin lymphoma, non-Hodgkin lymphoma, and lung cancer.
A 2020 review of research on the use of PET for Hodgkin lymphoma found evidence that negative findings in interim PET scans are linked to higher overall survival and progression-free survival; however, the certainty of the available evidence was moderate for survival and very low for progression-free survival.
A few other isotopes and radiotracers are slowly being introduced into oncology for specific purposes. For example, 11C-labelled metomidate (11C-metomidate) has been used to detect tumors of adrenocortical origin. Also, fluorodopa (FDOPA) PET/CT (also called F-18-DOPA PET/CT) has proven to be a more sensitive alternative to the iobenguane (MIBG) scan for finding and localizing pheochromocytoma.
Neuroimaging
Neurology
PET imaging with oxygen-15 indirectly measures blood flow to the brain. In this method, an increased radioactivity signal indicates increased blood flow, which is assumed to correlate with increased brain activity. Because of its 2-minute half-life, oxygen-15 must be piped directly from a medical cyclotron for such uses, which is difficult. PET imaging with FDG takes advantage of the fact that the brain is normally a rapid user of glucose. Standard FDG PET of the brain measures regional glucose use and can be used in neuropathological diagnosis. Brain pathologies such as Alzheimer's disease (AD) greatly decrease brain metabolism of both glucose and oxygen in tandem.
Therefore, FDG PET of the brain may also be used to successfully differentiate Alzheimer's disease from other dementing processes, and to make early diagnoses of Alzheimer's disease. The advantage of FDG PET for these uses is its much wider availability. Fluorine-18-based radioactive tracers used for Alzheimer's include florbetapir, flutemetamol, and florbetaben; together with the carbon-11-labelled Pittsburgh compound B (PiB), these are used to detect amyloid-beta plaques, a potential biomarker for Alzheimer's, in the brain. PET imaging with FDG can also be used for localization of a seizure focus, which will appear as hypometabolic during an interictal scan. Several radiotracers (i.e. radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes, such as [11C]raclopride, [18F]fallypride and [18F]desmethoxyfallypride for dopamine D2/D3 receptors; [11C]McN5652 and [11C]DASB for serotonin transporters; [18F]mefway for serotonin 5HT1A receptors; and [18F]nifene for nicotinic acetylcholine receptors; or enzyme substrates (e.g. 6-FDOPA for the AADC enzyme). These agents permit the visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses. PET may also be used for the diagnosis of hippocampal sclerosis, which causes epilepsy. FDG and the less common tracers flumazenil and MPPF have been explored for this purpose. If the sclerosis is unilateral (right hippocampus or left hippocampus), FDG uptake can be compared with that of the healthy side. Even when the diagnosis is difficult with MRI, it may be made with PET. The development of a number of novel probes for non-invasive, in-vivo PET imaging of amyloid aggregates in the human brain has brought amyloid imaging close to clinical use. The earliest amyloid imaging probes included [18F]FDDNP, developed at the University of California, Los Angeles, and Pittsburgh compound B (PiB), developed at the University of Pittsburgh. These probes permit the visualization of amyloid plaques in the brains of Alzheimer's patients and could assist clinicians in making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid therapies. [11C]PMP (N-[11C]methylpiperidin-4-yl propionate) is a novel radiopharmaceutical used in PET imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for acetylcholinesterase. Post-mortem examinations of AD patients have shown decreased levels of acetylcholinesterase. [11C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for pre-mortem diagnoses of AD and help to monitor AD treatments. Avid Radiopharmaceuticals has developed and commercialized a compound called florbetapir that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using PET scans. Neuropsychology or cognitive neuroscience PET is used to examine links between specific psychological processes or disorders and brain activity. Psychiatry Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have been radiolabeled with C-11 or F-18. Radioligands that bind to dopamine receptors (D1, D2, reuptake transporter), serotonin receptors (5HT1A, 5HT2A, reuptake transporter), opioid receptors (mu and kappa), cholinergic receptors (nicotinic and muscarinic) and other sites have been used successfully in studies with human subjects.
Studies have been performed examining the state of these receptors in patients compared to healthy controls in schizophrenia, substance abuse, mood disorders and other psychiatric conditions. Stereotactic surgery and radiosurgery PET can also be used in image-guided surgery for the treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions. Cardiology Cardiology, atherosclerosis and vascular disease study: FDG PET can help in identifying hibernating myocardium. However, the cost-effectiveness of PET for this role versus SPECT is unclear. FDG PET imaging of atherosclerosis to detect patients at risk of stroke is also feasible. It can also help test the efficacy of novel anti-atherosclerosis therapies. Infectious diseases Imaging infections with molecular imaging technologies can improve diagnosis and treatment follow-up. Clinically, PET has been widely used to image bacterial infections using FDG to identify the infection-associated inflammatory response. Three different PET contrast agents that have been developed to image bacterial infections in vivo are [18F]maltose, [18F]maltohexaose, and [18F]2-fluorodeoxysorbitol (FDS). FDS has the added benefit of being able to target only Enterobacteriaceae. Bio-distribution studies In pre-clinical trials, a new drug can be radiolabeled and injected into animals. Such scans are referred to as biodistribution studies. The information regarding drug uptake, retention and elimination over time can be obtained quickly and cost-effectively compared to the older technique of killing and dissecting the animals. Commonly, drug occupancy at a purported site of action can be inferred indirectly by competition studies between the unlabeled drug and radiolabeled compounds known to bind with specificity to the site. A single radioligand can be used this way to test many potential drug candidates for the same target. A related technique involves scanning with radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate that a drug causes the release of the natural substance. Small animal imaging A miniature PET scanner has been constructed that is small enough for a fully conscious rat to be scanned. This RatCAP (rat conscious animal PET) allows animals to be scanned without the confounding effects of anesthesia. PET scanners designed specifically for imaging rodents, often referred to as microPET, as well as scanners for small primates, are marketed for academic and pharmaceutical research. The scanners are based on microminiature scintillators and amplified avalanche photodiodes (APDs) through a system that uses single-chip silicon photomultipliers. In 2018, the UC Davis School of Veterinary Medicine became the first veterinary center to employ a small clinical PET scanner for clinical (rather than research) animal diagnosis. Because of cost, as well as the marginal utility of detecting cancer metastases in companion animals (the primary use of this modality), veterinary PET scanning is expected to be rarely available in the immediate future. Musculoskeletal imaging PET has been used to image muscles and bones. FDG is the most commonly used tracer for imaging muscles, and NaF-F18 is the most widely used tracer for imaging bones. Muscles PET is a feasible technique for studying skeletal muscles during exercise.
Also, PET can provide muscle activation data about deep-lying muscles (such as the vastus intermedius and the gluteus minimus), unlike techniques such as electromyography, which can be used only on superficial muscles directly under the skin. However, a disadvantage is that PET provides no timing information about muscle activation, because it has to be measured after the exercise is completed. This is due to the time it takes for FDG to accumulate in the activated muscles. Bones PET with [18F]sodium fluoride has been used for bone imaging for 60 years, measuring regional bone metabolism and blood flow using static and dynamic scans. Researchers have recently also started using [18F]sodium fluoride to study bone metastasis. Safety PET scanning is non-invasive, but it does involve exposure to ionizing radiation. FDG, which is now the standard radiotracer used for PET neuroimaging and cancer patient management, has an effective radiation dose of 14 mSv. The amount of radiation in FDG is similar to the effective dose of spending one year in the American city of Denver, Colorado (12.4 mSv/year). For comparison, the radiation dosage for other medical procedures ranges from 0.02 mSv for a chest X-ray to 6.5–8 mSv for a CT scan of the chest. Average civil aircrews are exposed to 3 mSv/year, and the whole-body occupational dose limit for nuclear energy workers in the US is 50 mSv/year. For scale, see Orders of magnitude (radiation). For PET-CT scanning, the radiation exposure may be substantial, around 23–26 mSv (for a 70 kg person; the dose is likely to be higher for higher body weights). Operation Radionuclides and radiotracers Radionuclides are incorporated either into compounds normally used by the body such as glucose (or glucose analogues), water, or ammonia, or into molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers. PET technology can be used to trace the biologic pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus, the specific processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and processes are continuing to be synthesized; as of this writing there are already dozens in clinical use and hundreds applied in research. As of 2020, by far the most commonly used radiotracer in clinical PET scanning is the carbohydrate derivative FDG. This radiotracer is used in essentially all scans for oncology and most scans in neurology, and thus makes up the large majority (>95%) of the radiotracer used in PET and PET-CT scanning. Due to the short half-lives of most positron-emitting radioisotopes, the radiotracers have traditionally been produced using a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at offsite locations and shipped to imaging centers. Recently, rubidium-82 generators have become commercially available. These contain strontium-82, which decays by electron capture to produce positron-emitting rubidium-82. The use of positron-emitting isotopes of metals in PET scans has been reviewed, including elements not listed above, such as lanthanides. Immuno-PET The isotope 89Zr has been applied to the tracking and quantification of molecular antibodies with PET cameras (a method called "immuno-PET").
The biological half-life of antibodies is typically on the order of days; see daclizumab and erenumab by way of example. To visualize and quantify the distribution of such antibodies in the body, the PET isotope 89Zr is well suited because its physical half-life matches the typical biological half-life of antibodies. Emission To conduct the scan, a short-lived radioactive tracer isotope is injected into the living subject (usually into blood circulation). Each tracer atom has been chemically incorporated into a biologically active molecule. There is a waiting period while the active molecule becomes concentrated in tissues of interest. Then the subject is placed in the imaging scanner. The molecule most commonly used for this purpose is FDG, a sugar, for which the waiting period is typically an hour. During the scan, a record of tissue concentration is made as the tracer decays. As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, an antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance (typically less than 1 mm, but dependent on the isotope), during which time it loses kinetic energy, until it decelerates to a point where it can interact with an electron. The encounter annihilates both electron and positron, producing a pair of annihilation (gamma) photons moving in approximately opposite directions. These are detected when they reach a scintillator in the scanning device, creating a burst of light that is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons moving in approximately opposite directions (they would be exactly opposite in their center-of-mass frame, but the scanner has no way to know this, and so has a built-in slight direction-error tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing window of a few nanoseconds) are ignored. Localization of the positron annihilation event The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other. Hence, it is possible to localize their source along a straight line of coincidence (also called the line of response, or LOR). In practice, the LOR has a non-zero width, as the emitted photons are not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 nanoseconds, it is possible to localize the event to a segment of a chord, whose length is determined by the detector timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on some new systems. Image reconstruction The raw data collected by a PET scanner are a list of "coincidence events" representing near-simultaneous detection (typically, within a window of 6 to 12 nanoseconds of each other) of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred (i.e., the line of response (LOR)).
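To make the coincidence selection concrete, here is a minimal sketch in Python of pairing single-photon detections whose arrival times fall within a few-nanosecond window. The function name, event format, and 6 ns window are illustrative assumptions, not the software of any actual scanner:

```python
COINCIDENCE_WINDOW_NS = 6.0  # timing window; real scanners use a few ns

def find_coincidences(events):
    """Pair up single-photon detections whose arrival times fall within
    the coincidence timing window.

    `events` is a list of (arrival_time_ns, detector_id) tuples, assumed
    sorted by arrival time.  Returns a list of LORs, each expressed as the
    pair of detector IDs that saw the two photons.
    """
    lors = []
    for (t1, d1), (t2, d2) in zip(events, events[1:]):
        if d1 != d2 and (t2 - t1) <= COINCIDENCE_WINDOW_NS:
            lors.append((d1, d2))  # line of response between the two detectors
    return lors

# Example: three singles; only the first two form a valid coincidence.
singles = [(0.0, 17), (3.2, 162), (40.0, 88)]
print(find_coincidences(singles))  # [(17, 162)]
```

A real coincidence processor would also reject multiple coincidences and estimate randoms, as discussed below; this sketch only shows the timing-window idea.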
Analytical techniques, much like the reconstruction of computed tomography (CT) and single-photon emission computed tomography (SPECT) data, are commonly used, although the data set collected in PET is much poorer than in CT, so reconstruction is more difficult. Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle of each view and tilt (for 3D images). The sinogram images are analogous to the projections captured by CT scanners, and can be reconstructed in a similar way. The statistics of the data thereby obtained are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts for the whole acquisition, while a CT scan can reach a few billion counts. This contributes to PET images appearing "noisier" than CT images. Two major sources of noise in PET are scatter (a detected pair of photons, at least one of which was deflected from its original path by interaction with matter in the field of view, leading to the pair being assigned to an incorrect LOR) and random events (photons originating from two different annihilation events but incorrectly recorded as a coincidence pair because their arrival at their respective detectors occurred within the coincidence timing window). In practice, considerable pre-processing of the data is required: correction for random coincidences, estimation and subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must "cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in sensitivity due to the angle of incidence). Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm has the advantage of being simple while having a low requirement for computing resources. Its disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. Also, FBP treats the data deterministically: it does not account for the inherent randomness associated with PET data, thus requiring all the pre-reconstruction corrections described above. Statistical, likelihood-based approaches: Iterative expectation-maximization algorithms such as the Shepp–Vardi algorithm are now the preferred method of reconstruction. These algorithms compute an estimate of the likely distribution of annihilation events that led to the measured data, based on statistical principles. The advantage is a better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is greater computer resource requirements. A further advantage of statistical image reconstruction techniques is that the physical effects that would need to be pre-corrected for when using an analytical reconstruction algorithm, such as scattered photons, random coincidences, attenuation and detector dead-time, can be incorporated into the likelihood model being used in the reconstruction, allowing for additional noise reduction. Iterative reconstruction has also been shown to result in improvements in the resolution of the reconstructed images, since more sophisticated models of the scanner physics can be incorporated into the likelihood model than those used by analytical reconstruction methods, allowing for improved quantification of the radioactivity distribution.
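As a rough illustration of how such an iterative, likelihood-based update works, here is a minimal sketch of an MLEM (Shepp–Vardi-style) loop in Python. The toy system matrix, counts, and iteration count are invented for illustration; real reconstructions also fold in the corrections discussed above:

```python
import numpy as np

def mlem(system_matrix, measured_counts, n_iter=20):
    """Maximum-likelihood expectation-maximization reconstruction.

    system_matrix[i, j] = probability that an annihilation in voxel j
    is detected along LOR i.  measured_counts[i] = counts in LOR i.
    """
    n_voxels = system_matrix.shape[1]
    image = np.ones(n_voxels)                # uniform initial estimate
    sensitivity = system_matrix.sum(axis=0)  # per-voxel detection probability
    for _ in range(n_iter):
        expected = system_matrix @ image             # forward projection
        ratio = measured_counts / np.maximum(expected, 1e-12)
        image *= (system_matrix.T @ ratio) / sensitivity  # multiplicative update
    return image

# Toy example: 2 voxels seen by 3 LORs.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
y = np.array([10.0, 2.0, 6.0])
print(mlem(A, y))  # voxel 1 reconstructs "hotter" than voxel 2
```

The multiplicative update preserves non-negativity and converges toward the image that maximizes the Poisson likelihood of the measured counts.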
Research has shown that Bayesian methods that involve a Poisson likelihood function and an appropriate prior probability (e.g., a smoothing prior leading to total variation regularization, or a Laplacian distribution leading to ℓ1-based regularization in a wavelet or other domain), such as via Ulf Grenander's sieve estimator, Bayes penalty methods, or I.J. Good's roughness method, may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function but do not involve such a prior. Attenuation correction: Quantitative PET imaging requires attenuation correction. In older, stand-alone PET systems, attenuation correction was based on a transmission scan using a rotating 68Ge rod source; transmission scans directly measure attenuation values at 511 keV. Attenuation occurs when photons emitted by the radiotracer inside the body are absorbed by the tissue lying between the point of emission and the detector. As different LORs must traverse different thicknesses of tissue, the photons are attenuated differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron-emitting) source and the PET detectors. While attenuation-corrected images are generally more faithful representations, the correction process is itself susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and read together. 2D/3D reconstruction: Early PET scanners had only a single ring of detectors, hence the acquisition of data and the subsequent reconstruction were restricted to a single transverse plane. More modern scanners now include multiple rings, essentially forming a cylinder of detectors. There are two approaches to reconstructing data from such a scanner: treat each ring as a separate entity, so that only coincidences within a ring are detected, and reconstruct the image from each ring individually (2D reconstruction); or allow coincidences to be detected between rings as well as within rings, then reconstruct the entire volume together (3D). 3D techniques have better sensitivity (because more coincidences are detected and used) and hence less noise, but they are more sensitive to the effects of scatter and random coincidences, and require greater computer resources. The advent of sub-nanosecond timing-resolution detectors affords better random coincidence rejection, thus favoring 3D image reconstruction. Time-of-flight (TOF) PET: For modern systems with a higher time resolution (roughly 3 nanoseconds), a technique called "time-of-flight" is used to improve the overall performance. Time-of-flight PET makes use of very fast gamma-ray detectors and data processing systems that can more precisely determine the difference in time between the detection of the two photons. Although it remains impossible to localize the point of origin of the annihilation event exactly (currently only to within about 10 cm), so that image reconstruction is still needed, the TOF technique gives a remarkable improvement in image quality, especially in signal-to-noise ratio. Combination of PET with CT or MRI PET scans are increasingly read alongside CT or MRI scans, with the combination (co-registration) giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically).
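The time-of-flight localization described above reduces to simple arithmetic: the annihilation point lies off the midpoint of the LOR by c·Δt/2, toward the detector that fired first. A small sketch, with illustrative numbers:

```python
C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_offset_mm(t1_ns, t2_ns):
    """Offset of the annihilation point from the midpoint of the LOR,
    toward the detector that fired first, given the two arrival times."""
    return C_MM_PER_NS * (t2_ns - t1_ns) / 2.0

# A 400 ps timing difference places the event ~60 mm from the midpoint;
# a ~500 ps timing *resolution* therefore localizes events to within
# roughly 7.5 cm, of the same order as the ~10 cm figure quoted above.
print(tof_offset_mm(0.0, 0.4))  # ~59.96 mm
```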
Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners (PET-CT). Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more precisely registered, so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher anatomical variation, which is more common outside the brain. At the Jülich Institute of Neurosciences and Biophysics, the world's largest PET-MRI device began operation in April 2009: it combines a 9.4-tesla magnetic resonance tomograph (MRT) with PET. Presently, only the head and brain can be imaged at these high magnetic field strengths. For brain imaging, registration of CT, MRI and PET scans may be accomplished without the need for an integrated PET-CT or PET-MRI scanner by using a device known as the N-localizer. Limitations The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular cancer therapy, where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Because the tracers are radioactive, use of PET is restricted in pregnant patients, owing to the risks that ionizing radiation poses to the fetus. Limitations to the widespread use of PET arise from the high costs of the cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals after radioisotope preparation. Organic radiotracer molecules that will contain a positron-emitting radioisotope cannot be synthesized first with the radioisotope then prepared within them, because bombardment with a cyclotron to prepare the radioisotope destroys any organic carrier for it. Instead, the isotope must be prepared first, and the chemistry to prepare any organic radiotracer (such as FDG) must then be accomplished very quickly, in the short time before the isotope decays. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers that can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82 (used as rubidium-82 chloride), which has a half-life of 1.27 minutes, is created in a portable generator, and is used for myocardial perfusion studies. In recent years a few on-site cyclotrons with integrated shielding and "hot labs" (automated chemistry labs that are able to work with radioisotopes) have begun to accompany PET units at remote hospitals. Small on-site cyclotrons promise to become more widespread in the future as cyclotrons shrink in response to the high cost of transporting isotopes to remote PET machines. In recent years the shortage of PET scans has been alleviated in the US, as the rollout of radiopharmacies to supply radioisotopes has grown by 30% per year.
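Since radiotracer logistics hinge on this half-life arithmetic, a short sketch of the decay relation A(t) = A0 · 2^(−t/T½) for a fluorine-18 dose follows; the 400 MBq dose and two-hour transport time are assumed purely for illustration:

```python
F18_HALF_LIFE_MIN = 110.0  # half-life of fluorine-18, minutes

def remaining_activity(initial_mbq, elapsed_min, half_life_min=F18_HALF_LIFE_MIN):
    """Activity left after `elapsed_min` minutes of radioactive decay."""
    return initial_mbq * 2.0 ** (-elapsed_min / half_life_min)

# A 400 MBq FDG dose shipped for 2 hours arrives with roughly half left:
print(remaining_activity(400.0, 120.0))  # ~187.9 MBq
```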
Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling. History The concept of emission and transmission tomography was introduced by David E. Kuhl, Luke Chapman and Roy Edwards in the late 1950s. Their work would lead to the design and construction of several tomographic instruments at Washington University School of Medicine and later at the University of Pennsylvania. In the 1960s and 1970s, tomographic imaging instruments and techniques were further developed by Michel Ter-Pogossian, Michael E. Phelps, Edward J. Hoffman and others at Washington University School of Medicine. Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital, beginning in the 1950s, contributed significantly to the development of PET technology and included the first demonstration of annihilation radiation for medical imaging. Their innovations, including the use of light pipes and volumetric analysis, have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker". One of the factors most responsible for the acceptance of positron imaging was the development of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (FDG, first synthesized and described by two Czech scientists from Charles University in Prague in 1968) by the Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of PET imaging. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners to yield the modern procedure. The logical extension of positron instrumentation was a design using two 2-dimensional arrays. PC-I was the first instrument using this concept; it was designed in 1968, completed in 1969 and reported in 1972. The first applications of PC-I in tomographic mode, as distinguished from the computed tomographic mode, were reported in 1970. It soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James Robertson and Zang-Hee Cho were the first to propose a ring system, which has become the prototype of the current shape of PET. The first multislice cylindrical-array PET scanner was completed in 1974 at the Mallinckrodt Institute of Radiology by the group led by Ter-Pogossian. The PET-CT scanner, attributed to David Townsend and Ronald Nutt, was named by Time as the medical invention of the year in 2000. Cost As of August 2008, Cancer Care Ontario reported that the average incremental cost to perform a PET scan in the province was CA$1,000–1,200 per scan. This includes the cost of the radiopharmaceutical and a stipend for the physician reading the scan. In the United States, a PET scan is estimated to cost US$1,500–5,000.
In England, the National Health Service reference cost (2015–2016) for an adult outpatient PET scan is £798. In Australia, as of July 2018, the Medicare Benefits Schedule Fee for whole body FDG PET ranges from A$953 to A$999, depending on the indication for the scan. Quality control The overall performance of PET systems can be evaluated by quality control tools such as the Jaszczak phantom.
Technology
Imaging
null
24047
https://en.wikipedia.org/wiki/Phase%20%28waves%29
Phase (waves)
In physics and mathematics, the phase (symbol φ or ϕ) of a wave or other periodic function F of some real variable t (such as time) is an angle-like quantity representing the fraction of the cycle covered up to t. It is expressed in such a scale that it varies by one full turn as the variable t goes through each period (and F(t) goes through each complete cycle). It may be measured in any angular unit such as degrees or radians, thus increasing by 360° or 2π as the variable t completes a full period. This convention is especially appropriate for a sinusoidal function, since its value at any argument t can then be expressed as sin(φ(t)), the sine of the phase, multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of sine, depending on where one considers each period to start.) Usually, whole turns are ignored when expressing the phase; so that φ(t) is also a periodic function, with the same period as F, that repeatedly scans the same range of angles as t goes through each period. Then, F is said to be "at the same phase" at two argument values t1 and t2 (that is, φ(t1) = φ(t2)) if the difference between them is a whole number of periods. The numeric value of the phase depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to. The term "phase" is also used when comparing a periodic function F with a shifted version G of it. If the shift in t is expressed as a fraction of the period, and then scaled to an angle φ spanning a whole turn, one gets the phase shift, phase offset, or phase difference of G relative to F. If F is a "canonical" function for a class of signals, like sin(t) is for all sinusoidal signals, then φ is called the initial phase of G. Mathematical definition Let the signal F be a periodic function of one real variable t, and T be its period (that is, the smallest positive real number such that F(t + T) = F(t) for all t). Then the phase of F at any argument t is φ(t) = 2π·[[(t − t0)/T]]. Here [[·]] denotes the fractional part of a real number, discarding its integer part; that is, [[x]] = x − ⌊x⌋; and t0 is an arbitrary "origin" value of the argument, that one considers to be the beginning of a cycle. This concept can be visualized by imagining a clock with a hand that turns at constant speed, making a full turn every T seconds, and is pointing straight up at time t0. The phase φ(t) is then the angle from the 12:00 position to the current position of the hand, at time t, measured clockwise. The phase concept is most useful when the origin t0 is chosen based on features of F. For example, for a sinusoid, a convenient choice is any t where the function's value changes from zero to positive. The formula above gives the phase as an angle in radians between 0 and 2π. To get the phase as an angle between −π and +π, one uses instead φ(t) = 2π·([[(t − t0)/T + 1/2]] − 1/2). The phase expressed in degrees (from 0° to 360°, or from −180° to +180°) is defined the same way, except with "360°" in place of "2π". Consequences With any of the above definitions, the phase φ(t) of a periodic signal is periodic too, with the same period T: φ(t + T) = φ(t) for all t. The phase is zero at the start of each period; that is, φ(t0 + kT) = 0 for any integer k. Moreover, for any given choice of the origin t0, the value of the signal F(t) for any argument t depends only on its phase at t. Namely, one can write F(t) = f(φ(t)), where f is a function of an angle, defined only for a single full turn, that describes the variation of F as t ranges over a single period. In fact, every periodic signal F with a specific waveform can be expressed as F(t) = A·f(φ(t)), where f is a "canonical" function of a phase angle in 0 to 2π, that describes just one cycle of that waveform, and A is a scaling factor for the amplitude.
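As a small illustration of the definition above, the following Python sketch computes φ(t) = 2π·[[(t − t0)/T]] in both the [0, 2π) and [−π, π) conventions; the function names are illustrative:

```python
import math

def phase(t, period, origin=0.0):
    """Phase of a periodic signal at argument t, in radians in [0, 2*pi).

    Implements phi(t) = 2*pi * frac((t - origin) / period), where frac
    discards the integer part."""
    return 2.0 * math.pi * (((t - origin) / period) % 1.0)

def phase_signed(t, period, origin=0.0):
    """The same phase, mapped to the interval [-pi, pi)."""
    return 2.0 * math.pi * ((((t - origin) / period) + 0.5) % 1.0 - 0.5)

# A signal with period 2*pi and origin t0 = 0: a quarter period in,
# the phase is pi/2 (90 degrees).
print(phase(math.pi / 2, 2 * math.pi))             # ~1.5708
print(phase_signed(3 * math.pi / 2, 2 * math.pi))  # ~-1.5708
```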
(This expression F(t) = A·f(φ(t)) assumes that the starting time t0 chosen to compute the phase of F corresponds to argument 0 of f.) Adding and comparing phases Since phases are angles, any whole full turns should usually be ignored when performing arithmetic operations on them. That is, the sum and difference of two phases α and β (in degrees) should be computed by the formulas 360·[[(α + β)/360]] and 360·[[(α − β)/360]], respectively. Thus, for example, the sum of the phase angles 190° + 200° is 30° (390°, minus one full turn), and subtracting 50° from 30° gives a phase of 340° (−20°, plus one full turn). Similar formulas hold for radians, with 2π instead of 360. Phase shift The difference φ(t) = φG(t) − φF(t) between the phases of two periodic signals F and G is called the phase difference or phase shift of G relative to F. At values of t when the difference is zero, the two signals are said to be in phase; otherwise, they are out of phase with each other. In the clock analogy, each signal is represented by a hand (or pointer) of the same clock, both turning at constant but possibly different speeds. The phase difference is then the angle between the two hands, measured clockwise. The phase difference is particularly important when two signals are added together by a physical process, such as two periodic sound waves emitted by two sources and recorded together by a microphone. This is usually the case in linear systems, when the superposition principle holds. For arguments t when the phase difference is zero, the two signals will have the same sign and will be reinforcing each other. One says that constructive interference is occurring. At arguments t when the phases are different, the value of the sum depends on the waveform. For sinusoids For sinusoidal signals, when the phase difference is 180° (π radians), one says that the phases are opposite, and that the signals are in antiphase. Then the signals have opposite signs, and destructive interference occurs. Conversely, a phase reversal or phase inversion implies a 180-degree phase shift. When the phase difference is a quarter of a turn (a right angle, ±90° or ±π/2), sinusoidal signals are sometimes said to be in quadrature, e.g., in-phase and quadrature components of a composite signal or even different signals (e.g., voltage and current). If the frequencies are different, the phase difference increases linearly with the argument t. The periodic changes from reinforcement to opposition cause a phenomenon called beating. For shifted signals The phase difference is especially important when comparing a periodic signal F with a shifted and possibly scaled version G of it. That is, suppose that G(t) = α·F(t + τ) for some constants α, τ and all t. Suppose also that the origin for computing the phase of G has been shifted too. In that case, the phase difference φ is a constant (independent of t), called the "phase shift" or "phase offset" of G relative to F. In the clock analogy, this situation corresponds to the two hands turning at the same speed, so that the angle between them is constant. In this case, the phase shift is simply the argument shift τ, expressed as a fraction of the common period T (in terms of the modulo operation) of the two signals and then scaled to a full turn: φ = 2π·[[τ/T]]. If F is a "canonical" representative for a class of signals, like sin(t) is for all sinusoidal signals, then the phase shift φ is called simply the initial phase of G. Therefore, when two periodic signals have the same frequency, they are always in phase, or always out of phase. Physically, this situation commonly occurs, for many reasons. For example, the two signals may be a periodic soundwave recorded by two microphones at separate locations.
Or, conversely, they may be periodic soundwaves created by two separate speakers from the same electrical signal, and recorded by a single microphone. They may be a radio signal that reaches the receiving antenna in a straight line, and a copy of it that was reflected off a large building nearby. A well-known example of phase difference is the length of shadows seen at different points of Earth. To a first approximation, if F(t) is the length seen at time t at one spot, and G(t) is the length seen at the same time at a longitude 30° west of that point, then the phase difference between the two signals will be 30° (assuming that, in each signal, each period starts when the shadow is shortest). For sinusoids with same frequency For sinusoidal signals (and a few other waveforms, like square or symmetric triangular), a phase shift of 180° is equivalent to a phase shift of 0° with negation of the amplitude. When two signals with these waveforms, the same period, and opposite phases are added together, the sum F + G is either identically zero, or is a sinusoidal signal with the same period and phase, whose amplitude is the difference of the original amplitudes. The phase shift of the cosine function relative to the sine function is +90°. It follows that, for two sinusoidal signals F and G with the same frequency and amplitudes A and B, where G has a phase shift of +90° relative to F, the sum F + G is a sinusoidal signal with the same frequency, with amplitude C and phase shift φ from F, such that C = √(A² + B²) and tan(φ) = B/A. A real-world example of a sonic phase difference occurs in the warble of a Native American flute. The amplitude of different harmonic components of the same long-held note on the flute come into dominance at different points in the phase cycle. The phase difference between the different harmonics can be observed on a spectrogram of the sound of a warbling flute. Phase comparison Phase comparison is a comparison of the phase of two waveforms, usually of the same nominal frequency. In time and frequency, the purpose of a phase comparison is generally to determine the frequency offset (difference between signal cycles) with respect to a reference. A phase comparison can be made by connecting two signals to a two-channel oscilloscope. The oscilloscope will display two sine signals, with the top sine signal being the test frequency and the bottom sine signal representing a signal from the reference. If the two frequencies were exactly the same, their phase relationship would not change and both would appear to be stationary on the oscilloscope display. Since the two frequencies are not exactly the same, the reference appears to be stationary and the test signal moves. By measuring the rate of motion of the test signal, the offset between frequencies can be determined. In such a display, vertical lines can be drawn through the points where each sine signal passes through zero, and bars whose width represents the phase difference between the signals can be plotted beneath them. If this phase difference is increasing, the test signal is lower in frequency than the reference. Formula for phase of an oscillation or a periodic signal The phase of a simple harmonic oscillation or sinusoidal signal is the value of φ in the following functions: x(t) = A·cos(2πft + φ) and y(t) = A·sin(2πft + φ) = A·cos(2πft + φ − π/2), where A, f, and φ are constant parameters called the amplitude, frequency, and phase of the sinusoid. These signals are periodic with period T = 1/f, and they are identical except for a displacement of T/4 along the t axis.
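The quadrature-sum identity stated above, C = √(A² + B²) with tan(φ) = B/A, can be checked numerically; this sketch uses arbitrary illustrative amplitudes:

```python
import math

A, B = 3.0, 4.0          # amplitudes of the in-phase and quadrature parts
C = math.hypot(A, B)     # predicted amplitude of the sum: sqrt(A^2 + B^2) = 5
phi = math.atan2(B, A)   # predicted phase shift: tan(phi) = B/A

# Verify A*sin(theta) + B*sin(theta + 90 deg) == C*sin(theta + phi)
for theta in (0.0, 0.7, 2.0, 5.5):
    lhs = A * math.sin(theta) + B * math.sin(theta + math.pi / 2)
    rhs = C * math.sin(theta + phi)
    assert abs(lhs - rhs) < 1e-12
print(C, math.degrees(phi))  # 5.0, ~53.13 degrees
```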
In the functions x(t) and y(t) above, the term phase can refer to several different things: It can refer to a specified reference, such as cos(2πft), in which case we would say the phase of x(t) is φ, and the phase of y(t) is φ − π/2. It can refer to φ, in which case we would say x(t) and y(t) have the same phase but are relative to their own specific references. In the context of communication waveforms, the time-variant angle 2πft + φ, or its principal value, is referred to as instantaneous phase, often just phase. Absolute phase
Physical sciences
Waves
Physics
24048
https://en.wikipedia.org/wiki/Particle%20in%20a%20box
Particle in a box
In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes the movement of a free particle in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems. In classical systems, for example, a particle trapped inside a large box can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometers), quantum effects become important. The particle may only occupy certain positive energy levels. Likewise, it can never have zero energy, meaning that the particle can never "sit still". Additionally, it is more likely to be found at certain positions than at others, depending on its energy level. The particle may never be detected at certain positions, known as spatial nodes. The particle in a box model is one of the very few problems in quantum mechanics that can be solved analytically, without approximations. Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It serves as a simple illustration of how energy quantizations (energy levels), which are found in more complicated quantum systems such as atoms and molecules, come about. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems. One-dimensional solution The simplest form of the particle in a box model considers a one-dimensional system. Here, the particle may only move backwards and forwards along a straight line with impenetrable barriers at either end. The walls of a one-dimensional box may be seen as regions of space with an infinitely large potential energy. Conversely, the interior of the box has a constant, zero potential energy. This means that no forces act upon the particle inside the box and it can move freely in that region. However, infinitely large forces repel the particle if it touches the walls of the box, preventing it from escaping. The potential energy in this model is given as V(x) = 0 for xc − L/2 < x < xc + L/2 and V(x) = ∞ otherwise, where L is the length of the box, xc is the location of the center of the box and x is the position of the particle within the box. Simple cases include the centered box (xc = 0) and the shifted box (xc = L/2). Position wave function In quantum mechanics, the wave function gives the most fundamental description of the behavior of a particle; the measurable properties of the particle (such as its position, momentum and energy) may all be derived from the wave function. The wave function ψ(x,t) can be found by solving the Schrödinger equation for the system, iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x)ψ, where ħ is the reduced Planck constant, m is the mass of the particle, i is the imaginary unit and t is time. Inside the box, no forces act upon the particle, which means that the part of the wave function inside the box oscillates through space and time with the same form as a free particle: ψ(x,t) = [A sin(kx) + B cos(kx)] e^(−iωt), where A and B are arbitrary complex numbers. The frequency of the oscillations through space and time is given by the wave number k and the angular frequency ω, respectively. These are both related to the total energy of the particle by the expression E = ħω = ħ²k²/(2m), which is known as the dispersion relation for a free particle.
However, since the particle is not entirely free but under the influence of a potential, the energy of the particle is E = T + V, where T is the kinetic and V the potential energy. Therefore, the energy of the particle given above is not the same thing as p²/(2m) (i.e. the momentum of the particle is not given by p = ħk). Thus the wave number k above actually describes the energy states of the particle and is not related to momentum in the way a "wave number" usually is. The rationale for calling k the wave number is that it enumerates the number of crests that the wave function has inside the box, and in this sense it is a wave number. This discrepancy can be seen more clearly below, when we find out that the energy spectrum of the particle is discrete (only discrete values of energy are allowed) but the momentum spectrum is continuous (momentum can vary continuously); that is, the relation E = p²/(2m) does not hold. The amplitude of the wave function at a given position is related to the probability of finding a particle there by P(x,t) = |ψ(x,t)|². The wave function must therefore vanish everywhere beyond the edges of the box. Also, the amplitude of the wave function may not "jump" abruptly from one point to the next. These two conditions are only satisfied by wave functions with the form ψn(x,t) = A sin(kn(x − xc + L/2)) e^(−iωn·t), where kn = nπ/L for positive integers n. The simplest solutions, kn = 0 or A = 0, both yield the trivial wave function ψ = 0, which describes a particle that does not exist anywhere in the system. Here one sees that only a discrete set of energy values and wave numbers kn are allowed for the particle. Usually in quantum mechanics it is also demanded that the derivative of the wave function, in addition to the wave function itself, be continuous; here this demand would lead to the only solution being the constant zero function, which is not what we desire, so we give up this demand (as this system with infinite potential can be regarded as a nonphysical abstract limiting case, we can treat it as such and "bend the rules"). Note that giving up this demand means that the wave function is not a differentiable function at the boundaries of the box, and thus it can be said that the wave function does not solve the Schrödinger equation at the boundary points (but does solve it everywhere else). Finally, the unknown constant A may be found by normalizing the wave function, ∫|ψ|² dx = 1, which gives |A| = √(2/L). That is, any complex number A whose absolute value is √(2/L) yields the same normalized state. It is expected that the eigenvalues, i.e., the energies En of the box, should be the same regardless of its position in space, but ψn(x,t) changes. Notice that the shift xc represents a phase shift in the wave function. This phase shift has no effect when solving the Schrödinger equation, and therefore does not affect the eigenvalue. If we set the origin of coordinates to the center of the box, we can rewrite the spatial part of the wave function succinctly as ψn(x) = √(2/L) sin(kn·x) for even n and ψn(x) = √(2/L) cos(kn·x) for odd n. Momentum wave function The momentum wave function is proportional to the Fourier transform of the position wave function. With k = p/ħ (note that the parameter k describing the momentum wave function below is not exactly the special kn above, linked to the energy eigenvalues), the momentum wave function works out to a combination of sinc functions centered at ±kn, where sinc is the cardinal sine function, sinc(x) = sin(x)/x. For the centered box (xc = 0), the solution is real and particularly simple, since the phase factor on the right reduces to unity. (With care, it can be written as an even function of p.)
It can be seen that the momentum spectrum in this wave packet is continuous, and one may conclude that for the energy state described by the wave number kn, the momentum can, when measured, also attain other values beyond ±ħkn. Hence, it also appears that, since the energy is En for the nth eigenstate, the relation E = p²/(2m) does not strictly hold for the measured momentum p; the energy eigenstate is not a momentum eigenstate, and, in fact, not even a superposition of two momentum eigenstates, as one might be tempted to imagine from the plane-wave form above: peculiarly, it has no well-defined momentum before measurement! Position and momentum probability distributions In classical physics, the particle can be detected anywhere in the box with equal probability. In quantum mechanics, however, the probability density for finding a particle at a given position is derived from the wave function as P(x) = |ψ(x)|². For the particle in a box, the probability density for finding the particle at a given position depends upon its state, and is given by Pn(x) = (2/L) sin²(kn(x − xc + L/2)). Thus, for any value of n greater than one, there are regions within the box for which Pn(x) = 0, indicating that spatial nodes exist at which the particle cannot be found. If relativistic wave equations are considered, however, the probability density does not go to zero at the nodes (apart from the trivial case ψ = 0). In quantum mechanics, the average, or expectation value, of the position of a particle is given by ⟨x⟩ = ∫ x·Pn(x) dx. For the steady-state particle in a box, it can be shown that the average position is always xc, regardless of the state of the particle. For a superposition of states, the expectation value of the position will change based on the cross term, which is proportional to cos(ωt). The variance in the position is a measure of the uncertainty in the position of the particle: Var(x) = (L²/12)(1 − 6/(n²π²)). The probability density for finding a particle with a given momentum is derived from the wave function as P(p) = |φ(p)|². As with position, the probability density for finding the particle at a given momentum depends upon its state, and is given by the squared sinc functions noted above, where again kn = nπ/L. The expectation value for the momentum is then calculated to be zero, and the variance in the momentum is calculated to be Var(p) = (ħkn)² = (nπħ/L)². The uncertainties in position and momentum (Δx and Δp) are defined as being equal to the square root of their respective variances, so that Δx·Δp = (ħ/2)·√((nπ)²/3 − 2). This product increases with increasing n, having a minimum for n = 1. The value of this product for n = 1 is about equal to 0.568ħ, which obeys the Heisenberg uncertainty principle, stating that the product will be greater than or equal to ħ/2. Another measure of uncertainty in position is the information entropy Hx of the position probability distribution, defined relative to an arbitrary reference length x0. Another measure of uncertainty in momentum is the information entropy Hp of the momentum probability distribution, whose expression involves γ, Euler's constant. The quantum mechanical entropic uncertainty principle states that, for the choice x0·p0 = ħ, Hx + Hp ≥ ln(eπ) ≈ 2.14473 (nats). For the states of the box, the sum of the position and momentum entropies, in nats, satisfies this quantum entropic uncertainty principle. Energy levels The energies that correspond with each of the permitted wave numbers may be written as En = n²ħ²π²/(2mL²) = n²h²/(8mL²). The energy levels increase with n², meaning that high energy levels are separated from each other by a greater amount than low energy levels are. The lowest possible energy for the particle (its zero-point energy) is found in state 1, which is given by E1 = ħ²π²/(2mL²). The particle, therefore, always has a positive energy. This contrasts with classical systems, where the particle can have zero energy by resting motionlessly.
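The quantized energies En = n²h²/(8mL²) above are easy to tabulate numerically. A sketch for an electron in a 1 nm well follows; the well width is assumed purely for illustration:

```python
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def box_energy_ev(n, length_m, mass_kg=M_E):
    """Energy of level n for a particle in a 1-D infinite well:
    E_n = n^2 h^2 / (8 m L^2), returned in electronvolts."""
    return n**2 * H**2 / (8 * mass_kg * length_m**2) / EV

L = 1e-9  # 1 nm box
for n in (1, 2, 3):
    print(n, round(box_energy_ev(n, L), 3), "eV")
# Levels grow as n^2 (roughly 0.376, 1.504, 3.385 eV here), and the
# n = 1 zero-point energy is nonzero, as discussed in the text.
```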
This nonzero minimum energy can be explained in terms of the uncertainty principle, which states that the product of the uncertainties in the position and momentum of a particle is limited by Δx·Δp ≥ ħ/2. It can be shown that the uncertainty in the position of the particle is proportional to the width of the box. Thus, the uncertainty in momentum is roughly inversely proportional to the width of the box. The kinetic energy of a particle is given by E = p²/(2m), and hence the minimum kinetic energy of the particle in a box is inversely proportional to the mass and the square of the well width, in qualitative agreement with the calculation above. Higher-dimensional boxes (Hyper-)rectangular walls If a particle is trapped in a two-dimensional box, it may freely move in the x and y directions, between barriers separated by lengths Lx and Ly respectively. For a centered box, the position wave function may be written including the length of the box as ψn(x; L). Using a similar approach to that of the one-dimensional box, it can be shown that the wave functions and energies for a centered box are given respectively by ψ(x,y) = ψnx(x)·ψny(y) and E = ħ²k²/(2m), where the two-dimensional wavevector is given by k² = kx² + ky² = (nxπ/Lx)² + (nyπ/Ly)². For a three-dimensional box, the solutions are ψ(x,y,z) = ψnx(x)·ψny(y)·ψnz(z), where the three-dimensional wavevector is given by k² = kx² + ky² + kz² = (nxπ/Lx)² + (nyπ/Ly)² + (nzπ/Lz)². In general, for an n-dimensional box, the solutions are products of the corresponding one-dimensional solutions. The n-dimensional momentum wave functions may likewise be represented as products, and the momentum wave function for an n-dimensional centered box is then the product of the one-dimensional momentum wave functions. An interesting feature of the above solutions is that when two or more of the lengths are the same (e.g. Lx = Ly), there are multiple wave functions corresponding to the same total energy. For example, the wave function with nx = 2, ny = 1 has the same energy as the wave function with nx = 1, ny = 2. This situation is called degeneracy, and for the case where exactly two degenerate wave functions have the same energy that energy level is said to be doubly degenerate. Degeneracy results from symmetry in the system. For the above case two of the lengths are equal, so the system is symmetric with respect to a 90° rotation. More complicated wall shapes The wave function for a quantum-mechanical particle in a box whose walls have arbitrary shape is given by the Helmholtz equation subject to the boundary condition that the wave function vanishes at the walls. These systems are studied in the field of quantum chaos for wall shapes whose corresponding dynamical billiard tables are non-integrable. Applications Because of its mathematical simplicity, the particle in a box model is used to find approximate solutions for more complex physical systems in which a particle is trapped in a narrow region of low electric potential between two high potential barriers. These quantum well systems are particularly important in optoelectronics, and are used in devices such as the quantum well laser, the quantum well infrared photodetector and the quantum-confined Stark effect modulator. It is also used to model a lattice in the Kronig–Penney model and for a finite metal with the free electron approximation. Conjugated polyenes Conjugated polyene systems can be modeled using the particle in a box. The conjugated system of electrons can be modeled as a one-dimensional box with length equal to the total bond distance from one terminus of the polyene to the other. In this case each pair of electrons in each π bond corresponds to one energy level. The energy difference between two energy levels, nf and ni, is ΔE = (nf² − ni²)·h²/(8mL²). The difference between the ground state energy, n, and the first excited state, n + 1, corresponds to the energy required to excite the system.
This energy has a specific wavelength, and therefore color, of light, related by λ = hc/E. A common example of this phenomenon is β-carotene. β-Carotene (C40H56) is a conjugated polyene with an orange color and a molecular length of approximately 3.8 nm (though its chain length is only approximately 2.4 nm). Due to β-carotene's high level of conjugation, electrons are dispersed throughout the length of the molecule, allowing one to model it as a one-dimensional particle in a box. β-Carotene has 11 carbon–carbon double bonds in conjugation; each of those double bonds contains two π-electrons, therefore β-carotene has 22 π-electrons. With two electrons per energy level, β-carotene can be treated as a particle in a box filled up to energy level n = 11. Therefore, the minimum energy needed to excite an electron to the next energy level, n = 12, can be calculated as ΔE = (12² − 11²)·h²/(8mL²), recalling that the mass of an electron is 9.109 × 10−31 kg. Using the previous relation of wavelength to energy, and recalling both the Planck constant h and the speed of light c, the corresponding wavelength is λ = hc/ΔE. This indicates that β-carotene primarily absorbs light in the infrared spectrum, and therefore would appear white to a human eye. However, the observed wavelength is 450 nm, indicating that the particle in a box is not a perfect model for this system. Quantum well laser The particle in a box model can be applied to quantum well lasers, which are laser diodes consisting of one semiconductor "well" material sandwiched between two other semiconductor layers of a different material. Because the layers of this sandwich are very thin (the middle layer is typically about 100 Å thick), quantum confinement effects can be observed. The idea that quantum effects could be harnessed to create better laser diodes originated in the 1970s. The quantum well laser was patented in 1976 by R. Dingle and C. H. Henry. Specifically, the quantum well's behavior can be represented by the particle in a finite well model. Two boundary conditions must be selected. The first is that the wave function must be continuous. Often, the second boundary condition is chosen to be that the derivative of the wave function must be continuous across the boundary, but in the case of the quantum well the masses are different on either side of the boundary. Instead, the second boundary condition is chosen to conserve particle flux, requiring (1/m)(dψ/dx) to be continuous across the boundary, which is consistent with experiment. The solution to the finite-well particle in a box must be found numerically, resulting in wave functions that are sine functions inside the quantum well and exponentially decaying functions in the barriers. This quantization of the energy levels of the electrons allows a quantum well laser to emit light more efficiently than conventional semiconductor lasers. Due to their small size, quantum dots do not showcase the bulk properties of the specified semiconductor but rather show quantised energy states. This effect is known as quantum confinement and has led to numerous applications of quantum dots such as the quantum well laser. Researchers at Princeton University have recently built a quantum well laser that is no bigger than a grain of rice. The laser is powered by a single electron that passes through two quantum dots; a double quantum dot. The electron moves from a state of higher energy to a state of lower energy whilst emitting photons in the microwave region. These photons bounce off mirrors to create a beam of light: the laser. The quantum well laser is heavily based on the interaction between light and electrons.
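Returning to the β-carotene estimate worked through above, it can be reproduced numerically; this sketch assumes the 3.8 nm molecular length and the n = 11 → 12 transition described in the text:

```python
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
M_E = 9.1093837015e-31  # electron mass, kg

L = 3.8e-9              # molecular length of beta-carotene (from the text)
n_i, n_f = 11, 12       # highest filled level -> first excited level

delta_e = (n_f**2 - n_i**2) * H**2 / (8 * M_E * L**2)  # energy gap, J
wavelength_nm = H * C / delta_e * 1e9                  # lambda = h c / dE

print(delta_e, "J ->", round(wavelength_nm), "nm")
# Gives roughly 2070 nm, well into the infrared -- far from the observed
# 450 nm absorption, illustrating the limits of the box model here.
```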
This light–electron interaction is a key component in quantum mechanical theories that include the de Broglie wavelength and the particle in a box. The double quantum dot allows scientists to gain full control over the movement of an electron, which consequently results in the production of a laser beam. Quantum dots Quantum dots are extremely small semiconductors (on the scale of nanometers). They display quantum confinement in that the electrons cannot escape the "dot", thus allowing particle-in-a-box approximations to be used. Their behavior can be described by three-dimensional particle-in-a-box energy quantization equations. The energy gap of a quantum dot is the energy gap between its valence and conduction bands. This energy gap is equal to the gap of the bulk material plus the energy term derived from the particle in a box, which gives the energy for electrons and holes. This can be seen in the following equation, where me* and mh* are the effective masses of the electron and hole, r is the radius of the dot, and h is the Planck constant: ΔE(r) = Egap + (h²/(8r²))·(1/me* + 1/mh*). Hence, the energy gap of the quantum dot is inversely proportional to the square of the "length of the box", i.e. the radius of the quantum dot. Manipulation of the band gap allows for the absorption and emission of specific wavelengths of light, as energy is inversely proportional to wavelength. The smaller the quantum dot, the larger the band gap and thus the shorter the wavelength absorbed. Different semiconducting materials are used to synthesize quantum dots of different sizes and therefore emit different wavelengths of light. Materials that normally emit light in the visible region are often used, and their sizes are fine-tuned so that certain colors are emitted. Typical substances used to synthesize quantum dots are cadmium (Cd) and selenium (Se). For example, when the electrons of two-nanometer CdSe quantum dots relax after excitation, blue light is emitted. Similarly, red light is emitted in four-nanometer CdSe quantum dots. Quantum dots have a variety of functions including but not limited to fluorescent dyes, transistors, LEDs, solar cells, and medical imaging via optical probes. One function of quantum dots is their use in lymph node mapping, which is feasible due to their unique ability to emit light in the near-infrared (NIR) region. Lymph node mapping allows surgeons to track if and where cancerous cells exist. Quantum dots are useful for these functions due to their emission of brighter light, excitation by a wide variety of wavelengths, and higher resistance to light than other substances.
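The size dependence just described follows directly from the confinement term. Here is a sketch using the band-gap expression above with illustrative CdSe parameters; the bulk gap and effective masses are assumed values for illustration, not authoritative material constants:

```python
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # joules per electronvolt

# Illustrative values for CdSe (assumptions):
E_BULK_EV = 1.74            # bulk band gap, eV
M_EFF_ELECTRON = 0.13 * M_E # effective electron mass
M_EFF_HOLE = 0.45 * M_E     # effective hole mass

def dot_gap_ev(radius_m):
    """Band gap of a quantum dot: bulk gap plus the particle-in-a-box
    confinement term h^2/(8 r^2) * (1/m_e* + 1/m_h*)."""
    confinement = H**2 / (8 * radius_m**2) * (1/M_EFF_ELECTRON + 1/M_EFF_HOLE)
    return E_BULK_EV + confinement / EV

for radius_nm in (1.0, 2.0):
    print(radius_nm, "nm ->", round(dot_gap_ev(radius_nm * 1e-9), 2), "eV")
# Smaller dots have larger gaps, hence shorter emission wavelengths,
# matching the blue-vs-red CdSe example in the text.
```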
Physical sciences
Quantum mechanics
Physics
24062
https://en.wikipedia.org/wiki/Peroxisome
Peroxisome
A peroxisome is a membrane-bound organelle, a type of microbody, found in the cytoplasm of virtually all eukaryotic cells. Peroxisomes are oxidative organelles. Frequently, molecular oxygen serves as a co-substrate, from which hydrogen peroxide (H2O2) is then formed. Peroxisomes owe their name to their hydrogen peroxide generating and scavenging activities. They perform key roles in lipid metabolism and the reduction of reactive oxygen species. Peroxisomes are involved in the catabolism of very long chain fatty acids, branched chain fatty acids, bile acid intermediates (in the liver), D-amino acids, and polyamines. Peroxisomes also play a role in the biosynthesis of plasmalogens: ether phospholipids critical for the normal function of mammalian brains and lungs. Peroxisomes contain approximately 10% of the total activity of two enzymes (glucose-6-phosphate dehydrogenase and 6-phosphogluconate dehydrogenase) in the pentose phosphate pathway, which is important for energy metabolism. It is vigorously debated whether peroxisomes are involved in isoprenoid and cholesterol synthesis in animals. Other peroxisomal functions include the glyoxylate cycle in germinating seeds ("glyoxysomes"), photorespiration in leaves, glycolysis in trypanosomes ("glycosomes"), and methanol and amine oxidation and assimilation in some yeasts.

History
Peroxisomes (microbodies) were first described by the Swedish doctoral student J. Rhodin in 1954. They were identified as organelles by Christian de Duve and Pierre Baudhuin in 1966. De Duve and co-workers discovered that peroxisomes contain several oxidases involved in the production of hydrogen peroxide (H2O2) as well as catalase involved in the decomposition of H2O2 to oxygen and water. Due to their role in peroxide metabolism, de Duve named them "peroxisomes", replacing the formerly used morphological term "microbodies". Later, it was shown that firefly luciferase is targeted to peroxisomes in mammalian cells, allowing the discovery of the import targeting signal for peroxisomes and triggering many advances in the peroxisome biogenesis field.

Structure
Peroxisomes are small (0.1–1 μm in diameter) subcellular compartments (organelles) with a fine, granular matrix, bounded by a single biomembrane and located in the cytoplasm of a cell. Compartmentalization creates an optimized environment to promote the various metabolic reactions within peroxisomes required to sustain cellular functions and the viability of the organism. The number, size and protein composition of peroxisomes are variable and depend on cell type and environmental conditions. For example, in baker's yeast (S. cerevisiae), it has been observed that with a good glucose supply only a few small peroxisomes are present. In contrast, when yeasts are supplied with long-chain fatty acids as the sole carbon source, 20 to 25 large peroxisomes can form.

Metabolic functions
A major function of the peroxisome is the breakdown of very long chain fatty acids through beta oxidation. In animal cells, the long fatty acids are converted to medium chain fatty acids, which are subsequently shuttled to mitochondria, where they are eventually broken down to carbon dioxide and water. In yeast and plant cells, this process is carried out exclusively in peroxisomes. The first reactions in the formation of plasmalogen in animal cells also occur in peroxisomes. Plasmalogen is the most abundant phospholipid in myelin.
Deficiency of plasmalogens causes profound abnormalities in the myelination of nerve cells, which is one reason why many peroxisomal disorders affect the nervous system. Peroxisomes also play a role in the production of bile acids important for the absorption of fats and fat-soluble vitamins, such as vitamins A and K. As a result, skin disorders are among the features of genetic disorders affecting peroxisome function. The specific metabolic pathways that occur exclusively in mammalian peroxisomes are:
α-oxidation of phytanic acid
β-oxidation of very-long-chain and polyunsaturated fatty acids
biosynthesis of plasmalogens
conjugation of cholic acid as part of bile acid synthesis
Peroxisomes contain oxidative enzymes, such as D-amino acid oxidase and uric acid oxidase. However, the latter enzyme is absent in humans, which explains the human susceptibility to gout, a disease caused by the accumulation of uric acid. Certain enzymes within the peroxisome, by using molecular oxygen, remove hydrogen atoms from specific organic substrates (labeled as R) in an oxidative reaction, producing hydrogen peroxide (H2O2, itself toxic):

RH2 + O2 → R + H2O2

Catalase, another peroxisomal enzyme, uses this H2O2 to oxidize other substrates, including phenols, formic acid, formaldehyde, and alcohol, by means of the peroxidation reaction

H2O2 + R′H2 → R′ + 2 H2O,

thus eliminating the poisonous hydrogen peroxide in the process. This reaction is important in liver and kidney cells, where the peroxisomes detoxify various toxic substances that enter the blood. About 25% of the ethanol that humans consume by drinking alcoholic beverages is oxidized to acetaldehyde in this way. In addition, when excess H2O2 accumulates in the cell, catalase converts it to H2O through this reaction:

2 H2O2 → 2 H2O + O2

In higher plants, peroxisomes also contain a complex battery of antioxidative enzymes, such as superoxide dismutase, the components of the ascorbate-glutathione cycle, and the NADP-dehydrogenases of the pentose-phosphate pathway. It has been demonstrated that peroxisomes generate superoxide (O2•−) and nitric oxide (•NO) radicals. There is evidence now that these reactive oxygen species, including peroxisomal H2O2, are also important signalling molecules in plants and animals and contribute to healthy ageing and age-related disorders in humans. The peroxisome of plant cells is polarised when fighting fungal penetration. Infection causes a glucosinolate molecule with an antifungal role to be made and delivered to the outside of the cell through the action of the peroxisomal proteins PEN2 and PEN3. Peroxisomes in mammals and humans also contribute to antiviral defense and the combat of pathogens.

Peroxisome assembly
Peroxisomes are derived from the smooth endoplasmic reticulum under certain experimental conditions and replicate by membrane growth and division out of pre-existing organelles. Peroxisome matrix proteins are translated in the cytoplasm prior to import. Specific amino acid sequences (PTS, or peroxisomal targeting signals) at the C-terminus (PTS1) or N-terminus (PTS2) of peroxisomal matrix proteins signal them to be imported into the organelle by a targeting factor. There are currently 36 known proteins involved in peroxisome biogenesis and maintenance, called peroxins, which participate in the process of peroxisome assembly in different organisms. In mammalian cells there are 13 characterized peroxins. In contrast to protein import into the endoplasmic reticulum (ER) or mitochondria, proteins do not need to be unfolded to be imported into the peroxisome lumen.
The matrix protein import receptors, the peroxins PEX5 and PEX7, accompany their cargoes (containing a PTS1 or a PTS2 amino acid sequence, respectively) all the way to the peroxisome, where they release the cargo into the peroxisomal matrix and then return to the cytosol – a step named recycling. A special way of peroxisomal protein targeting is called piggybacking. Proteins that are transported by this unique method do not have a canonical PTS, but rather bind to a PTS-bearing protein and are transported as a complex. A model describing the import cycle is referred to as the extended shuttle mechanism. There is now evidence that ATP hydrolysis is required for the recycling of receptors to the cytosol. Also, ubiquitination is crucial for the export of PEX5 from the peroxisome to the cytosol. The biogenesis of the peroxisomal membrane and the insertion of peroxisomal membrane proteins (PMPs) require the peroxins PEX19, PEX3, and PEX16. PEX19 is a PMP receptor and chaperone, which binds the PMPs and routes them to the peroxisomal membrane, where it interacts with PEX3, a peroxisomal integral membrane protein. The PMPs are then inserted into the peroxisomal membrane. The degradation of peroxisomes is called pexophagy.

Peroxisome interaction and communication
The diverse functions of peroxisomes require dynamic interactions and cooperation with many organelles involved in cellular lipid metabolism, such as the endoplasmic reticulum, mitochondria, lipid droplets, and lysosomes. Peroxisomes interact with mitochondria in several metabolic pathways, including β-oxidation of fatty acids and the metabolism of reactive oxygen species. Both organelles are in close contact with the endoplasmic reticulum and share several proteins, including organelle fission factors. Peroxisomes also interact with the endoplasmic reticulum and cooperate in the synthesis of ether lipids (plasmalogens), which are important for nerve cells (see above). In filamentous fungi, peroxisomes move on microtubules by 'hitchhiking', a process involving contact with rapidly moving early endosomes. Physical contact between organelles is often mediated by membrane contact sites, where the membranes of two organelles are physically tethered, enabling rapid transfer of small molecules and organelle communication; such contacts are crucial for the coordination of cellular functions and hence for human health. Alterations of membrane contacts have been observed in various diseases.

Associated medical conditions
Peroxisomal disorders are a class of medical conditions that typically affect the human nervous system as well as many other organ systems. Two common examples are X-linked adrenoleukodystrophy and the peroxisome biogenesis disorders.

Genes
PEX genes encode the protein machinery (peroxins) required for proper peroxisome assembly. Peroxisomal membrane proteins are imported through at least two routes: one depends on the interaction between peroxin 19 and peroxin 3, while the other is required for the import of peroxin 3 itself. Either route may operate without the import of the matrix (lumen) enzymes, which possess the peroxisomal targeting signal PTS1 or PTS2 as previously discussed. Elongation of the peroxisome membrane and the final fission of the organelle are regulated by Pex11p. Genes that encode peroxin proteins include: PEX1, PEX2 (PXMP3), PEX3, PEX5, PEX6, PEX7, PEX9, PEX10, PEX11A, PEX11B, PEX11G, PEX12, PEX13, PEX14, PEX16, PEX19, PEX26, PEX28, PEX30, and PEX31. Between organisms, PEX numbering and function can differ.
Evolutionary origins
The protein content of peroxisomes varies across species, but the presence of proteins common to many species has been used to suggest an endosymbiotic origin; that is, peroxisomes evolved from bacteria that invaded larger cells as parasites and very gradually evolved a symbiotic relationship. However, this view has been challenged by recent discoveries. For example, peroxisome-less mutants can restore peroxisomes upon introduction of the wild-type gene. Two independent evolutionary analyses of the peroxisomal proteome found homologies between the peroxisomal import machinery and the ERAD pathway in the endoplasmic reticulum, along with a number of metabolic enzymes that were likely recruited from the mitochondria. The peroxisome may have had an Actinomycetota origin; however, this is controversial.

Other related organelles
Other organelles of the microbody family related to peroxisomes include the glyoxysomes of plants and filamentous fungi, the glycosomes of kinetoplastids, and the Woronin bodies of filamentous fungi.
Biology and health sciences
Organelles
Biology
24075
https://en.wikipedia.org/wiki/Peripheral%20Component%20Interconnect
Peripheral Component Interconnect
Peripheral Component Interconnect (PCI) is a local computer bus for attaching hardware devices in a computer and is part of the PCI Local Bus standard. The PCI bus supports the functions found on a processor bus but in a standardized format that is independent of any given processor's native bus. Devices connected to the PCI bus appear to a bus master to be connected directly to its own bus and are assigned addresses in the processor's address space. It is a parallel bus, synchronous to a single bus clock. Attached devices can take either the form of an integrated circuit fitted onto the motherboard (called a planar device in the PCI specification) or an expansion card that fits into a slot. The PCI Local Bus was first implemented in IBM PC compatibles, where it displaced the combination of several slow Industry Standard Architecture (ISA) slots and one fast VESA Local Bus (VLB) slot as the bus configuration. It has subsequently been adopted for other computer types. Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as Universal Serial Bus (USB) or serial, TV tuner cards and hard disk drive host adapters. PCI video cards replaced ISA and VLB cards until rising bandwidth needs outgrew the abilities of PCI. The preferred interface for video cards then became Accelerated Graphics Port (AGP), a superset of PCI, before giving way to PCI Express. The first version of PCI found in retail desktop computers was a 32-bit bus using a 33 MHz bus clock and 5 V signaling, although the PCI 1.0 standard provided for a 64-bit variant as well. These cards have one locating notch. Version 2.0 of the PCI standard introduced 3.3 V slots, physically distinguished by a flipped physical connector to prevent accidental insertion of 5 V cards. Universal cards, which can operate on either voltage, have two notches. Version 2.1 of the PCI standard introduced optional 66 MHz operation. A server-oriented variant of PCI, PCI Extended (PCI-X), operated at frequencies up to 133 MHz for PCI-X 1.0 and up to 533 MHz for PCI-X 2.0. An internal connector for laptop cards, called Mini PCI, was introduced in version 2.2 of the PCI specification. The PCI bus was also adopted for an external laptop connector standard, the CardBus. The first PCI specification was developed by Intel, but subsequent development of the standard became the responsibility of the PCI Special Interest Group (PCI-SIG). PCI and PCI-X are sometimes referred to as Parallel PCI or Conventional PCI to distinguish them technologically from their more recent successor, PCI Express, which adopted a serial, lane-based architecture. PCI's heyday in the desktop computer market was approximately 1995 to 2005. PCI and PCI-X have become obsolete for most purposes and have largely disappeared from modern motherboards since 2013; however, they are still common on some modern desktops for backward compatibility and the relatively low cost of production. Another common modern application of parallel PCI is in industrial PCs, where many specialized expansion cards never transitioned to PCI Express, just as with some ISA cards. Many kinds of devices formerly available on PCI expansion cards are now commonly integrated onto motherboards or available in USB and PCI Express versions.

History
Work on PCI began at the Intel Architecture Labs (IAL, also Architecture Development Lab).
A team of primarily IAL engineers defined the architecture and developed a proof-of-concept chipset and platform (Saturn), partnering with teams in the company's desktop PC systems and core logic product organizations. PCI was immediately put to use in servers, replacing Micro Channel architecture (MCA) and Extended Industry Standard Architecture (EISA) as the server expansion bus of choice. In mainstream PCs, PCI was slower to replace VLB, and did not gain significant market penetration until late 1994 in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for Intel 80486 (486) computers. EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers (replacing NuBus) in mid-1995, and for the consumer Performa product line (replacing the LC Processor Direct Slot (PDS)) in mid-1996. Outside the server market, however, the 64-bit version of plain PCI remained rare in practice, although it was used, for example, by all (post-iMac) G3 and G4 Power Macintosh computers. Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard and 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors. Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards. These revisions were used on server hardware, but consumer PC hardware remained nearly all 32-bit, 33 MHz and 5 volt. The PCI-SIG later introduced the serial PCI Express. Since then, motherboard manufacturers have included progressively fewer PCI slots in favor of the new standard; as of late 2013, many new motherboards do not provide PCI slots at all.

Auto configuration
PCI provides separate memory and memory-mapped I/O port address spaces for the x86 processor family, 64 and 32 bits, respectively. Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration Space, which uses a fixed addressing scheme, allows software to determine the amount of memory and I/O address space needed by each device. Each device can request up to six areas of memory space or input/output (I/O) port space via its configuration space registers. In a typical system, the firmware (or operating system) queries all PCI buses at startup time (via PCI Configuration Space) to find out what devices are present and what system resources (memory space, I/O space, interrupt lines, etc.) each needs. It then allocates the resources and tells each device what its allocation is. The PCI configuration space also contains a small amount of device type information, which helps an operating system choose device drivers for the device, or at least have a dialogue with the user about the system configuration. Devices may have an on-board read-only memory (ROM) containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an Option ROM. These are typically needed for devices used during system startup, before device drivers are loaded by the operating system. In addition, there are PCI Latency Timers, a mechanism for PCI bus-mastering devices to share the PCI bus fairly. "Fair" in this case means that devices will not use such a large portion of the available PCI bus bandwidth that other devices cannot get needed work done. Note that this does not apply to PCI Express.
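As a concrete illustration of the configuration-space mechanism just described, the sketch below reads the vendor and device IDs of each PCI function on a Linux system via sysfs, which exposes each function's configuration space as a file. The offsets used (vendor ID at 0x00, device ID at 0x02, both 16-bit little-endian) follow the standard PCI configuration header; treat this as an illustrative sketch for Linux rather than a reference tool.

```python
import os
import struct

SYSFS_PCI = "/sys/bus/pci/devices"  # Linux exposes config space here

def read_ids(device_dir: str) -> tuple[int, int]:
    """Return (vendor_id, device_id) from the first 4 bytes of config space."""
    with open(os.path.join(device_dir, "config"), "rb") as f:
        data = f.read(4)
    # Standard PCI configuration header: vendor ID at offset 0x00,
    # device ID at offset 0x02, both 16-bit little-endian.
    vendor, device = struct.unpack("<HH", data)
    return vendor, device

if __name__ == "__main__":
    for name in sorted(os.listdir(SYSFS_PCI)):
        vendor, device = read_ids(os.path.join(SYSFS_PCI, name))
        print(f"{name}: vendor=0x{vendor:04x} device=0x{device:04x}")
```

An operating system performs essentially this enumeration (over all buses, via configuration reads) at startup before assigning resources and selecting drivers.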
Interrupts
Devices are required to follow a protocol so that the interrupt-request (IRQ) lines can be shared. The PCI bus includes four interrupt lines, INTA# through INTD#, all of which are available to each device. Up to eight PCI devices may share the same IRQ line (INTA# through INTH#) in APIC-enabled x86 systems. Interrupt lines are not wired in parallel as are the other PCI bus lines. The positions of the interrupt lines rotate between slots, so what appears to one device as the INTA# line is INTB# to the next and INTC# to the one after that. Single-function devices usually use their INTA# for interrupt signaling, so the device load is spread fairly evenly across the four available interrupt lines. This alleviates a common problem with sharing interrupts. The mapping of PCI interrupt lines onto system interrupt lines, through the PCI host bridge, is implementation-dependent. Platform-specific firmware or operating system code is meant to know this, and to set the "interrupt line" field in each device's configuration space indicating which IRQ it is connected to. PCI interrupt lines are level-triggered. This was chosen over edge-triggering to gain an advantage when servicing a shared interrupt line, and for robustness: edge-triggered interrupts are easy to miss. Later revisions of the PCI specification add support for message-signaled interrupts. In this system, a device signals its need for service by performing a memory write, rather than by asserting a dedicated line. This alleviates the problem of scarcity of interrupt lines. Even if interrupt vectors are still shared, it does not suffer the sharing problems of level-triggered interrupts. It also resolves the routing problem, because the memory write is not unpredictably modified between device and host. Finally, because the message signaling is in-band, it resolves some synchronization problems that can occur with posted writes and out-of-band interrupt lines. PCI Express does not have physical interrupt lines at all; it uses message-signaled interrupts exclusively.

Conventional hardware specifications
These specifications represent the most common version of PCI used in normal PCs:
33.33 MHz clock with synchronous transfers
Peak transfer rate of 133 MB/s (133 megabytes per second) for 32-bit bus width (33.33 MHz × 32 bits ÷ 8 bits/byte = 133 MB/s)
32-bit bus width
32- or 64-bit memory address space (4 GiB or 16 EiB)
32-bit I/O port space
256-byte (per device) configuration space
5-volt signaling
Reflected-wave switching
The PCI specification also provides options for 3.3 V signaling, 64-bit bus width, and 66 MHz clocking, but these are not commonly encountered outside of PCI-X support on server motherboards. The PCI bus arbiter performs bus arbitration among multiple masters on the PCI bus. Any number of bus masters can reside on the PCI bus and request the bus; one pair of request and grant signals is dedicated to each bus master.

Card voltage and keying
Typical PCI cards have either one or two key notches, depending on their signaling voltage. Cards requiring 3.3 volts have a notch 56.21 mm from the card backplate; those requiring 5 volts have a notch 104.41 mm from the backplate. This allows cards to be fitted only into slots with a voltage they support. "Universal cards" accepting either voltage have both key notches.

Connector pinout
The PCI connector is defined as having 62 contacts on each side of the edge connector, but two or four of them are replaced by key notches, so a card has 60 or 58 contacts on each side.
Side A refers to the 'solder side' and side B refers to the 'component side': if the card is held with the connector pointing down, a view of side A will have the backplate on the right, whereas a view of side B will have the backplate on the left. The pinout of the B and A sides is as follows, looking down into the motherboard connector (pins A1 and B1 are closest to the backplate). 64-bit PCI extends this by an additional 32 contacts on each side, which provide AD[63:32], C/BE[7:4]#, the PAR64 parity signal, and a number of power and ground pins. Most lines are connected to each slot in parallel. The exceptions are:
Each slot has its own REQ# output to, and GNT# input from, the motherboard arbiter.
Each slot has its own IDSEL line, usually connected to a specific AD line.
TDO is daisy-chained to the following slot's TDI. Cards without JTAG support must connect TDI to TDO so as not to break the chain.
PRSNT1# and PRSNT2# for each slot have their own pull-up resistors on the motherboard. The motherboard may (but does not have to) sense these pins to determine the presence of PCI cards and their power requirements.
REQ64# and ACK64# are individually pulled up on 32-bit-only slots.
The interrupt pins INTA# through INTD# are connected to all slots in different orders. (INTA# on one slot is INTB# on the next and INTC# on the one after that.)
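The interrupt-pin rotation just described lends itself to a small illustration. The sketch below models the simple "rotate by one per slot" scheme from the text; real motherboards are free to route interrupts differently, so the mapping function is an assumption for illustration only.

```python
# Illustrative sketch of PCI interrupt-pin rotation between slots.
PINS = ["INTA#", "INTB#", "INTC#", "INTD#"]

def motherboard_line(slot: int, device_pin: str) -> str:
    """Map a device's interrupt pin in a given slot to the system line."""
    pin_index = PINS.index(device_pin)
    return PINS[(pin_index + slot) % 4]

# A single-function device asserting INTA# lands on a different system
# line in each slot, spreading interrupt load across the four lines:
for slot in range(4):
    print(f"slot {slot}: device INTA# -> system {motherboard_line(slot, 'INTA#')}")
```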
Technology
Computer hardware
null
24077
https://en.wikipedia.org/wiki/PDF
PDF
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it. PDF has its roots in "The Camelot Project", initiated by Adobe co-founder John Warnock in 1991. PDF was standardized as ISO 32000 in 2008. The latest edition, ISO 32000-2:2020, was published in December 2020. PDF files may contain a variety of content besides flat text and graphics, including logical structuring elements, interactive elements such as annotations and form fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats. The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features.

History
The development of PDF began in 1991 when John Warnock wrote a paper for a project then code-named Camelot, in which he proposed the creation of a simplified version of PostScript called Interchange PostScript (IPS). Unlike traditional PostScript, which was tightly focused on rendering print jobs to output devices, IPS would be optimized for displaying pages on any screen and any platform. Adobe Systems made the PDF specification available free of charge in 1993. In the early years PDF was popular mainly in desktop publishing workflows, and competed with several other formats, including DjVu, Envoy, Common Ground Digital Paper, Farallon Replica and even Adobe's own PostScript format. PDF was a proprietary format controlled by Adobe until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008, at which time control of the specification passed to an ISO committee of volunteer industry experts. In 2008, Adobe published a Public Patent License to ISO 32000-1 granting royalty-free rights for all patents owned by Adobe necessary to make, use, sell, and distribute PDF-compliant implementations. PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and the JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification. These proprietary technologies are not standardized, and their specification is published only on Adobe's website. Many of them are not supported by popular third-party implementations of PDF. ISO published version 2.0 of PDF, ISO 32000-2, in 2017, available for purchase and replacing the free specification provided by Adobe. In December 2020, the second edition of PDF 2.0, ISO 32000-2:2020, was published, with clarifications, corrections, and critical updates to normative references (ISO 32000-2 does not include any proprietary technologies as normative references). In April 2023 the PDF Association made ISO 32000-2 available for download free of charge.

Technical details
A PDF file is often a combination of vector graphics, text, and bitmap graphics.
The basic types of content in a PDF are:
Typeset text stored as content streams (i.e., not encoded in plain text)
Vector graphics for illustrations and designs that consist of shapes and lines
Raster graphics for photographs and other types of images
Other multimedia objects
In later PDF revisions, a PDF document can also support links (inside a document or to a web page), forms, JavaScript (initially available as a plugin for Acrobat 3.0), or any other types of embedded content that can be handled using plug-ins. PDF combines three technologies:
An equivalent subset of the PostScript page description programming language, but in declarative form, for generating the layout and graphics.
A font-embedding/replacement system to allow fonts to travel with the documents.
A structured storage system to bundle these elements and any associated content into a single file, with data compression where appropriate.

PostScript language
PostScript is a page description language run in an interpreter to generate an image. It can handle graphics and has standard features of programming languages such as branching and looping. PDF is a subset of PostScript, simplified to remove such control-flow features, while graphics commands remain. PostScript was originally designed for a drastically different use case: transmission of one-way linear print jobs in which the PostScript interpreter would collect a series of commands until it encountered the showpage command, then execute all the commands to render a page as a raster image to a printing device. PostScript was not intended for long-term storage and real-time interactive rendering of electronic documents on computer monitors, so there was no need to support anything other than consecutive rendering of pages. If there was an error in the final printed output, the user would correct it at the application level and send a new print job in the form of an entirely new PostScript file. Thus, any given page in a PostScript file could be accurately rendered only as the cumulative result of executing all preceding commands to draw all previous pages (any of which could affect subsequent pages) plus the commands to draw that particular page, and there was no easy way to bypass that process to skip around to different pages. Traditionally, to go from PostScript to PDF, a source PostScript file (that is, an executable program) is used as the basis for generating PostScript-like PDF code (see, e.g., Adobe Distiller). This is done by applying standard compiler techniques like loop unrolling, inlining and removing unused branches, resulting in code that is purely declarative and static. The end result is then packaged into a container format, together with all necessary dependencies for correct rendering (external files, graphics, or fonts to which the document refers), and compressed. Modern applications write to printer drivers that generate PDF directly rather than going through PostScript first. As a document format, PDF has several advantages over PostScript:
PDF contains only static declarative PostScript code that can be processed as data, and does not require a full program interpreter or compiler. This avoids the complexity and security risks of a full language engine.
Like Display PostScript, PDF has supported transparent graphics since version 1.4, while standard PostScript does not.
PDF enforces the rule that the code for any particular page cannot affect any other pages.
That rule is strongly recommended for PostScript code too, but has to be implemented explicitly (see, e.g., the Document Structuring Conventions), as PostScript is a full programming language that allows for far greater flexibility and is not limited to the concepts of pages and documents.
All data required for rendering is included within the file itself, improving portability.
Its disadvantages are:
A loss of flexibility, and limitation to a single use case.
A (sometimes much) larger file size.
PDF since v1.6 supports embedding of interactive 3D documents: 3D drawings can be embedded using U3D or PRC and various other data formats.

File format
A PDF file is organized using ASCII characters, except for certain elements that may have binary content. The file starts with a header containing a magic number (as a readable string) and the version of the format, for example %PDF-1.7. The format is a subset of a COS ("Carousel" Object Structure) format. A COS tree file consists primarily of objects, of which there are nine types:
Boolean values, representing true or false
Real numbers
Integers
Strings, enclosed within parentheses ((...)) or represented as hexadecimal within single angle brackets (<...>); strings may contain 8-bit characters
Names, starting with a forward slash (/)
Arrays, ordered collections of objects enclosed within square brackets ([...])
Dictionaries, collections of objects indexed by names, enclosed within double angle brackets (<<...>>)
Streams, usually containing large amounts of optionally compressed binary data, preceded by a dictionary and enclosed between the stream and endstream keywords
The null object
Comments using 8-bit characters prefixed with the percent sign (%) may be inserted. Objects may be either direct (embedded in another object) or indirect. Indirect objects are numbered with an object number and a generation number and defined between the obj and endobj keywords if residing in the document root. Beginning with PDF version 1.5, indirect objects (except other streams) may also be located in special streams known as object streams (marked /Type /ObjStm). This technique enables non-stream objects to have standard stream filters applied to them, reduces the size of files that have large numbers of small indirect objects, and is especially useful for Tagged PDF. Object streams do not support specifying an object's generation number (other than 0). An index table, also called the cross-reference table, is located near the end of the file and gives the byte offset of each indirect object from the start of the file. This design allows for efficient random access to the objects in the file, and also allows for small changes to be made without rewriting the entire file (incremental update). Before PDF version 1.5, the table would always be in a special ASCII format, be marked with the xref keyword, and follow the main body composed of indirect objects. Version 1.5 introduced optional cross-reference streams, which have the form of a standard stream object, possibly with filters applied. Such a stream may be used instead of the ASCII cross-reference table and contains the offsets and other information in binary format. The format is flexible in that it allows for integer-width specification (using the /W array), so that, for example, a document not exceeding 64 KiB in size may dedicate only 2 bytes for object offsets.
At the end of a PDF file is a footer containing:
The startxref keyword followed by an offset to the start of the cross-reference table (starting with the xref keyword) or the cross-reference stream object, followed by
The %%EOF end-of-file marker.
If a cross-reference stream is not being used, the footer is preceded by the trailer keyword followed by a dictionary containing information that would otherwise be contained in the cross-reference stream object's dictionary:
A reference to the root object of the tree structure, also known as the catalog (/Root)
The count of indirect objects in the cross-reference table (/Size)
Other optional information
Within each page, there are one or more content streams that describe the text, vector graphics, and images being drawn on the page. The content stream is stack-based, similar to PostScript. There are two layouts of PDF files: non-linearized (not "optimized") and linearized ("optimized"). Non-linearized PDF files can be smaller than their linearized counterparts, though they are slower to access because portions of the data required to assemble pages of the document are scattered throughout the PDF file. Linearized PDF files (also called "optimized" or "web optimized" PDF files) are constructed in a manner that enables them to be read in a Web browser plugin without waiting for the entire file to download, since all objects required for the first page to display are optimally organized at the start of the file. PDF files may be optimized using Adobe Acrobat software or QPDF. Page dimensions are not limited by the format itself. However, Adobe Acrobat imposes a limit of 15 million by 15 million inches, or 225 trillion in2 (145,161 km2).

Imaging model
The basic design of how graphics are represented in PDF is very similar to that of PostScript, except for the use of transparency, which was added in PDF 1.4. PDF graphics use a device-independent Cartesian coordinate system to describe the surface of a page. A PDF page description can use a matrix to scale, rotate, or skew graphical elements. A key concept in PDF is that of the graphics state, which is a collection of graphical parameters that may be changed, saved, and restored by a page description. PDF has (as of version 2.0) 25 graphics state properties, of which some of the most important are:
The current transformation matrix (CTM), which determines the coordinate system
The clipping path
The color space
The alpha constant, which is a key component of transparency
Black point compensation control (introduced in PDF 2.0)

Vector graphics
As in PostScript, vector graphics in PDF are constructed with paths. Paths are usually composed of lines and cubic Bézier curves, but can also be constructed from the outlines of text. Unlike PostScript, PDF does not allow a single path to mix text outlines with lines and curves. Paths can be stroked, filled, filled then stroked, or used for clipping. Strokes and fills can use any color set in the graphics state, including patterns. PDF supports several types of patterns. The simplest is the tiling pattern, in which a piece of artwork is specified to be drawn repeatedly. This may be a colored tiling pattern, with the colors specified in the pattern object, or an uncolored tiling pattern, which defers color specification to the time the pattern is drawn. Beginning with PDF 1.3 there is also a shading pattern, which draws continuously varying colors. There are seven types of shading patterns, of which the simplest are the axial shading (Type 2) and radial shading (Type 3).
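To make the file-structure description above concrete (header, indirect objects, content stream, cross-reference table, and trailer), here is a sketch that writes a minimal one-page PDF by hand. It is a toy generator under simplifying assumptions (an uncompressed content stream, the classic ASCII xref table, and one of the standard 14 fonts), not a conforming implementation.

```python
def minimal_pdf(text: str = "Hello, PDF") -> bytes:
    """Write a one-page PDF using the raw object/xref/trailer structure."""
    # Page content stream: select Helvetica at 24 pt, move, and show text.
    content = f"BT /F1 24 Tf 72 720 Td ({text}) Tj ET".encode("ascii")
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",                          # 1: catalog
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",                  # 2: page tree
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Contents 4 0 R /Resources << /Font << /F1 5 0 R >> >> >>",  # 3: page
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),  # 4
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",     # 5: font
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for num, body in enumerate(objects, start=1):
        offsets.append(len(out))                 # byte offset of this object
        out += b"%d 0 obj\n%s\nendobj\n" % (num, body)
    xref_pos = len(out)
    out += b"xref\n0 %d\n" % (len(objects) + 1)
    out += b"0000000000 65535 f \n"              # the mandatory free entry
    for off in offsets:
        out += b"%010d 00000 n \n" % off         # 20-byte in-use entries
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objects) + 1, xref_pos))
    return bytes(out)

with open("minimal.pdf", "wb") as f:
    f.write(minimal_pdf())
```

Opening the output in a text editor shows every element described above in order: the %PDF header, the five numbered indirect objects, the xref table of byte offsets, the trailer dictionary with /Root and /Size, and the startxref/%%EOF footer.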
Raster images
Raster images in PDF (called Image XObjects) are represented by dictionaries with an associated stream. The dictionary describes the properties of the image, and the stream contains the image data. (Less commonly, small raster images may be embedded directly in a page description as an inline image.) Images are typically filtered for compression purposes. Image filters supported in PDF include the following general-purpose filters:
ASCII85Decode, a filter used to put the stream into 7-bit ASCII
ASCIIHexDecode, similar to ASCII85Decode but less compact
FlateDecode, a commonly used filter based on the deflate algorithm defined in RFC 1951 (deflate is also used in the gzip, PNG, and zip file formats, among others); introduced in PDF 1.2, it can use one of two groups of predictor functions for more compact zlib/deflate compression: Predictor 2 from the TIFF 6.0 specification and predictors (filters) from the PNG specification (see the decompression sketch below)
LZWDecode, a filter based on LZW compression; it can use one of two groups of predictor functions for more compact LZW compression: Predictor 2 from the TIFF 6.0 specification and predictors (filters) from the PNG specification
RunLengthDecode, a simple compression method for streams with repetitive data, using the run-length encoding algorithm
and the image-specific filters:
DCTDecode, a lossy filter based on the JPEG standard
CCITTFaxDecode, a lossless bi-level (black/white) filter based on the Group 3 or Group 4 CCITT (ITU-T) fax compression standard defined in ITU-T T.4 and T.6
JBIG2Decode, a lossy or lossless bi-level (black/white) filter based on the JBIG2 standard, introduced in PDF 1.4
JPXDecode, a lossy or lossless filter based on the JPEG 2000 standard, introduced in PDF 1.5
Normally all image content in a PDF is embedded in the file. But PDF allows image data to be stored in external files by the use of external streams or Alternate Images. Standardized subsets of PDF, including PDF/A and PDF/X, prohibit these features.

Text
Text in PDF is represented by text elements in page content streams. A text element specifies that characters should be drawn at certain positions. The characters are specified using the encoding of a selected font resource. A font object in PDF is a description of a digital typeface. It may either describe the characteristics of a typeface, or it may include an embedded font file. The latter case is called an embedded font, while the former is called an unembedded font. The font files that may be embedded are based on widely used standard digital font formats: Type 1 (and its compressed variant CFF), TrueType, and (beginning with PDF 1.6) OpenType. Additionally PDF supports the Type 3 variant, in which the components of the font are described by PDF graphic operators. Fourteen typefaces, known as the standard 14 fonts, have a special significance in PDF documents:
Times (v3) (in regular, italic, bold, and bold italic)
Courier (in regular, oblique, bold and bold oblique)
Helvetica (v3) (in regular, oblique, bold and bold oblique)
Symbol
Zapf Dingbats
These fonts are sometimes called the base fourteen fonts. These fonts, or suitable substitute fonts with the same metrics, should be available in most PDF readers, but they are not guaranteed to be available in the reader, and may display correctly only if the system has them installed. Fonts may be substituted if they are not embedded in a PDF.
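Since the FlateDecode streams listed above are ordinary deflate data, a few lines of Python suffice to decode one once its bytes have been located. The sketch round-trips a content stream with zlib, which is the codec a PDF consumer applies to a FlateDecode filter that uses no predictor; extracting the stream bytes from a real file is assumed to be done elsewhere.

```python
import zlib

# A page content stream as it might appear before compression.
raw = b"BT /F1 12 Tf 72 720 Td (FlateDecode demo) Tj ET"

# What a writer stores between "stream" and "endstream" when the stream
# dictionary says /Filter /FlateDecode (and no /DecodePparms predictor).
compressed = zlib.compress(raw)

# What a reader does to recover the page operators.
assert zlib.decompress(compressed) == raw
print(f"{len(raw)} bytes -> {len(compressed)} bytes compressed")
```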
Within text strings, characters are shown using character codes (integers) that map to glyphs in the current font using an encoding. There are several predefined encodings, including WinAnsi, MacRoman, and many encodings for East Asian languages; a font can also have its own built-in encoding. (Although the WinAnsi and MacRoman encodings are derived from the historical properties of the Windows and Macintosh operating systems, fonts using these encodings work equally well on any platform.) PDF can specify a predefined encoding to use, the font's built-in encoding, or provide a lookup table of differences to a predefined or built-in encoding (not recommended with TrueType fonts). The encoding mechanisms in PDF were designed for Type 1 fonts, and the rules for applying them to TrueType fonts are complex. For large fonts or fonts with non-standard glyphs, the special encodings Identity-H (for horizontal writing) and Identity-V (for vertical writing) are used. With such fonts, it is necessary to provide a ToUnicode table if semantic information about the characters is to be preserved. A text document which is scanned to PDF without the text being recognised by optical character recognition (OCR) is an image, with no fonts or text properties.

Transparency
The original imaging model of PDF was opaque, similar to PostScript, where each object drawn on the page completely replaced anything previously marked in the same location. In PDF 1.4 the imaging model was extended to allow transparency. When transparency is used, new objects interact with previously marked objects to produce blending effects. The addition of transparency to PDF was done by means of new extensions that were designed to be ignored in products written to the PDF 1.3 and earlier specifications. As a result, files that use a small amount of transparency might be viewed acceptably by older viewers, but files making extensive use of transparency could be viewed incorrectly by an older viewer. The transparency extensions are based on the key concepts of transparency groups, blending modes, shape, and alpha. The model is closely aligned with the features of Adobe Illustrator version 9. The blend modes were based on those used by Adobe Photoshop at the time. When the PDF 1.4 specification was published, the formulas for calculating blend modes were kept secret by Adobe. They have since been published. The concept of a transparency group in the PDF specification is independent of existing notions of "group" or "layer" in applications such as Adobe Illustrator. Those groupings reflect logical relationships among objects that are meaningful when editing those objects, but they are not part of the imaging model.

Additional features
Logical structure and accessibility
A tagged PDF (see clause 14.8 in ISO 32000) includes document structure and semantics information to enable reliable text extraction and accessibility. Technically speaking, tagged PDF is a stylized use of the format that builds on the logical structure framework introduced in PDF 1.3. Tagged PDF defines a set of standard structure types and attributes that allow page content (text, graphics, and images) to be extracted and reused for other purposes. Tagged PDF is not required in situations where a PDF file is intended only for print. Since the feature is optional, and since the rules for tagged PDF were relatively vague in ISO 32000-1, support for tagged PDF among consuming devices, including assistive technology (AT), is uneven as of 2021. ISO 32000-2, however, includes an improved discussion of tagged PDF, which is anticipated to facilitate further adoption.
An ISO-standardized subset of PDF specifically targeted at accessibility, PDF/UA, was first published in 2012.

Optional Content Groups (layers)
With the introduction of PDF version 1.5 (2003) came the concept of layers. Layers, more formally known as Optional Content Groups (OCGs), refer to sections of content in a PDF document that can be selectively viewed or hidden by document authors or viewers. This capability is useful in CAD drawings, layered artwork, maps, multi-language documents, etc. Basically, it consists of an Optional Content Properties Dictionary added to the document root. This dictionary contains an array of Optional Content Groups (OCGs), each describing a set of information and each of which may be individually displayed or suppressed, plus a set of Optional Content Configuration Dictionaries, which give the status (displayed or suppressed) of the given OCGs.

Encryption and signatures
A PDF file may be encrypted, for security, in which case a password is needed to view or edit the contents. PDF 2.0 defines 256-bit AES encryption as the standard for PDF 2.0 files. The PDF Reference also defines ways that third parties can define their own encryption systems for PDF. PDF files may be digitally signed to provide secure authentication; complete details on implementing digital signatures in PDF are provided in ISO 32000-2. PDF files may also contain embedded DRM restrictions that provide further controls limiting copying, editing, or printing. These restrictions depend on the reader software to obey them, so the security they provide is limited. The standard security provided by PDF consists of two different methods and two different passwords: a user password, which encrypts the file and prevents opening, and an owner password, which specifies operations that should be restricted even when the document is decrypted; these can include modifying, printing, or copying text and graphics out of the document, or adding or modifying text notes and AcroForm fields. The user password encrypts the file, while the owner password does not; instead, the owner password relies on client software to respect the restrictions. An owner password can easily be removed by software, including some free online services. Thus, the use restrictions that a document author places on a PDF document are not secure, and cannot be assured once the file is distributed; this warning is displayed when applying such restrictions using Adobe Acrobat software to create or edit PDF files. Even without removing the password, most freeware or open source PDF readers ignore the permission "protections" and allow the user to print or make copies of excerpts of the text as if the document were not limited by password protection.
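As an illustration of the user/owner password model described above, the sketch below encrypts an existing file with the third-party pypdf library. This is an assumption, not part of the PDF standard itself: pypdf and its PdfWriter.encrypt API (as shipped in recent releases, with AES-256 requiring the cryptography package) are used here purely for demonstration, and input.pdf is a placeholder filename.

```python
# Hypothetical illustration using the pypdf library (pip install pypdf).
from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
writer = PdfWriter()
writer.append(reader)                      # copy all pages into the writer

# The user password gates opening the file; the owner password gates the
# restricted operations (printing, copying, ...) once it is decrypted.
writer.encrypt(
    user_password="open-secret",
    owner_password="owner-secret",
    algorithm="AES-256",                   # PDF 2.0-style 256-bit AES
)

with open("encrypted.pdf", "wb") as f:
    writer.write(f)
```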
Beginning with PDF 1.5, usage rights (UR) signatures are used to enable additional interactive features that are not available by default in a particular PDF viewer application. The signature is used to validate that the permissions have been granted by a bona fide granting authority. For example, it can be used to allow a user to:
Save the PDF document along with modified form or annotation data
Import form data files in FDF, XFDF, and text (CSV/TSV) formats
Export form data files in FDF and XFDF formats
Submit form data
Instantiate new pages from named page templates
Apply a digital signature to an existing digital signature form field
Create, delete, modify, copy, import, and export annotations
For example, Adobe Systems grants permissions to enable additional features in Adobe Reader, using public-key cryptography. Adobe Reader verifies that the signature uses a certificate from an Adobe-authorized certificate authority. Any PDF application can use this same mechanism for its own purposes. Under specific circumstances, including unpatched systems on the receiver's side, the information that the receiver of a digitally signed document sees can be manipulated by the sender after the document has been signed. PAdES (PDF Advanced Electronic Signatures) is a set of restrictions and extensions to PDF and ISO 32000-1 making it suitable for advanced electronic signatures. It is published by ETSI as TS 102 778.

File attachments
PDF files can have file attachments, which processors may access and open or save to a local filesystem.

Metadata
PDF files can contain two types of metadata. The first is the Document Information Dictionary, a set of key/value fields such as author, title, subject, and creation and update dates. This is optional and is referenced from an Info key in the trailer of the file. A small set of fields is defined and can be extended with additional text values if required. This method is deprecated in PDF 2.0. In PDF 1.4, support was added for Metadata Streams, using the Extensible Metadata Platform (XMP) to add XML standards-based extensible metadata as used in other file formats. PDF 2.0 allows metadata to be attached to any object in the document, such as information about embedded illustrations, fonts, and images, as well as to the whole document (attaching to the document catalog), using an extensible schema. PDF documents can also contain display settings, including the page display layout and zoom level, in a Viewer Preferences object. Adobe Reader uses these settings to override the user's default settings when opening the document. The free Adobe Reader cannot remove these settings.

Accessibility
PDF files can be created specifically to be accessible to people with disabilities. Accessible PDF files can include tags, text equivalents, captions, audio descriptions, and more. Some software can automatically produce tagged PDFs, but this feature is not always enabled by default. Leading screen readers, including JAWS, Window-Eyes, Hal, and Kurzweil 1000 and 3000, can read tagged PDFs. Moreover, tagged PDFs can be re-flowed and magnified for readers with visual impairments. Adding tags to older PDFs, and to those that are generated from scanned documents, can present some challenges. One of the significant challenges with PDF accessibility is that PDF documents have three distinct views, which, depending on the document's creation, can be inconsistent with each other. The three views are (i) the physical view, (ii) the tags view, and (iii) the content view. The physical view is displayed and printed (what most people consider a PDF document). The tags view is what screen readers and other assistive technologies use to deliver a high-quality navigation and reading experience to users with disabilities.
The content view is based on the physical order of objects within the PDF's content stream and may be displayed by software that does not fully support the tags view, such as the Reflow feature in Adobe's Reader. PDF/UA, the international standard for accessible PDF based on ISO 32000-1, was first published as ISO 14289-1 in 2012 and establishes normative language for accessible PDF technology.

Multimedia
Rich Media PDF is a PDF file including interactive content that can be embedded or linked within the file. It can contain images, audio, video content, or buttons. For example, if the interactive PDF is a digital catalog for an e-commerce business, products can be listed on the PDF pages with images and links to the website, and buttons to order directly from the document.

Forms
Interactive Forms is a mechanism to add forms to the PDF file format. PDF currently supports two different methods for integrating data and PDF forms. Both formats today coexist in the PDF specification:
AcroForms (also known as Acrobat forms), introduced in the PDF 1.2 format specification and included in all later PDF specifications.
XML Forms Architecture (XFA) forms, introduced in the PDF 1.5 format specification. Adobe XFA Forms are not compatible with AcroForms. XFA was deprecated from PDF with PDF 2.0.
AcroForms were introduced in the PDF 1.2 format. AcroForms permit the use of objects (e.g. text boxes, radio buttons) and some code (e.g. JavaScript). Alongside the standard PDF action types, interactive forms (AcroForms) support submitting, resetting, and importing data. The "submit" action transmits the names and values of selected interactive form fields to a specified uniform resource locator (URL). Interactive form field names and values may be submitted in any of the following formats, depending on the settings of the action's ExportFormat, SubmitPDF, and XFDF flags:
HTML Form format: HTML 4.01 Specification since PDF 1.5; HTML 2.0 since PDF 1.2.
Forms Data Format (FDF): based on PDF, it uses the same syntax and has essentially the same file structure, but is much simpler than PDF, since the body of an FDF document consists of only one required object. Forms Data Format is defined in the PDF specification (since PDF 1.2). The Forms Data Format can be used when submitting form data to a server, receiving the response, and incorporating it into the interactive form. It can also be used to export form data to stand-alone files that can be imported back into the corresponding PDF interactive form. FDF was originally defined in 1996; it is now specified as part of ISO 32000-2:2017.
XML Forms Data Format (XFDF): the XML version of Forms Data Format (external XML Forms Data Format Specification, Version 2.0; supported since PDF 1.5; it replaced the "XML" form submission format defined in PDF 1.4). XFDF implements only a subset of FDF, containing forms and annotations. Some entries in the FDF dictionary do not have XFDF equivalents – such as the Status, Encoding, JavaScript, Page's keys, EmbeddedFDFs, Differences, and Target. In addition, XFDF does not allow the spawning, or addition, of new pages based on the given data, as can be done when using an FDF file. The XFDF specification is referenced (but not included) in the PDF 1.5 specification (and in later versions); it is described separately in the XML Forms Data Format Specification. The PDF 1.4 specification allowed form submissions in XML format, but this was replaced by submissions in XFDF format in the PDF 1.5 specification. XFDF conforms to the XML standard.
XFDF can be used in the same way as FDF; e.g., form data is submitted to a server, modifications are made, then the data is sent back and the new form data is imported into an interactive form. It can also be used to export form data to stand-alone files that can be imported back into the corresponding PDF interactive form. As of August 2019, XFDF 3.0 is an ISO/IEC standard under the formal name ISO 19444-1:2019 – Document management – XML Forms Data Format – Part 1: Use of ISO 32000-2 (XFDF 3.0). This standard is a normative reference of ISO 32000-2.
PDF: the entire document can be submitted, rather than individual fields and values, as was defined in PDF 1.4.
AcroForms can keep form field values in external stand-alone files containing key-value pairs. The external files may use Forms Data Format (FDF) and XML Forms Data Format (XFDF). The usage rights (UR) signatures define rights for the import of form data files in FDF, XFDF, and text (CSV/TSV) formats, and for the export of form data files in FDF and XFDF formats. In PDF 1.5, Adobe Systems introduced a proprietary format for forms: Adobe XML Forms Architecture (XFA). Adobe XFA Forms are not compatible with ISO 32000's AcroForms feature, and most PDF processors do not handle XFA content. The XFA specification is referenced from ISO 32000-1/PDF 1.7 as an external proprietary specification and was entirely deprecated from PDF with ISO 32000-2 (PDF 2.0).

Licensing
Anyone may create applications that can read and write PDF files without having to pay royalties to Adobe Systems; Adobe holds patents to PDF, but licenses them for royalty-free use in developing software complying with its PDF specification.

Security
Changes to content
In November 2019, researchers from Ruhr University Bochum and Hackmanit GmbH published attacks on digitally signed PDFs. They showed how to change the visible content in a signed PDF without invalidating the signature in 21 of 22 desktop PDF viewers and 6 of 8 online validation services by abusing implementation flaws. At the same conference, they additionally showed how to exfiltrate the plaintext of encrypted content in PDFs. In 2021, they showed new so-called shadow attacks on PDFs that abuse the flexibility of features provided in the specification. An overview of security issues in PDFs regarding denial of service, information disclosure, data manipulation, and arbitrary code execution attacks was presented by Jens Müller.

Malware vulnerability
PDF files can be infected with viruses, Trojans, and other malware. They can have hidden JavaScript code that might exploit vulnerabilities in a PDF reader, hidden objects executed when the file that hides them is opened, and, less commonly, a malicious PDF can launch malware. PDF attachments carrying viruses were first discovered in 2001. The virus, named OUTLOOK.PDFWorm or Peachy, uses Microsoft Outlook to send itself as an attached Adobe PDF file. It was activated with Adobe Acrobat, but not with Acrobat Reader. From time to time, new vulnerabilities are discovered in various versions of Adobe Reader, prompting the company to issue security fixes. Other PDF readers are also susceptible. One aggravating factor is that a PDF reader can be configured to start automatically if a web page has an embedded PDF file, providing a vector for attack. If a malicious web page contains an infected PDF file that takes advantage of a vulnerability in the PDF reader, the system may be compromised even if the browser is secure.
Some of these vulnerabilities are a result of the PDF standard allowing PDF documents to be scripted with JavaScript. Disabling JavaScript execution in the PDF reader can help mitigate such future exploits, although it does not protect against exploits in other parts of the PDF viewing software. Security experts say that JavaScript is not essential for a PDF reader and that the security benefit of disabling JavaScript outweighs any compatibility issues caused. One way of avoiding PDF file exploits is to have a local or web service convert files to another format before viewing.
On March 30, 2010, security researcher Didier Stevens reported an Adobe Reader and Foxit Reader exploit that runs a malicious executable if the user allows it to launch when asked.
Software
Viewers and editors
Many PDF viewers are provided free of charge from a variety of sources. Programs to manipulate and edit PDF files are available, usually for purchase.
There are many software options for creating PDFs, including the PDF printing capabilities built into macOS, iOS, and most Linux distributions. Much document-processing software, including LibreOffice, Microsoft Office 2007 (if updated to SP2) and later, WordPerfect 9, and Scribus, can export documents in PDF. Many applications can produce PDF directly, among them the pdfTeX typesetting system, the DocBook PDF tools, applications developed around Ghostscript, and Adobe Acrobat itself, as well as Adobe InDesign, Adobe FrameMaker, Adobe Illustrator, and Adobe Photoshop; in addition, many PDF print drivers for Microsoft Windows allow a "PDF printer" to be set up, which when selected sends output to a PDF file instead of a physical printer. Google's online office suite Google Docs allows uploading and saving to PDF. Some web apps offer free PDF editing and annotation tools.
The Free Software Foundation was "developing a free, high-quality and fully functional set of libraries and programs that implement the PDF file format and associated technologies to the ISO 32000 standard", as one of its high-priority projects. In 2011, however, the GNU PDF project was removed from the list of "high priority projects" due to the maturation of the Poppler library, which has enjoyed wider use in applications such as Evince with the GNOME desktop environment. Poppler is based on the Xpdf code base. There are also commercial development libraries available, as listed in List of PDF software. The Apache PDFBox project of the Apache Software Foundation is an open-source Java library, licensed under the Apache License, for working with PDF documents.
Printing
Raster image processors (RIPs) are used to convert PDF files into a raster format suitable for imaging onto paper and other media in printers, digital production presses, and prepress, in a process known as rasterization. RIPs capable of processing PDF directly include the Adobe PDF Print Engine from Adobe Systems and Jaws and the Harlequin RIP from Global Graphics. In 1993, the Jaws raster image processor from Global Graphics became the first shipping prepress RIP that interpreted PDF natively without conversion to another format. The company released an upgrade to its Harlequin RIP with the same capability in 1997. Agfa-Gevaert introduced and shipped Apogee, the first prepress workflow system based on PDF, in 1997.
Many commercial offset printers have accepted the submission of press-ready PDF files as a print source, specifically the PDF/X-1a subset and variations of the same.
The submission of press-ready PDF files replaces the problematic need to receive collected native working files. In 2006, PDF was widely accepted as the standard print job format at the Open Source Development Labs Printing Summit. It is supported as a print job format by the Common Unix Printing System, and desktop application projects such as GNOME, KDE, Firefox, Thunderbird, LibreOffice and OpenOffice have switched to emitting print jobs in PDF. Some desktop printers also support direct PDF printing, interpreting PDF data without external help.
Native display model
PDF was selected as the "native" metafile format for macOS (originally called Mac OS X), replacing the PICT format of the earlier classic Mac OS. The imaging model of the Quartz graphics layer is based on the model common to Display PostScript and PDF, leading to the nickname Display PDF. The Preview application can display PDF files, as can version 2.0 and later of the Safari web browser. System-level support for PDF allows macOS applications to create PDF documents automatically, provided they support the OS-standard printing architecture. The files are then exported in PDF 1.3 format, according to the file header. When taking a screenshot under Mac OS X versions 10.0 through 10.3, the image was also captured as a PDF; later versions save screen captures as a PNG file, though this behavior can be set back to PDF if desired.
Annotation
Adobe Acrobat is one example of proprietary software that allows the user to annotate, highlight, and add notes to already created PDF files. One UNIX application available as free software (under the GNU General Public License) is PDFedit. The freeware Foxit Reader, available for Microsoft Windows, macOS and Linux, allows annotating documents. Tracker Software's PDF-XChange Viewer allows annotations and markups without restrictions in its freeware alternative. Apple's integrated macOS PDF viewer, Preview, also enables annotations, as does the open-source software Skim, with the latter supporting interaction with LaTeX, SyncTeX, and PDFSync, and integration with the BibDesk reference management software. The freeware Qiqqa can create an annotation report that summarizes all the annotations and notes one has made across a library of PDFs. The Text Verification Tool exports differences in documents as annotations and markups.
There are also web annotation systems that support annotation in PDF and other document formats. In cases where PDFs are expected to have all of the functionality of paper documents, ink annotation is required.
Conversion and Information Extraction
PDF's emphasis on preserving the visual appearance of documents across different software and hardware platforms poses challenges to the conversion of PDF documents to other file formats and to the targeted extraction of information, such as text, images, tables, bibliographic information, and document metadata. Numerous tools and source code libraries support these tasks (see the sketch at the end of this article). Several labeled datasets for testing PDF conversion and information extraction tools exist and have been used for benchmark evaluations of the tools' performance.
Alternatives
The Open XML Paper Specification is a competing format used both as a page description language and as the native print spooler format for Microsoft Windows since Windows Vista. Mixed Object: Document Content Architecture is a competing format. MO:DCA-P is a part of Advanced Function Presentation.
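The sketch referenced above: a minimal text and metadata extraction example using the open-source pypdf library, one of many possible tools and chosen here only for illustration. The file name is hypothetical, and faithful extraction of tables or reading order generally needs more specialised tooling.

```python
# Minimal conversion/extraction sketch (assumes the pypdf library;
# "report.pdf" is a hypothetical input file).
from pypdf import PdfReader

reader = PdfReader("report.pdf")

# Document metadata may be absent (None) in files without an Info dictionary.
if reader.metadata is not None:
    print("Title:", reader.metadata.title)

# Page-by-page plain-text extraction; reading order and layout are not
# guaranteed, which is one of the challenges noted above.
for page in reader.pages:
    print(page.extract_text())
```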
Technology
File formats
null
24085
https://en.wikipedia.org/wiki/Phenol
Phenol
Phenol (also known as carbolic acid, phenolic acid, or benzenol) is an aromatic organic compound with the molecular formula C6H5OH. It is a white crystalline solid that is volatile. The molecule consists of a phenyl group (C6H5) bonded to a hydroxy group (OH). Mildly acidic, it requires careful handling because it can cause chemical burns.
Phenol was first extracted from coal tar, but today is produced on a large scale (about 7 million tonnes a year) from petroleum-derived feedstocks. It is an important industrial commodity as a precursor to many materials and useful compounds. It is primarily used to synthesize plastics and related materials. Phenol and its chemical derivatives are essential for the production of polycarbonates, epoxies, explosives, Bakelite, nylon, detergents, herbicides such as phenoxy herbicides, and numerous pharmaceutical drugs.
Properties
Phenol is an organic compound appreciably soluble in water, with about 84.2 g dissolving in 1000 ml (0.895 M). Homogeneous mixtures of phenol and water at phenol-to-water mass ratios of ~2.6 and higher are possible. The sodium salt of phenol, sodium phenoxide, is far more water-soluble. Phenol is a combustible solid (NFPA rating = 2). When heated, it produces flammable vapors that are explosive at concentrations of 3 to 10% in air. Carbon dioxide or dry chemical extinguishers should be used to fight phenol fires.
Acidity
Phenol is a weak acid (pKa about 10). In aqueous solution in the pH range ca. 8 - 12 it is in equilibrium with the phenolate anion (also called phenoxide or carbolate): C6H5OH ⇌ C6H5O− + H+. Phenol is more acidic than aliphatic alcohols. Its enhanced acidity is attributed to resonance stabilization of the phenolate anion: the negative charge on oxygen is delocalized onto the ortho and para carbon atoms through the pi system. An alternative explanation involves the sigma framework, postulating that the dominant effect is induction from the more electronegative sp2-hybridised carbons; the comparatively more powerful inductive withdrawal of electron density provided by the sp2 system, compared to an sp3 system, allows for great stabilization of the oxyanion. In support of the second explanation, the pKa of the enol of acetone in water is 10.9, making it only slightly less acidic than phenol (pKa 10.0). Thus, the greater number of resonance structures available to phenoxide compared to acetone enolate seems to contribute little to its stabilization. However, the situation changes when solvation effects are excluded.
Hydrogen bonding
In carbon tetrachloride and in alkane solvents, phenol hydrogen-bonds with a wide range of Lewis bases such as pyridine, diethyl ether, and diethyl sulfide. The enthalpies of adduct formation and the IR frequency shifts accompanying adduct formation have been compiled. Phenol is classified as a hard acid.
Tautomerism
Phenol exhibits keto-enol tautomerism with its unstable keto tautomer cyclohexadienone, but the effect is nearly negligible. The equilibrium constant between the tautomers (the keto:enol ratio) is approximately 10^-13, which means only one in every ten trillion molecules is in the keto form at any moment (see the arithmetic sketch below). The small amount of stabilisation gained by exchanging a C=C bond for a C=O bond is more than offset by the large destabilisation resulting from the loss of aromaticity. Phenol therefore exists essentially entirely in the enol form. 4,4'-Substituted cyclohexadienones can undergo a dienone–phenol rearrangement in acid conditions to form stable 3,4-disubstituted phenols.
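The arithmetic sketch referenced above, taking the quoted keto:enol ratio of about 10^-13 as given:

```latex
% Fraction of phenol molecules present as the keto tautomer,
% assuming K = [keto]/[enol] \approx 10^{-13}
\frac{[\mathrm{keto}]}{[\mathrm{keto}] + [\mathrm{enol}]}
  \approx \frac{[\mathrm{keto}]}{[\mathrm{enol}]}
  = 10^{-13} = \frac{1}{10^{13}}
```

That is, roughly one molecule in ten trillion (10^13) is the cyclohexadienone form, matching the statement above.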
For substituted phenols, several factors can favor the keto tautomer: (a) additional hydroxy groups (see resorcinol), (b) annulation, as in the formation of naphthols, and (c) deprotonation to give the phenolate. Phenoxides are enolates stabilised by aromaticity. Under normal circumstances, phenoxide is more reactive at the oxygen position, but the oxygen position is a "hard" nucleophile whereas the alpha-carbon positions tend to be "soft".
Reactions
Phenol is highly reactive toward electrophilic aromatic substitution. The enhanced nucleophilicity is attributed to donation of pi electron density from O into the ring. Many groups can be attached to the ring via halogenation, acylation, sulfonation, and related processes. Phenol is so strongly activated that bromination and chlorination lead readily to polysubstitution. The reaction affords 2- and 4-substituted derivatives. The regiochemistry of halogenation changes in strongly acidic solutions, where the protonated form of phenol predominates. Phenol reacts with dilute nitric acid at room temperature to give a mixture of 2-nitrophenol and 4-nitrophenol, while with concentrated nitric acid additional nitro groups are introduced, e.g. to give 2,4,6-trinitrophenol.
Friedel–Crafts alkylations of phenol and its derivatives often proceed without catalysts. Alkylating agents include alkyl halides, alkenes, and ketones. Thus, adamantyl-1-bromide, dicyclopentadiene, and cyclohexanones give, respectively, 4-adamantylphenol, a bis(2-hydroxyphenyl) derivative, and 4-cyclohexylphenols. Alcohols and hydroperoxides alkylate phenols in the presence of solid acid catalysts (e.g. certain zeolites). Cresols and cumylphenols can be produced in that way.
Aqueous solutions of phenol are weakly acidic and turn blue litmus paper slightly red. Phenol is neutralized by sodium hydroxide, forming sodium phenate (phenolate), but, being weaker than carbonic acid, it cannot be neutralized by sodium bicarbonate or sodium carbonate to liberate carbon dioxide. When a mixture of phenol and benzoyl chloride is shaken in the presence of dilute sodium hydroxide solution, phenyl benzoate is formed; this is an example of the Schotten–Baumann reaction. Phenol is reduced to benzene when it is distilled with zinc dust or when its vapour is passed over granules of zinc at 400 °C (C6H5OH + Zn → C6H6 + ZnO). When phenol is treated with diazomethane in the presence of boron trifluoride (BF3), anisole is obtained as the main product, with nitrogen gas as a byproduct. Phenol and its derivatives react with iron(III) chloride to give intensely colored solutions containing phenoxide complexes.
Production
Because of phenol's commercial importance, many methods have been developed for its production, but the cumene process is the dominant technology.
Cumene process
Accounting for 95% of production (2003) is the cumene process, also called the Hock process. It involves the partial oxidation of cumene (isopropylbenzene) via the Hock rearrangement (see the reaction summary below). Compared to most other processes, the cumene process uses mild conditions and inexpensive raw materials. For the process to be economical, both phenol and the acetone by-product must be in demand. In 2010, worldwide demand for acetone was approximately 6.7 million tonnes, 83 percent of which was satisfied with acetone produced by the cumene process. A route analogous to the cumene process begins with cyclohexylbenzene, which is oxidized to a hydroperoxide, akin to the production of cumene hydroperoxide. Via the Hock rearrangement, cyclohexylbenzene hydroperoxide cleaves to give phenol and cyclohexanone.
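The overall chemistry of the cumene route can be summarised with the standard textbook equations (a sketch reconstructing the reaction scheme, whose original images are not reproduced here):

```latex
% Cumene process (Hock process), overall:
% 1) autoxidation of cumene to cumene hydroperoxide
\mathrm{C_6H_5CH(CH_3)_2 + O_2 \longrightarrow C_6H_5C(CH_3)_2OOH}
% 2) acid-catalysed Hock rearrangement to phenol and acetone
\mathrm{C_6H_5C(CH_3)_2OOH \xrightarrow{H^+} C_6H_5OH + (CH_3)_2CO}
```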
Cyclohexanone is an important precursor to some nylons.
Oxidation of benzene, toluene, cyclohexylbenzene
The direct oxidation of benzene (C6H6) to phenol is possible but has not been commercialized. Nitrous oxide is a potentially "green" oxidant that is more potent than O2; routes for the generation of nitrous oxide, however, remain uncompetitive. An electrosynthesis employing alternating current gives phenol from benzene. The oxidation of toluene, as developed by Dow Chemical, involves copper-catalyzed reaction of molten sodium benzoate with air; the reaction is proposed to proceed via formation of benzoylsalicylate. Autoxidation of cyclohexylbenzene gives the hydroperoxide, and decomposition of this hydroperoxide affords cyclohexanone and phenol.
Older methods
Early methods relied on extraction of phenol from coal derivatives or the hydrolysis of benzene derivatives.
Hydrolysis of benzenesulfonic acid
The original commercial route was developed by Bayer and Monsanto in the early 1900s, based on discoveries by Wurtz and Kekulé. The method involves the reaction of strong base with benzenesulfonic acid, proceeding by the reaction of hydroxide with sodium benzenesulfonate to give sodium phenoxide. Acidification of the latter gives phenol. The net conversion is: C6H5SO3H + 2 NaOH → C6H5OH + Na2SO3 + H2O.
Hydrolysis of chlorobenzene
Chlorobenzene can be hydrolyzed to phenol using base (Dow process) or steam (Raschig–Hooker process). These methods suffer from the cost of the chlorobenzene and the need to dispose of the chloride byproduct.
Coal pyrolysis
Phenol is also a recoverable byproduct of coal pyrolysis. In the Lummus process, the oxidation of toluene to benzoic acid is conducted separately.
Miscellaneous methods
Phenyldiazonium salts hydrolyze to phenol. The method is of no commercial interest since the precursor is expensive. Salicylic acid decarboxylates to phenol.
Uses
The major uses of phenol, consuming two-thirds of its production, involve its conversion to precursors for plastics. Condensation with acetone gives bisphenol A, a key precursor to polycarbonates and epoxide resins. Condensation of phenol, alkylphenols, or diphenols with formaldehyde gives phenolic resins, a famous example of which is Bakelite. Partial hydrogenation of phenol gives cyclohexanone, a precursor to nylon. Nonionic detergents are produced by alkylation of phenol to give the alkylphenols, e.g. nonylphenol, which are then subjected to ethoxylation.
Phenol is also a versatile precursor to a large collection of drugs, most notably aspirin, but also many herbicides and pharmaceutical drugs. Phenol is a component in the liquid–liquid phenol–chloroform extraction technique used in molecular biology for obtaining nucleic acids from tissues or cell culture samples; depending on the pH of the solution, either DNA or RNA can be extracted. Phenol is so inexpensive that it also attracts many small-scale uses. It is a component of industrial paint strippers used in the aviation industry for the removal of epoxy, polyurethane, and other chemically resistant coatings. Due to safety concerns, phenol is banned from use in cosmetic products in the European Union and Canada.
Medical
Phenol was widely used as an antiseptic, and it is used in the production of carbolic soap. Concentrated phenol liquids are used for permanent treatment of ingrown toenails and fingernails, a procedure known as a chemical matrixectomy. The procedure was first described by Otto Boll in 1945.
Since that time phenol has become the chemical of choice for chemical matrixectomies performed by podiatrists.
Concentrated liquid phenol can be used topically as a local anesthetic for otology procedures, such as myringotomy and tympanotomy tube placement, as an alternative to general anesthesia or other local anesthetics. It also has hemostatic and antiseptic qualities that make it ideal for this use. Phenol spray, usually with 1.4% phenol as the active ingredient, is used medically to treat sore throat. It is the active ingredient in some oral analgesics such as Chloraseptic spray, TCP and Carmex.
History
Phenol was discovered in 1834 by Friedlieb Ferdinand Runge, who extracted it (in impure form) from coal tar. Runge called phenol "Karbolsäure" (coal-oil acid, carbolic acid). Coal tar remained the primary source until the development of the petrochemical industry. French chemist Auguste Laurent extracted phenol in its pure form, as a derivative of benzene, in 1841. In 1836, Laurent had coined the name "phène" for benzene; this is the root of the words "phenol" and "phenyl". In 1843, French chemist Charles Gerhardt coined the name "phénol".
The antiseptic properties of phenol were used by Sir Joseph Lister in his pioneering technique of antiseptic surgery. Lister decided that the wounds had to be thoroughly cleaned. He then covered the wounds with a piece of rag or lint covered in phenol. The skin irritation caused by continual exposure to phenol eventually led to the introduction of aseptic (germ-free) techniques in surgery.
Lister's work was inspired by the works and experiments of his contemporary Louis Pasteur in sterilizing various biological media. Lister theorized that if germs could be killed or prevented, no infection would occur, and reasoned that a chemical could be used to destroy the micro-organisms that cause infection.
Meanwhile, in Carlisle, England, officials were experimenting with sewage treatment, using carbolic acid to reduce the smell of sewage cesspools. Having heard of these developments, and having previously experimented with other chemicals for antiseptic purposes without much success, Lister decided to try carbolic acid as a wound antiseptic. He had his first chance on August 12, 1865, when he received a patient: an eleven-year-old boy with a tibia fracture that pierced the skin of his lower leg. Ordinarily, amputation would be the only solution. However, Lister decided to try carbolic acid. After setting the bone and supporting the leg with splints, he soaked clean cotton towels in undiluted carbolic acid and applied them to the wound, covered with a layer of tin foil, leaving them for four days. When he checked the wound, Lister was pleasantly surprised to find no signs of infection, just redness near the edges of the wound from mild burning by the carbolic acid. After fresh bandages with diluted carbolic acid were applied, the boy was able to walk home after about six weeks of treatment.
By 16 March 1867, when the first results of Lister's work were published in the Lancet, he had treated a total of eleven patients using his new antiseptic method. Of those, only one had died, and that was through a complication that was nothing to do with Lister's wound-dressing technique. Now, for the first time, patients with compound fractures were likely to leave the hospital with all their limbs intact — Richard Hollingham, Blood and Guts: A History of Surgery, p. 62
Before antiseptic operations were introduced at the hospital, there were sixteen deaths in thirty-five surgical cases. Almost one in every two patients died. After antiseptic surgery was introduced in the summer of 1865, there were only six deaths in forty cases. The mortality rate had dropped from almost 50 per cent to around 15 per cent. It was a remarkable achievement — Richard Hollingham, Blood and Guts: A History of Surgery, p. 63
Phenol was the main ingredient of the "carbolic smoke ball", an ineffective device marketed in London in the 19th century as protection against influenza and other ailments, and the subject of the famous law case Carlill v Carbolic Smoke Ball Company. In the tort law case of Roe v Minister of Health, phenol was used to sterilize anaesthetic packed in ampoules, in which it contaminated the anaesthetic through invisible micro-cracks and caused paraplegia to the plaintiffs.
Second World War
The toxic effect of phenol on the central nervous system causes sudden collapse and loss of consciousness in both humans and animals; a state of cramping precedes these symptoms because of the motor activity controlled by the central nervous system. Injections of phenol were used as a means of individual execution by Nazi Germany during the Second World War. It was originally used by the Nazis in 1939 as part of the mass murder of disabled people under Aktion T4. The Germans learned that extermination of smaller groups was more economical by injection of each victim with phenol. Phenol injections were given to thousands of people. Maximilian Kolbe was murdered with a phenol injection after surviving two weeks of dehydration and starvation in Auschwitz, where he had volunteered to die in place of a stranger. Approximately one gram is sufficient to cause death.
Occurrences
Phenol is a normal metabolic product, excreted in quantities up to 40 mg/L in human urine. The temporal gland secretion of male elephants shows the presence of phenol and 4-methylphenol during musth. It is also one of the chemical compounds found in castoreum; this compound is ingested from the plants the beaver eats. Phenol is a measurable component in the aroma and taste of the distinctive Islay scotch whisky, generally ~30 ppm, but it can be over 160 ppm in the malted barley used to produce the whisky. This amount is different from, and presumably higher than, the amount in the distillate.
Biodegradation
Cryptanaerobacter phenolicus is a bacterium species that produces benzoate from phenol via 4-hydroxybenzoate. Rhodococcus phenolicus is a bacterium species able to degrade phenol as a sole carbon source.
Toxicity
Phenol and its vapors are corrosive to the eyes, the skin, and the respiratory tract. Its corrosive effect on skin and mucous membranes is due to a protein-degenerating effect. Repeated or prolonged skin contact with phenol may cause dermatitis, or even second- and third-degree burns. Inhalation of phenol vapor may cause lung edema. The substance may cause harmful effects on the central nervous system and heart, resulting in dysrhythmia, seizures, and coma. The kidneys may be affected as well. Long-term or repeated exposure to the substance may have harmful effects on the liver and kidneys. There is no evidence that phenol causes cancer in humans. Besides its hydrophobic effects, another mechanism for the toxicity of phenol may be the formation of phenoxyl radicals. Since phenol is absorbed through the skin relatively quickly, systemic poisoning can occur in addition to the local caustic burns.
Resorptive poisoning by a large quantity of phenol can occur even with only a small area of skin, rapidly leading to paralysis of the central nervous system and a severe drop in body temperature. The LD50 for oral toxicity is less than 500 mg/kg for dogs, rabbits, or mice; the minimum lethal human dose was cited as 140 mg/kg. The Agency for Toxic Substances and Disease Registry (ATSDR), U.S. Department of Health and Human Services, states the fatal dose for ingestion of phenol is from 1 to 32 g.
Chemical burns from skin exposures can be decontaminated by washing with polyethylene glycol, isopropyl alcohol, or perhaps even copious amounts of water. Removal of contaminated clothing is required, as well as immediate hospital treatment for large splashes. This is particularly important if the phenol is mixed with chloroform (a commonly used mixture in molecular biology for DNA and RNA purification). Phenol is also a reproductive toxin, causing an increased risk of miscarriage and low birth weight, indicating retarded development in utero.
Phenols
The word phenol is also used to refer to any compound that contains a six-membered aromatic ring bonded directly to a hydroxyl group (-OH). Thus, phenols are a class of organic compounds, of which the phenol discussed in this article is the simplest member.
Physical sciences
Carbon–oxygen bond
null
24096
https://en.wikipedia.org/wiki/Plough
Plough
A plough or (US) plow (both pronounced /plaʊ/) is a farm tool for loosening or turning the soil before sowing seed or planting. Ploughs were traditionally drawn by oxen and horses, but modern ploughs are drawn by tractors. A plough may have a wooden, iron or steel frame with a blade attached to cut and loosen the soil. It has been fundamental to farming for most of history. The earliest ploughs had no wheels; such a plough was known to the Romans as an aratrum. Celtic peoples first came to use wheeled ploughs in the Roman era.
The prime purpose of ploughing is to turn over the uppermost soil, bringing fresh nutrients to the surface while burying weeds and crop remains to decay. Trenches cut by the plough are called furrows. In modern use, a ploughed field is normally left to dry and then harrowed before planting. Ploughing and cultivating soil evens the content of the upper layer of soil, where most plant feeder roots grow.
Ploughs were initially powered by humans, but the use of farm animals is considerably more efficient. The earliest animals worked were oxen. Later, horses and mules were used in many areas. With the Industrial Revolution came the possibility of steam engines to pull ploughs. These in turn were superseded by internal-combustion-powered tractors in the early 20th century. The Petty Plough was a notable invention for ploughing out orchard strips in Australia in the 1930s. Use of the traditional plough has decreased in some areas threatened by soil damage and erosion; shallower ploughing or other less-invasive conservation tillage is used instead.
The plough appears in one of the oldest surviving pieces of written literature, from the 3rd millennium BC, where it is personified and debating with another tool, the hoe, over which is better: a Sumerian disputation poem known as the Debate between the hoe and the plough.
Etymology
In older English, as in other Germanic languages, the plough was traditionally known by other names, with cognates in Old English, Old High German, Old Norse, and Gothic, all presumably referring to the ard (scratch plough). The modern word comes from Old Norse plógr, and is therefore Germanic, but it appears relatively late (it is not attested in Gothic) and is thought to be a loan from one of the north Italic languages. The German cognate is Pflug, the Dutch ploeg and the Swedish plog. In many Slavic languages and in Romanian the word is plug. Words with the same root appeared with related meanings: a Raetic word for a wheeled heavy plough (Pliny, Nat. Hist. 18, 172), and Latin words for "farm cart", "cart", and "cart box". The word must have originally referred to the wheeled heavy plough, common in Roman north-western Europe by the 5th century AD.
Many view plough as a derivative of the verb *plehan ~ *plegan 'to take responsibility' (cf. German pflegen 'to look after, nurse'), which would explain, for example, Old High German pfluog with its double meaning of 'plough' and 'livelihood'. Guus Kroonen (2013) proposes a vṛddhi-derivative of *plag/kkōn 'sod' (cf. Dutch plag 'sod', Old Norse plagg 'cloth', and a Middle High German word meaning 'rag, patch, stain'). Finally, Vladimir Orel (2003) tentatively attaches plough to a PIE stem that supposedly gave an Old Armenian verb 'to dig' and a Welsh word for 'crack', though the word may not be of Indo-European origin.
Parts
The basic parts of the modern plough are: the beam; the hitch (British English: hake); the vertical regulator; the coulter (knife or, more commonly, disk); the chisel (foreshare); the share (mainshare); and the mouldboard. Other parts include the frog (or frame), runner, landside, shin, trashboard, and stilts (handles). On modern ploughs and some older ploughs, the mould board is separate from the share and runner, so these parts can be replaced without replacing the mould board. Abrasion eventually wears out all parts of a plough that come into contact with the soil.
History
Hoeing
When agriculture was first developed, soil was turned using simple hand-held digging sticks and hoes. These were used in highly fertile areas, such as the banks of the Nile, where the annual flood rejuvenates the soil, to create drills (furrows) in which to plant seeds. Digging sticks, hoes and mattocks were not invented in any one place, and hoe cultivation must have been common everywhere agriculture was practised. Hoe-farming is the traditional tillage method in tropical or sub-tropical regions, which are marked by stony soils, steep slope gradients, predominant root crops, and coarse grains grown at wide intervals. While hoe-agriculture is best suited to these regions, it is used in some fashion everywhere.
Ard
Some ancient hoes, like the Egyptian mr, were pointed and strong enough to clear rocky soil and make seed drills, which is why they are called hand-ards. However, domestication of oxen in Mesopotamia and the Indus Valley Civilisation, perhaps as early as the 6th millennium BC, provided mankind with the draft power needed to develop the larger, animal-drawn true ard (or scratch plough). The earliest surviving evidence of ploughing has been dated to 3500–3800 BCE, on a site in Bubeneč, Czech Republic. An early ploughed field was also discovered at Kalibangan, India. A terracotta model of the early ards was found at Banawali, India, giving insight into the form of the tool used.
The ard remained easy to replace if it became damaged and easy to replicate. The earliest form was the bow ard, which consists of a draft-pole (or beam) pierced by a thinner vertical pointed stick called the head (or body), with one end being the stilt (handle) and the other a share (cutting blade) dragged through the topsoil to cut a shallow furrow suitable for most cereal crops. The ard does not clear new land well, so hoes or mattocks had to be used to pull up grass and undergrowth, and a hand-held, coulter-like ristle could be made to cut deeper furrows ahead of the share. Because the ard left a strip of undisturbed earth between furrows, the fields were often cross-ploughed lengthwise and breadth-wise, which tended to form squarish Celtic fields. The ard is best suited to loamy or sandy soils that are naturally fertilised by annual flooding, as in the Nile Delta and Fertile Crescent, and to a lesser extent any other cereal-growing region with light or thin soil.
Mould-board ploughing
To grow crops regularly in less-fertile areas, it was once believed that the soil must be turned to bring nutrients to the surface. A major advance for this type of farming was the turn plough, also known as the mould-board plough (UK), moldboard plow (U.S.), or frame-plough. A coulter (or skeith) could be added to cut vertically into the ground just ahead of the share (in front of the frog), a wedge-shaped cutting edge at the bottom front of the mould board, with the landside of the frame supporting the under-share (below-ground component).
The heavy iron mould-board plough was invented in China's Han Empire in the 1st and 2nd centuries, and from there it spread to the Netherlands, which led the Agricultural Revolution. The mould-board plough introduced in the 18th century was a major advance in technology. The upper parts of the frame carry (from the front) the coupling for the motive power (horses), the coulter, and the landside frame. Depending on the size of the implement, and the number of furrows it is designed to plough at one time, a fore-carriage with a wheel or wheels (known as a furrow wheel and support wheel) may be added to support the frame (wheeled plough). In the case of a single-furrow plough there is one wheel at the front and handles at the rear for the ploughman to maneuver it.
When dragged through a field, the coulter cuts down into the soil and the share cuts horizontally from the previous furrow to the vertical cut. This releases a rectangular strip of sod to be lifted by the share and carried by the mould board up and over, so that the strip of sod (slice of the topsoil) that is being cut lifts and rolls over as the plough moves forward, dropping back upside down into the furrow and onto the turned soil from the previous run down the field. Each gap in the ground where the soil has been lifted and moved across (usually to the right) is called a furrow. The sod lifted from it rests at an angle of about 45 degrees in the adjacent furrow, up the back of the sod from the previous run.
A series of ploughings run down a field leaves a row of sods partly in the furrows and partly on the ground lifted earlier. Visually, across the rows, there is the land on the left, a furrow (half the width of the removed strip of soil), and the removed strip almost upside-down, lying on about half of the previous strip of inverted soil, and so on across the field. Each layer of soil and the gutter it came from forms a classic furrow.
The mould-board plough greatly reduced the time needed to prepare a field and so allowed a farmer to work a larger area of land. In addition, the resulting pattern of low (under the mould board) and high (beside it) ridges in the soil forms water channels, allowing the soil to drain. In areas where snow build-up causes difficulties, this lets farmers plant the soil earlier, as the meltwater run-off drains away more quickly.
Parts
There are five major parts of a mouldboard plough: the mouldboard, the share, the landside (short or long), the frog (sometimes called a standard), and the tailpiece. The share, landside and mould board are bolted to the frog, which is an irregular piece of cast iron at the base of the plough body, to which the soil-wearing parts are bolted. The share is the edge that makes the horizontal cut to separate the furrow slice from the soil below. Conventional shares are shaped to penetrate soil efficiently: the tip is pointed downward to pull the share into the ground to a regular depth. The clearance, usually referred to as suction or down suction, varies with different makes and types of plough. Share configuration is related to soil type, particularly in the down suction or concavity of its lower surface. Generally three degrees of clearance or down suction are recognised: regular for light soil, deep for ordinary dry soil, and double-deep for clay and gravelly soils. As the share wears away, it becomes blunt and the plough will require more power to pull it through the soil. A plough body with a worn share will not have enough "suck" to ensure it delves the ground to its full working depth.
In addition, the share has horizontal suction, related to the amount its point is bent out of line with the land side. Down suction causes the plough to penetrate to proper depth when pulled forward, while horizontal suction causes the plough to create the desired width of furrow. The share is a plane part with a trapezoidal shape. It cuts the soil horizontally and lifts it. Common types are regular, winged-plane, bar-point, and share with mounted or welded point. The regular share conserves a good cut but is recommended only on stone-free soils. The winged-plane share is used on heavy soil with a moderate amount of stones. The bar-point share can be used in extreme conditions (hard and stony soils). The share with a mounted point is somewhere between the last two types. Makers have designed shares of various shapes (trapezium, diamond, etc.) with bolted point and wings, often separately renewable. Sometimes the share-cutting edge is placed well in advance of the mould board to reduce the pulverizing action of the soil.
The mould board is the part of the plough that receives the furrow slice from the share. It is responsible for lifting and turning the furrow slice and sometimes for shattering it, depending on the type of mould board, ploughing depth and soil conditions. To suit different soil conditions and crop requirements, mould boards have been designed in different shapes, each producing its own furrow profile and surface finish, but essentially they still conform to the original plough body classification. The various types have been traditionally classified as general purpose, digger, and semi-digger, as described below.
The general-purpose mould board has a low-draft body with a gentle, cross-sectional convex curve from top to bottom, which turns a furrow about three parts wide by two parts deep. It turns the furrow slice slowly, almost without breaking it, and is normally used for shallow ploughing. It is useful for grassland ploughing and sets up the land for weathering by winter frosts, which reduces the time taken to prepare a seedbed for spring-sown crops.
The digger mould board is short and abruptly curved, with a concave cross-section both from top to bottom and from shin to tail. It turns the furrow slice rapidly, giving maximum shatter, and cuts deeper than its width. It is normally used for very deep ploughing. It has a higher power requirement and leaves a very broken surface. Digger ploughs are mainly used on land for potatoes and other root crops.
The semi-digger mould board is somewhat shorter than the general-purpose mould board, but with a concave cross-section and a more abrupt curve. Being intermediate between the two mould boards described above, it has a performance that comes in between, with less shattering than the digger mouldboard. It turns an almost square-sectioned furrow and leaves a more broken surface finish. Semi-digger mould boards can be used at various depths and speeds, which suits them for most of the general ploughing on a farm.
In addition, slatted mould boards are preferred by some farmers, though they are a less common type. They consist of a number of curved steel slats bolted to the frog along the length of the mould board, with gaps between the slats.
They tend to break up the soil more than a full mould board and improve soil movement across the mould board when working in sticky soils where a solid mould board does not scour well.
The land side is the flat plate which presses against and transmits the lateral thrust of the plough bottom to the furrow wall. It helps to resist the side pressure exerted by the furrow slice on the mould board. It also helps to stabilise the plough while in operation. The rear bottom end of the landside, which rubs against the furrow sole, is known as the heel. A heel iron is bolted to the end of the rear of the land side and helps to support the back of the plough. The land side and share are arranged to give a "lead" towards the unploughed land, so helping to sustain the correct furrow width. The land side is usually made of solid medium-carbon steel and is very short, except at the rear bottom of the plough. The heel or rear end of the rear land side may be subject to excessive wear if the rear wheel is out of adjustment, and so a chilled-iron heel piece is frequently used. This is inexpensive and can be easily replaced. The land side is fastened to the frog by plough bolts.
The frog (standard) is the central part of the plough bottom, to which the other components of the bottom are attached. It is an irregular piece of metal, which may be made of cast iron for cast-iron ploughs or welded steel for steel ploughs. The frog is the foundation of the plough bottom. It takes the shock resulting from hitting rocks, and therefore should be tough and strong. The frog is in turn fastened to the plough frame.
A runner extending from behind the share to the rear of the plough controls the direction of the plough, because it is held against the bottom land-side corner of the new furrow being formed. The holding force is the weight of the sod, as it is raised and rotated, on the curved surface of the mould board. Because of this runner, the mould-board plough is harder to turn around than the scratch plough, and its introduction brought about a change in the shape of fields, from mostly square fields into longer rectangular "strips" (hence the introduction of the furlong).
Iron ploughshare
An advance on the basic design was the iron ploughshare, a replaceable horizontal cutting surface mounted on the tip of the share. The earliest ploughs with a detachable and replaceable share date from around 1000 BC in the Ancient Near East, and the earliest iron ploughshares from about 500 BC in China. Early mould boards were wedges that sat inside the cut formed by the coulter, turning over the soil to the side. The ploughshare spread the cut horizontally below the surface, so that when the mould board lifted it, a wider area of soil was turned over. Mould boards are known in Britain from the late 6th century onwards.
Types
There are multiple types of ploughs available. Mould-board ploughs cut the soil into pieces. Disc ploughs can be used where mould-board ploughs are not suitable. Rotary ploughs are used to prepare seed beds.
Plough wheel
The gauge wheel is an auxiliary wheel to maintain uniform depth of ploughing in various soil conditions. It is usually placed in a hanging position. The land wheel of the plough runs on the ploughed land. The front or rear furrow wheel of the plough runs in the furrow.
Plough protective devices
When a plough hits a rock or other solid obstruction, serious damage may result unless the plough is equipped with some safety device. The damage may be bent or broken shares, bent standards, beams or braces.
The three basic types of safety devices used on mould-board ploughs are a spring-release device in the plough drawbar, a trip-beam construction on each bottom, and an automatic reset design on each bottom.
The spring release was used in the past almost universally on trailing-type ploughs with one to three or four bottoms. It is not practical on larger ploughs. When an obstruction is encountered, the spring-release mechanism in the hitch permits the plough to uncouple from the tractor. When a hydraulic lift is used on the plough, the hydraulic hoses will also usually uncouple automatically when the plough uncouples.
Most plough makers offer an automatic reset system for tough conditions or rocky soils. The reset mechanism allows each body to move rearward and upward to pass without damage over obstacles such as rocks hidden below the soil surface. A heavy leaf- or coil-spring mechanism that holds the body in its working position under normal conditions resets the plough after the obstruction is passed. Another type of auto-reset mechanism uses an oil (hydraulic) and gas accumulator: shock loads cause the oil to compress the gas, and when the gas expands again, the leg returns to its working position after passing over the obstacle. The simplest mechanism is a breaking (shear) bolt that must be replaced after it gives way; shear bolts that break when a plough body hits an obstruction are a cheaper overload protection device.
Trip-beam ploughs are constructed with a hinge point in the beam. This is usually located some distance above the top of the plough bottom. The bottom is held in normal ploughing position by a spring-operated latch. When an obstruction is encountered, the entire bottom is released and hinges back and up to pass over the obstruction. It is necessary to back up the tractor and plough to reset the bottom. This construction is used to protect the individual bottoms.
The automatic reset design has only recently been introduced on US ploughs, but has been used extensively on European and Australian ploughs. Here the beam is hinged at a point almost above the point of the share. The bottom is held in the normal position by a set of springs or a hydraulic cylinder on each bottom. When an obstruction is encountered, the plough bottom hinges back and up in such a way as to pass over the obstruction, without stopping the tractor and plough. The bottom automatically returns to normal ploughing position as soon as the obstruction is passed, without any interruption of forward motion. The automatic reset design permits higher field efficiencies, since stopping for stones is practically eliminated. It also reduces costs for broken shares, beams and other parts. The fast resetting action helps produce a better job of ploughing, as large areas of unploughed land are not left, as they are when lifting a plough over a stone.
Loy ploughing
Manual loy ploughing was a form used on small farms in Ireland where farmers could not afford more, or on hilly ground that precluded horses. It was used up until the 1960s on poorer land. It suited the moist Irish climate, as the trenches formed by turning in the sods provided drainage. It allowed potatoes to be grown in bogs (peat swamps) and on otherwise unfarmed mountain slopes.
Heavy ploughs
In the basic mould-board plough, the depth of cut is adjusted by lifting against the runner in the furrow, which limited the weight of the plough to what a ploughman could easily lift. This limited the construction to a small amount of wood (although metal edges were possible).
These ploughs were fairly fragile and unsuitable for the heavier soils of northern Europe. The introduction of wheels to replace the runner allowed the weight of the plough to increase, and in turn the use of a larger mould-board faced in metal. These heavy ploughs led to greater food production and eventually a marked population increase, beginning around AD 1000.
Before the Han dynasty (202 BC – AD 220), Chinese ploughs were made almost wholly of wood except for the iron blade of the ploughshare. These were V-shaped iron pieces mounted on wooden blades and handles. By the Han period the entire ploughshare was made of cast iron. These are the earliest known heavy, mould-board iron ploughs. Several advancements, such as the three-shared plough, the plough-and-sow implement, and the harrow, were developed subsequently. By the end of the Song dynasty in 1279, Chinese ploughs had reached a state of development that would not be seen in Holland until the 17th century.
The Romans achieved a heavy-wheeled mould-board plough in the late 3rd and 4th centuries AD, for which archaeological evidence appears, for instance, in Roman Britain. The Greek and Roman mould-boards were usually tied to the bottom of the shaft with bits of rope, which made them more fragile than the Chinese ones, and iron mould-boards did not appear in Europe until the 10th century. The first indisputable appearance after the Roman period is in a northern Italian document of 643. Old words connected with the heavy plough and its use appear in Slavic, suggesting possible early use in that region. General adoption of the carruca heavy plough in Europe seems to have accompanied adoption of the three-field system in the later 8th and early 9th centuries, leading to improved agricultural productivity per unit of land in northern Europe. This was accompanied by larger fields, known variously as carucates, ploughlands, and plough gates.
Improved designs
The basic plough with coulter, ploughshare and mould board remained in use for a millennium. Major changes in design spread widely in the Age of Enlightenment, when there was rapid progress in design. Joseph Foljambe in Rotherham, England, in 1730, used new shapes based on the Rotherham plough, which covered the mould board with iron. Unlike the heavy plough, the Rotherham, or Rotherham swing plough, consisted entirely of the coulter, mould board and handles. It was much lighter than earlier designs and became common in England. It may have been the first plough widely built in factories and commercially successful there.
In 1789 Robert Ransome, an iron founder in Ipswich, started casting ploughshares in a disused malting at St Margaret's Ditches. A broken mould in his foundry caused molten metal to come into contact with cold metal, making the metal surface extremely hard. This process, chilled casting, resulted in what Ransome advertised as "self-sharpening" ploughs. He received patents for his discovery.
James Small further advanced the design. Using mathematical methods, he eventually arrived at a shape cast from a single piece of iron, an improvement on the Scots plough of James Anderson of Hermiston. A single-piece cast-iron plough was also developed and patented by Charles Newbold in the United States. This was again improved on by Jethro Wood, a blacksmith of Scipio, New York, who made a three-part Scots plough that allowed a broken piece to be replaced. In 1833 John Lane invented a steel plough.
Then in 1837 John Deere introduced a steel plough; it was so much stronger than iron designs that it could work soil in US areas previously thought unsuitable for farming. Improvements on this followed developments in metallurgy: steel coulters and shares with softer iron mould boards to prevent breakage, the chilled plough (an early example of surface-hardened steel), and eventually mould boards with faces strong enough to dispense with the coulter.
By the early 1900s, the steel plough had many uses, shapes and names. The "two-horse breaking plough" had a point and wing used to break the soil's surface and turn the dirt out and over. The "shovel plough" was used to lay off the rows. The "harrow plough" was used to cover the planted seed. The "scratcher" or "geewhiz" was used to deweed or cultivate the crop. The "bulltongue" and "sweeps" were used to plough the middle of the rows. All these metal plough points required re-sharpening about every ten days, owing to their use on rough and rocky ground.
Single-sided ploughing
The first mould-board ploughs could only turn the soil over in one direction (conventionally to the right), as dictated by the shape of the mould board; therefore, a field had to be ploughed in long strips, or lands. The plough was usually worked clockwise around each land, ploughing the long sides and being dragged across the short sides without ploughing. The length of the strip was limited by the distance oxen (later horses) could comfortably work without rest, and their width by the distance the plough could conveniently be dragged. These distances determined the traditional size of the strips: a furlong (or "furrow's length", 220 yards) by a chain (22 yards), an area of 4,840 square yards or one acre (about 0.4 hectares); this is the origin of the acre. The one-sided action gradually moved soil from the sides to the centre line of the strip. If the strip was in the same place each year, the soil built up into a ridge, creating the ridge and furrow topography still seen in some ancient fields.
Turn-wrest plough
The turn-wrest plough allows ploughing to be done to either side. The mould board is removable, turning to the right for one furrow, then being moved to the other side of the plough to turn to the left. (The coulter and ploughshare are fixed.) Thus adjacent furrows can be ploughed in opposite directions, allowing ploughing to proceed continuously along the field and so avoid the ridge–furrow topography.
Reversible plough
The reversible (or roll-over) plough has two mould-board ploughs mounted back to back, one turning right, the other left. While one works the land, the other is borne upside-down in the air. At the end of each row, the paired ploughs are turned over, so that the other can be used along the next furrow, again working the field in a consistent direction. These ploughs date back to the days of the steam engine and the horse. In almost universal use on farms, they have right- and left-handed mould boards, enabling them to work up and down the same furrow. Reversible ploughs may either be mounted or semi-mounted and are heavier and more expensive than right-handed models, but have the great advantage of leaving a level surface that facilitates seedbed preparation and harvesting. Very little marking out is necessary before ploughing can start, and idle running on the headland is minimal compared with conventional ploughs. Driving a tractor with furrow-side wheels in the furrow bottom provides the most efficient line of draught between tractor and plough.
It is also easier to steer the tractor; driving with the front wheel against the furrow wall will keep the front furrow at the correct width. This is less satisfactory when using a tractor with wide front tyres. Although these make better use of the tractor power, the tyres may compact some of the last furrow slice turned on the previous run. The problem is overcome by using a furrow widener or a longer mould board on the rear body; the latter moves the soil further towards the ploughed land, leaving more room for the tractor wheels on the next run. Driving with all four wheels on unploughed land is another solution to the problem of wide tyres. Semi-mounted ploughs can be hitched in a way that allows the tractor to run on unbroken land and pull the plough in correct alignment without any sideways movement (crabbing).
Riding and multiple-furrow ploughs
Early steel ploughs were walking ploughs, directed by a ploughman holding handles on either side of the plough. Steel ploughs were so much easier to draw through the soil that constant adjustment of the blade to deal with roots or clods was no longer necessary, as the plough could easily cut through them. Not long after that, the first riding ploughs appeared, whose wheels kept the plough at an adjustable level above the ground, while the ploughman sat on a seat instead of walking. Direction was now controlled mostly through the draught team, with levers allowing fine adjustments. This led quickly to riding ploughs with multiple mould boards, which dramatically increased ploughing performance.
A single draught horse can normally pull a single-furrow plough in clean light soil, but in heavier soils two horses are needed, one walking on the land and one in the furrow. Ploughs with two or more furrows call for more than two horses, and usually one or more have to walk on the ploughed sod, which is hard going for them and means they tread newly ploughed land down. It is usual to rest such horses every half-hour for about ten minutes.
Improving metallurgy and design
John Deere, an Illinois blacksmith, noted that ploughing many sticky, non-sandy soils might benefit from modifications in the design of the mould board and the metals used. A polished needle enters leather and fabric with greater ease, and a polished pitchfork likewise requires less effort. Looking for a polished, slicker surface for a plough, he experimented with portions of saw blades, and by 1837 was making polished, cast-steel ploughs.
Balance plough
The invention of the mobile steam engine allowed steam power to be applied to ploughing from about 1850. In Europe, soil conditions were often too soft to support the weight of a traction engine. Instead, counterbalanced, wheeled ploughs, known as balance ploughs, were drawn by cables across the fields by pairs of ploughing engines on opposite field edges, or by a single engine drawing directly towards it at one end and drawing away from it via a pulley at the other. The balance plough had two sets of facing ploughs arranged so that when one was in the ground, the other was lifted in the air. When pulled in one direction, the trailing ploughs were lowered onto the ground by the tension on the cable. When the plough reached the edge of the field, the other engine pulled the opposite cable, and the plough tilted (balanced), putting the other set of shares into the ground, and the plough worked back across the field.
One set of ploughs was right-handed and the other left-handed, allowing continuous ploughing along the field, as with the turn-wrest and reversible ploughs. The man credited with inventing the ploughing engine and the associated balance plough in the mid-19th century was John Fowler, an English agricultural engineer and inventor. However, the Fisken brothers demonstrated (and went on to patent) a balance plough about four years before Fowler. One notable producer of steam-powered ploughs was J. Kemna of East Prussia, which became the "leading steam plough company on the European continent and penetrated the monopoly of English companies on the world market" at the beginning of the 20th century.
Stump-jump plough
The stump-jump plough was invented in 1876 by the Australian Richard Bowyer Smith alongside his brother Clarence Herbert Smith. It is designed to break up new farming land containing tree stumps and rocks that would be expensive to remove. It uses a moveable weight to hold the ploughshare in position. When a tree stump or rock is encountered, the ploughshare is thrown up clear of the obstacle, to avoid breaking its harness or linkage; ploughing can continue when the weight is returned to the earth. A simpler, later system uses a concave disc (or a pair of them) set at a wide angle to the direction of progress. The concave shape holds the disc in the soil unless something hard strikes its circumference, causing it to roll up and over the obstruction. As the disc is dragged forward, its sharp edge cuts the soil, and its concave surface lifts and throws the soil to the side. It does not work as well as a mould-board plough, but it does lift and break up the soil (see disc harrow); its lesser inversion is not seen as a drawback, because it helps to fight wind erosion.
Modern ploughs
Modern ploughs are usually multiply reversible, mounted on a tractor with a three-point linkage. These commonly have from two to as many as seven mould boards, and semi-mounted ploughs (whose lifting is assisted by a wheel about halfway along their length) can have as many as 18. The tractor's hydraulics are used to lift and reverse the implement and to adjust furrow width and depth. The plougher still has to set the draughting linkage from the tractor, so that the plough keeps the proper angle in the soil. This angle and depth can be controlled automatically by modern tractors. As a complement to the rear plough, a two or three mould-board plough can be mounted on the front of the tractor if it is equipped with front three-point linkage.
Specialist ploughs
Chisel plough
The chisel plough is a common tool for deep tillage with limited soil disruption. Its main function is to loosen and aerate the soil, while leaving crop residue on top. This plough can be used to reduce the effects of soil compaction and to help break up ploughpan and hardpan. Unlike many other ploughs, the chisel will not invert or turn the soil. This feature has made it a useful addition to no-till and low-till farming practices that attempt to maximise the erosion-preventing benefits of keeping organic matter and farming residues present on the soil surface throughout the year. Thus the chisel plough is considered by some to be more sustainable than other types of plough, such as the mould-board plough. Chisel ploughs are becoming more popular as a primary tillage tool in row-crop farming areas. Basically, the chisel plough is a heavy-duty field cultivator intended to operate at depths from to as much as .
However, some models may run much deeper. Each individual plough, or shank, is typically set from to apart. Such a plough can meet significant soil drag: a tractor of sufficient power and traction is required. When ploughing with a chisel plough, per shank is required, depending on depth. Pull-type chisel ploughs are made in working widths from about up to . They are tractor-mounted, and working depth is hydraulically controlled. Those more than about wide may be equipped with folding wings to reduce transport width. Wider machines may have the wings supported by individual wheels and hinge joints to allow flexing of the machine over uneven ground. The wider models usually have a wheel on each side to control working depth. Three-point hitch-mounted units are made in widths from about . Cultivators are often similar in form to chisel ploughs, but their goals are different. Cultivator teeth work near the surface, usually for weed control, whereas chisel plough shanks work deep under the surface; therefore, cultivation takes much less power per shank than does chisel ploughing.
Country plough
The country plough is a slanted plough. The most common plough in India, it is recommended for crops like groundnut after the use of a tractor.
Ridging plough
A ridging plough is used for crops, such as potatoes or scallions, grown buried in ridges of soil, using a technique called ridging or hilling. A ridging plough has two back-to-back mould boards, cutting a deep furrow on each pass with high ridges on either side. The same plough may be used to split the ridges to harvest the crop.
Scots hand plough
This variety of ridge plough is notable for having a blade pointing towards the operator. It is worked solely by human effort rather than with animal or machine assistance, being pulled backwards by the operator and so requiring great physical effort. It is particularly used for second breaking of ground and for potato planting. It is found in Shetland, on some western crofts, and more rarely in Central Scotland, typically on holdings too small or poor to merit the use of animals.
Ice plough
Functionally operating as a saw, but pulled as a plough, this device was created in the 19th century and was mainly used in Scandinavia as part of the ice export industry, creating blocks of ice to ship to Great Britain.
Mole plough
The mole plough allows under-drainage to be installed without trenches, or breaks up the deep impermeable soil layers that impede it. It is a deep plough with a torpedo- or wedge-shaped tip and a narrow blade connecting it to the body. When dragged over the ground, it leaves a channel deep below the surface that acts as a drain. Modern mole ploughs may also bury a flexible perforated plastic drain pipe as they go, making a more permanent drain, or may be used to lay pipes for water supply or other purposes. Similar machines, so-called pipe-and-cable-laying ploughs, are even used under the sea for laying cables or for preparing the earth for side-scan sonar in a process used in oil exploration. A simple check of whether the subsoil is in the right condition for mole ploughing is to compact a tennis ball-sized sample from moling depth by hand and then push a pencil through it: if the hole stays intact without splitting the ball, the soil is in ideal condition for the mole plough. Heavy land requires draining to reduce its water content to a level efficient for plant growth. Heavy soils usually have a system of permanent drains, using perforated plastic or clay pipes that discharge into a ditch.
The small tunnels (mole drains) that mole ploughs form lie at a depth of up to at an angle to the pipe drains. Water from the mole drains seeps into the pipes and runs along them into a ditch. Mole ploughs are usually trailed and pulled by a crawler tractor, but lighter models for use on the three-point linkage of powerful four-wheel drive tractors are also made. A mole plough has a strong frame that slides along the ground when the machine is at work. A heavy leg, similar to a sub-soiler leg, is attached to the frame, and a circular section with a larger-diameter expander on a flexible link is bolted to the leg. The bullet-shaped share forms a tunnel in the soil about diameter, and the expander presses the soil outwards to form a long-lasting drainage channel.
Para-plough
The para-plough, or paraplow, loosens compacted soil layers 3 to 4 dm (12 to 16 inches) deep while maintaining high surface residue levels. It is a primary tillage implement for deep ploughing without inversion.
Spade plough
The spade plough is designed to cut the soil and turn it on its side, minimising damage to earthworms, soil microorganisms and fungi. This increases the sustainability and long-term fertility of the soil.
Switch plough
Using a bar with square shares mounted perpendicularly and a pivot point to change the bar's angle, the switch plough allows ploughing in either direction. It is best in previously worked soils, as the ploughshares are designed more to turn the soil over than for deep tillage. At the headland, the operator pivots the bar (and so the ploughshares) to turn the soil to the opposite side of the direction of travel. Switch ploughs are usually lighter than roll-over ploughs, requiring less horsepower to operate.
Effects of mould-board ploughing
Mould-board ploughing in cold and temperate climates, down to , aerates the soil by loosening it. It incorporates crop residues, solid manures, limestone, and commercial fertilisers alongside oxygen, reducing nitrogen losses by denitrification, accelerating mineralisation, and raising short-term nitrogen availability for turning organic matter into humus. It erases wheel tracks and ruts from harvesting equipment. It controls many perennial weeds and delays the growth of others until spring. It accelerates spring soil warming and water evaporation, because less residue is left on the soil surface. It facilitates seeding with a lighter seed drill, controls many crop enemies (slugs, crane flies, seedcorn maggots or bean seed flies, borers), and raises the number of "soil-eating" earthworms (endogeic), but deters vertical-dwelling earthworms (anecic). Ploughing leaves little crop residue on the surface that might otherwise reduce both wind and water erosion. Over-ploughing can lead to the formation of hardpan. Typically, farmers break that up with a subsoiler, which acts as a long, sharp knife slicing through the hardened layer of soil deep below the surface. Soil erosion due to improper land and plough utilisation is possible. Contour ploughing mitigates soil erosion by ploughing across a slope, along elevation lines. Alternatives to ploughing, such as the no-till method, have the potential to build soil levels and humus. These may be suitable for smaller, intensively cultivated plots and for farming on poor, shallow or degraded soils that ploughing would further degrade.
https://en.wikipedia.org/wiki/Peer-to-peer
Peer-to-peer
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes. A personal area network (PAN) is also by nature a type of decentralized peer-to-peer network, typically between two devices. Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources are divided. While P2P systems had previously been used in many application domains, the architecture was popularized by the Internet file sharing system Napster, originally released in 1999. P2P is used in many protocols such as BitTorrent file sharing over the Internet and in personal networks like Miracast displaying and Bluetooth radio. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general.
Development
While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems". The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1. Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the network of the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing." Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, followed. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship.
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".
Architecture
A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client–server model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.
Routing and resource discovery
Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two).
Unstructured networks
Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols). Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network. However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers, and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.
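The cost of flooding described above is easy to see in a toy simulation. The sketch below is illustrative only: the node class, the ttl parameter, and the random 20-node topology are invented for the example and do not model any real protocol such as Gnutella.

```python
import random

# Toy model of TTL-limited query flooding in an unstructured overlay.
# All names and parameters are illustrative, not from any real protocol.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # overlay links, not physical links
        self.files = set()

    def search(self, key, ttl, seen=None):
        """Flood a query to every neighbor until the TTL runs out.
        Returns the names of reached nodes that hold the key."""
        seen = set() if seen is None else seen
        if self.name in seen:
            return set()         # each peer processes a query only once
        seen.add(self.name)
        hits = {self.name} if key in self.files else set()
        if ttl > 0:
            for n in self.neighbors:                 # forwarding to *all* neighbors
                hits |= n.search(key, ttl - 1, seen) # is what makes flooding costly
        return hits

random.seed(1)
nodes = [Node(f"peer{i}") for i in range(20)]
for node in nodes:                     # random overlay: 3 links per node
    node.neighbors = random.sample([n for n in nodes if n is not node], 3)

nodes[7].files.add("rare-file")        # rare data held by a single peer
print(nodes[0].search("rare-file", ttl=4))  # may well come back empty
```

Raising the TTL improves the odds of locating the rare file, but it multiplies the number of peers that must process the query, which is exactly the signaling-traffic trade-off described above.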
Structured networks
In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file/resource, even if the resource is extremely rare. The most common type of structured P2P network implements a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key (a toy sketch of this idea appears at the end of this section). However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network). More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions, such as the high cost of advertising/discovering resources and static and dynamic load imbalance. Notable distributed networks that use DHTs include Tixati (an alternative to BitTorrent's distributed tracker), the Kad network, the Storm botnet, and YaCy. Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery in grid computing systems, as they aid in resource management and the scheduling of applications.
Hybrid models
Hybrid models are a combination of peer-to-peer and client–server models. A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks, because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.
CoopNet content distribution system
CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University. When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are "close in IP" (in the same prefix range) in an attempt to use locality. If multiple peers are found with the same file, it designates that the node choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream, and then transmit it piece-wise to new nodes.
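As a rough illustration of the consistent-hashing idea flagged under structured networks above, the sketch below maps peers and keys onto a single hash ring, so every key has a well-defined owner. It is a toy model: the SHA-1 ring and the peer names are assumptions made for the example, and real DHTs such as Chord or Kademlia add routing tables, replication, and iterative lookup on top of this basic idea.

```python
import hashlib
from bisect import bisect

# Toy consistent-hashing ring: peers and keys share one hash space, and
# each key is owned by the first peer clockwise from the key's hash.
# Illustrative only; not the actual scheme of any particular DHT.

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, peers):
        self.ring = sorted((h(p), p) for p in peers)

    def owner(self, key: str) -> str:
        """Return the peer responsible for storing `key`."""
        idx = bisect(self.ring, (h(key), "")) % len(self.ring)  # wrap around
        return self.ring[idx][1]

ring = HashRing([f"peer{i}" for i in range(8)])
print(ring.owner("song.mp3"))   # deterministic: same key -> same owner

# When a peer joins or leaves, only the keys in its arc of the ring move,
# rather than nearly all keys being reassigned as with naive modulo hashing.
```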
Security and trust
Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits.
Routing attacks
Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial-of-service attacks. Examples of common routing attacks include "incorrect lookup routing", whereby malicious nodes deliberately forward requests incorrectly or return false results; "incorrect routing updates", where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and "incorrect routing network partition", where new joining nodes bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes.
Corrupted data and malware
The prevalence of malware varies between different peer-to-peer protocols. Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study, analyzing traffic on the Kazaa network, found that 15% of the 500,000-file sample taken were infected by one or more of the 365 different computer viruses that were tested for. Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase in their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.
Resilient and scalable computer networks
The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down.
Distributed storage and search
There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability.
In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. Although server–client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files, because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point. In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.
Applications
Content delivery
In P2P networks, clients both provide and use resources. This means that, unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share; see a performance measurement study). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.
File-sharing networks
Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches to enhance performance. Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing of Linux distributions and various games through file sharing networks.
Copyright infringements
Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc.
v. Grokster, Ltd. In the latter case, the Court unanimously held that the defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement.
Multimedia
The P2PTV and PDTP protocols are used in various peer-to-peer applications. Some proprietary multimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients. Peercasting is employed for multicasting streams. Additionally, a project called LionShare, undertaken by Pennsylvania State University, MIT, and Simon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program, Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network.
Other P2P applications
Dat is a distributed version-controlled publishing platform.
I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer; however, it can enable peer-to-peer applications to be built on top of it via onion services.
The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system.
Jami is a peer-to-peer chat and SIP app.
JXTA is a peer-to-peer protocol designed for the Java platform.
Netsukuku is a wireless community network designed to be independent from the Internet.
Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth.
Resilio Sync is a directory-syncing app.
Research includes projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system.
Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking.
Syncthing is also a directory-syncing app.
Tradepal and M-commerce applications are designed to power real-time marketplaces.
The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks.
WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges WebTorrent and BitTorrent serverless networks.
Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs, either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30%–50% reduction in Internet bandwidth usage.
Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously.
Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of files, and continues to operate today.
Cryptocurrencies are peer-to-peer-based digital currencies that use blockchains.
Social implications
Incentivizing resource sharing and cooperation
Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance". Studying the social attributes of P2P networks is challenging due to high turnover, asymmetry of interest, and zero-cost identity. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources. Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction.
Privacy and anonymity
Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data/messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity. Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity.
Political implications
Intellectual property law and illegal sharing
Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over its involvement with the sharing of copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders. These documents are usually news reporting or along the lines of research and scholarly work.
Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems. A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove the effects of false and misremembered responses.
Network neutrality
Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high bandwidth usage. Compared to Web browsing, e-mail or many other uses of the Internet, where data is only transferred in short intervals and in relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards a client–server-based application architecture. The client–server model provides financial barriers to entry for small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random. The ISPs' solution to the high bandwidth is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to save access to the Internet.
Current research
Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work." If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."
Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe. Besides these, work has also been done with the ns-2 open-source network simulator; for example, one research issue related to free-rider detection and punishment has been explored using ns-2.
https://en.wikipedia.org/wiki/Peer%20review
Peer review
Peer review is the evaluation of work by one or more people with similar competencies to the producers of the work (peers). It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review. It can also be used as a teaching tool to help students improve writing assignments. Henry Oldenburg (1619–1677) was a German-born British philosopher who is seen as the 'father' of modern scientific peer review. It developed over the following centuries with, for example, the journal Nature making it standard practice in 1973. The term "peer review" was first used in the early 1970s. A monument to peer review has stood at the Higher School of Economics in Moscow since 2017.
Professional
Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure. A prototype professional peer review process was recommended in the Ethics of the Physician, written by Ishāq ibn ʻAlī al-Ruhāwī (854–931). He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care. Professional peer review is common in the field of health care, where it is usually called clinical peer review. Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc. Many other professional fields have some level of peer review process: accounting, law, engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management. Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher-order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.
Scholarly
Medical
Medical peer review may be distinguished into four classifications:
Clinical peer review, a procedure for assessing episodes of patient care. It is a component of ongoing professional practice evaluation and focused professional practice evaluation, both significant contributors to provider credentialing and privileging.
Peer evaluation of clinical teaching skills, for both physicians and nurses.
Scientific peer review of journal articles.
A secondary round of peer review for the clinical value of articles concurrently published in medical journals.
Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards.
The clinical community regards it as the best way of ensuring that published research is reliable and that any clinical treatments it advocates are safe and effective for people. However, the terminology has poor standardization and specificity, particularly as a database search term.
Technical
In engineering, technical peer review is a type of engineering review. Technical peer reviews are a well-defined review process for finding and fixing defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed (usually limited to six or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products or completed portions of products.
Government policy
The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the field of active labour market policy since 1999. In 2004, a program of peer reviews started in the field of social inclusion. Each program sponsors about eight peer review meetings each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These usually meet over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web. The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies. The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, Statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code, Section 57004.
Pedagogical
Peer review, or student peer assessment, is a method by which editors and writers work together, with the aim of helping the author establish, flesh out, and develop their own writing. Peer review is widely used in secondary and post-secondary education as part of the writing process. This collaborative learning tool involves groups of students reviewing each other's work and providing feedback and suggestions for revision. Rather than a means of critiquing each other's work, peer review is often framed as a way to build connection between students and help develop writers' identity. While widely used in English and composition classrooms, peer review has gained popularity in other disciplines that require writing as part of the curriculum, including the social and natural sciences. Peer review in classrooms helps students become more invested in their work and in the classroom environment at large. Understanding how their work is read by a diverse readership before it is graded by the teacher may also help students clarify ideas and understand how to persuasively reach different audience members via their writing.
It also gives students professional experience that they might draw on later when asked to review the work of a colleague prior to publication. The process can also bolster the confidence of students on both sides. It has been found that students are more positive than negative when reviewing their classmates' writing. Peer review can help students not get discouraged, but rather feel determined to improve their writing. Critics of peer review in classrooms say that it can be ineffective due to students' lack of practice giving constructive criticism, or lack of expertise in the writing craft at large. Peer review can be problematic for developmental writers, particularly if they view their writing as inferior to that of others in the class, as they may be unwilling to offer suggestions or ask other writers for help. Peer review can affect students' opinions of themselves as well as of others, as students often feel a personal connection to the work they have produced, which can make them reluctant to receive or offer criticism. Assigning peer review as a task can also lead to rushed-through feedback, with misplaced praise or criticism, allowing neither the writer nor the editor to get much out of the activity. In response to these concerns, instructors may provide examples, model peer review with the class, or focus on specific areas of feedback during the peer review process. Instructors may also experiment with in-class peer review versus peer review as homework, or with peer review using technologies afforded by online learning management systems. Older students can give better feedback to their peers and so get more out of peer review, but it remains a classroom method for helping students young and old learn how to revise. As technology evolves, peer review will develop as well; new tools could help alter the process.
Peer seminar
A peer seminar is a method in which speakers present ideas to an audience in a format that also acts as a contest. Multiple speakers are called up one at a time and given an amount of time to present the topic they have researched. Each speaker may or may not talk about the same topic, but each has something to gain or lose, which can foster a competitive atmosphere. This approach allows speakers to present in a more personal tone while trying to appeal to the audience as they explain their topic. Peer seminars are somewhat similar to conference presentations; however, there is more time to present, and speakers can be interrupted by audience members with questions and feedback on the topic or on how well the speaker presented it.
Peer review in writing
Peer review in writing is a pivotal component among the various peer review mechanisms, often spearheaded by educators and involving student participation, particularly in academic settings. It constitutes a fundamental process in academic and professional writing, serving as a systematic means to ensure the quality, effectiveness, and credibility of scholarly work. However, despite its widespread use, it is one of the most scattered, inconsistent, and ambiguous practices associated with writing instruction, with many scholars questioning its effectiveness and specific methodologies.
Critics of peer review in classrooms express concerns about its ineffectiveness due to students' lack of practice in giving constructive criticism or their limited expertise in the writing craft overall.
Critiques of peer review
Academic peer review has faced considerable criticism, with many studies highlighting inherent issues in the peer review process. The editorial peer review process has been found to be strongly biased against "negative studies", i.e. studies that find that something does not work. This then biases the information base of medicine. Journals become biased against negative studies when values come into play. "Who wants to read something that doesn't work?" asks Richard Smith in the Journal of the Royal Society of Medicine. "That's boring." Bias is also particularly evident in university classrooms, where the most common source of writing feedback during student years comes from teachers, whose comments are often highly valued. Because of the teacher's position of high authority, students may be influenced to produce work in line with the professor's viewpoints; the effectiveness of such feedback stems largely from that authority. Benjamin Keating, in his article "A Good Development Thing: A Longitudinal Analysis of Peer Review and Authority in Undergraduate Writing," conducted a longitudinal study comparing two groups of students (one majoring in writing and one not) to explore students' perceptions of authority. This research, involving extensive analysis of student texts, concludes that students majoring in non-writing fields tend to undervalue mandatory peer review in class, while those majoring in writing value classmates' comments more. This suggests that effective peer review requires a certain level of expertise; for non-professional writers, peer review feedback may be overlooked, reducing its effectiveness. Elizabeth Ellis Miller, Cameron Mozafari, Justin Lohr and Jessica Enoch state, "While peer review is an integral part of writing classrooms, students often struggle to effectively engage in it." The authors illustrate some reasons for the inefficiency of peer review, based on research conducted during peer review sessions in university classrooms:
Lack of training: Students and even some faculty members may not have received sufficient training to provide constructive feedback. Without proper guidance on what to look for and how to provide helpful comments, peer reviewers may find it challenging to offer meaningful insights.
Limited engagement: Students may participate in peer review sessions with minimal enthusiasm or involvement, viewing them as obligatory tasks rather than valuable learning opportunities. This lack of investment can result in superficial feedback that fails to address underlying issues in the writing.
Time constraints: Instructors often allocate limited time for peer review activities during class sessions, which may not be adequate for thorough reviews of peers' work. Consequently, feedback may be rushed or superficial, lacking the depth required for meaningful improvement.
This research demonstrates that, besides issues related to expertise, numerous objective factors contribute to students' poor performance in peer review sessions, resulting in feedback from peer reviewers that may not effectively assist authors.
Additionally, this study highlights the influence of emotions in peer review sessions, suggesting that neither peer reviewers nor authors can completely eliminate emotion when giving and receiving feedback. Reviewers and authors may approach the text with positive or negative attitudes, producing selective or biased feedback and review, and further impairing their ability to evaluate the piece objectively. This implies that subjective emotions may also affect the effectiveness of peer review feedback. Pamela Bedore and Brian O'Sullivan also hold a skeptical view of peer review in most writing contexts. Comparing different forms of peer review after systematic training at two universities, they conclude that "the crux is that peer review is not just about improving writing but about helping authors achieve their writing vision." Feedback from most non-professional writers during peer review sessions tends to be superficial, such as simple grammar corrections and questions, which reflects a focus only on improving writing skills. Meaningful peer review involves understanding the author's writing intent, posing valuable questions and perspectives, and guiding the author towards achieving their writing goals.
Comparison and improvement
Magda Tigchelaar compares peer review with self-assessment through an experiment that divided students into three groups: self-assessment, peer review, and no review. Across four writing projects, she observed changes in each group, with results surprisingly showing significant improvement only in the self-assessment group. Her analysis suggests that self-assessment allows individuals to understand clearly the revision goals at each stage, since authors are the most familiar with their own writing; self-checking thus naturally follows a systematic and planned approach to revision. In contrast, the effectiveness of peer review is often limited by a lack of structured feedback, characterized by scattered, unhelpful summaries and evaluations that fail to meet authors' expectations for revising their work. Stephanie Conner and Jennifer Gray examine the value of students' feedback during peer review. They argue that many peer review sessions fail to meet students' expectations, as students, even as reviewers, feel uncertain about providing constructive feedback because they lack confidence in their own writing. The authors offer numerous improvement strategies across various dimensions, such as course content and specific implementation steps. For instance, the peer review process can be segmented into groups, where students present the papers to be reviewed while other group members take notes and analyze them; the review scope can then be expanded to the entire class, widening the review sources and further enhancing the level of professionalism. With evolving and changing technology, peer review is also expected to evolve, and new tools have the potential to transform the process. Mimi Li discusses the effectiveness of, and student feedback on, online peer review software used in a freshman writing class. Unlike traditional peer review methods commonly used in classrooms, the online software offers a plethora of tools for editing articles, along with comprehensive guidance.
For instance, it lists numerous questions peer reviewers can ask and allows for various comments to be added to the selected text. Based on observations over the course of a semester, students showed varying degrees of improvement in their writing skills and grades after using the online peer review software. Additionally, they highly praised the technology of online peer review.
https://en.wikipedia.org/wiki/Energy%20storage
Energy storage
Energy storage is the capture of energy produced at one time for use at a later time to reduce imbalances between energy demand and energy production. A device that stores energy is generally called an accumulator or battery. Energy comes in multiple forms, including radiation, chemical energy, gravitational potential energy, electrical potential, electricity, elevated temperature, latent heat and kinetic energy. Energy storage involves converting energy from forms that are difficult to store to more conveniently or economically storable forms. Some technologies provide short-term energy storage, while others can endure for much longer. Bulk energy storage is currently dominated by hydroelectric dams, both conventional and pumped. Grid energy storage is a collection of methods used for energy storage on a large scale within an electrical power grid. Common examples of energy storage are the rechargeable battery, which stores chemical energy readily convertible to electricity to operate a mobile phone; the hydroelectric dam, which stores energy in a reservoir as gravitational potential energy; and ice storage tanks, which store ice frozen by cheaper energy at night to meet peak daytime demand for cooling. Fossil fuels such as coal and gasoline store ancient energy derived from sunlight by organisms that later died, became buried and over time were then converted into these fuels. Food (which is made by the same process as fossil fuels) is a form of energy stored in chemical form.
History
In the 20th-century grid, electrical power was largely generated by burning fossil fuel. When less power was required, less fuel was burned. Hydropower, the most widely adopted form of mechanical energy storage, has been in use for centuries. Large hydropower dams have been energy storage sites for more than one hundred years. Concerns with air pollution, energy imports, and global warming have spawned the growth of renewable energy such as solar and wind power. Wind power is uncontrolled and may be generating at a time when no additional power is needed. Solar power varies with cloud cover and at best is only available during daylight hours, while demand often peaks after sunset (see duck curve). Interest in storing power from these intermittent sources grows as the renewable energy industry begins to generate a larger fraction of overall energy consumption. In 2023 BloombergNEF forecast total energy storage deployments to grow at a compound annual growth rate of 27 percent through 2030. Off-grid electrical use was a niche market in the 20th century, but in the 21st century it has expanded. Portable devices are in use all over the world. Solar panels are now common in rural settings worldwide. Access to electricity is now a question of economics and financial viability, not solely of technical feasibility. Electric vehicles are gradually replacing combustion-engine vehicles. However, powering long-distance transportation without burning fuel remains in development.
Methods
Outline
The following list includes a variety of types of energy storage:
Fossil fuel storage
Mechanical
  Spring
  Compressed-air energy storage (CAES)
  Fireless locomotive
  Flywheel energy storage
  Solid mass gravitational
  Hydraulic accumulator
  Pumped-storage hydroelectricity (a.k.a. pumped hydroelectric storage, PHS, or pumped storage hydropower, PSH)
pumped hydroelectric storage, PHS, or pumped storage hydropower, PSH) Thermal expansion Electrical, electromagnetic Capacitor Supercapacitor Superconducting magnetic energy storage (SMES, also superconducting storage coil) Biological Glycogen Starch Electrochemical (battery energy storage system, BESS) Flow battery Rechargeable battery UltraBattery Thermal Brick storage heater Cryogenic energy storage, liquid-air energy storage (LAES) Liquid nitrogen engine Eutectic system Ice storage air conditioning Molten salt storage Phase-change material Seasonal thermal energy storage Solar pond Steam accumulator Thermal energy storage (general) Chemical Biofuels Hydrated salts Hydrogen peroxide Power-to-gas (methane, hydrogen storage, oxyhydrogen) Mechanical Energy can be stored in water pumped to a higher elevation using pumped storage methods or by moving solid matter to higher locations (gravity batteries). Other commercial mechanical methods include compressing air and flywheels that convert electric energy into internal energy or kinetic energy and then back again when electrical demand peaks. Hydroelectricity Hydroelectric dams with reservoirs can be operated to provide electricity at times of peak demand. Water is stored in the reservoir during periods of low demand and released when demand is high. The net effect is similar to pumped storage, but without the pumping loss. While a hydroelectric dam does not directly store energy from other generating units, it behaves equivalently by lowering output in periods of excess electricity from other sources. In this mode, dams are one of the most efficient forms of energy storage, because only the timing of its generation changes. Hydroelectric turbines have a start-up time on the order of a few minutes. Pumped hydro Worldwide, pumped-storage hydroelectricity (PSH) is the largest-capacity form of active grid energy storage available, and, as of March 2012, the Electric Power Research Institute (EPRI) reports that PSH accounts for more than 99% of bulk storage capacity worldwide, representing around 127,000 MW. PSH energy efficiency varies in practice between 70% and 80%, with claims of up to 87%. At times of low electrical demand, excess generation capacity is used to pump water from a lower source into a higher reservoir. When demand grows, water is released back into a lower reservoir (or waterway or body of water) through a turbine, generating electricity. Reversible turbine-generator assemblies act as both a pump and turbine (usually a Francis turbine design). Nearly all facilities use the height difference between two water bodies. Pure pumped-storage plants shift the water between reservoirs, while the "pump-back" approach is a combination of pumped storage and conventional hydroelectric plants that use natural stream-flow. Compressed air Compressed-air energy storage (CAES) uses surplus energy to compress air for subsequent electricity generation. Small-scale systems have long been used in such applications as propulsion of mine locomotives. The compressed air is stored in an underground reservoir, such as a salt dome. Compressed-air energy storage (CAES) plants can bridge the gap between production volatility and load. CAES storage addresses the energy needs of consumers by effectively providing readily available energy to meet demand. Renewable energy sources like wind and solar energy vary. So at times when they provide little power, they need to be supplemented with other forms of energy to meet energy demand. 
Compressed-air energy storage plants can take in the surplus energy output of renewable energy sources during times of energy over-production. This stored energy can be used at a later time when demand for electricity increases or energy resource availability decreases. Compression of air creates heat; the air is warmer after compression. Expansion requires heat. If no extra heat is added, the air will be much colder after expansion. If the heat generated during compression can be stored and used during expansion, efficiency improves considerably. A CAES system can deal with the heat in three ways: air storage can be adiabatic, diabatic, or isothermal. Another approach uses compressed air to power vehicles. Flywheel Flywheel energy storage (FES) works by accelerating a rotor (a flywheel) to a very high speed, holding energy as rotational energy. When energy is added, the rotational speed of the flywheel increases, and when energy is extracted, the speed declines, due to conservation of energy. Most FES systems use electricity to accelerate and decelerate the flywheel, but devices that directly use mechanical energy are under consideration. FES systems have rotors made of high-strength carbon-fiber composites, suspended by magnetic bearings and spinning at speeds from 20,000 to over 50,000 revolutions per minute (rpm) in a vacuum enclosure. Such flywheels can reach maximum speed ("charge") in a matter of minutes. The flywheel system is connected to a combination electric motor/generator. FES systems have relatively long lifetimes (lasting decades with little or no maintenance; full-cycle lifetimes quoted for flywheels range from in excess of 10^5 up to 10^7 cycles of use), high specific energy (100–130 W·h/kg, or 360–500 kJ/kg) and high power density. Solid mass gravitational Changing the altitude of solid masses can store or release energy via an elevating system driven by an electric motor/generator. Studies suggest energy can begin to be released with as little as one second of warning, making the method a useful supplemental feed into an electricity grid to balance load surges. Efficiencies can be as high as 85% recovery of stored energy. This can be achieved by siting the masses inside old vertical mine shafts or in specially constructed towers, where the heavy weights are winched up to store energy and allowed a controlled descent to release it. As of 2020, a prototype vertical store was being built in Edinburgh, Scotland. Potential energy storage or gravity energy storage was under active development in 2013 in association with the California Independent System Operator. It examined the movement of earth-filled hopper rail cars driven by electric locomotives from lower to higher elevations. Other proposed methods include: using rails, cranes, or elevators to move weights up and down; using high-altitude solar-powered balloon platforms supporting winches to raise and lower solid masses slung underneath them; and using winches supported by an ocean barge to take advantage of a 4 km (13,000 ft) elevation difference between the sea surface and the seabed. Thermal Thermal energy storage (TES) is the temporary storage or removal of heat. Sensible heat thermal Sensible heat storage takes advantage of the sensible heat in a material to store energy. Seasonal thermal energy storage (STES) allows heat or cold to be used months after it was collected from waste energy or natural sources.
The material can be stored in contained aquifers, in clusters of boreholes in geological substrates such as sand or crystalline bedrock, in lined pits filled with gravel and water, or in water-filled mines. Seasonal thermal energy storage (STES) projects often have paybacks in four to six years. An example is Drake Landing Solar Community in Canada, for which 97% of the year-round heat is provided by solar-thermal collectors on garage roofs, enabled by a borehole thermal energy store (BTES). In Braedstrup, Denmark, the community's solar district heating system also uses STES. A heat pump, which runs only when surplus wind power is available, is used to raise the storage temperature for distribution. When wind energy is not available, a gas-fired boiler is used. Twenty percent of Braedstrup's heat is solar. Latent heat thermal (LHTES) Latent heat thermal energy storage systems work by transferring heat to or from a material to change its phase. A phase change is melting, solidifying, vaporizing or liquefying. Such a material is called a phase change material (PCM). Materials used in LHTESs often have a high latent heat, so that at their specific phase-change temperature they absorb a large amount of energy, much more than sensible heat storage. A steam accumulator is a type of LHTES where the phase change is between liquid and gas and uses the latent heat of vaporization of water. Ice storage air conditioning systems use off-peak electricity to store cold by freezing water into ice. The cold stored in the ice is released as it melts and can be used for cooling during peak hours. Cryogenic thermal energy storage Air can be liquefied by cooling using electricity and stored as a cryogen with existing technologies. The liquid air can then be expanded through a turbine and the energy recovered as electricity. The system was demonstrated at a pilot plant in the UK in 2012. In 2019, Highview announced plans to build a 50 MW facility in the North of England and northern Vermont, with the proposed facility able to store five to eight hours of energy, for a 250–400 MWh storage capacity. Carnot battery Electrical energy can be stored thermally by resistive heating or heat pumps, and the stored heat can be converted back to electricity via a Rankine or Brayton cycle. This technology has been studied as a way to retrofit coal-fired power plants into fossil-fuel-free generation systems: coal-fired boilers are replaced by high-temperature heat storage charged with excess electricity from renewable energy sources. In 2020, the German Aerospace Center started to construct the world's first large-scale Carnot battery system, with a storage capacity of 1,000 MWh. Electrochemical Rechargeable battery A rechargeable battery comprises one or more electrochemical cells. It is known as a 'secondary cell' because its electrochemical reactions are electrically reversible. Rechargeable batteries come in many shapes and sizes, ranging from button cells to megawatt grid systems. Rechargeable batteries have a lower total cost of use and environmental impact than non-rechargeable (disposable) batteries. Some rechargeable battery types are available in the same form factors as disposables. Rechargeable batteries have a higher initial cost but can be recharged very cheaply and used many times. Common rechargeable battery chemistries include: Lead–acid battery: Lead–acid batteries hold the largest market share of electric storage products. A single cell produces about 2 V when charged.
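For reference, the overall cell reaction of a lead–acid battery (a standard textbook relation, not taken from the text above) can be written as: Pb + PbO2 + 2 H2SO4 → 2 PbSO4 + 2 H2O. The reaction proceeds left to right on discharge and is reversed on charging.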
In the charged state, the metallic lead negative electrode and the lead dioxide positive electrode are immersed in a dilute sulfuric acid (H2SO4) electrolyte. During discharge, electrons are pushed out of the cell as lead sulfate forms at the negative electrode, while the electrolyte is depleted of sulfuric acid and becomes mostly water. Lead–acid battery technology has been developed extensively. Upkeep requires minimal labor and its cost is low. The battery's available capacity drops under rapid discharge, and the technology suffers from a relatively short life span and low energy density. Nickel–cadmium battery (NiCd): Uses nickel oxide hydroxide and metallic cadmium as electrodes. Cadmium is a toxic element, and was banned for most uses by the European Union in 2004. Nickel–cadmium batteries have been almost completely replaced by nickel–metal hydride (NiMH) batteries. Nickel–metal hydride battery (NiMH): First commercial types were available in 1989. These are now a common consumer and industrial type. The battery has a hydrogen-absorbing alloy for the negative electrode instead of cadmium. Lithium-ion battery: The choice for many consumer electronics; lithium-ion batteries have one of the best energy-to-mass ratios and a very slow self-discharge rate when not in use. Lithium-ion polymer battery: These batteries are light in weight and can be made in any shape desired. Aluminium–sulfur battery with rock salt crystals as the electrolyte: aluminium and sulfur are Earth-abundant materials and are much cheaper than lithium. Flow battery A flow battery works by passing a solution over a membrane where ions are exchanged to charge or discharge the cell. Cell voltage is chemically determined by the Nernst equation and ranges, in practical applications, from 1.0 V to 2.2 V. Storage capacity depends on the volume of solution. A flow battery is technically akin both to a fuel cell and to an electrochemical accumulator cell. Commercial applications are for long half-cycle storage such as backup grid power. Supercapacitor Supercapacitors, also called electric double-layer capacitors (EDLC) or ultracapacitors, are a family of electrochemical capacitors that do not have conventional solid dielectrics. Capacitance is determined by two storage principles, double-layer capacitance and pseudocapacitance. Supercapacitors bridge the gap between conventional capacitors and rechargeable batteries. They store the most energy per unit volume or mass (energy density) among capacitors. They support capacitances of up to 10,000 farads at 1.2 volts, up to 10,000 times that of electrolytic capacitors, but deliver or accept less than half as much power per unit time (power density). While supercapacitors have specific energy and energy densities that are approximately 10% of those of batteries, their power density is generally 10 to 100 times greater. This results in much shorter charge/discharge cycles. They also tolerate many more charge/discharge cycles than batteries. Supercapacitors have many applications, including: low supply current for memory backup in static random-access memory (SRAM); and power for cars, buses, trains, cranes and elevators, including energy recovery from braking, short-term energy storage and burst-mode power delivery. Chemical Power-to-gas Power-to-gas is the conversion of electricity to a gaseous fuel such as hydrogen or methane. The three commercial methods use electricity to reduce water into hydrogen and oxygen by means of electrolysis. In the first method, hydrogen is injected into the natural gas grid or is used for transportation.
The second method is to combine the hydrogen with carbon dioxide to produce methane using a methanation reaction such as the Sabatier reaction, or biological methanation, resulting in an extra energy conversion loss of 8%. The methane may then be fed into the natural gas grid. The third method uses the output gas of a wood gas generator or a biogas plant, after the biogas upgrader is mixed with the hydrogen from the electrolyzer, to upgrade the quality of the biogas. Hydrogen The element hydrogen can be a form of stored energy. Hydrogen can produce electricity via a hydrogen fuel cell. At penetrations below 20% of the grid demand, renewables do not severely change the economics; but beyond about 20% of the total demand, external storage becomes important. If these sources are used to make ionic hydrogen, they can be freely expanded. A 5-year community-based pilot program using wind turbines and hydrogen generators began in 2007 in the remote community of Ramea, Newfoundland and Labrador. A similar project began in 2004 on Utsira, a small Norwegian island. Energy losses involved in the hydrogen storage cycle come from the electrolysis of water, liquification or compression of the hydrogen and conversion to electricity. Hydrogen can also be produced from aluminum and water by stripping aluminum's naturally-occurring aluminum oxide barrier and introducing it to water. This method is beneficial because recycled aluminum cans can be used to generate hydrogen, however systems to harness this option have not been commercially developed and are much more complex than electrolysis systems. Common methods to strip the oxide layer include caustic catalysts such as sodium hydroxide and alloys with gallium, mercury and other metals. Underground hydrogen storage is the practice of hydrogen storage in caverns, salt domes and depleted oil and gas fields. Large quantities of gaseous hydrogen have been stored in caverns by Imperial Chemical Industries for many years without any difficulties. The European Hyunder project indicated in 2013 that storage of wind and solar energy using underground hydrogen would require 85 caverns. Powerpaste is a magnesium and hydrogen -based fluid gel that releases hydrogen when reacting with water. It was invented, patented and is being developed by the Fraunhofer Institute for Manufacturing Technology and Advanced Materials (IFAM) of the Fraunhofer-Gesellschaft. Powerpaste is made by combining magnesium powder with hydrogen to form magnesium hydride in a process conducted at 350 °C and five to six times atmospheric pressure. An ester and a metal salt are then added to make the finished product. Fraunhofer states that they are building a production plant slated to start production in 2021, which will produce 4 tons of Powerpaste annually. Fraunhofer has patented their invention in the United States and EU. Fraunhofer claims that Powerpaste is able to store hydrogen energy at 10 times the energy density of a lithium battery of a similar dimension and is safe and convenient for automotive situations. Methane Methane is the simplest hydrocarbon with the molecular formula CH4. Methane is more easily stored and transported than hydrogen. Storage and combustion infrastructure (pipelines, gasometers, power plants) are mature. Synthetic natural gas (syngas or SNG) can be created in a multi-step process, starting with hydrogen and oxygen. Hydrogen is then reacted with carbon dioxide in a Sabatier process, producing methane and water. 
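For reference, the overall Sabatier reaction referred to above (a standard textbook relation, not taken from the source text) is: CO2 + 4 H2 → CH4 + 2 H2O. The reaction is exothermic, releasing roughly 165 kJ per mole of CO2 converted.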
Methane can be stored and later used to produce electricity. The resulting water is recycled, reducing the need for water. In the electrolysis stage, oxygen is stored for methane combustion in a pure oxygen environment at an adjacent power plant, eliminating nitrogen oxides. Methane combustion produces carbon dioxide (CO2) and water. The carbon dioxide can be recycled to boost the Sabatier process and water can be recycled for further electrolysis. Methane production, storage and combustion recycles the reaction products. The CO2 has economic value as a component of an energy storage vector, not a cost as in carbon capture and storage. Power-to-liquid Power-to-liquid is similar to power to gas except that the hydrogen is converted into liquids such as methanol or ammonia. These are easier to handle than gases, and require fewer safety precautions than hydrogen. They can be used for transportation, including aircraft, but also for industrial purposes or in the power sector. Biofuels Various biofuels such as biodiesel, vegetable oil, alcohol fuels, or biomass can replace fossil fuels. Various chemical processes can convert the carbon and hydrogen in coal, natural gas, plant and animal biomass and organic wastes into short hydrocarbons suitable as replacements for existing hydrocarbon fuels. Examples are Fischer–Tropsch diesel, methanol, dimethyl ether and syngas. This diesel source was used extensively in World War II in Germany, which faced limited access to crude oil supplies. South Africa produces most of the country's diesel from coal for similar reasons. A long term oil price above US$35/bbl may make such large scale synthetic liquid fuels economical. Aluminum Aluminum has been proposed as an energy store by a number of researchers. Its electrochemical equivalent (8.04 Ah/cm3) is nearly four times greater than that of lithium (2.06 Ah/cm3). Energy can be extracted from aluminum by reacting it with water to generate hydrogen. However, it must first be stripped of its natural oxide layer, a process which requires pulverization, chemical reactions with caustic substances, or alloys. The byproduct of the reaction to create hydrogen is aluminum oxide, which can be recycled into aluminum with the Hall–Héroult process, making the reaction theoretically renewable. If the Hall-Heroult Process is run using solar or wind power, aluminum could be used to store the energy produced at higher efficiency than direct solar electrolysis. Boron, silicon, and zinc Boron, silicon, and zinc have been proposed as energy storage solutions. Other chemical The organic compound norbornadiene converts to quadricyclane upon exposure to light, storing solar energy as the energy of chemical bonds. A working system has been developed in Sweden as a molecular solar thermal system. Electrical methods Capacitor A capacitor (originally known as a 'condenser') is a passive two-terminal electrical component used to store energy electrostatically. Practical capacitors vary widely, but all contain at least two electrical conductors (plates) separated by a dielectric (i.e., insulator). A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries change. (This prevents loss of information in volatile memory.) 
Conventional capacitors provide less than 360 joules per kilogram, while a conventional alkaline battery has a density of 590 kJ/kg. Capacitors store energy in an electrostatic field between their plates. Given a potential difference across the conductors (e.g., when a capacitor is attached across a battery), an electric field develops across the dielectric, causing positive charge (+Q) to collect on one plate and negative charge (-Q) to collect on the other plate. If a battery is attached to a capacitor for a sufficient amount of time, no current can flow through the capacitor. However, if an accelerating or alternating voltage is applied across the leads of the capacitor, a displacement current can flow. Besides capacitor plates, charge can also be stored in a dielectric layer. Capacitance is greater given a narrower separation between conductors and when the conductors have a larger surface area. In practice, the dielectric between the plates emits a small amount of leakage current and has an electric field strength limit, known as the breakdown voltage. However, the effect of recovery of a dielectric after a high-voltage breakdown holds promise for a new generation of self-healing capacitors. The conductors and leads introduce undesired inductance and resistance. Research is assessing the quantum effects of nanoscale capacitors for digital quantum batteries. Superconducting magnetics Superconducting magnetic energy storage (SMES) systems store energy in a magnetic field created by the flow of direct current in a superconducting coil that has been cooled to a temperature below its superconducting critical temperature. A typical SMES system includes a superconducting coil, power conditioning system and refrigerator. Once the superconducting coil is charged, the current does not decay and the magnetic energy can be stored indefinitely. The stored energy can be released to the network by discharging the coil. The associated inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems offer round-trip efficiency greater than 95%. Due to the energy requirements of refrigeration and the cost of superconducting wire, SMES is used for short duration storage such as improving power quality. It also has applications in grid balancing. Applications Mills The classic application before the Industrial Revolution was the control of waterways to drive water mills for processing grain or powering machinery. Complex systems of reservoirs and dams were constructed to store and release water (and the potential energy it contained) when required. Homes Home energy storage is expected to become increasingly common given the growing importance of distributed generation of renewable energies (especially photovoltaics) and the important share of energy consumption in buildings. To exceed a self-sufficiency of 40% in a household equipped with photovoltaics, energy storage is needed. Multiple manufacturers produce rechargeable battery systems for storing energy, generally to hold surplus energy from home solar or wind generation. Today, for home energy storage, Li-ion batteries are preferable to lead-acid ones given their similar cost but much better performance. Tesla Motors produces two models of the Tesla Powerwall. One is a 10 kWh weekly cycle version for backup applications and the other is a 7 kWh version for daily cycle applications. 
In 2016, a limited version of the Tesla Powerpack 2 cost US$398/kWh to store electricity worth 12.5 cents/kWh (the US average grid price), making a positive return on investment doubtful unless electricity prices are higher than 30 cents/kWh. RoseWater Energy produces two models of the "Energy & Storage System", the HUB 120 and SB20. Both versions provide 28.8 kWh of output, enabling them to run larger houses or light commercial premises and to protect custom installations. The system combines five key elements in one package: a clean 60 Hz sine wave, zero transfer time, industrial-grade surge protection, optional renewable energy grid sell-back, and battery backup. Enphase Energy announced an integrated system that allows home users to store, monitor and manage electricity. The system stores 1.2 kWh of energy and provides 275 W / 500 W of power output. Storing wind or solar energy using thermal energy storage, though less flexible, is considerably cheaper than batteries. A simple 52-gallon electric water heater can store roughly 12 kWh of energy for supplementing hot water or space heating. For purely financial purposes in areas where net metering is available, home-generated electricity may be sold to the grid through a grid-tie inverter without the use of batteries for storage. Grid electricity and power stations Renewable energy The largest source and the greatest store of renewable energy is provided by hydroelectric dams. A large reservoir behind a dam can store enough water to average the annual flow of a river between dry and wet seasons, and a very large reservoir can store enough water to average the flow of a river between dry and wet years. While a hydroelectric dam does not directly store energy from intermittent sources, it does balance the grid by lowering its output and retaining its water when power is generated by solar or wind. If wind or solar generation exceeds the region's hydroelectric capacity, then some additional source of energy is needed. Many renewable energy sources (notably solar and wind) produce variable power. Storage systems can level out the imbalances between supply and demand that this causes. Electricity must be used as it is generated or converted immediately into storable forms. The main method of electrical grid storage is pumped-storage hydroelectricity. Areas of the world such as Norway, Wales, Japan and the US have used elevated geographic features for reservoirs, using electrically powered pumps to fill them. When needed, the water passes through generators and converts the gravitational potential of the falling water into electricity. Pumped storage in Norway, which gets almost all its electricity from hydro, currently has a capacity of 1.4 GW, but since the total installed capacity is nearly 32 GW and 75% of that is regulable, it can be expanded significantly. Some forms of storage that produce electricity include pumped-storage hydroelectric dams, rechargeable batteries, thermal storage (including molten salts, which can efficiently store and release very large quantities of heat energy), compressed-air energy storage, flywheels, cryogenic systems and superconducting magnetic coils. Surplus power can also be converted into methane (via the Sabatier process) and stored in the natural gas network. In 2011, the Bonneville Power Administration in the northwestern United States created an experimental program to absorb excess wind and hydro power generated at night or during stormy periods that are accompanied by high winds.
Under central control, home appliances absorb surplus energy by heating ceramic bricks in special space heaters to hundreds of degrees and by boosting the temperature of modified hot water heater tanks. After charging, the appliances provide home heating and hot water as needed. The experimental system was created as a result of a severe 2010 storm that overproduced renewable energy to the extent that all conventional power sources were shut down, or in the case of a nuclear power plant, reduced to its lowest possible operating level, leaving a large area running almost completely on renewable energy. Another advanced method used at the former Solar Two project in the United States and the Solar Tres Power Tower in Spain uses molten salt to store thermal energy captured from the sun and then convert it and dispatch it as electrical power. The system pumps molten salt through a tower or other special conduits to be heated by the sun. Insulated tanks store the solution. Electricity is produced by turning water to steam that is fed to turbines. Since the early 21st century batteries have been applied to utility scale load-leveling and frequency regulation capabilities. In vehicle-to-grid storage, electric vehicles that are plugged into the energy grid can deliver stored electrical energy from their batteries into the grid when needed. Air conditioning Thermal energy storage (TES) can be used for air conditioning. It is most widely used for cooling single large buildings and/or groups of smaller buildings. Commercial air conditioning systems are the biggest contributors to peak electrical loads. In 2009, thermal storage was used in over 3,300 buildings in over 35 countries. It works by chilling material at night and using the chilled material for cooling during the hotter daytime periods. The most popular technique is ice storage, which requires less space than water and is cheaper than fuel cells or flywheels. In this application, a standard chiller runs at night to produce an ice pile. Water circulates through the pile during the day to chill water that would normally be the chiller's daytime output. A partial storage system minimizes capital investment by running the chillers nearly 24 hours a day. At night, they produce ice for storage and during the day they chill water. Water circulating through the melting ice augments the production of chilled water. Such a system makes ice for 16 to 18 hours a day and melts ice for six hours a day. Capital expenditures are reduced because the chillers can be just 40% – 50% of the size needed for a conventional, no-storage design. Storage sufficient to store half a day's available heat is usually adequate. A full storage system shuts off the chillers during peak load hours. Capital costs are higher, as such a system requires larger chillers and a larger ice storage system. This ice is produced when electrical utility rates are lower. Off-peak cooling systems can lower energy costs. The U.S. Green Building Council has developed the Leadership in Energy and Environmental Design (LEED) program to encourage the design of reduced-environmental impact buildings. Off-peak cooling may help toward LEED Certification. Thermal storage for heating is less common than for cooling. An example of thermal storage is storing solar heat to be used for heating at night. Latent heat can also be stored in technical phase change materials (PCMs). These can be encapsulated in wall and ceiling panels, to moderate room temperatures. 
Transport Liquid hydrocarbon fuels are the most commonly used forms of energy storage for use in transportation, followed by a growing use of Battery Electric Vehicles and Hybrid Electric Vehicles. Other energy carriers such as hydrogen can be used to avoid producing greenhouse gases. Public transport systems like trams and trolleybuses require electricity, but due to their variability in movement, a steady supply of electricity via renewable energy is challenging. Photovoltaic systems installed on the roofs of buildings can be used to power public transportation systems during periods in which there is increased demand for electricity and access to other forms of energy are not readily available. Upcoming transitions in the transportation system also include e.g. ferries and airplanes, where electric power supply is investigated as an interesting alternative. Electronics Capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass. In analog filter networks, they smooth the output of power supplies. In resonant circuits they tune radios to particular frequencies. In electric power transmission systems they stabilize voltage and power flow. Use cases The United States Department of Energy International Energy Storage Database (IESDB), is a free-access database of energy storage projects and policies funded by the United States Department of Energy Office of Electricity and Sandia National Labs. Capacity Storage capacity is the amount of energy extracted from an energy storage device or system; usually measured in joules or kilowatt-hours and their multiples, it may be given in number of hours of electricity production at power plant nameplate capacity; when storage is of primary type (i.e., thermal or pumped-water), output is sourced only with the power plant embedded storage system. Economics The economics of energy storage strictly depends on the reserve service requested, and several uncertainty factors affect the profitability of energy storage. Therefore, not every storage method is technically and economically suitable for the storage of several MWh, and the optimal size of the energy storage is market and location dependent. Moreover, ESS are affected by several risks, e.g.: Techno-economic risks, which are related to the specific technology; Market risks, which are the factors that affect the electricity supply system; Regulation and policy risks. Therefore, traditional techniques based on deterministic Discounted Cash Flow (DCF) for the investment appraisal are not fully adequate to evaluate these risks and uncertainties and the investor's flexibility to deal with them. Hence, the literature recommends to assess the value of risks and uncertainties through the Real Option Analysis (ROA), which is a valuable method in uncertain contexts. The economic valuation of large-scale applications (including pumped hydro storage and compressed air) considers benefits including: curtailment avoidance, grid congestion avoidance, price arbitrage and carbon-free energy delivery. In one technical assessment by the Carnegie Mellon Electricity Industry Centre, economic goals could be met using batteries if their capital cost was $30 to $50 per kilowatt-hour. A metric of energy efficiency of storage is energy storage on energy invested (ESOI), which is the amount of energy that can be stored by a technology, divided by the amount of energy required to build that technology. 
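Written as a simple ratio, this definition is: ESOI = (energy stored by the device over its lifetime) / (energy required to build the device). Both quantities are expressed in the same units, so ESOI is a dimensionless figure of merit.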
The higher the ESOI, the better the storage technology is energetically. For lithium-ion batteries this is around 10, and for lead acid batteries it is about 2. Other forms of storage such as pumped hydroelectric storage generally have higher ESOI, such as 210. Pumped-storage hydroelectricity is by far the largest storage technology used globally. However, the usage of conventional pumped-hydro storage is limited because it requires terrain with elevation differences and also has a very high land use for relatively small power. In locations without suitable natural geography, underground pumped-hydro storage could also be used. High costs and limited life still make batteries a "weak substitute" for dispatchable power sources, and are unable to cover for variable renewable power gaps lasting for days, weeks or months. In grid models with high VRE share, the excessive cost of storage tends to dominate the costs of the whole grid — for example, in California alone 80% share of VRE would require 9.6 TWh of storage but 100% would require 36.3 TWh. As of 2018 the state only had 150 GWh of storage, primarily in pumped storage and a small fraction in batteries. According to another study, supplying 80% of US demand from VRE would require a smart grid covering the whole country or battery storage capable to supply the whole system for 12 hours, both at cost estimated at $2.5 trillion. Similarly, several studies have found that relying only on VRE and energy storage would cost about 30–50% more than a comparable system that combines VRE with nuclear plants or plants with carbon capture and storage instead of energy storage. Research Germany In 2013, the German government allocated €200M (approximately US$270M) for research, and another €50M to subsidize battery storage in residential rooftop solar panels, according to a representative of the German Energy Storage Association. Siemens AG commissioned a production-research plant to open in 2015 at the Zentrum für Sonnenenergie und Wasserstoff (ZSW, the German Center for Solar Energy and Hydrogen Research in the State of Baden-Württemberg), a university/industry collaboration in Stuttgart, Ulm and Widderstall, staffed by approximately 350 scientists, researchers, engineers, and technicians. The plant develops new near-production manufacturing materials and processes (NPMM&P) using a computerized Supervisory Control and Data Acquisition (SCADA) system. It aims to enable the expansion of rechargeable battery production with increased quality and lower cost. From 2023 onwards, a new project by the German Research Foundation focuses on molecular photoswitches to store solar thermal energy. The spokesperson of these so-called molecular solar thermal (MOST) systems is Prof. Dr. Hermann A. Wegner. United States In 2014, research and test centers opened to evaluate energy storage technologies. Among them was the Advanced Systems Test Laboratory at the University of Wisconsin at Madison in Wisconsin State, which partnered with battery manufacturer Johnson Controls. The laboratory was created as part of the university's newly opened Wisconsin Energy Institute. Their goals include the evaluation of state-of-the-art and next generation electric vehicle batteries, including their use as grid supplements. The State of New York unveiled its New York Battery and Energy Storage Technology (NY-BEST) Test and Commercialization Center at Eastman Business Park in Rochester, New York, at a cost of $23 million for its almost 1,700 m2 laboratory. 
The center includes the Center for Future Energy Systems, a collaboration between Cornell University of Ithaca, New York and the Rensselaer Polytechnic Institute in Troy, New York. NY-BEST tests, validates and independently certifies diverse forms of energy storage intended for commercial use. On September 27, 2017, Senators Al Franken of Minnesota and Martin Heinrich of New Mexico introduced the Advancing Grid Storage Act (AGSA), which would devote more than $1 billion in research, technical assistance and grants to encourage energy storage in the United States. United Kingdom In the United Kingdom, some 14 industry and government agencies allied with seven British universities in May 2014 to create the SUPERGEN Energy Storage Hub in order to assist in the coordination of energy storage technology research and development.
Technology
Energy storage
null
24131
https://en.wikipedia.org/wiki/PHP
PHP
PHP is a general-purpose scripting language geared towards web development. It was originally created by Danish-Canadian programmer Rasmus Lerdorf in 1993 and released in 1995. The PHP reference implementation is now produced by the PHP Group. PHP was originally an abbreviation of Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP code is usually processed on a web server by a PHP interpreter implemented as a module, a daemon or a Common Gateway Interface (CGI) executable. On a web server, the result of the interpreted and executed PHP code, which may be any type of data such as generated HTML or binary image data, would form the whole or part of an HTTP response. Various web template systems, web content management systems, and web frameworks exist that can be employed to orchestrate or facilitate the generation of that response. Additionally, PHP can be used for many programming tasks outside the web context, such as standalone graphical applications and drone control. PHP code can also be directly executed from the command line. The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been widely ported and can be deployed on most web servers on a variety of operating systems and platforms. The PHP language has evolved without a written formal specification or standard, with the original implementation acting as the de facto standard that other implementations aimed to follow. W3Techs reports that, about two years after PHP 7 was discontinued and 11 months after the PHP 8.3 release, PHP 7 was still used by 50.0% of PHP websites, even though it is outdated and known to be insecure. In addition, the even more outdated and insecure PHP 5 (discontinued for more than five years) is used by 13.2%, and the no-longer-supported PHP 8.0 is also very popular, so the majority of PHP websites do not use supported versions. History Early history PHP development began in 1993 when Rasmus Lerdorf wrote several Common Gateway Interface (CGI) programs in C, which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases, and called this implementation "Personal Home Page/Forms Interpreter" or PHP/FI. An example of the early PHP syntax: <!--include /text/header.html--> <!--getenv HTTP_USER_AGENT--> <!--if substr $exec_result Mozilla--> Hey, you are using Netscape!<p> <!--endif--> <!--sql database select * from table where user='$username'--> <!--ifless $numentries 1--> Sorry, that record does not exist<p> <!--endif exit--> Welcome <!--$user-->!<p> You have <!--$index:0--> credits left in your account.<p> <!--include /text/footer.html--> PHP/FI could be used to build simple, dynamic web applications. To accelerate bug reporting and improve the code, Lerdorf initially announced the release of PHP/FI as "Personal Home Page Tools (PHP Tools) version 1.0" on the Usenet discussion group comp.infosystems.www.authoring.cgi on 8 June 1995. This release included basic functionality such as Perl-like variables, form handling, and the ability to embed HTML. By this point, the syntax had changed to resemble that of Perl, but was simpler, more limited, and less consistent. Early PHP was never intended to be a new programming language; rather, it grew organically, with Lerdorf noting in retrospect: "I don't know how to stop it [...] there was never any intent to write a programming language [...] I have absolutely no idea how to write a programming language [...] 
I just kept adding the next logical step on the way." A development team began to form and, after months of work and beta testing, officially released PHP/FI 2 in November 1997. The fact that PHP was not originally designed, but instead was developed organically has led to inconsistent naming of functions and inconsistent ordering of their parameters. In some cases, the function names were chosen to match the lower-level libraries which PHP was "wrapping", while in some very early versions of PHP the length of the function names was used internally as a hash function, so names were chosen to improve the distribution of hash values. PHP 3 and 4 Zeev Suraski and Andi Gutmans rewrote the parser in 1997 and formed the base of PHP 3, changing the language's name to the recursive acronym PHP: Hypertext Preprocessor. Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, producing the Zend Engine in 1999. They also founded Zend Technologies in Ramat Gan, Israel. On 22 May 2000, PHP 4, powered by the Zend Engine 1.0, was released. By August 2008, this branch had reached version 4.4.9. PHP 4 is now no longer under development and nor are any security updates planned to be released. PHP 5 On 1 July 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included new features such as improved support for object-oriented programming, the PHP Data Objects (PDO) extension (which defines a lightweight and consistent interface for accessing databases), and numerous performance enhancements. In 2008, PHP 5 became the only stable version under development. Late static binding had been missing from previous versions of PHP, and was added in version 5.3. Many high-profile open-source projects ceased to support PHP 4 in new code from February 5, 2008, because of the GoPHP5 initiative, provided by a consortium of PHP developers promoting the transition from PHP 4 to PHP 5. Over time, PHP interpreters became available on most existing 32-bit and 64-bit operating systems, either by building them from the PHP source code or by using pre-built binaries. For PHP versions 5.3 and 5.4, the only available Microsoft Windows binary distributions were 32-bit IA-32 builds, requiring Windows 32-bit compatibility mode while using Internet Information Services (IIS) on a 64-bit Windows platform. PHP version 5.5 made the 64-bit x86-64 builds available for Microsoft Windows. Official security support for PHP 5.6 ended on 31 December 2018. PHP 6 and Unicode PHP received mixed reviews due to lacking native Unicode support at the core language level. In 2005, a project headed by Andrei Zmievski was initiated to bring native Unicode support throughout PHP, by embedding the International Components for Unicode (ICU) library, and representing text strings as UTF-16 internally. Since this would cause major changes both to the internals of the language and to user code, it was planned to release this as version 6.0 of the language, along with other major features then in development. However, a shortage of developers who understood the necessary changes, and performance problems arising from conversion to and from UTF-16, which is rarely used in a web context, led to delays in the project. As a result, a PHP 5.3 release was created in 2009, with many non-Unicode features back-ported from PHP 6, notably namespaces. 
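A minimal sketch of the namespace syntax that arrived with PHP 5.3 (the namespace and class names here are hypothetical, chosen only for illustration):

<?php
namespace App\Demo;

class Greeter
{
    public function greet($name)
    {
        return "Hello, " . $name . "!";
    }
}

// Elsewhere, the class is referenced by its fully qualified name
// (or imported with a "use" statement).
$greeter = new \App\Demo\Greeter();
echo $greeter->greet('PHP 5.3');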
In March 2010, the project in its current form was officially abandoned, and a PHP 5.4 release was prepared to contain most remaining non-Unicode features from PHP 6, such as traits and closure re-binding. Initial hopes were that a new plan would be formed for Unicode integration, but by 2014 none had been adopted. PHP 7 During 2014 and 2015, a new major PHP version was developed, PHP 7. The numbering of this version involved some debate among internal developers. While the PHP 6 Unicode experiments had never been released, several articles and book titles referenced the PHP 6 names, which might have caused confusion if a new release were to reuse the name. After a vote, the name PHP 7 was chosen. The foundation of PHP 7 is a PHP branch that was originally dubbed PHP next generation (phpng). It was authored by Dmitry Stogov, Xinchen Hui and Nikita Popov, and aimed to optimize PHP performance by refactoring the Zend Engine while retaining near-complete language compatibility. By 14 July 2014, WordPress-based benchmarks, which served as the main benchmark suite for the phpng project, showed an almost 100% increase in performance. Changes from phpng make it easier to improve performance in future versions, as more compact data structures and other changes are seen as better suited for a successful migration to a just-in-time (JIT) compiler. Because of the significant changes, the reworked Zend Engine was called Zend Engine 3, succeeding Zend Engine 2 used in PHP 5. Because of the major internal changes in phpng, it must receive a new major version number of PHP, rather than a minor PHP 5 release, according to PHP's release process. Major versions of PHP are allowed to break backward-compatibility of code and therefore PHP 7 presented an opportunity for other improvements beyond phpng that require backward-compatibility breaks. In particular, it involved the following changes: Many fatal or recoverable-level legacy PHP error mechanisms were replaced with modern object-oriented exceptions. The syntax for variable dereferencing was reworked to be internally more consistent and complete, allowing the use of the operators ->, [], (),{}, and ::, with arbitrary meaningful left-side expressions. Support for legacy PHP 4-style constructor methods was deprecated. The behavior of the foreach statement was changed to be more predictable. Constructors for the few classes built-in to PHP which returned null upon failure were changed to throw an exception instead, for consistency. Several unmaintained or deprecated server application programming interfaces (SAPIs) and extensions were removed from the PHP core, most notably the legacy mysql extension. The behavior of the list() operator was changed to remove support for strings. Support was removed for legacy ASP-style delimiters <% and %> and <script language="php"> ... </script>. An oversight allowing a switch statement to have multiple default clauses was fixed. Support for hexadecimal number support in some implicit conversions from strings to number types was removed. The left-shift and right-shift operators were changed to behave more consistently across platforms. Conversions between floating-point numbers and integers were changed (e.g. infinity changed to convert to zero) and implemented more consistently across platforms. PHP 7 also included new language features. 
Most notably, it introduced return type declarations for functions, which complement the existing parameter type declarations, and support for the scalar types (integer, float, string, and boolean) in parameter and return type declarations. PHP 8 PHP 8 was released on 26 November 2020, and is currently the second-most used PHP major version. PHP 8 is a major version and has breaking changes from previous versions. New features and notable changes include: Just-in-time compilation Just-in-time compilation is supported in PHP 8. PHP 8's JIT compiler can provide substantial performance improvements for some use cases, while then-PHP developer Nikita Popov stated that the performance improvements for most websites would be less substantial than the upgrade from PHP 5 to PHP 7. Substantial improvements are expected more for mathematical-type operations than for common web-development use cases. Additionally, the JIT compiler provides the future potential to move some code from C to PHP, due to the performance improvements for some use cases. Addition of the match expression PHP 8 introduced the match expression. The match expression is conceptually similar to a switch statement and is more compact for some use cases. Because match is an expression, its result can be assigned to a variable or returned from a function. Type changes and additions PHP 8 introduced union types, a new static return type, and a new mixed type. "Attributes", often referred to as "annotations" in other programming languages, were added in PHP 8; they allow metadata to be added to classes. throw was changed from being a statement to being an expression. This allows exceptions to be thrown in places that were not previously possible. Syntax changes and additions PHP 8 includes changes to allow alternate, more concise, or more consistent syntaxes in a number of scenarios. For example, the nullsafe operator ?-> is similar to the null coalescing operator ??, but is used when calling methods. The following code snippet will not throw an error if getBirthday() returns null: $human_readable_date = $user->getBirthday()?->diffForHumans(); Constructor property promotion has been added as "syntactic sugar," allowing class properties to be set automatically when parameters are passed into a class constructor. This reduces the amount of boilerplate code that must be written. Other minor changes include support for the use of ::class on objects, which serves as an alternative to get_class(); non-capturing catches in try-catch blocks; variable syntax tweaks to resolve inconsistencies; support for named arguments; and support for trailing commas in parameter lists, which adds consistency with support for trailing commas in other contexts, such as in arrays. Standard library changes and additions Weak maps were added in PHP 8. A WeakMap holds references to objects, but these references do not prevent such objects from being garbage collected. This can provide performance improvements in scenarios where data is being cached; this is of particular relevance for object–relational mappings (ORM). Various adjustments to interfaces, such as adding support for creating DateTime objects from interfaces, and the addition of a Stringable interface that can be used for type hinting. Various new functions, including str_contains(), str_starts_with() and str_ends_with(); fdiv(); get_debug_type(); and get_resource_id(); and an object implementation of token_get_all(). Additional changes Type annotations were also added into PHP's C source code itself to allow internal functions and methods to have "complete type information in reflection." 
Inheritance with private methods Abstract methods in traits improvements PHP 8.1 PHP 8.1 was released on November 25, 2021. It added support for enumerations (also called "enums"), declaring properties as readonly (which prevents modification of the property after initialization), and array unpacking with string keys. The new never type can be used to indicate that a function does not return. PHP 8.2 PHP 8.2 was released on December 8, 2022. New in this release are readonly classes (whose instance properties are implicitly readonly), disjunctive normal form (DNF) types, and the random extension, which provides a pseudorandom number generator with an object-oriented API, Sensitive Parameter value redaction, and a ton of other features. PHP 8.3 PHP 8.3 was released on November 23, 2023. This release introduced readonly array properties, allowing arrays to be declared as immutable after initialization. It also added support for class aliases for built-in PHP classes, new methods for random float generation in the Random extension, and enhanced PHP INI settings with fallback value support. Additionally, the new function provides improved API for stream manipulation, among other updates and deprecations. PHP 8.4 PHP 8.4 was released on November 21, 2024. Release history Beginning on 28 June 2011, the PHP Development Team implemented a timeline for the release of new versions of PHP. Under this system, at least one release should occur every month. Once per year, a minor release should occur which may include new features. Every minor release should at least be supported for two years with security and bug fixes, followed by at least one year of only security fixes, for a total of a three-year release process for every minor release. No new features, unless small and self-contained, are to be introduced into a minor release during the three-year release process. Mascot The mascot of the PHP project is the elePHPant, a blue elephant with the PHP logo on its side, designed by Vincent Pontier in 1998. "The (PHP) letters were forming the shape of an elephant if viewed in a sideways angle." The elePHPant is sometimes differently coloured when in plush toy form. Many variations of this mascot have been made over the years. Only the elePHPants based on the original design by Vincent Pontier are considered official by the community. These are collectable and some of them are extremely rare. Syntax The following "Hello, World!" program is written in PHP code embedded in an HTML document: <!DOCTYPE html> <html> <head> <title>PHP "Hello, World!" program</title> </head> <body> <p><?= 'Hello, World!' ?></p> </body> </html> However, as no requirement exists for PHP code to be embedded in HTML, the simplest version of Hello, World! may be written like this, with the closing tag ?> omitted as preferred in files containing pure PHP code. <?php echo 'Hello, World!'; The PHP interpreter only executes PHP code within its delimiters. Anything outside of its delimiters is not processed by PHP, although the non-PHP text is still subject to control structures described in PHP code. The most common delimiters are <?php to open and ?> to close PHP sections. The shortened form <? also exists. This short delimiter makes script files less portable since support for them can be disabled in the local PHP configuration and it is therefore discouraged. Conversely, there is no recommendation against the echo short tag <?=. 
Prior to PHP 5.4.0, this short syntax for echo worked only with the short_open_tag configuration setting enabled, while for PHP 5.4.0 and later it is always available. The purpose of all these delimiters is to separate PHP code from non-PHP content, such as JavaScript code or HTML markup. So the shortest "Hello, World!" program written in PHP is:

<?='Hello, World!';

The first form of delimiters, <?php and ?>, in XHTML and other XML documents, creates correctly formed XML processing instructions. This means that the resulting mixture of PHP code and other markup in the server-side file is itself well-formed XML. Variables are prefixed with a dollar symbol, and a type does not need to be specified in advance. PHP 5 introduced type declarations that allow functions to force their parameters to be objects of a specific class, arrays, interfaces or callback functions. However, before PHP 7, type declarations could not be used with scalar types such as integers or strings. Below is an example of how PHP variables are declared and initialized:

<?php
$name = 'John'; // variable of string type being declared and initialized
$age = 18; // variable of integer type being declared and initialized
$height = 5.3; // variable of double type being declared and initialized
echo $name . ' is ' . $height . "m tall\n"; // concatenating variables and strings
echo "$name is $age years old."; // interpolating variables into a string
?>

Unlike function and class names, variable names are case-sensitive. Both double-quoted ("") and heredoc strings provide the ability to interpolate a variable's value into the string. PHP treats newlines as whitespace, in the manner of a free-form language, and statements are terminated by a semicolon. PHP has three types of comment syntax: /* */ marks block and inline comments, while // and # are used for one-line comments. The echo statement is one of several facilities PHP provides to output text. In terms of keywords and language syntax, PHP is similar to C: if conditions, for and while loops, and function returns are similar in syntax to languages such as C, C++, C#, Java and Perl. Data types PHP is loosely typed. It stores integers in a platform-dependent range, either as a 32-, 64- or 128-bit signed integer equivalent to the C-language long type. Unsigned integers are converted to signed values in certain situations, which is different behaviour from that of many other programming languages. Integer variables can be assigned using decimal (positive and negative), octal, hexadecimal, and binary notations. Floating-point numbers are also stored in a platform-specific range. They can be specified using floating-point notation, or two forms of scientific notation. PHP has a native Boolean type that is similar to the native Boolean types in Java and C++. Using the Boolean type conversion rules, non-zero values are interpreted as true and zero as false, as in Perl and C++. The null data type represents a variable that has no value; NULL is the only allowed value for this data type. Variables of the "resource" type represent references to resources from external sources. These are typically created by functions from a particular extension, and can only be processed by functions from the same extension; examples include file, image, and database resources. Arrays can contain elements of any type that PHP can handle, including resources, objects, and even other arrays. Order is preserved in lists of values and in hashes with both keys and values, and the two can be intermingled.
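A minimal sketch of the ordered, mixed-key arrays just described (the keys and values here are purely illustrative):

<?php
// Integer and string keys can be intermingled in one array, insertion
// order is preserved, and values may themselves be arrays.
$record = [
    0 => 'first',
    'name' => 'Alice',
    'tags' => ['admin', 'staff'],
];
foreach ($record as $key => $value) {
    echo var_export($key, true), ' => ', var_export($value, true), "\n";
}
// Iteration visits the elements in insertion order: 0, 'name', 'tags'.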
PHP also supports strings, which can be used with single quotes, double quotes, nowdoc or heredoc syntax. The Standard PHP Library (SPL) attempts to solve standard problems and implements efficient data access interfaces and classes. Functions PHP defines a large array of functions in the core language, and many are also available in various extensions; these functions are well documented in the online PHP documentation. However, the built-in library has a wide variety of naming conventions and associated inconsistencies, as described under history above. Custom functions may be defined by the developer:

function myAge(int $birthYear): string
{
    // calculate the age by subtracting the birth year from the current year
    $yearsOld = date('Y') - $birthYear;
    // return the age in a descriptive string
    return $yearsOld . ($yearsOld == 1 ? ' year' : ' years');
}

echo 'I am currently ' . myAge(1995) . ' old.';

The output of the above sample program depends on the current year; run in 2020, for example, it would print "I am currently 25 years old." In lieu of function pointers, functions in PHP can be referenced by a string containing their name. In this manner, normal PHP functions can be used, for example, as callbacks or within function tables. User-defined functions may be created at any time without being prototyped. Functions may be defined inside code blocks, permitting a run-time decision as to whether or not a function should be defined. There is a function_exists function that determines whether a function with a given name has already been defined. Function calls must use parentheses, with the exception of zero-argument class constructor functions called with the PHP operator new, in which case parentheses are optional. Since PHP 4.0.1, create_function(), a thin wrapper around eval(), allowed normal PHP functions to be created during program execution; it was deprecated in PHP 7.2 and removed in PHP 8.0 in favor of syntax for anonymous functions or "closures" that can capture variables from the surrounding scope, which was added in PHP 5.3. Shorthand arrow syntax was added in PHP 7.4:

function getAdder($x) {
    return fn($y) => $x + $y;
}

$adder = getAdder(8);
echo $adder(2); // prints "10"

In the example above, the getAdder() function creates a closure using the passed argument $x; the closure takes an additional argument $y, and getAdder() returns the created closure to the caller. Such a function is a first-class object, meaning that it can be stored in a variable, passed as a parameter to other functions, etc. Unusually for a dynamically typed language, PHP supports type declarations on function parameters, which are enforced at runtime. This has been supported for classes and interfaces since PHP 5.0, for arrays since PHP 5.1, for "callables" since PHP 5.4, and for scalar (integer, float, string and boolean) types since PHP 7.0. PHP 7.0 also has type declarations for function return types, expressed by placing the type name after the list of parameters, preceded by a colon. For example, the getAdder function from the earlier example could be annotated with types like so:

function getAdder(int $x): Closure
{
    return fn(int $y): int => $x + $y;
}

$adder = getAdder(8);
echo $adder(2); // prints "10"
echo $adder(null); // throws an exception because an incorrect type was passed
$adder = getAdder([]); // would also throw an exception

By default, scalar type declarations follow weak typing principles. So, for example, if a parameter's type is int, PHP would allow not only integers, but also convertible numeric strings, floats or Booleans to be passed to that function, and would convert them. However, PHP 7 has a "strict typing" mode which, when used, disallows such conversions for function calls and returns within a file.
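A minimal sketch of the difference, assuming a file that opts in to strict typing (the function half() is hypothetical):

<?php
declare(strict_types=1); // opt in to strict typing for calls made in this file

function half(int $n): float
{
    return $n / 2; // an int result is still allowed to widen to float
}

echo half(8); // prints "4"
echo half('8'); // TypeError here; without strict_types, '8' would be coerced to 8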
PHP objects Basic object-oriented programming functionality was added in PHP 3 and improved in PHP 4. This allowed PHP to gain further abstraction, making creative tasks easier for programmers using the language. Object handling was completely rewritten for PHP 5, expanding the feature set and enhancing performance. In previous versions of PHP, objects were handled like value types. The drawback of this method was that code had to make heavy use of PHP's "reference" variables if it wanted to modify an object it was passed, rather than creating a copy of it. In the new approach, objects are referenced by handle and not by value. PHP 5 introduced private and protected member variables and methods, along with abstract classes, final classes, abstract methods, and final methods. It also introduced a standard way of declaring constructors and destructors, similar to that of other object-oriented languages such as C++, and a standard exception-handling model. Furthermore, PHP 5 added interfaces and allowed for multiple interfaces to be implemented. There are special interfaces that allow objects to interact with the runtime system. Objects implementing ArrayAccess can be used with array syntax, and objects implementing Iterator or IteratorAggregate can be used with the foreach language construct. There is no virtual table feature in the engine, so static variables are bound with a name instead of a reference at compile time. If the developer creates a copy of an object using the reserved word clone, the Zend engine will check whether a __clone() method has been defined. If not, it will call a default __clone() which will copy the object's properties. If a __clone() method is defined, then it will be responsible for setting the necessary properties in the created object. For convenience, the engine will supply a function that imports the properties of the source object, so the programmer can start with a by-value replica of the source object and only override properties that need to be changed. The visibility of PHP properties and methods is defined using the keywords public, private, and protected. The default is public, if only var is used; var is a synonym for public. Items declared public can be accessed everywhere. protected limits access to inherited classes (and to the class that defines the item). private limits visibility only to the class that defines the item. Objects of the same type have access to each other's private and protected members even though they are not the same instance. Example The following is a basic example of object-oriented programming in PHP 8:

<?php

abstract class User
{
    protected string $name;

    public function __construct(string $name)
    {
        // make first letter uppercase and the rest lowercase
        $this->name = ucfirst(strtolower($name));
    }

    public function greet(): string
    {
        return "Hello, my name is " . $this->name;
    }

    abstract public function job(): string;
}

class Student extends User
{
    public function __construct(string $name, private string $course)
    {
        parent::__construct($name);
    }

    public function job(): string
    {
        return "I learn " . $this->course;
    }
}
class Teacher extends User
{
    public function __construct(string $name, private array $teachingCourses)
    {
        parent::__construct($name);
    }

    public function job(): string
    {
        return "I teach " . implode(", ", $this->teachingCourses);
    }
}

$students = [
    new Student("Alice", "Computer Science"),
    new Student("Bob", "Computer Science"),
    new Student("Charlie", "Business Studies"),
];

$teachers = [
    new Teacher("Dan", ["Computer Science", "Information Security"]),
    new Teacher("Erin", ["Computer Science", "3D Graphics Programming"]),
    new Teacher("Frankie", ["Online Marketing", "Business Studies", "E-commerce"]),
];

foreach ([$students, $teachers] as $users) {
    echo $users[0]::class . "s:\n";
    array_walk($users, function (User $user) {
        echo "{$user->greet()}, {$user->job()}\n";
    });
}

This program outputs the following:

Students:
Hello, my name is Alice, I learn Computer Science
Hello, my name is Bob, I learn Computer Science
Hello, my name is Charlie, I learn Business Studies
Teachers:
Hello, my name is Dan, I teach Computer Science, Information Security
Hello, my name is Erin, I teach Computer Science, 3D Graphics Programming
Hello, my name is Frankie, I teach Online Marketing, Business Studies, E-commerce

Implementations The only complete PHP implementation is the original, known simply as PHP. It is the most widely used and is powered by the Zend Engine. To disambiguate it from other implementations, it is sometimes unofficially called "Zend PHP". The Zend Engine compiles PHP source code on-the-fly into an internal format that it can execute; thus it works as an interpreter. It is also the "reference implementation" of PHP, as PHP has no formal specification, and so the semantics of Zend PHP define the semantics of PHP. Due to the complex and nuanced semantics of PHP, defined by how Zend works, it is difficult for competing implementations to offer complete compatibility. PHP's single-request-per-script-execution model, and the fact that the Zend Engine is an interpreter, lead to inefficiency; as a result, various products have been developed to help improve PHP performance. In order to speed up execution time and avoid compiling the PHP source code every time the web page is accessed, PHP scripts can also be deployed in the PHP engine's internal format by using an opcode cache, which works by caching the compiled form of a PHP script (opcodes) in shared memory to avoid the overhead of parsing and compiling the code every time the script runs. An opcode cache, Zend Opcache, has been built into PHP since version 5.5. Another example of a widely used opcode cache is the Alternative PHP Cache (APC), which is available as a PECL extension. While Zend PHP is still the most popular implementation, several other implementations have been developed. Some of these are compilers or support JIT compilation, and hence offer performance benefits over Zend PHP, at the expense of lacking full PHP compatibility. Alternative implementations include the following: HHVM (HipHop Virtual Machine) – developed at Facebook and available as open source, it converts PHP code into a high-level bytecode (commonly known as an intermediate language), which is then translated into x86-64 machine code dynamically at runtime by a just-in-time (JIT) compiler, resulting in up to 6× performance improvements. However, since version 7.2, Zend has outperformed HHVM, and HHVM 3.24 is the last version to officially support PHP. HipHop – developed at Facebook and available as open source, it transforms PHP scripts into C++ code and then compiles the resulting code, reducing the server load by up to 50%. In early 2013, Facebook deprecated it in favour of HHVM for multiple reasons, including deployment difficulties and lack of support for the whole PHP language, including the create_function() and eval() constructs.
Parrot – a virtual machine designed to run dynamic languages efficiently; the cross-translator Pipp transforms the PHP source code into the Parrot intermediate representation, which is then translated into Parrot's bytecode and executed by the virtual machine. PeachPie – a second-generation compiler to .NET Common Intermediate Language (CIL) bytecode, built on the Roslyn platform; successor of Phalanger, sharing several architectural components. Phalanger – compiles PHP into .NET Common Intermediate Language bytecode; predecessor of PeachPie. Quercus – compiles PHP into Java bytecode. Licensing PHP is free software released under the PHP License, which stipulates that products derived from PHP may not be called "PHP", nor may "PHP" appear in their name, without prior written permission. This restriction on the use of "PHP" makes the PHP License incompatible with the GNU General Public License (GPL), while the Zend License is incompatible due to an advertising clause similar to that of the original BSD license. Development and community PHP includes various free and open-source libraries in its source distribution or uses them in resulting PHP binary builds. PHP is fundamentally an Internet-aware system with built-in modules for accessing File Transfer Protocol (FTP) servers and many database servers, including PostgreSQL, MySQL, Microsoft SQL Server and SQLite (which is an embedded database), LDAP servers, and others. Numerous functions familiar to C programmers, such as those in the stdio family, are available in standard PHP builds. PHP allows developers to write extensions in C to add functionality to the PHP language. PHP extensions can be compiled statically into PHP or loaded dynamically at runtime. Numerous extensions have been written to add support for the Windows API, process management on Unix-like operating systems, multibyte strings (Unicode), cURL, and several popular compression formats. Other PHP features made available through extensions include integration with Internet Relay Chat (IRC), dynamic generation of images and Adobe Flash content, PHP Data Objects (PDO) as an abstraction layer used for accessing databases, and even speech synthesis. Some of the language's core functions, such as those dealing with strings and arrays, are also implemented as extensions. The PHP Extension Community Library (PECL) project is a repository for extensions to the PHP language. Some other projects, such as Zephir, provide the ability for PHP extensions to be created in a high-level language and compiled into native PHP extensions. Such an approach, instead of writing PHP extensions directly in C, simplifies the development of extensions and reduces the time required for programming and testing. By December 2018, the PHP Group consisted of ten people: Thies C. Arntzen, Stig Bakken, Shane Caraveo, Andi Gutmans, Rasmus Lerdorf, Sam Ruby, Sascha Schumann, Zeev Suraski, Jim Winstead, and Andrei Zmievski. Zend Technologies provides a PHP certification exam, based on PHP 8 (and previously on PHP 7 and 5.5), for programmers to become certified PHP developers. The PHP Foundation On 26 November 2021, the JetBrains blog announced the creation of The PHP Foundation, which sponsors the design and development of PHP. The foundation hires "Core Developers" to work on the PHP language's core repository. Roman Pronskiy, a member of the foundation's board, said that they aim to pay "market salaries" to developers. The response to the foundation has mostly been positive, with the foundation being praised for better supporting the language and helping to stop the decline in the language's popularity.
However, it has also been criticised for adding breaking changes to minor versions of PHP: in PHP 8.2, for example, initialising members of a class outside the original class scope causes deprecation errors, a change that affected a number of open-source projects, including WordPress. Germany's Sovereign Tech Fund provided more than 200,000 euros to support the PHP Foundation. Installation and configuration There are two primary ways of adding support for PHP to a web server – as a native web server module, or as a CGI executable. PHP has a direct module interface called the server application programming interface (SAPI), which is supported by many web servers, including Apache HTTP Server, Microsoft IIS and iPlanet Web Server. Some other web servers, such as OmniHTTPd, support the Internet Server Application Programming Interface (ISAPI), which is Microsoft's web server module interface. If PHP has no module support for a web server, it can always be used as a Common Gateway Interface (CGI) or FastCGI processor; in that case, the web server is configured to use PHP's CGI executable to process all requests to PHP files. PHP-FPM (FastCGI Process Manager) is an alternative FastCGI implementation for PHP, bundled with the official PHP distribution since version 5.3.3. When compared to the older FastCGI implementation, it contains some additional features, mostly useful for heavily loaded web servers. When using PHP for command-line scripting, a PHP command-line interface (CLI) executable is needed. PHP has supported a CLI server application programming interface (SAPI) since PHP 4.3.0. The main focus of this SAPI is developing shell applications using PHP. There are quite a few differences between the CLI SAPI and other SAPIs, although they do share many of the same behaviours. PHP has a direct module interface called SAPI for different web servers; in the case of PHP 5 and Apache 2.0 on Windows, it is provided in the form of a DLL file, which is a module that, among other functions, provides an interface between PHP and the web server, implemented in a form that the server understands. This form is what is known as a SAPI. There are different kinds of SAPIs for various web server extensions. For example, in addition to those listed above, other SAPIs for the PHP language include the Common Gateway Interface and the command-line interface. PHP can also be used for writing desktop graphical user interface (GUI) applications, for example with the discontinued PHP-GTK extension. PHP-GTK is not included in the official PHP distribution, and as an extension, it can be used only with PHP versions 5.1.0 and newer. The most common way of installing PHP-GTK is by compiling it from the source code. When PHP is installed and used in cloud environments, software development kits (SDKs) are provided for using cloud-specific features. For example, Amazon Web Services provides the AWS SDK for PHP, and Microsoft Azure can be used with the Windows Azure SDK for PHP. Numerous configuration options are supported, affecting both core PHP features and extensions. The configuration file php.ini is searched for in different locations, depending on the way PHP is used. The configuration file is split into various sections, and some of the configuration options can also be set within the web server configuration.
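As a small, hedged illustration of interacting with these configuration options from a script (the directive names are real; the values shown depend entirely on the local php.ini):

<?php
// Read a directive's current value and the path of the loaded php.ini.
echo ini_get('memory_limit'), "\n"; // e.g. "128M", depending on configuration
echo php_ini_loaded_file(), "\n"; // e.g. "/etc/php/8.3/cli/php.ini"

// Some directives can also be changed at runtime for the current script.
ini_set('display_errors', '0');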
Use PHP is a general-purpose scripting language that is especially suited to server-side web development, in which case PHP generally runs on a web server. Any PHP code in a requested file is executed by the PHP runtime, usually to create dynamic web page content or dynamic images used on websites or elsewhere. It can also be used for command-line scripting and client-side graphical user interface (GUI) applications. PHP can be deployed on most web servers, many operating systems and platforms, and can be used with many relational database management systems (RDBMS). Most web hosting providers support PHP for use by their clients. It is available free of charge, and the PHP Group provides the complete source code for users to build, customize and extend for their own use. Originally designed to create dynamic web pages, PHP now focuses mainly on server-side scripting, and it is similar to other server-side scripting languages that provide dynamic content from a web server to a client, such as Python, Microsoft's ASP.NET, Sun Microsystems' JavaServer Pages, and mod_perl. PHP has also attracted the development of many software frameworks that provide building blocks and a design structure to promote rapid application development (RAD). Some of these include PRADO, CakePHP, Symfony, CodeIgniter, Laravel, Yii Framework, Phalcon and Laminas, offering features similar to other web frameworks. The LAMP architecture has become popular in the web industry as a way of deploying web applications. PHP is commonly used as the P in this bundle alongside Linux, Apache and MySQL, although the P may also refer to Python, Perl, or some mix of the three. Similar packages, WAMP and MAMP, are also available for Windows and macOS, with the first letter standing for the respective operating system. Although both PHP and Apache are provided as part of the macOS base install, users of these packages seek a simpler installation mechanism that can be more easily kept up to date. For specific and more advanced usage scenarios, PHP offers a well-defined and documented way of writing custom extensions in C or C++. Besides extending the language itself in the form of additional libraries, extensions provide a way of improving execution speed where it is critical and where there is room for improvement by using a true compiled language. PHP also offers well-defined ways of embedding itself into other software projects. In this way, PHP can easily be used as an internal scripting language for another project, also providing tight interfacing with the project's specific internal data structures. PHP has received mixed reviews due to its lack of support for multithreading at the core language level, though using threads is made possible by the "pthreads" PECL extension. A command-line interface, php-cli, and two ActiveX Windows Script Host scripting engines for PHP have been produced. Popularity and usage statistics PHP is used for web content management systems including MediaWiki, WordPress, Joomla, Drupal, Moodle, eZ Publish, eZ Platform, and SilverStripe. As of 2013, PHP was used in more than 240 million websites (39% of those sampled) and was installed on 2.1 million web servers. As of January 2025 (two months after PHP 8.4's release), PHP is used as the server-side programming language on 75.0% of websites where the language could be determined; PHP 7 is the most used version of the language, with 47.1% of websites using PHP being on that version, while 40.6% use PHP 8, 12.2% use PHP 5 and 0.1% use PHP 4.
Security In 2019, 11% of all vulnerabilities listed by the National Vulnerability Database were linked to PHP; historically, about 30% of all vulnerabilities listed since 1996 in this database are linked to PHP. Technical security flaws of the language itself or of its core libraries are not frequent (22 in 2009, about 1% of the total, although PHP applies to about 20% of programs listed). Recognizing that programmers make mistakes, some languages include taint checking to automatically detect the lack of input validation which induces many issues. Such a feature has been proposed for PHP in the past, but the proposals have either been rejected or abandoned. Third-party projects such as Suhosin and Snuffleupagus aim to remove or change dangerous parts of the language. Historically, old versions of PHP had some configuration parameters and default values for such runtime settings that made some PHP applications prone to security issues. Among these, the magic_quotes_gpc and register_globals configuration directives were the best known; the latter made any URL parameters become PHP variables, opening a path for serious security vulnerabilities by allowing an attacker to set the value of any uninitialized global variable and interfere with the execution of a PHP script. Support for "magic quotes" and "register globals" settings was deprecated in PHP 5.3.0 and removed in PHP 5.4.0. Another example of a potential runtime-settings vulnerability comes from failing to disable PHP execution (for example by using the engine configuration directive) for the directory where uploaded files are stored; enabling it can result in the execution of malicious code embedded within the uploaded files. The best practice is either to locate the image directory outside of the document root available to the web server and serve it via an intermediary script, or to disable PHP execution for the directory which stores the uploaded files. Also, enabling the dynamic loading of PHP extensions (via the enable_dl configuration directive) in a shared web hosting environment can lead to security issues. Implied type conversions that result in different values being treated as equal, sometimes against the programmer's intent, can lead to security issues. For example, the result of the comparison '0e1234' == '0e5678' is true, because strings that are parsable as numbers are converted to numbers; in this case, both compared values are treated as scientific notation with the value zero (0×10^1234 and 0×10^5678). Errors like this resulted in authentication vulnerabilities in Simple Machines Forum, Typo3 and phpBB when MD5 password hashes were compared. The recommended way is to use hash_equals() (for timing attack safety), strcmp or the identity operator (===), as '0e1234' === '0e5678' results in false.
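A minimal sketch of the pitfall and the recommended alternatives (the hash inputs are arbitrary examples):

<?php
var_dump('0e1234' == '0e5678'); // bool(true): both parse as the float 0.0
var_dump('0e1234' === '0e5678'); // bool(false): identity compares the strings

// For secrets such as password hashes, use a constant-time comparison.
$expected = hash('sha256', 'stored-secret');
$provided = hash('sha256', 'user-supplied-guess');
var_dump(hash_equals($expected, $provided)); // bool(false), timing-attack safe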
In a 2013 analysis of over 170,000 website defacements, published by Zone-H, the most frequently used technique (53% of cases) was the exploitation of file-inclusion vulnerability, mostly related to insecure usage of the PHP language constructs include, require, and allow_url_fopen. Cryptographic Security PHP includes the rand() and mt_rand() functions, which use a pseudorandom number generator and are not cryptographically secure. The random_int() function, which uses a cryptographically secure source of randomness provided by the system, has been included since PHP 7.0. There are two attacks that can be performed over PHP entropy sources: the "seed attack" and the "state recovery attack". As of 2012, a $250 GPU could perform up to 2³⁰ MD5 calculations per second, while a $750 GPU could perform four times as many calculations in the same time. In combination with a "birthday attack", this can lead to serious security vulnerabilities. Long-Term Support The PHP development team provides official bug fixes for two years following the release of each minor version, followed by another two years during which only security fixes are released. After this, the release is considered end-of-life and is no longer officially supported. Extended long-term support beyond this is available from commercial providers, such as Zend and others.
Proton decay
In particle physics, proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67×10³⁴ years. According to the Standard Model, the proton, a type of baryon, is stable because baryon number (quark number) is conserved (under normal circumstances; see chiral anomaly for an exception). Therefore, protons will not decay into other particles on their own, because they are the lightest (and therefore least energetic) baryon. Positron emission and electron capture—forms of radioactive decay in which a proton becomes a neutron—are not proton decay, since the proton interacts with other particles within the atom. Some beyond-the-Standard-Model grand unified theories (GUTs) explicitly break the baryon number symmetry, allowing protons to decay via the Higgs particle, magnetic monopoles, or new X bosons, with a half-life of 10³¹ to 10³⁶ years. For comparison, the universe is roughly 10¹⁰ years old. To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles) have failed. Quantum tunnelling may be one of the mechanisms of proton decay. Quantum gravity (via virtual black holes and Hawking radiation) may also provide a venue of proton decay at magnitudes or lifetimes well beyond the GUT-scale decay range above, as may extra dimensions in supersymmetry. There are theoretical methods of baryon violation other than proton decay, including interactions with changes of baryon and/or lepton number other than 1 (as required in proton decay). These include B and/or L violations of 2, 3, or other numbers, or B − L violation. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert protons into antileptons, or vice versa (a key factor in leptogenesis and non-GUT baryogenesis). Baryogenesis One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10¹⁰ particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons. Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons or massive Higgs bosons.
The rate at which these events occur is governed largely by the mass of the intermediate X or Higgs particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay. Experimental evidence Proton decay is one of the key predictions of the various grand unified theories (GUTs) proposed in the 1970s, another major one being the existence of magnetic monopoles. Both concepts have been the focus of major experimental physics efforts since the early 1980s. To date, all attempts to observe these events have failed; however, these experiments have been able to establish lower bounds on the half-life of the proton. Currently, the most precise results come from the Super-Kamiokande water Cherenkov radiation detector in Japan: a lower bound on the proton's half-life via positron decay and, similarly, via antimuon decay, close to a supersymmetry (SUSY) prediction of 10³⁴–10³⁶ years. An upgraded version, Hyper-Kamiokande, will probably have a sensitivity 5–10 times better than Super-Kamiokande. Theoretical motivation Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. According to such theories, the proton has a half-life of roughly 10³¹ to 10³⁶ years and decays into a positron and a neutral pion that itself immediately decays into two gamma-ray photons. Since a positron is an antilepton, this decay preserves B − L, which is conserved in most GUTs. Additional decay modes are available (e.g., decay to an antimuon and a neutral pion), both directly and when catalyzed via interaction with GUT-predicted magnetic monopoles. Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale, such as Hyper-Kamiokande. Early grand unification theories (GUTs) such as the Georgi–Glashow model, which were the first consistent theories to suggest proton decay, postulated that the proton's half-life would be at least 10³¹ years. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10³² years. Many books from that period refer to this figure for the possible decay time for baryonic matter. More recent findings have pushed the minimum proton half-life to at least 10³⁴–10³⁵ years, ruling out the simpler GUTs (including minimal SU(5) / Georgi–Glashow) and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable) is calculated at 6×10³⁹ years, a bound applicable to SUSY models, with a maximum for (minimal) non-SUSY GUTs at 1.4×10³⁶ years. Although the phenomenon is referred to as "proton decay", the effect would also be seen in neutrons bound inside atomic nuclei. Free neutrons—those not inside an atomic nucleus—are already known to decay into protons (and an electron and an antineutrino) in a process called beta decay. Free neutrons have a half-life of about 10 minutes due to the weak interaction. Neutrons bound inside a nucleus have an immensely longer half-life – apparently as great as that of the proton. Projected proton lifetimes The lifetime of the proton in vanilla SU(5) can be naively estimated as τ_p ∼ M_X⁴/(α_GUT² m_p⁵).
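A worked version of this standard order-of-magnitude estimate, under the usual illustrative assumptions ($M_X \sim 10^{15}\,\mathrm{GeV}$, $\alpha_{\mathrm{GUT}} \sim 1/40$, $m_p \approx 0.94\,\mathrm{GeV}$; the numbers are representative, not a precise prediction):

\[
\tau_p \;\sim\; \frac{M_X^4}{\alpha_{\mathrm{GUT}}^2\, m_p^5}
\;\approx\; \frac{(10^{15}\,\mathrm{GeV})^4}{(1/40)^2\,(0.94\,\mathrm{GeV})^5}
\;\approx\; 2\times 10^{63}\,\mathrm{GeV}^{-1}
\;\approx\; 10^{31\text{--}32}\ \text{years},
\]

using $\hbar \approx 6.6\times 10^{-25}\,\mathrm{GeV\,s}$ to convert the inverse energy into a time. This is exactly the scale that the early Super-Kamiokande-era bounds began to exclude for minimal SU(5).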
Supersymmetric GUTs with reunification scales around 2×10¹⁶ GeV yield a lifetime of around 10³⁴ years, roughly the current experimental lower bound. Decay operators Dimension-6 proton decay operators The dimension-6 proton decay operators are four-fermion operators suppressed by 1/Λ², where Λ is the cutoff scale for the Standard Model. All of these operators violate both baryon number (B) and lepton number (L) conservation, but not the combination B − L. In GUT models, the exchange of an X or Y boson with mass M_X can give rise to these operators suppressed by 1/M_X². The exchange of a triplet Higgs can give rise to all of the operators, suppressed by the inverse square of the triplet Higgs mass. See doublet–triplet splitting problem. Dimension-5 proton decay operators In supersymmetric extensions (such as the MSSM), we can also have dimension-5 operators involving two fermions and two sfermions, caused by the exchange of a tripletino of mass M. The sfermions will then exchange a gaugino, Higgsino or gravitino, leaving two fermions. The overall Feynman diagram has a loop (and other complications due to strong-interaction physics). The resulting decay rate is suppressed by powers of both M and the mass scale of the superpartners. Dimension-4 proton decay operators In the absence of matter parity, supersymmetric extensions of the Standard Model can give rise to proton decay suppressed only by the inverse square of the sdown-quark mass. This is due to a pair of dimension-4 operators, one violating baryon number and one violating lepton number. The resulting decay rate is far too fast unless the couplings are very small.
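A hedged sketch of the dimensional analysis behind these suppressions (loop factors and couplings are omitted): an operator of mass dimension $d$ enters the Lagrangian with a coefficient $1/\Lambda^{d-4}$, and since the only other scale in the decay is the proton mass, the rate must scale as

\[
\Gamma \;\sim\; m_p \left(\frac{m_p}{\Lambda}\right)^{2(d-4)} .
\]

For the dimension-6 operators this gives $\Gamma \sim m_p^5/\Lambda^4$, while a dimension-5 operator dressed by superpartner exchange yields an effective rate of order $m_p^5/(\Lambda\, M_{\mathrm{SUSY}})^2$, which is why dimension-5 operators are generically the more dangerous ones in supersymmetric models.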
Flatworm
Platyhelminthes (from the Greek πλατύ, platy, meaning "flat" and ἕλμινς (root: ἑλμινθ-), helminth-, meaning "worm") is a phylum of relatively simple bilaterian, unsegmented, soft-bodied invertebrates commonly called flatworms or flat worms. Being acoelomates (having no body cavity), and having no specialised circulatory and respiratory organs, they are restricted to flattened shapes that allow oxygen and nutrients to pass through their bodies by diffusion. The digestive cavity has only one opening for both ingestion (intake of nutrients) and egestion (removal of undigested wastes); as a result, food cannot be processed continuously. In traditional medical texts, Platyhelminthes are divided into Turbellaria, which are mostly non-parasitic animals such as planarians, and three entirely parasitic groups: Cestoda, Trematoda and Monogenea; however, since the turbellarians have been shown not to be monophyletic, this classification is now deprecated. Free-living flatworms are mostly predators, and live in water or in shaded, humid terrestrial environments, such as leaf litter. Cestodes (tapeworms) and trematodes (flukes) have complex life-cycles, with mature stages that live as parasites in the digestive systems of fish or land vertebrates, and intermediate stages that infest secondary hosts. The eggs of trematodes are excreted from their main hosts, whereas adult cestodes generate vast numbers of hermaphroditic, segment-like proglottids that detach when mature, are excreted, and then release eggs. Unlike the other parasitic groups, the monogeneans are external parasites infesting aquatic animals, and their larvae metamorphose into the adult form after attaching to a suitable host. Because they do not have internal body cavities, Platyhelminthes were regarded as a primitive stage in the evolution of bilaterians (animals with bilateral symmetry and hence with distinct front and rear ends). However, analyses since the mid-1980s have separated out one subgroup, the Acoelomorpha, as basal bilaterians – closer to the original bilaterians than to any other modern groups. The remaining Platyhelminthes form a monophyletic group, one that contains all and only descendants of a common ancestor that is itself a member of the group. The redefined Platyhelminthes is part of the Lophotrochozoa, one of the three main groups of more complex bilaterians. These analyses concluded that the redefined Platyhelminthes, excluding Acoelomorpha, consists of two monophyletic subgroups, Catenulida and Rhabditophora, with Cestoda, Trematoda and Monogenea forming a monophyletic subgroup within one branch of the Rhabditophora. Hence, the traditional platyhelminth subgroup "Turbellaria" is now regarded as paraphyletic, since it excludes the wholly parasitic groups, although these are descended from one group of "turbellarians". Two planarian species have been used successfully in the Philippines, Indonesia, Hawaii, New Guinea, and Guam to control populations of the imported giant African snail Achatina fulica, which was displacing native snails. However, these planarians are themselves a serious threat to native snails and should not be used for biological control. In northwest Europe, there are concerns about the spread of the New Zealand planarian Arthurdendyus triangulatus, which preys on earthworms.
Description Distinguishing features Platyhelminthes are bilaterally symmetrical animals: their left and right sides are mirror images of each other; this also implies they have distinct top and bottom surfaces and distinct head and tail ends. Like other bilaterians, they have three main cell layers (endoderm, mesoderm, and ectoderm), while the radially symmetrical cnidarians and ctenophores (comb jellies) have only two cell layers. Beyond that, they are "defined more by what they do not have than by any particular series of specializations." Unlike most other bilaterians, Platyhelminthes have no internal body cavity, so they are described as acoelomates, although the absence of a coelom also occurs in other bilaterians: gnathostomulids, gastrotrichs, xenacoelomorphs, cycliophorans, entoproctans and the parasitic mesozoans. They also lack specialized circulatory and respiratory organs; both of these absences are defining features when classifying a flatworm's anatomy. Their bodies are soft and unsegmented. Features common to all subgroups The lack of circulatory and respiratory organs limits platyhelminths to sizes and shapes that enable oxygen to reach and carbon dioxide to leave all parts of their bodies by simple diffusion. Hence, many are microscopic, and the large species have flat ribbon-like or leaf-like shapes. Because there is no circulatory system that can transport nutrients around the body, the guts of large species have many branches, allowing nutrients to diffuse to all parts of the body. Respiration through the whole surface of the body makes them vulnerable to fluid loss, and restricts them to environments where dehydration is unlikely: sea and freshwater, moist terrestrial environments such as leaf litter or between grains of soil, and life as parasites within other animals. The space between the skin and gut is filled with mesenchyme, also known as parenchyma, a connective tissue made of cells and reinforced by collagen fibers that act as a type of skeleton, providing attachment points for muscles. The mesenchyme contains all the internal organs and allows the passage of oxygen, nutrients and waste products. It consists of two main types of cell: fixed cells, some of which have fluid-filled vacuoles, and stem cells, which can transform into any other type of cell and are used in regenerating tissues after injury or asexual reproduction. Most platyhelminths have no anus and regurgitate undigested material through the mouth. The genus Paracatenula, whose members are tiny flatworms living in symbiosis with bacteria, even lacks a mouth and gut. However, some long species have an anus, and some with complex, branched guts have more than one anus, since excretion only through the mouth would be difficult for them. The gut is lined with a single layer of endodermal cells that absorb and digest food. Some species break up and soften food first by secreting enzymes in the gut or pharynx (throat). All animals need to keep the concentration of dissolved substances in their body fluids at a fairly constant level. Internal parasites and free-living marine animals live in environments with high concentrations of dissolved material, and generally let their tissues have the same level of concentration as the environment, while freshwater animals need to prevent their body fluids from becoming too dilute. Despite this difference in environments, most platyhelminths use the same system to control the concentration of their body fluids.
Flame cells, so called because the beating of their flagella looks like a flickering candle flame, extract from the mesenchyme water that contains wastes and some reusable material, and drive it into networks of tube cells, which are lined with flagella and microvilli. The tube cells' flagella drive the water towards exits called nephridiopores, while their microvilli reabsorb reusable materials and as much water as is needed to keep the body fluids at the right concentration. These combinations of flame cells and tube cells are called protonephridia. In all platyhelminths, the nervous system is concentrated at the head end; more complex platyhelminths have rings of ganglia in the head and main nerve trunks running along their bodies. Major subgroups Early classification divided the flatworms into four groups: Turbellaria, Trematoda, Monogenea and Cestoda. This classification had long been recognized to be artificial, and in 1985 Ehlers proposed a phylogenetically more correct classification, in which the massively polyphyletic "Turbellaria" was split into a dozen orders, and Trematoda, Monogenea and Cestoda were joined in the new order Neodermata. However, the classification presented here is the early, traditional classification, as it is still the one used everywhere except in scientific articles. Turbellaria These have about 4,500 species, are mostly free-living, and range from about 1 mm (0.04 in) to 600 mm (24 in) in length. Most are predators or scavengers, and terrestrial species are mostly nocturnal and live in shaded, humid locations, such as leaf litter or rotting wood. However, some are symbiotes of other animals, such as crustaceans, and some are parasites. Free-living turbellarians are mostly black, brown or gray, but some larger ones are brightly colored. The Acoela and Nemertodermatida were traditionally regarded as turbellarians, but are now regarded as members of a separate phylum, the Acoelomorpha, or as two separate phyla. Xenoturbella, a genus of very simple animals, has also been reclassified as a separate phylum. Some turbellarians have a simple pharynx lined with cilia and generally feed by using cilia to sweep food particles and small prey into their mouths, which are usually in the middle of their undersides. Most other turbellarians have a pharynx that is eversible (can be extended by being turned inside-out), and the mouths of different species can be anywhere along the underside. The freshwater species Microstomum caudatum can open its mouth almost as wide as its body is long, to swallow prey about as large as itself. Predatory species in the suborder Kalyptorhynchia often have a muscular pharynx equipped with hooks or teeth used for seizing prey. Most turbellarians have pigment-cup ocelli ("little eyes"); one pair in most species, but two or even three pairs in others. A few large species have many eyes in clusters over the brain, mounted on tentacles, or spaced uniformly around the edge of the body. The ocelli can only distinguish the direction from which light is coming, to enable the animals to avoid it. A few groups have statocysts – fluid-filled chambers containing a small, solid particle or, in a few groups, two. These statocysts are thought to function as balance and acceleration sensors, as they perform the same role in cnidarian medusae and in ctenophores. However, turbellarian statocysts have no sensory cilia, so the way they sense the movements and positions of solid particles is unknown.
On the other hand, most have ciliated touch-sensor cells scattered over their bodies, especially on tentacles and around the edges. Specialized cells in pits or grooves on the head are most likely smell sensors. Planarians, a subgroup of seriates, are famous for their ability to regenerate if divided by cuts across their bodies. Experiments show that (in fragments that do not already have a head) a new head grows most quickly on those fragments which were originally located closest to the original head. This suggests the growth of a head is controlled by a chemical whose concentration diminishes throughout the organism, from head to tail. Many turbellarians clone themselves by transverse or longitudinal division, whilst others reproduce by budding. The vast majority of turbellarians are hermaphrodites (they have both female and male reproductive cells), and they fertilize eggs internally by copulation. Some of the larger aquatic species mate by penis fencing – a duel in which each tries to impregnate the other, and the loser adopts the female role of developing the eggs. In most species, "miniature adults" emerge when the eggs hatch, but a few large species produce plankton-like larvae. Trematoda These parasites' name refers to the cavities in their holdfasts (Greek τρῆμα, hole), which resemble suckers and anchor them within their hosts. The skin of all species is a syncytium, a layer of cells that shares a single external membrane. Trematodes are divided into two groups, Digenea and Aspidogastrea (also known as Aspidobothrea). Digenea These are often called flukes, as most have flat rhomboid shapes like that of a flounder (Old English flōc). There are about 11,000 species, more than all other platyhelminthes combined, and second only to roundworms among parasites on metazoans. Adults usually have two holdfasts: a ring around the mouth and a larger sucker midway along what would be the underside in a free-living flatworm. Although the name "Digenea" means "two generations", most have very complex life cycles with up to seven stages, depending on what combinations of environments the early stages encounter – the most important factor being whether the eggs are deposited on land or in water. The intermediate stages transfer the parasites from one host to another. The definitive host, in which adults develop, is a land vertebrate; the earliest host of juvenile stages is usually a snail that may live on land or in water, whilst in many cases a fish or arthropod is the second host. The intestinal fluke Metagonimus, for example, hatches in the intestine of a snail, then moves to a fish, where it penetrates the body and encysts in the flesh; it then migrates to the small intestine of a land animal that eats the fish raw, finally generating eggs that are excreted and ingested by snails, thereby completing the cycle. A similar life cycle occurs with Opisthorchis viverrini, which is found in South East Asia and can infect the liver of humans, causing cholangiocarcinoma (bile duct cancer). Schistosomes, which cause the devastating tropical disease bilharzia, also belong to this group. Adults range between about 0.2 mm (0.008 in) and 6 mm (0.2 in) in length. Individual adult digeneans are of a single sex, and in some species slender females live in enclosed grooves that run along the bodies of the males, partially emerging to lay eggs.
In all species the adults have complex reproductive systems, capable of producing between 10,000 and 100,000 times as many eggs as a free-living flatworm. In addition, the intermediate stages that live in snails reproduce asexually. Adults of different species infest different parts of the definitive host – for example the intestine, lungs, large blood vessels, or liver. The adults use a relatively large, muscular pharynx to ingest cells, cell fragments, mucus, body fluids or blood. In both the adult and snail-inhabiting stages, the external syncytium absorbs dissolved nutrients from the host. Adult digeneans can live without oxygen for long periods. Aspidogastrea Members of this small group have either a single divided sucker or a row of suckers that cover the underside. They infest the guts of bony or cartilaginous fish and turtles, or the body cavities of marine and freshwater bivalves and gastropods. Their eggs produce ciliated swimming larvae, and the life cycle has one or two hosts. Cercomeromorpha Cercomeromorpha contains parasites that attach themselves to their hosts by means of disks that bear crescent-shaped hooks. They are divided into the Monogenea and Cestoda groupings. Monogenea Of about 1,100 species of monogeneans, most are external parasites that require particular host species – mainly fish, but in some cases amphibians or aquatic reptiles. However, a few are internal parasites. Adult monogeneans have large attachment organs at the rear, known as haptors (from Greek ἅπτειν, haptein, "to catch"), which have suckers, clamps, and hooks. They often have flattened bodies. In some species, the pharynx secretes enzymes to digest the host's skin, allowing the parasite to feed on blood and cellular debris. Others graze externally on mucus and flakes of the hosts' skins. The name "Monogenea" is based on the fact that these parasites have only one non-larval generation. Cestoda These are often called tapeworms because of their flat, slender but very long bodies – the name "cestode" is derived from the Latin word cestus, which means "tape". The adults of all 3,400 cestode species are internal parasites. Cestodes have no mouths or guts, and the syncytial skin absorbs nutrients – mainly carbohydrates and amino acids – from the host, and also disguises it chemically to avoid attacks by the host's immune system. Shortage of carbohydrates in the host's diet stunts the growth of parasites and may even kill them. Their metabolisms generally use simple but inefficient chemical processes, compensating for this inefficiency by consuming large amounts of food relative to their physical size. In the majority of species, known as eucestodes ("true tapeworms"), the neck produces a chain of segments called proglottids via a process known as strobilation; as a result, the most mature proglottids are furthest from the scolex. Adults of Taenia saginata, which infests humans, can form proglottid chains over 20 m (66 ft) long, although 4 m (13 ft) is more typical. Each proglottid has both male and female reproductive organs. If the host's gut contains two or more adults of the same cestode species, they generally fertilize each other; however, proglottids of the same worm can fertilize each other and even themselves. When the eggs are fully developed, the proglottids separate and are excreted by the host. The eucestode life cycle is less complex than that of digeneans, but varies depending on the species. For example: Adults of Diphyllobothrium infest fish, and the juveniles use copepod crustaceans as intermediate hosts.
Excreted proglottids release their eggs into the water, where the eggs hatch into ciliated, swimming larvae. If a larva is swallowed by a copepod, it sheds the cilia and the skin becomes a syncytium; the larva then makes its way into the copepod's hemocoel (an internal cavity which is the central part of the circulatory system), where it attaches itself using three small hooks. If the copepod is eaten by a fish, the larva metamorphoses into a small, unsegmented tapeworm, drills through to the gut and grows into an adult. Various species of Taenia infest the guts of humans, cats and dogs. The juveniles use herbivores – such as pigs, cattle and rabbits – as intermediate hosts. Excreted proglottids release eggs that stick to grass leaves and hatch after being swallowed by a herbivore. The larva then makes its way to the herbivore's muscle tissue, where it metamorphoses into an oval worm with a scolex that is kept internally. When the definitive host eats infested raw or undercooked meat from an intermediate host, the worm's scolex pops out and attaches itself to the gut, and the adult tapeworm develops. Members of the smaller group known as Cestodaria have no scolex, do not produce proglottids, and have body shapes similar to those of digeneans. Cestodarians parasitize fish and turtles. Classification and evolutionary relationships The relationships of Platyhelminthes to other Bilateria, and the internal relationships of the Platyhelminthes themselves, have been summarized in phylogenetic trees (not reproduced here); the internal tree is not fully resolved. The oldest confidently identified parasitic flatworm fossils are cestode eggs found in a Permian shark coprolite, but helminth hooks still attached to Devonian acanthodians and placoderms might also represent parasitic flatworms with simple life cycles. The oldest known free-living platyhelminth specimen is a fossil preserved in Eocene-age Baltic amber and placed in the monotypic species Micropalaeosoma balticus, whilst the oldest subfossil specimens are schistosome eggs discovered in ancient Egyptian mummies. The Platyhelminthes have very few synapomorphies – distinguishing features that all Platyhelminthes (but no other animals) exhibit. This makes it difficult to work out their relationships with other groups of animals, as well as the relationships between different groups that are described as members of the Platyhelminthes. The "traditional" view before the 1990s was that Platyhelminthes formed the sister group to all the other bilaterians, which include, for instance, arthropods, molluscs, annelids and chordates. Since then, molecular phylogenetics, which aims to work out evolutionary "family trees" by comparing different organisms' biochemicals such as DNA, RNA and proteins, has radically changed scientists' view of evolutionary relationships between animals. Flatworms are now recognized as secondarily simplified bilaterians. Detailed morphological analyses of anatomical features in the mid-1980s, as well as molecular phylogenetics analyses since 2000 using different sections of DNA, agree that Acoelomorpha, consisting of Acoela (traditionally regarded as very simple "turbellarians") and Nemertodermatida (another small group previously classified as "turbellarians"), are the sister group to all other bilaterians. However, a 2007 study concluded that Acoela and Nemertodermatida were two distinct groups of bilaterians. Xenoturbella, a bilaterian whose only well-defined organ is a statocyst, was originally classified as a "primitive turbellarian".
Later studies suggested it may instead be a deuterostome, but more detailed molecular phylogenetics have led to its classification as sister-group to the Acoelomorpha. The Platyhelminthes excluding Acoelomorpha contain two main groups - Catenulida and Rhabditophora - both of which are generally agreed to be monophyletic (each contains all and only the descendants of an ancestor that is a member of the same group). Early molecular phylogenetics analyses of the Catenulida and Rhabditophora left uncertainties about whether these could be combined in a single monophyletic group; a study in 2008 concluded that they could, and therefore that Platyhelminthes could be redefined as Catenulida plus Rhabditophora, excluding the Acoelomorpha. Other molecular phylogenetics analyses agree that the redefined Platyhelminthes are most closely related to Gastrotricha, and that both are part of a grouping known as Platyzoa. Platyzoa are generally agreed to be at least closely related to the Lophotrochozoa, a superphylum that includes molluscs and annelid worms. The majority view is that Platyzoa are part of Lophotrochozoa, but a significant minority of researchers regard Platyzoa as a sister group of Lophotrochozoa. It has been agreed since 1985 that each of the wholly parasitic platyhelminth groups (Cestoda, Monogenea and Trematoda) is monophyletic, and that together these form a larger monophyletic grouping, the Neodermata, in which the adults of all members have syncytial skins. However, there is debate about whether the Cestoda and Monogenea can be combined as an intermediate monophyletic group, the Cercomeromorpha, within the Neodermata. It is generally agreed that the Neodermata are a sub-group a few levels down in the "family tree" of the Rhabditophora. Hence the traditional sub-phylum "Turbellaria" is paraphyletic, since it does not include the Neodermata although these are descendants of a sub-group of "turbellarians". Evolution An outline of the origins of the parasitic lifestyle has been proposed: epithelial-feeding monopisthocotyleans on fish hosts are basal in the Neodermata, and represent the first shift to parasitism from free-living ancestors. The next evolutionary step was a dietary change from epithelium to blood. The last common ancestor of Digenea + Cestoda was monogenean and most likely sanguinivorous. In several members of the order Rhabdocoela an endosymbiotic relationship with microalgae has evolved. Some species in the same order have also evolved kleptoplasty. The earliest known fossils confidently classified as tapeworms have been dated to the Permian, after being found in coprolites (fossilised faeces) from an elasmobranch. Putative older fossils include a ribbon-shaped, bilaterally symmetrical organism named Rugosusivitta orthogonia from the Early Cambrian of China; brownish bodies on the bedding planes reported from the Late Ordovician (Katian) Vauréal Formation (Canada) by Knaust & Desrochers (2019), tentatively interpreted as turbellarians (though the authors cautioned that they might ultimately turn out to be fossils of acoelomorphs or nemerteans); and circlets of fossil hooks preserved with placoderm and acanthodian fossils from the Devonian of Latvia, at least some of which might represent parasitic monogeneans. Interaction with humans Parasitism Cestodes (tapeworms) and digeneans (flukes) cause diseases in humans and their livestock, whilst monogeneans can cause serious losses of stocks in fish farms.
Schistosomiasis, also known as bilharzia or snail fever, is the second-most devastating parasitic disease in tropical countries, behind malaria. The Carter Center estimated that 200 million people in 74 countries are infected with the disease, and that half the victims live in Africa. The condition has a low mortality rate, but usually presents as a chronic illness that can damage internal organs. It can impair the growth and cognitive development of children, and increases the risk of bladder cancer in adults. The disease is caused by several flukes of the genus Schistosoma, which can bore through human skin; those most at risk use infected bodies of water for recreation or laundry. In 2000, an estimated 45 million people were infected with the beef tapeworm Taenia saginata and 3 million with the pork tapeworm Taenia solium. Infection of the digestive system by adult tapeworms causes abdominal symptoms that, whilst unpleasant, are seldom disabling or life-threatening. However, neurocysticercosis resulting from penetration of T. solium larvae into the central nervous system is the major cause of acquired epilepsy worldwide. In 2000, about 39 million people were infected with trematodes (flukes) that naturally parasitize fish and crustaceans, but can pass to humans who eat raw or lightly cooked seafood. Infection of humans by the broad fish tapeworm Diphyllobothrium latum occasionally causes vitamin B12 deficiency and, in severe cases, megaloblastic anemia. The threat to humans in developed countries is rising as a result of social trends: the increase in organic farming, which uses manure and sewage sludge rather than artificial fertilizers, spreads parasites both directly and via the droppings of seagulls which feed on manure and sludge; the increasing popularity of raw or lightly cooked foods; imports of meat, seafood and salad vegetables from high-risk areas; and, as an underlying cause, reduced awareness of parasites compared with other public health issues such as pollution. In less-developed countries, inadequate sanitation and the use of human feces (night soil) as fertilizer or to enrich fish farm ponds continues to spread parasitic platyhelminths, whilst poorly designed water-supply and irrigation projects have provided additional channels for their spread. People in these countries usually cannot afford the cost of fuel required to cook food thoroughly enough to kill parasites. Controlling parasites that infect humans and livestock has become more difficult, as many species have become resistant to drugs that used to be effective, mainly for killing juveniles in meat. While poorer countries still struggle with unintentional infection, cases have been reported of intentional infection in the US by dieters who are desperate for rapid weight-loss. Pests There is concern in northwest Europe (including the British Isles) regarding the possible proliferation of the New Zealand planarian Arthurdendyus triangulatus and the Australian flatworm Australoplana sanguinea, both of which prey on earthworms. A. triangulatus is thought to have reached Europe in containers of plants imported by botanical gardens. Benefits In Hawaii, the planarian Endeavouria septemlineata has been used to control the imported giant African snail Achatina fulica, which was displacing native snails; Platydemus manokwari, another planarian, has been used for the same purpose in the Philippines, Indonesia, New Guinea and Guam. Although A. fulica has declined sharply in Hawaii, there are doubts about how much E.
septemlineata contributed to this decline. However, P. manokwari is given credit for severely reducing, and in places exterminating, A. fulica – achieving much greater success than most biological pest control programs, which generally aim for a low, stable population of the pest species. The ability of planarians to take different kinds of prey and to resist starvation may account for their ability to decimate A. fulica. However, these planarians are a serious threat to native snails and should never be used for biological control. A study in Argentina shows the potential for planarians such as Girardia anceps, Mesostoma ehrenbergii, and Bothromesostoma evelinae to reduce populations of the mosquito species Aedes aegypti and Culex pipiens. The experiment showed that G. anceps can prey on all instars of both mosquito species, yet maintain a steady predation rate over time. The ability of these flatworms to live in artificial containers demonstrated the potential of placing these species in popular mosquito breeding sites, which might reduce the amount of mosquito-borne disease.
Biology and health sciences
Platyzoa
null
24160
https://en.wikipedia.org/wiki/Piper%20%28plant%29
Piper (plant)
Piper, the pepper plants or pepper vines, is an economically and ecologically important genus in the family Piperaceae. It contains about 1,000–2,000 species of shrubs, herbs, and lianas, many of which are dominant species in their native habitat. The diversification of this taxon is of interest to understanding the evolution of plants. Pepper plants belong to the magnoliids, which are angiosperms but neither monocots nor eudicots. Their family, Piperaceae, is most closely related to the lizardtail family (Saururaceae), whose members in fact generally look like smaller, more delicate and amphibious pepper plants. Both families have characteristic tail-shaped inflorescences covered in tiny flowers. A somewhat less close relative is the pipevine family (Aristolochiaceae). A well-known and very close relative – being also part of the Piperaceae – are the radiator plants of the genus Peperomia. The scientific name Piper and the common name "pepper" are derived from the Sanskrit term pippali, denoting the long pepper (P. longum). Evolution The earliest fossil of Piper is of †Piper margaritae from the Late Cretaceous (Maastrichtian) of Colombia. P. margaritae appears to be nested within the clade Schilleria, indicating extensive Cretaceous diversification of Piper into the multiple extant clades, coinciding with the final breakup of Gondwana. This contrasts with previous theories assuming a younger radiation of the genus. An earlier potential record is of †Piper arcuatile from the Cenomanian to Santonian Kaltag Formation of Yukon, although its affinity to Piper is not entirely reliable. Distribution and ecology Piper species have a pantropical distribution, and are most commonly found in the understory of lowland tropical forests, but can also occur in clearings and in higher elevation life zones such as cloud forests; one species – the Japanese pepper (P. kadsura) from southern Japan and southernmost Korea – is subtropical and can tolerate light winter frost. Peppers are often dominant species where they are found. Most Piper species are either herbaceous or vines; some grow as shrubs or almost as small trees. A few species, commonly called "ant pipers" (e.g. Piper cenocladum), live in a mutualism with ants. The fruit of the Piper plant, called a peppercorn when it is round and pea-sized, as is usual, is distributed in the wild mainly by birds, but small fruit-eating mammals – e.g. bats of the genus Carollia – are also important. Despite the high content of chemicals that are noxious to herbivores, some herbivores have evolved the ability to withstand the chemical defences of pepper plants, for example the sematurine moth Homidiana subpicta or some flea beetles of the genus Lanka. The latter can be significant pests to pepper growers. Usages Many pepper plants make good ornamentals for gardens in subtropical or warmer regions. Pepper vines can be used much as ivy in temperate climates, while other species, like lacquered pepper (P. magnificum), grow as sizeable, compact and attractive shrubs with tough and shiny leaves. Smaller species, like Celebes pepper (P. ornatum) with its finely patterned leaves, are also suitable as indoor pot plants. Unsustainable logging of tropical primary forests is threatening a number of peppers. The extent of the effect of such wholesale habitat destruction on the genus is unknown, but in the forests of Ecuador – the only larger region for which comprehensive data exists – more than a dozen species are known to be on the brink of extinction.
On the other hand, other Piper species (e.g. spiked pepper, P. aduncum) have been widely distributed as a result of human activity and are a major invasive species in certain areas. The most significant human use of Piper is not for its looks, however, but ultimately for the wide range of powerful secondary compounds found particularly in the fruits. As spice and vegetable Culinary use of pepper plants is attested perhaps as early as 9,000 years ago. Peppercorn remains were found among the food refuse left by Hoabinhian artisans at Spirit Cave, Thailand. It is likely that these plants were collected from the wild rather than deliberately grown. Use of peppercorns as a pungent spice is significant on an international scale. By classical antiquity, there was a vigorous trade of spices including black pepper (P. nigrum) from South Asia to Europe. The Apicius, a recipe collection compiled about 400 AD, mentions "pepper" as a spice for most main dishes. In the late Roman Empire, black pepper was expensive, but was available readily enough to be used more frequently than salt or sugar. As Europe moved into the Early Middle Ages, trade routes deteriorated and the use of pepper declined somewhat, but peppercorns, storing easily and having a high mass per volume, never ceased to be a profitable trade item. In the Middle Ages, international traders were nicknamed Pfeffersäcke ("pepper-sacks") in German towns of the Hanseatic League and elsewhere. Later, wars were fought by European powers, between themselves and in complex alliances and enmities with Indian Ocean states, in part about control of the supply of spices, perhaps the most archetypal being black pepper fruit. Today, peppercorns of the three preparations (green, white and black) are one of the most widely used spices of plant origin worldwide. Due to the wide distribution of Piper, the fruit of other species are also important spices, many of them internationally. Long pepper (P. longum) is possibly the second-most popular Piper spice internationally; it has a rather chili-like "heat" and the whole inflorescence is used, as the fruits are tiny. Cubeb (P. cubeba), also known as tailed pepper, played a major role in the spice trade. Reputedly Philip IV of Spain suppressed trade in cubeb peppercorns at the end of the 1630s to capitalize on his share of the black pepper trade. It remains a significant spice around the Indian Ocean region today, however. West African pepper (P. guineense) is commonly used in West African cuisine, and is sometimes used in the East African berbere spice mix. This species, despite being traded more extensively in earlier times, is less common outside Africa today. The seeds are not the only parts of Piper used in cooking. West African pepper leaves, known locally as uziza, are used as a flavoring vegetable in Nigerian stews. In Mexican-influenced cooking, hoja santa or Mexican pepperleaf (P. auritum) has a variety of uses. In Southeast Asia, leaves of two species of Piper have major importance in cooking: lolot (P. lolot) is used to wrap meat for grilling in the Indochina region, while wild betel (P. sarmentosum) is used raw or cooked as a vegetable in Malay and Thai cuisine. The stems and roots of Piper chaba are used as a spice in Bangladeshi cuisine. As medicine Cubeb (P. cubeba) has been used in folk medicine and herbalism and, particularly in the early 20th century, as a cigarette flavoring. P.
darienense is used medically by the Kuna people of the Panama-Colombia border region, and elsewhere it is used to intoxicate fish, which then can be easily caught. Spiked pepper, often called matico, appears to have strong disinfectant and antibiotic properties. Black pepper (P. nigrum) essential oil is sometimes used in herbalism, and long pepper (P. longum) is similarly employed in Ayurveda, where it was an ingredient of Triphala Guggulu and (together with black pepper) of Trikatu pills, used for rasayana (rejuvenating and detoxifying) purposes. One Piper species has gained large-scale use as a stimulant. Betel (P. betle) leaves are used to wrap betel palm nut slices; its sap helps release the stimulating effect of these "cookies", which are widely known as pan in India. Conversely, another Piper species, kava (P. methysticum), is used for its depressant and euphoriant effects. In the Pacific region, where it has been widely spread as a canoe plant, kava is used to produce a calming and socializing drink somewhat similar to alcohol and benzodiazepines, but without many of the negative side effects and with less of an addiction risk. It has also become popular elsewhere in recent decades, and is used as a medical plant. However, pills that contain parts of the whole plant have occasionally shown a strong hepatotoxic effect, which has led to the banning of kava in many countries. On the other hand, the traditional preparation of the root as a calming drink appears to pose little, if any, such hazard. In science The genus contains species suitable for studying natural history, molecular biology, natural products chemistry, community ecology, and evolutionary biology. Piper is a model genus for research in ecology and evolutionary biology. The diversity and ecological importance of the genus makes it a strong candidate for ecological and evolutionary studies. Most research has focused on the economically important species P. nigrum (black pepper), P. methysticum (kava), and P. betle (betel). A recent study based on DNA sequence analysis suggests that P. nigrum originated in the Western Ghats hot spot in India. The obligate and facultative ant mutualists found in some Piper species have a strong influence on their biology, making them ideal systems for research on the evolution of symbioses and the effect of mutualisms on biotic communities. Important secondary metabolites found in pepper plants are piperine and chavicine, which were first isolated from black pepper and reported to have antibiotic activities. Preliminary research has shown that piperine has antibacterial activity against various bacteria such as S. aureus, Streptococcus mutans, and the gastric cancer pathogen Helicobacter pylori, and that it decreases H. pylori toxin entry into gastric epithelial cells. The piperidine functional group is named after the former, and piperazine (which is not found in P. nigrum in noticeable quantities) was in turn named after piperidine. The significant secondary metabolites of kava are kavalactones and flavokawains. Pipermethystine is suspected to be the main hepatotoxic compound in this plant's stems and leaves. Repelling insects Studies have been done to determine the effectiveness of Piper leaves at repelling different types of insects. Capuchin monkeys have been recorded by BBC Earth rubbing Piper leaves on their bodies to repel insects. Species The largest number of Piper species are found in the Americas (about 700 species), with about 300 species from Southern Asia.
There are smaller groups of species from the South Pacific (about 40 species) and Africa (about 15 species). The American, Asian, and South Pacific groups each appear to be monophyletic; the affinity of the African species is unclear. Some species are sometimes segregated into the genera Pothomorphe, Macropiper, Ottonia, Arctottonia, Sarcorhachis, Trianaeopiper, and Zippelia, but other sources keep them in Piper. The species called "Piper aggregatum" and "P. fasciculatum" are actually Lacistema aggregatum, a plant from the family Lacistemataceae.
Biology and health sciences
Piperales
Plants
24222
https://en.wikipedia.org/wiki/Public-key%20cryptography
Public-key cryptography
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption. Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems. Description Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret. The two best-known types of public key cryptography are digital signature and public-key encryption: In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key. For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
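To make the sign/verify flow concrete, here is a minimal sketch, assuming the Python cryptography package and its Ed25519 signature API; the update bytes and variable names are illustrative, not those of any real publisher.

# A minimal sketch of the sign/verify flow described above, assuming the
# Python "cryptography" package (Ed25519). Names are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once; the private half stays secret,
# and the public half is shipped inside the installed software.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The publisher signs an update with the private key.
update = b"software update v1.2.3"
signature = private_key.sign(update)

# Any computer holding the public key can check that the update is genuine;
# verify() raises InvalidSignature if the signature does not match.
public_key.verify(signature, update)

# A tampered or forged update fails verification.
try:
    public_key.verify(signature, b"malicious update")
except InvalidSignature:
    print("tampered update rejected")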
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages – an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message – it just conceals the content of the message. One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including: A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trustworthy to all involved. A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach. Applications The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key can be decrypted only by the recipient's paired private key. Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication. Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication. Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols. Hybrid cryptosystems Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used to transmit data with a symmetric-key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric key exchange, used to share a server-generated symmetric key from the server to the client, has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
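As a concrete illustration of the hybrid approach just described, here is a minimal sketch, assuming the Python cryptography package: a fresh symmetric key encrypts the bulk data with AES-GCM, and only that short key is encrypted under the recipient's RSA public key. The sample message and variable names are invented for the example.

# A minimal sketch of a hybrid cryptosystem, assuming the Python
# "cryptography" package. Names and the sample message are illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term key pair; the public half may be published.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender: encrypt the message under a fresh symmetric session key...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"meet at noon", None)

# ...and wrap only the small session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk data quickly.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"meet at noon"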
Weaknesses As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost. Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem. Algorithms All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however. Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway both to discover, and to protect against, new attacks. Alteration of public keys Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion. A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions cannot be prevented or monitored by the sender. A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk. In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant.
Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised, because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties, such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker. Public key infrastructure One approach to prevent such attacks involves the use of a public key infrastructure (PKI): a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses. For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream. Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS). Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
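The certificate mechanism just described can be sketched in a few lines. The following is a minimal illustration, assuming the Python cryptography package's x509 module; the CA name, site name, and validity period are invented for the example, and a real deployment would add certificate extensions, chain building, and revocation checks.

# A minimal sketch of a CA vouching for a public key, assuming the Python
# "cryptography" package. All names and dates are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())      # the CA's key pair
site_key = ec.generate_private_key(ec.SECP256R1())    # the key being certified

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example CA")])
site_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.org")])

now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(site_name)                 # who the certificate is about
    .issuer_name(ca_name)                    # who vouches for it
    .public_key(site_key.public_key())       # the key pair being certified
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())           # the CA's signature
)

# A relying party that trusts the CA checks the CA's signature over the
# certificate body; verification raises InvalidSignature on failure.
ca_key.public_key().verify(
    cert.signature, cert.tbs_certificate_bytes,
    ec.ECDSA(cert.signature_hash_algorithm),
)
print("certificate was signed by the trusted CA")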
Unencrypted metadata Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, the subject field, the software used, etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden. However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging. History During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys. Anticipation In his 1874 book The Principles of Science, William Stanley Jevons wrote: "Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know." Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography." Classified discovery In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it. In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange. The scheme was also passed to the US's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization: I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential. —Ralph Benjamin These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
Public discovery In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography, although cryptography itself dates back more than 2,000 years. In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in Martin Gardner's Mathematical Games column in the August 1977 issue of Scientific American. Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC. Examples Examples of well-regarded asymmetric key techniques for varied purposes include: the Diffie–Hellman key exchange protocol; DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm; ElGamal; elliptic-curve cryptography, including the Elliptic Curve Digital Signature Algorithm (ECDSA), elliptic-curve Diffie–Hellman (ECDH), Ed25519 and Ed448 (EdDSA), and X25519 and X448 (ECDH/EdDH); various password-authenticated key agreement techniques; the Paillier cryptosystem; the RSA encryption algorithm (PKCS#1); the Cramer–Shoup cryptosystem; and the YAK authenticated key agreement protocol. Examples of asymmetric key algorithms not yet widely adopted include the NTRUEncrypt cryptosystem, Kyber, and the McEliece cryptosystem. Examples of notable – yet insecure – asymmetric key algorithms include the Merkle–Hellman knapsack cryptosystem. Examples of protocols using asymmetric key algorithms include: S/MIME; GPG, an implementation of OpenPGP and an Internet Standard; EMV and the EMV Certificate Authority; IPsec; PGP; ZRTP, a secure VoIP protocol; Transport Layer Security, standardized by the IETF, and its predecessor Secure Socket Layer; SILC; SSH; Bitcoin; and Off-the-Record Messaging.
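To illustrate the key-agreement idea that runs from Merkle, Diffie and Hellman through the elliptic-curve variants listed above, here is a minimal sketch of an X25519 exchange, assuming the Python cryptography package; the party names are illustrative, and in practice the public keys must be authenticated to prevent the man-in-the-middle attacks discussed earlier.

# A minimal sketch of Diffie–Hellman-style key agreement using X25519,
# assuming the Python "cryptography" package. Each party combines its own
# private key with the other's public key and arrives at the same shared
# secret; an eavesdropper seeing only the public keys cannot.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Public keys can be exchanged over an open (but authenticated) channel.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

shared_alice = alice_private.exchange(bob_public)
shared_bob = bob_private.exchange(alice_public)
assert shared_alice == shared_bob

# In practice the raw shared secret is fed through a KDF to derive keys.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"handshake").derive(shared_alice)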
Technology
Computer security
null
24236
https://en.wikipedia.org/wiki/Power%20%28physics%29
Power (physics)
Power is the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second. Power is a scalar quantity. Specifying power in particular systems may require attention to other quantities; for example, the power involved in moving a ground vehicle is the product of the aerodynamic drag plus traction force on the wheels, and the velocity of the vehicle. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. Likewise, the power dissipated in an electrical element of a circuit is the product of the current flowing through the element and of the voltage across the element. Definition Power is the rate with respect to time at which work is done; it is the time derivative of work: $P = \frac{dW}{dt}$, where $P$ is power, $W$ is work, and $t$ is time. We will now show that the mechanical power generated by a force $\mathbf{F}$ on a body moving at the velocity $\mathbf{v}$ can be expressed as the product: $P = \mathbf{F} \cdot \mathbf{v}$. If a constant force $\mathbf{F}$ is applied throughout a displacement $\mathbf{x}$, the work done is defined as $W = \mathbf{F} \cdot \mathbf{x}$. In this case, power can be written as: $P = \frac{dW}{dt} = \frac{d}{dt}\left(\mathbf{F} \cdot \mathbf{x}\right) = \mathbf{F} \cdot \frac{d\mathbf{x}}{dt} = \mathbf{F} \cdot \mathbf{v}$. If instead the force is variable over a three-dimensional curve $C$, then the work is expressed in terms of the line integral: $W = \int_C \mathbf{F} \cdot d\mathbf{r} = \int_{\Delta t} \mathbf{F} \cdot \frac{d\mathbf{r}}{dt}\,dt = \int_{\Delta t} \mathbf{F} \cdot \mathbf{v}\,dt$. From the fundamental theorem of calculus, we know that $P = \frac{dW}{dt} = \frac{d}{dt} \int_{\Delta t} \mathbf{F} \cdot \mathbf{v}\,dt = \mathbf{F} \cdot \mathbf{v}$. Hence the formula is valid for any general situation. In older works, power is sometimes called activity. Units The dimension of power is energy divided by time. In the International System of Units (SI), the unit of power is the watt (W), which is equal to one joule per second. Other common and traditional measures are horsepower (hp), comparing to the power of a horse; one mechanical horsepower equals about 745.7 watts. Other units of power include ergs per second (erg/s), foot-pounds per minute, dBm (a logarithmic measure relative to a reference of 1 milliwatt), calories per hour, BTU per hour (BTU/h), and tons of refrigeration. Average power and instantaneous power As a simple example, burning one kilogram of coal releases more energy than detonating a kilogram of TNT, but because the TNT reaction releases energy more quickly, it delivers more power than the coal. If $\Delta W$ is the amount of work performed during a period of time of duration $\Delta t$, the average power over that period is given by the formula $P_{\mathrm{avg}} = \frac{\Delta W}{\Delta t}$. It is the average amount of work done or energy converted per unit of time. Average power is often called "power" when the context makes it clear. Instantaneous power is the limiting value of the average power as the time interval $\Delta t$ approaches zero: $P = \lim_{\Delta t \to 0} \frac{\Delta W}{\Delta t} = \frac{dW}{dt}$. When power $P$ is constant, the amount of work performed in a time period of duration $t$ can be calculated as $W = Pt$. In the context of energy conversion, it is more customary to use the symbol $E$ rather than $W$. Mechanical power Power in mechanical systems is the combination of forces and movement. In particular, power is the product of a force on an object and the object's velocity, or the product of a torque on a shaft and the shaft's angular velocity. Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force $\mathbf{F}$ on an object that travels along a curve $C$ is given by the line integral: $W_C = \int_C \mathbf{F} \cdot \mathbf{v}\,dt = \int_C \mathbf{F} \cdot d\mathbf{x}$, where $\mathbf{x}$ defines the path $C$ and $\mathbf{v}$ is the velocity along this path. If the force $\mathbf{F}$ is derivable from a potential (conservative), then applying the gradient theorem (and remembering that force is the negative of the gradient of the potential energy) yields: $W_C = U(A) - U(B)$, where $A$ and $B$ are the beginning and end of the path along which the work was done.
The power at any point along the curve is the time derivative: $P(t) = \frac{dW}{dt} = \mathbf{F} \cdot \mathbf{v} = -\frac{dU}{dt}$. In one dimension, this can be simplified to: $P(t) = F \cdot v$. In rotational systems, power is the product of the torque $\tau$ and angular velocity $\omega$: $P(t) = \boldsymbol{\tau} \cdot \boldsymbol{\omega}$, where $\omega$ is angular frequency, measured in radians per second. The $\cdot$ represents scalar product. In fluid power systems such as hydraulic actuators, power is given by $P(t) = pQ$, where $p$ is pressure in pascals or N/m2, and $Q$ is volumetric flow rate in m3/s in SI units. Mechanical advantage If a mechanical system has no losses, then the input power must equal the output power. This provides a simple formula for the mechanical advantage of the system. Let the input power to a device be a force $F_A$ acting on a point that moves with velocity $v_A$ and the output power be a force $F_B$ acting on a point that moves with velocity $v_B$. If there are no losses in the system, then $P = F_B v_B = F_A v_A$, and the mechanical advantage of the system (output force per input force) is given by $\mathrm{MA} = \frac{F_B}{F_A} = \frac{v_A}{v_B}$. A similar relationship is obtained for rotating systems, where $T_A$ and $\omega_A$ are the torque and angular velocity of the input and $T_B$ and $\omega_B$ are the torque and angular velocity of the output. If there are no losses in the system, then $P = T_A \omega_A = T_B \omega_B$, which yields the mechanical advantage $\mathrm{MA} = \frac{T_B}{T_A} = \frac{\omega_A}{\omega_B}$. These relations are important because they define the maximum performance of a device in terms of velocity ratios determined by its physical dimensions. See for example gear ratios. Electrical power The instantaneous electrical power $P$ delivered to a component is given by $P(t) = i(t) \cdot v(t)$, where $P(t)$ is the instantaneous power, measured in watts (joules per second), $v(t)$ is the potential difference (or voltage drop) across the component, measured in volts, and $i(t)$ is the current through it, measured in amperes. If the component is a resistor with time-invariant voltage to current ratio, then: $P = I \cdot V = I^2 \cdot R = \frac{V^2}{R}$, where $R$ is the electrical resistance, measured in ohms. Peak power and duty cycle In the case of a periodic signal $s(t)$ of period $T$, like a train of identical pulses, the instantaneous power $p(t) = |s(t)|^2$ is also a periodic function of period $T$. The peak power is simply defined by: $P_0 = \max_t p(t)$. The peak power is not always readily measurable, however, and the measurement of the average power is more commonly performed by an instrument. If one defines the energy per pulse as $\varepsilon_{\mathrm{pulse}} = \int_0^T p(t)\,dt$, then the average power is $P_{\mathrm{avg}} = \frac{1}{T}\int_0^T p(t)\,dt = \frac{\varepsilon_{\mathrm{pulse}}}{T}$. One may define the pulse length $\tau$ such that $P_0 \tau = \varepsilon_{\mathrm{pulse}}$, so that the ratios $\frac{P_{\mathrm{avg}}}{P_0} = \frac{\tau}{T}$ are equal. These ratios are called the duty cycle of the pulse train. Radiant power Power is related to intensity at a radius $r$; the power emitted by a source can be written as: $P(r) = I(4\pi r^2)$.
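As a quick numeric check of the formulas above, the following short sketch evaluates a few of them; all input values are invented for illustration.

# A small numeric sketch of the power formulas above; values are illustrative.
force = 200.0        # N, applied along the direction of motion
velocity = 3.0       # m/s
p_mech = force * velocity              # P = F*v -> 600 W

torque = 50.0        # N*m
omega = 12.0         # rad/s
p_rot = torque * omega                 # P = tau*omega -> 600 W

voltage = 230.0      # V
current = 2.6        # A
p_elec = voltage * current             # P = V*I -> 598 W

# Duty cycle of a pulse train: P_avg = P_0 * (tau / T).
peak_power = 1000.0  # W (P_0)
pulse_len = 0.002    # s (tau)
period = 0.010       # s (T)
p_avg = peak_power * pulse_len / period    # 200 W, duty cycle 20%

print(p_mech, p_rot, p_elec, p_avg)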
Physical sciences
Classical mechanics
null
24241
https://en.wikipedia.org/wiki/Protoscience
Protoscience
In the philosophy of science, protoscience is a research field that has the characteristics of an undeveloped science that may ultimately develop into an established science. Philosophers use protoscience to understand the history of science and distinguish protoscience from science and pseudoscience. The word "protoscience" is a hybrid Greek-Latin compound of the roots proto- + scientia, meaning a first or primeval rational knowledge. History Protoscience as a research field with the characteristics of an undeveloped science appeared in the early 20th century. In 1910, Jones described economics: I confess to a personal predilection for some term such as proto-science, pre-science, or nas-science, to give expression to what I conceive to be the true state of affairs, which I take to be this, that economics and kindred subjects are not sciences, but are on the way to become sciences. Thomas Kuhn later provided a more precise description: a protoscience is a field that generates testable conclusions and faces "incessant criticism and continually strive[s] for a fresh start", but currently, like art and philosophy, appears to have failed to progress in a way similar to the progress seen in the established sciences. He applied the term protoscience to the fields of natural philosophy, medicine and the crafts in the past that ultimately became established sciences. Philosophers later developed more precise criteria to identify protoscience using the cognitive field concept. Thought collective Thomas Kuhn later noted that Ludwik Fleck's 1935 work had voiced concepts that predated Kuhn's own. That is, Fleck wrote that the development of truth in scientific research was an unattainable ideal, as different researchers were locked into thought collectives (or thought-styles). This means "that a pure and direct observation cannot exist: in the act of perceiving objects the observer, i.e. the epistemological subject, is always influenced by the epoch and the environment to which he belongs, that is by what Fleck calls the thought style". Thought style throughout Fleck's work is closely associated with representational style. A "fact" was a relative value, expressed in the language or symbolism of the thought collective in which it belonged, and subject to the social and temporal structure of this collective. He argued, however, that within the active cultural style of a thought collective, knowledge claims or facts were constrained by passive elements arising from the observations and experience of the natural world. This passive resistance of natural experience represented within the stylized means of the thought collective could be verified by anyone adhering to the culture of the thought collective, and thus facts could be agreed upon within any particular thought style. Thus while a fact may be verifiable within its own collective, it may be unverifiable in others. He felt that the development of scientific facts and concepts was not unidirectional and does not consist of just accumulating new pieces of information, but at times required changing older concepts, methods of observations, and forms of representation. This changing of prior knowledge is difficult because a collective attains over time a specific way of investigating, bringing with it a blindness to alternative ways of observing and conceptualization. Change was especially possible when members of two thought collectives met and cooperated in observing, formulating hypotheses and ideas.
He strongly advocated comparative epistemology. He also notes some features of the culture of modern natural sciences that recognize provisionality and evolution of knowledge along the value of pursuit of passive resistances. This approach anticipated later developments in social constructionism, and especially the development of critical science and technology studies. Conceptual framework Cognitive field Philosophers describe protoscience using the cognitive field concept. In every society, there are fields of knowledge (cognitive fields). A cognitive field consists of a community of individuals within a society with a domain of inquiry, a philosophical worldview, logical/mathematical tools, specific background knowledge from neighboring fields, a set of problems investigated, accumulated knowledge from the community, aims and methods. Cognitive fields are either belief fields or research fields. A cognitive research field invariably changes over time due to research; research fields include natural sciences, applied sciences, mathematics, technology, medicine, jurisprudence, social sciences and the humanities. A belief field (faith field) is "a cognitive field which either does not change at all or changes due to factors other than research (such as economic interest, political or religious pressure, or brute violence)." Belief fields include political ideology, religion, pseudodoctrines and pseudoscience. Science field A science field is a research field that satisfies 12 conditions: 1) all components of the science field invariably change over time from research in the field, especially logical/mathematical tools and specific background/presuppositions from other fields; 2) the research community has special training, "hold[s] strong information links", and initiates or continues the "tradition of inquiry"; 3) researchers have autonomy to pursue research and receive support from the host society; 4) the researchers' worldview includes the real world as containing "lawfully changing concrete" objects, an adequate view of the scientific method, a vision of organized science achieving truthful descriptions and explanations, ethical principles for conducting research, and the free search for truthful, deep and systematic understanding; 5) up-to-date logical/mathematical tools precisely determine and process information; 6) the domain of research consists of real objects/entities; 7) specific background knowledge is up-to-date, confirmed data, hypotheses and theories from relevant neighboring fields; 8) the set of problems investigated are from the domain of inquiry or within the research field; 9) the accumulated knowledge includes worldview-compatible, up-to-date testworthy/testable theories, hypotheses and data, and special knowledge previously accumulated in the research field; 10) the aims are to find and apply laws and theories in the domain of inquiry, systematize acquired knowledge, generalize information into theories, and improve research methods; 11) appropriate scientific methods are "subject to test, correction and justification"; 12) the research field is connected with a wider research field with similarly capable researchers capable of "scientific inference, action and discussion", a similar hosting society, a domain of inquiry containing the domain of inquiry of the narrower field, and shared worldview, logical/mathematical tools, background knowledge, accumulated knowledge, aims and methods.
Protoscience Philosophers define protoscience as an undeveloped science field, undeveloped meaning an incomplete or approximate science field. Mario Bunge defined a protoscience as a research field that approximately satisfies a similar set of the 12 science conditions. A protoscience that is evolving to ultimately satisfy all 12 conditions is an emerging or developing science. Bunge states, "The difference between protoscience and pseudoscience parallels that between error and deception." A protoscience may not survive or evolve to a science or pseudoscience. Kuhn was skeptical about any remedy that would reliably transform a protoscience to a science stating, “I claim no therapy to assist the transformation of a proto-science to a science, nor do I suppose anything of this sort is to be had.” Raimo Tuomela defined a protoscience as a research field that satisfies 9 of the 12 science conditions; a protoscience fails to satisfy the up-to-date conditions for logic/mathematical tools, specific background knowledge from neighboring fields, and accumulated knowledge (5, 7, 9), and there is reason to believe the protoscience will ultimately satisfy all 12 conditions. Protosciences and belief fields are both non-science fields, but only a protoscience can become a science field. Tuomela emphasizes that the cognitive field concept refers to "ideal types" and there may be some persons within a science field with non-scientific "attitudes, thinking and actions"; therefore, it may be better to apply scientific and non-scientific to "attitudes, thinking and actions" rather than directly to cognitive fields. Developmental stages of science Bunge stated that protoscience may occur as the second stage of a five-stage process in the development of science. Each stage has a theoretical and empirical aspect: Prescience has unchecked speculation theory and unchecked data. Protoscience has hypotheses without theory accompanied by observation and occasional measurement, but no experiment. Deuteroscience has hypotheses formulated mathematically without theory accompanied by systematic measurement, and experiment on perceptible traits of perceptible objects. Tritoscience has mathematical models accompanied by systematic measurements and experiments on perceptible and imperceptible traits of perceptible and imperceptible objects. Tetartoscience has mathematical models and comprehensive theories accompanied by precise systematic measurements and experiments on perceptible and imperceptible traits of perceptible and imperceptible objects. Origin of protoscience Protoscience may arise from the philosophical inquiry that anticipates science. Philosophers anticipated the development of astronomy, atomic theory, evolution and linguistics. The Greek philosopher Anaximander (610–546 BC) viewed the earth as a non-moving free-floating cylinder in space. The atomist doctrine of Democritus (460–370 BC) to Epicurus (341–270 BC) was that objects were composed of non-visible small particles. Anaximander had anticipated that humans may have developed from more primitive organisms. Wittgenstein’s study of language preceded the linguistic studies of J. L. Austin and John Searle. Popper describes how scientific theory arises from myths such as atomism and the corpuscular theory of light. Popper states that the Copernican system was "inspired by a Neo-Platonic worship of the light of the Sun who had to occupy the center because of his nobility", leading to "testable components" that ultimately became "fruitful and important." 
Some scholars use the term "primitive protoscience" to describe ancient myths that help explain natural phenomena at a time prior to the development of the scientific method. Protoscience examples Physical science Ancient astronomical protoscience was recorded as astronomical images and records inscribed on stones, bones and cave walls. Luigi Ferdinando Marsili (1658–1730) contributed to protoscience oceanography by describing the ocean currents of the Bosporus and aspects of physical oceanography; Benjamin Franklin contributed by identifying the currents of the Gulf Stream. Philosophers consider physics before Galileo and Huygens, chemistry before Lavoisier, medicine before Virchow and Bernard, electricity before the mid-eighteenth century, and the study of heredity and phylogeny before the mid-nineteenth century as protosciences that eventually became established science. Prior to 1905, leading scientists such as Ostwald and Mach viewed atomic and molecular-kinetic theory as a protoscience, a theory indirectly supported by chemistry and statistical thermodynamics; however, Einstein's theory of Brownian motion and Perrin's experimental verification led to widespread acceptance of atomic and molecular-kinetic theory as established science. The early stage of plate tectonics, beginning with Wegener's theory of continental drift, was a protoscience until experimental research confirmed the theory many years later. The initial widespread rejection of Wegener's theory is an example of the importance of not dismissing a protoscience. Psychology Critics state that psychology is a protoscience because some practices occur that prevent falsification of research hypotheses. Folk psychology and coaching psychology are protosciences. Medicine The use of scientifically invalid biomarkers to identify adverse outcomes is a protoscience practice in medicine. The process for reporting adverse medical events is a protoscience because it relies on uncorroborated data and unsystematic methods. Technology Hatleback describes cybersecurity as a protoscience that lacks transparency in experimentation, scientific laws, and sound experimental design in some cases; however, cybersecurity has the potential to become a science.
Physical sciences
Science basics
Basics and measurement
24255
https://en.wikipedia.org/wiki/Pandemic
Pandemic
A pandemic is an epidemic of an infectious disease that has a sudden increase in cases and spreads across a large region, for instance multiple continents or worldwide, affecting a substantial number of individuals. Widespread endemic diseases with a stable number of infected individuals, such as recurrences of seasonal influenza, are generally excluded, as they occur simultaneously in large regions of the globe rather than being spread worldwide. Throughout human history, there have been a number of pandemics of diseases such as smallpox. The Black Death, caused by plague, caused the deaths of up to half of the population of Europe in the 14th century. The term pandemic had not been used then, but was used for later epidemics, including the 1918 H1N1 influenza A pandemic—more commonly known as the Spanish flu—which is the deadliest pandemic in history. The most recent pandemics include the HIV/AIDS pandemic, the 2009 swine flu pandemic and the COVID-19 pandemic. Almost all of these diseases still circulate among humans, though their impact now is often far less. In response to the COVID-19 pandemic, 194 member states of the World Health Organization began negotiations on an International Treaty on Pandemic Prevention, Preparedness and Response, with a requirement to submit a draft of this treaty to the 77th World Health Assembly during its 2024 convention. Further, on 6 May 2024, the White House released an official policy to more safely manage medical research projects involving potentially hazardous pathogens, including viruses and bacteria, that may pose a risk of a pandemic.

Definition

A medical dictionary definition of pandemic is "an epidemic occurring on a scale that crosses international boundaries, usually affecting people on a worldwide scale". A disease or condition is not a pandemic merely because it is widespread or kills many people; it must also be infectious. For instance, cancer is responsible for many deaths but is not considered a pandemic because the disease is not contagious—i.e. easily transmissible—and not even simply infectious. This definition differs from colloquial usage in that it encompasses outbreaks of relatively mild diseases. The World Health Organization (WHO) has a category of Public Health Emergency of International Concern, defined as "an extraordinary event which is determined to constitute a public health risk to other States through the international spread of disease and to potentially require a coordinated international response". There is a rigorous process underlying this categorization and a clearly defined trajectory of responses. A WHO-sponsored international body, tasked with preparing an international agreement on pandemic prevention, preparedness and response, has defined a pandemic as "the global spread of a pathogen or variant that infects human populations with limited or no immunity through sustained and high transmissibility from person to person, overwhelming health systems with severe morbidity and high mortality, and causing social and economic disruptions, all of which require effective national and global collaboration and coordination for its control". The word comes from the Greek παν- pan- meaning "all" or "every", and δῆμος demos meaning "people".

Parameters

A common early characteristic of a pandemic is a rapid, sometimes exponential, growth in the number of infections, coupled with a widening geographical spread.
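A convenient summary of exponential growth is the doubling time, which follows directly from the growth rate as ln(2)/r. The sketch below is a minimal illustration only; the 12% daily growth rate is a hypothetical value, not a figure from this article:

```python
from math import log

def doubling_time(growth_rate_per_day: float) -> float:
    """Days for case counts to double under exponential growth.

    Assumes cases follow c(t) = c0 * exp(r * t), so the doubling
    time is ln(2) / r.
    """
    return log(2) / growth_rate_per_day

if __name__ == "__main__":
    r = 0.12  # hypothetical growth rate of 12% per day, for illustration
    print(f"Doubling time: {doubling_time(r):.1f} days")  # about 5.8 days
```

At a hypothetical 12% daily growth, cases double roughly every six days, which is why early exponential phases can overwhelm surveillance and health systems so quickly.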
WHO utilises different criteria to declare a Public Health Emergency of International Concern (PHEIC), its nearest equivalent to the term pandemic. The potential consequences of an incident are considered, rather than its current status. For example, polio was declared a PHEIC in 2014 even though only 482 cases were reported globally in the previous year; this was justified by concerns that polio might break out of its endemic areas and again become a significant health threat globally. The PHEIC status of polio is reviewed regularly and is ongoing, despite the small number of cases annually. The end of a pandemic is more difficult to delineate. Generally, past epidemics and pandemics have faded out as the diseases became accepted into people's daily lives and routines, becoming endemic. The transition from pandemic to endemic may be defined based on:
a high proportion of the global population having immunity (through either natural infection or vaccination)
fewer deaths
health systems stepping down from emergency status
perceived personal risk being lessened
restrictive measures such as travel restrictions being removed
less coverage in public media.
An endemic disease is always present in a population, but at a relatively low and predictable level. There may be periodic spikes of infections or seasonality (e.g. influenza), but generally the burden on health systems is manageable.

Prevention and preparedness

Pandemic prevention comprises activities such as anticipatory research and development of therapies and vaccines, as well as monitoring for pathogens and disease outbreaks which may have pandemic potential. Routine vaccination programs are a type of prevention strategy, holding back diseases such as influenza and polio which have caused pandemics in the past, and could do so again if not controlled. Prevention overlaps with preparedness, which aims to curtail an outbreak and prevent it getting out of control; it involves strategic planning, data collection and modelling to measure the spread, stockpiling of therapies, vaccines, and medical equipment, as well as public health awareness campaigning. By definition, a pandemic involves many countries, so international cooperation, data sharing, and collaboration are essential, as is universal access to tests and therapies.

Collaboration - In response to the COVID-19 pandemic, WHO established a Pandemic Hub in September 2021 in Berlin, aiming to address weaknesses around the world in how countries detect, monitor and manage public health threats. The Hub's initiatives include using artificial intelligence to analyse more than 35,000 data feeds for indications of emerging health threats, as well as improving facilities and coordination between academic institutions and WHO member countries.

Detection - In May 2023, WHO launched the International Pathogen Surveillance Network (IPSN), hosted by the Pandemic Hub, aiming to detect and respond to disease threats before they become epidemics and pandemics, and to optimize routine disease surveillance. The network provides a platform to connect countries, improving systems for collecting and analysing samples of potentially harmful pathogens. Wastewater surveillance, for example, can provide early warnings by detecting pathogens in sewage.

Therapies and vaccines - The Coalition for Epidemic Preparedness Innovations (CEPI) is developing a program to condense new vaccine development timelines to 100 days, a third of the time it took to develop a COVID-19 vaccine.
CEPI aims to reduce global epidemic and pandemic risk by developing vaccines against known pathogens as well as enabling rapid response to Disease X. In the US, the National Institute of Allergy and Infectious Diseases (NIAID) has developed a Pandemic Preparedness Plan which focuses on identifying viruses of concern and developing diagnostics and therapies (including prototype vaccines) to combat them.

Modeling is important to inform policy decisions. It helps to predict the burden of disease on healthcare facilities, the effectiveness of control measures, projected geographical spread, and the timing and extent of future pandemic waves.

Public awareness involves disseminating reliable information, ensuring consistency of message, transparency, and steps to discredit misinformation.

Air quality - Enhanced indoor ventilation and air filtration systems are also effective at reducing transmission of airborne pathogens, while providing additional health benefits beyond pandemic control.

Stockpiling involves maintaining strategic stockpiles of emergency supplies such as personal protective equipment, drugs and vaccines, and equipment such as respirators. Many of these items have limited shelf life, so they require stock rotation even though they may be rarely used.

Ethical and political issues

The COVID-19 pandemic highlighted a number of ethical and political issues which must be considered during a pandemic. These included decisions about who should be prioritised for treatment while resources are scarce; whether or not to make vaccination compulsory; the timing and extent of constraints on individual liberty; how to sanction individuals who do not comply with emergency regulations; and the extent of international collaboration and resource sharing.

Pandemic management strategies

The basic strategies in the control of an outbreak are containment and mitigation. Containment may be undertaken in the early stages of the outbreak, including contact tracing and isolating infected individuals to stop the disease from spreading to the rest of the population, other public health interventions on infection control, and therapeutic countermeasures such as vaccinations which may be effective if available. When it becomes apparent that it is no longer possible to contain the spread of the disease, management will then move on to the mitigation stage, in which measures are taken to slow the spread of the disease and mitigate its effects on society and the healthcare system. In reality, containment and mitigation measures may be undertaken simultaneously. A key part of managing an infectious disease outbreak is trying to decrease the epidemic peak, known as "flattening the curve". This helps decrease the risk of health services being overwhelmed and provides more time for a vaccine and treatment to be developed. A broad group of non-pharmaceutical interventions may be taken to manage the outbreak. In a flu pandemic, these actions may include personal preventive measures such as hand hygiene, wearing face-masks, and self-quarantine; community measures aimed at social distancing such as closing schools and canceling mass gatherings; community engagement to encourage acceptance and participation in such interventions; and environmental measures such as cleaning of surfaces. Another strategy, suppression, requires more extreme long-term non-pharmaceutical interventions to reverse the pandemic by reducing the basic reproduction number to less than 1.
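The effect of "flattening the curve" and of pushing the reproduction number below 1 can be illustrated with a toy SIR (susceptible-infected-recovered) model. The following is a minimal sketch under assumed values; the recovery rate, initial conditions, and R0 values are illustrative only, not figures from this article:

```python
# Toy SIR model: lowering R0 = beta/gamma flattens the epidemic curve,
# and pushing R0 below 1 makes the infected fraction decline from the start.

def sir_peak_infected(r0: float, gamma: float = 0.1, days: int = 500) -> float:
    """Integrate the SIR equations with a simple Euler step (dt = 1 day)
    and return the peak fraction of the population infected at once."""
    beta = r0 * gamma              # transmission rate implied by R0
    s, i, r = 0.999, 0.001, 0.0    # assumed initial population fractions
    peak = i
    for _ in range(days):
        new_inf = beta * s * i     # newly infected this step
        new_rec = gamma * i        # newly recovered this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

for r0 in (2.5, 1.5, 0.9):  # mitigation lowers R0; suppression pushes it below 1
    print(f"R0={r0}: peak infected fraction = {sir_peak_infected(r0):.3f}")
```

Under these assumed parameters the peak shrinks as R0 falls, and with R0 = 0.9 the outbreak never grows at all, which is the goal of a suppression strategy.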
The suppression strategy, which includes stringent population-wide social distancing, home isolation of cases, and household quarantine, was undertaken by China during the COVID-19 pandemic, where entire cities were placed under lockdown; such a strategy may carry with it considerable social and economic costs.

Frameworks for influenza pandemics

WHO system

For a novel influenza virus, WHO previously applied a six-stage classification to delineate the process by which the virus moves from the first few infections in humans through to a pandemic. Starting with phase 1 (infections identified in animals only), it moves through phases of increasing infection and spread to phase 6 (pandemic). In February 2020, a WHO spokesperson clarified that the system is no longer in use.

CDC frameworks

In 2014, the United States Centers for Disease Control and Prevention (CDC) introduced a framework for characterising the progress of an influenza pandemic titled the Pandemic Intervals Framework. The six intervals of the framework are as follows: investigation of cases of novel influenza, recognition of increased potential for ongoing transmission, initiation of a pandemic wave, acceleration of a pandemic wave, deceleration of a pandemic wave, and preparation for future pandemic waves. At the same time, the CDC adopted the Pandemic Severity Assessment Framework (PSAF) to assess the severity of influenza pandemics. The PSAF rates the severity of an influenza outbreak on two dimensions: the clinical severity of illness in infected persons, and the transmissibility of the infection in the population. This tool was not applied during the COVID-19 pandemic.

Notable pandemics and outbreaks

Recent outbreaks

COVID-19

SARS-CoV-2, a new strain of coronavirus, was first detected in the city of Wuhan, Hubei Province, China, in December 2019. The outbreak was characterized as a Public Health Emergency of International Concern (PHEIC) between January 2020 and May 2023 by WHO. The number of people infected with COVID-19 has reached more than 767 million worldwide, with a death toll of 6.9 million. It is considered likely that the virus will eventually become endemic and, like the common cold, cause less severe disease for most people.

HIV/AIDS

HIV/AIDS was first identified as a disease in 1981, and is an ongoing worldwide public health issue. Since then, HIV/AIDS has killed an estimated 40 million people, with a further 630,000 deaths annually; 39 million people are currently living with HIV infection. HIV has a zoonotic origin, having originated in nonhuman primates in Central Africa and transferred to humans in the early 20th century. The most frequent mode of transmission of HIV is through sexual contact with an infected person. There may be a short period of mild, nonspecific symptoms followed by an asymptomatic (but nevertheless infectious) stage called clinical latency; without treatment, this stage can last between 3 and 20 years. The only way to detect infection is by means of an HIV test. There is no vaccine to prevent HIV infection, but the disease can be held in check by means of antiretroviral therapy.

Pandemics in history

Historical accounts of epidemics are often vague or contradictory in describing how victims were affected. A rash accompanied by a fever might be smallpox, measles, scarlet fever, or varicella, and it is possible that epidemics overlapped, with multiple infections striking the same population at once.
It is often impossible to know the exact causes of mortality, although ancient DNA studies can sometimes detect residues of certain pathogens. It is assumed that, prior to the Neolithic Revolution around 10,000 BC, disease outbreaks were limited to a single family or clan, and did not spread widely before dying out. The domestication of animals increased human-animal contact, increasing the possibility of zoonotic infections. The advent of agriculture, and trade between settled groups, made it possible for pathogens to spread widely. As population increased, contact between groups became more frequent. A history of epidemics maintained by the Chinese Empire from 243 BC to AD 1911 shows an approximate correlation between the frequency of epidemics and the growth of the population. Here is an incomplete list of known epidemics which spread widely enough to merit the title "pandemic":

Plague of Athens (430 to 426 BC): During the Peloponnesian War, an epidemic killed a quarter of the Athenian troops and a quarter of the population. This disease fatally weakened the dominance of Athens, but the sheer virulence of the disease prevented its wider spread; i.e., it killed off its hosts at a rate faster than they could spread it. The exact cause of the plague was unknown for many years. In January 2006, researchers from the University of Athens analyzed teeth recovered from a mass grave underneath the city and confirmed the presence of bacteria responsible for typhoid fever.

Antonine Plague (165 to 180 AD): Possibly measles or smallpox brought to the Italian peninsula by soldiers returning from the Near East, it killed a quarter of those infected, up to five million in total.

Plague of Cyprian (251–266 AD): A second outbreak of what may have been the same disease as the Antonine Plague killed (it was said) 5,000 people a day in Rome.

Plague of Justinian (541 to 549 AD): Also known as the First Plague Pandemic. This epidemic started in Egypt and reached Constantinople the following spring, killing (according to the Byzantine chronicler Procopius) 10,000 a day at its height, and perhaps 40% of the city's inhabitants. The plague went on to eliminate a quarter to half the human population of the known world and was identified in 2013 as being caused by bubonic plague.

Black Death (1331 to 1353): Also known as the Second Plague Pandemic. The total number of deaths worldwide is estimated at 75 to 200 million. Starting in Asia, the disease reached the Mediterranean and western Europe in 1348 (possibly from Italian merchants fleeing fighting in Crimea) and killed an estimated 20 to 30 million Europeans in six years: a third of the total population, and up to a half in the worst-affected urban areas. It was the first of a cycle of European plague epidemics that continued until the 18th century; there were more than 100 plague epidemics in Europe during this period, including the Great Plague of London of 1665–66, which killed approximately 100,000 people, 20% of London's population.

1817–1824 cholera pandemic: Previously endemic in the Indian subcontinent, the pandemic began in Bengal, then spread across India by 1820. The deaths of 10,000 British troops were documented; it is assumed that tens of thousands of Indians must have died. The disease spread as far as China, Indonesia (where more than 100,000 people succumbed on the island of Java alone) and the Caspian Sea before receding. Subsequent cholera pandemics during the 19th century are estimated to have caused many millions of deaths globally.
Third plague pandemic (1855–1960): Starting in China, it is estimated to have caused over 12 million deaths in total, the majority of them in India. During this pandemic, the United States saw its first outbreak: the San Francisco plague of 1900–1904. The causative bacterium, Yersinia pestis, was identified in 1894. The association with fleas, and in particular rat fleas in urban environments, led to effective control measures. The pandemic was considered to be over in 1959, when annual deaths due to plague dropped below 200. The disease is nevertheless present in the rat population worldwide and isolated human cases still occur.

The 1918–1920 Spanish flu infected half a billion people around the world, including on remote Pacific islands and in the Arctic, killing 20 to 100 million. Most influenza outbreaks disproportionately kill the very young and the very old, but the 1918 pandemic had an unusually high mortality rate for young adults. It killed more people in 25 weeks than AIDS did in its first 25 years. Mass troop movements and close quarters during World War I caused it to spread and mutate faster, and the susceptibility of soldiers to the flu may have been increased by stress, malnourishment and chemical attacks. Improved transportation systems made it easier for soldiers, sailors and civilian travelers to spread the disease.

Pandemics in indigenous populations

Beginning in the Middle Ages, encounters between European settlers and native populations in the rest of the world often introduced epidemics of extraordinary virulence. Settlers introduced novel diseases which were endemic in Europe, such as smallpox, measles, pertussis and influenza, to which the indigenous peoples had no immunity. The Europeans infected with such diseases typically carried them in a dormant state, were actively infected but asymptomatic, or had only mild symptoms. Smallpox was the most destructive disease that was brought by Europeans to the Native Americans, both in terms of morbidity and mortality. The first well-documented smallpox epidemic in the Americas began in Hispaniola in late 1518 and soon spread to Mexico. Estimates of mortality range from one-quarter to one-half of the population of central Mexico. It is estimated that over the 100 years after European arrival in 1492, the indigenous population of the Americas dropped from 60 million to only 6 million, due to a combination of disease, war, and famine. The majority of these deaths are attributed to successive waves of introduced diseases such as smallpox, measles, and typhoid fever. In Australia, smallpox was introduced by European settlers in 1789, devastating the Australian Aboriginal population and killing an estimated 50% of those infected with the disease during the first decades of colonisation. In the early 1800s, measles, smallpox and intertribal warfare killed an estimated 20,000 New Zealand Māori. In 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. Measles killed more than 40,000 Fijians, approximately one-third of the population, in 1875, and in the early 19th century devastated the Great Andamanese population. In Hokkaido, an epidemic of smallpox introduced by Japanese settlers is estimated to have killed 34% of the native Ainu population in 1845.

Concerns about future pandemics

Prevention of future pandemics requires steps to identify future causes of pandemics and to take preventive measures before the disease moves uncontrollably into the human population.
For example, influenza is a rapidly evolving disease which has caused pandemics in the past and has the potential to cause future pandemics. WHO collates the findings of 144 national influenza centres worldwide which monitor emerging flu viruses. Virus variants which are assessed as likely to represent a significant risk are identified and can then be incorporated into the next seasonal influenza vaccine program. In a press conference on 28 December 2020, Mike Ryan, head of the WHO Emergencies Program, and other officials said the current COVID-19 pandemic is "not necessarily the big one" and "the next pandemic may be more severe." They called for preparation. WHO and the UN have warned the world must tackle the cause of pandemics and not just the health and economic symptoms.

Diseases with pandemic potential

There is always a possibility that a disease which has caused epidemics in the past may return in the future. It is also possible that little-known diseases may become more virulent; in order to encourage research, a number of organisations which monitor global health have drawn up lists of diseases which may have pandemic potential.

Coronaviruses

Coronavirus diseases are a family of usually mild illnesses in humans, including those such as the common cold, that have resulted in outbreaks and pandemics such as the 1889–1890 pandemic, the 2002–2004 SARS outbreak, Middle East respiratory syndrome–related coronavirus and the COVID-19 pandemic. There is widespread concern that members of the coronavirus family, particularly SARS and MERS, have the potential to cause future pandemics. Many human coronaviruses have a zoonotic origin, with their natural reservoir in bats or rodents, leading to concerns about future spillover events. Following WHO's declaration ending the COVID-19 Public Health Emergency of International Concern, WHO Director-General Tedros Ghebreyesus stated he would not hesitate to re-declare COVID-19 a PHEIC should the global situation worsen in the coming months or years.

Influenza

Influenza was first described by the Greek physician Hippocrates in 412 BC. Since the Middle Ages, influenza pandemics have been recorded every 10 to 30 years as the virus mutates to evade immunity. Influenza is an endemic disease, with a fairly constant number of cases which vary seasonally and can, to a certain extent, be predicted. In a typical year, 5–15% of the population contracts influenza. There are 3–5 million severe cases annually, with up to 650,000 respiratory-related deaths globally each year. The 1889–1890 pandemic is estimated to have caused around a million fatalities, and the "Spanish flu" of 1918–1920 eventually infected about one-third of the world's population and caused an estimated 50 million fatalities. The Global Influenza Surveillance and Response System is a global network of laboratories whose purpose is to monitor the spread of influenza in order to provide WHO with influenza control information. More than two million respiratory specimens are tested by GISRS annually to monitor the spread and evolution of influenza viruses through a network of about 150 laboratories in 114 countries, representing 91% of the world's population.

Antibiotic resistance

Antibiotic-resistant microorganisms, which sometimes are referred to as "superbugs", may contribute to the re-emergence of diseases with pandemic potential that are currently well controlled.
For example, cases of tuberculosis that are resistant to traditionally effective treatments remain a cause of great concern to health professionals. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. China and India have the highest rates of MDR-TB. WHO reports that approximately 50 million people worldwide are infected with MDR-TB, with 79 percent of those cases resistant to three or more antibiotics. Extensively drug-resistant tuberculosis (XDR-TB) was first identified in Africa in 2006 and subsequently discovered to exist in 49 countries. During 2021, there were estimated to be around 25,000 cases of XDR-TB worldwide. In the past 20 years, other common bacteria, including Staphylococcus aureus, Serratia marcescens and Enterococcus, have developed resistance to a wide range of antibiotics. Antibiotic-resistant organisms have become an important cause of healthcare-associated (nosocomial) infections.

Climate change

There are two groups of infectious disease that may be affected by climate change. The first group are vector-borne diseases, which are transmitted via insects such as mosquitoes or ticks. Some of these diseases, such as malaria, yellow fever, and dengue fever, can have potentially severe health consequences. Climate can affect the distribution of these diseases due to the changing geographic range of their vectors, with the potential to cause serious outbreaks in areas where the disease has not previously been known. The other group comprises water-borne diseases such as cholera, dysentery, and typhoid, which may increase in prevalence due to changes in rainfall patterns.

Encroaching into wildlands

The October 2020 "era of pandemics" report by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, written by 22 experts in a variety of fields, said the anthropogenic destruction of biodiversity is paving the way to the pandemic era and could result in as many as 850,000 viruses being transmitted from animals—in particular birds and mammals—to humans. The "exponential rise" in consumption and trade of commodities such as meat, palm oil, and metals, largely facilitated by developed nations, and a growing human population are the primary drivers of this destruction. According to Peter Daszak, the chair of the group who produced the report, "there is no great mystery about the cause of the Covid-19 pandemic or any modern pandemic. The same human activities that drive climate change and biodiversity loss also drive pandemic risk through their impacts on our environment." Proposed policy options from the report include taxing meat production and consumption, cracking down on the illegal wildlife trade, removing high-risk species from the legal wildlife trade, eliminating subsidies to businesses that are harmful to the natural world, and establishing a global surveillance network. In June 2021, a team of scientists assembled by the Harvard Medical School Center for Health and the Global Environment warned that the primary cause of pandemics so far, the anthropogenic destruction of the natural world through activities including deforestation and hunting, is being ignored by world leaders.

Melting permafrost

Permafrost covers a fifth of the northern hemisphere and is made up of soil that has been kept at temperatures below freezing for long periods.
Viable samples of viruses have been recovered from thawing permafrost, after having been frozen for many years, sometimes for millennia. There is a remote possibility that a thawed pathogen could infect humans or animals.

Artificial intelligence

Experts have raised concerns that advances in artificial intelligence could facilitate the design of particularly dangerous pathogens with pandemic potential. They recommended in 2024 that governments implement mandatory oversight and testing requirements.

Economic consequences

In 2016, the Commission on a Global Health Risk Framework for the Future estimated that pandemic disease events would cost the global economy over $6 trillion in the 21st century—over $60 billion per year. The same report recommended spending $4.5 billion annually on global prevention and response capabilities to reduce the threat posed by pandemic events, a figure that the World Bank Group raised to $13 billion in a 2019 report. It has been suggested that such costs be paid from a tax on aviation rather than from, e.g., income taxes, given the crucial role of air traffic in transforming local epidemics into pandemics (air traffic being the only factor considered in state-of-the-art models of long-range disease transmission). The COVID-19 pandemic is expected to have a profound negative effect on the global economy, potentially for years to come, with substantial drops in GDP accompanied by increases in unemployment noted around the world. The slowdown of economic activity early in the COVID-19 pandemic had a profound effect on emissions of pollutants and greenhouse gases. Analysis of ice cores taken from the Swiss Alps has revealed a reduction in atmospheric lead pollution over a four-year period corresponding to the years 1349 to 1353 (when the Black Death was ravaging Europe), indicating a reduction in mining and economic activity generally.
Biology and health sciences
Concepts
Health
24278
https://en.wikipedia.org/wiki/Pear
Pear
Pears are fruits produced and consumed around the world, growing on a tree and harvested in late summer into mid-autumn. The pear tree and shrub are species of the genus Pyrus, in the family Rosaceae, bearing the pomaceous fruit of the same name. Several species of pears are valued for their edible fruit and juices, while others are cultivated as ornamental trees. The tree is medium-sized and native to coastal and mildly temperate regions of Europe, North Africa, and Asia. Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture. About 3,000 known varieties of pears are grown worldwide, which vary in both shape and taste. The fruit is consumed fresh, canned, as juice, dried, or fermented as perry.

Etymology

The word pear is probably from Germanic pera as a loanword of Vulgar Latin pira, the plural of pirum, akin to Greek apios (from Mycenaean ápisos), of Semitic origin (pirâ), meaning "fruit". The adjective pyriform or piriform means pear-shaped. The classical Latin word for a pear tree is pirus; pyrus is an alternate form of this word sometimes used in medieval Latin.

Description

The pear is native to coastal, temperate, and mountainous regions of the Old World, from Western Europe and North Africa east across Asia. Pears are medium-sized trees, reaching up to 20 m tall, often with a tall, narrow crown; a few pear species are shrubby. The leaves are alternately arranged, simple, glossy green on some species, densely silvery-hairy on some others; leaf shape varies from broad oval to narrow lanceolate. Most pears are deciduous, but one or two species in Southeast Asia are evergreen. Some pears are cold-hardy, withstanding very low winter temperatures, but many grown for agriculture are vulnerable to cold damage, and evergreen species tolerate only milder winters. The flowers are white, rarely tinted yellow or pink, and have five petals, five sepals, and numerous stamens. Like that of the related apple, the pear fruit is a pome, small in most wild species but considerably larger in some cultivated forms. The shape varies in most species from oblate or globose to the classic pyriform "pear shape" of the European pear, with an elongated basal portion and a bulbous end. The fruit is a pseudofruit composed of the receptacle, or upper end of the flower stalk (the so-called calyx tube), greatly dilated. Enclosed within its cellular flesh is the true fruit: 2–5 'cartilaginous' carpels, known colloquially as the "core". Pears and apples cannot always be distinguished by the form of the fruit; some pears look very much like some apples, e.g. the nashi pear.

History

Pear cultivation in temperate climates extends to the remotest antiquity, and evidence exists of its use as a food since prehistoric times. Many traces have been found in prehistoric pile dwellings around Lake Zurich. Pears were cultivated in China as early as 2000 BC. An article on pear tree cultivation in Spain appears in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture. The word pear, or its equivalent, occurs in all the Celtic languages, while in Slavic and other dialects differing appellations, still referring to the same thing, are found—a diversity and multiplicity of nomenclature which led Alphonse Pyramus de Candolle to infer a very ancient cultivation of the tree from the shores of the Caspian to those of the Atlantic. The pear was also cultivated by the Romans, who ate the fruits raw or cooked, just like apples.
Pliny's Natural History recommended stewing them with honey and noted three dozen varieties. The Roman cookbook De re coquinaria has a recipe for a spiced, stewed-pear patina, or soufflé. Romans also introduced the fruit to Britain. Pyrus nivalis, which has white down on the undersurface of the leaves, is chiefly used in Europe in the manufacture of perry (see also cider). Other small-fruited pears, distinguished by their early ripening and globose fruit, may be referred to P. cordata, a species found wild in southwestern Europe. The genus is thought to have originated in present-day western China in the foothills of the Tian Shan, a mountain range of Central Asia, and to have spread to the north and south along mountain chains, evolving into a diverse group of over 20 widely recognized primary species. The enormous number of varieties of the cultivated European pear (Pyrus communis subsp. communis) are likely derived from one or two wild subspecies (P. c. subsp. pyraster and P. c. subsp. caucasica), widely distributed throughout Europe and sometimes forming part of the natural vegetation of the forests. Court accounts of Henry III of England record pears shipped from La Rochelle-Normande and presented to the king by the sheriffs of the City of London. The French names of pears grown in English medieval gardens suggest that their reputation, at the least, was French; a favoured variety in the accounts was named for Saint Rieul of Senlis, Bishop of Senlis in northern France. Asian species with medium to large edible fruit include P. pyrifolia, P. ussuriensis, P. × bretschneideri, and P. × sinkiangensis. Small-fruited species, such as Pyrus calleryana, may be used as rootstocks for the cultivated forms.

Subdivision

The genus can be divided into two subgenera—Pyrus and Pashia. Subgenus Pyrus, the Occidental clade, is distributed mainly in the western portion of Eurasia and North Africa, while subgenus Pashia, the Oriental clade, is native to East Asia. The two subgenera come into contact in Xinjiang, China, and in fact P. sinkiangensis appears to have arisen from a hybridisation event between P. communis and either P. pyrifolia or P. bretschneideri, i.e. a hybridisation between a member of the Occidental clade and a member of the Oriental clade. As of December 2024, Plants of the World Online accepts the following 74 species.

Species and selected hybrids

Subgenus Pyrus

Pyrus acutiserrata
Pyrus armeniacifolia—Apricot-leaved pear
Pyrus asiae-mediae
Pyrus austriaca
Pyrus × babadagensis
Pyrus × bardoensis
Pyrus boissieriana
Pyrus bourgaeana—Iberian pear
Pyrus browiczii
Pyrus cajon
Pyrus castribonensis
Pyrus chosrovica
Pyrus ciancioi—Ciancio's pear
Pyrus communis—European pear
Pyrus communis subsp. communis—European pear (cultivars include Beurre d'Anjou, Bartlett and Beurre Bosc)
Pyrus communis subsp. caucasica (syn. Pyrus caucasica)
Pyrus communis subsp. pyraster (syn. Pyrus pyraster)
Pyrus complexa
Pyrus cordata—Plymouth pear
Pyrus cordifolia
Pyrus costata
Pyrus daralagezi
Pyrus demetrii
Pyrus elaeagrifolia—Oleaster-leaved pear
Pyrus elata
Pyrus eldarica
Pyrus fedorovii
Pyrus ferganensis
Pyrus georgica
Pyrus gergerana—Gergeranian pear
Pyrus glabra
Pyrus grossheimii
Pyrus hajastani
Pyrus hakkiarica
Pyrus hyrcana
Pyrus jacquemontiana
Pyrus × jordanovii
Pyrus ketzkhovelii
Pyrus mazanderanica
Pyrus medvedevii
Pyrus megrica
Pyrus × michauxii
Pyrus neoserrulata
Pyrus nivalis—Snow pear
Pyrus nutans
Pyrus oxyprion
Pyrus pedrottiana
Pyrus raddeana
Pyrus regelii
Pyrus sachokiana
Pyrus salicifolia—Willow-leaved pear
Pyrus sicanorum
Pyrus × sinkiangensis—thought to be an interspecific hybrid between P. × bretschneideri and Pyrus communis
Pyrus sogdiana
Pyrus sosnovskyi
Pyrus spinosa—Almond-leaved pear
Pyrus syriaca—Syrian pear
Pyrus tadshikistanica
Pyrus takhtadzhianii
Pyrus tamamschiannae
Pyrus terpoi
Pyrus theodorovii
Pyrus turcomanica
Pyrus tuskaulensis
Pyrus vallis-demonis
Pyrus × vavilovii
Pyrus voronovii
Pyrus vsevolodovii
Pyrus yaltirikii
Pyrus zangezura

Subgenus Pashia

Pyrus alpinotaiwaniana
Pyrus betulifolia—Birchleaf pear
Pyrus × bretschneideri—Chinese white pear; also classified as a subspecies of Pyrus pyrifolia
Pyrus calleryana—Callery pear
Pyrus hopeiensis
Pyrus korshinskyi
Pyrus pashia—Afghan pear
Pyrus × phaeocarpa
Pyrus pseudopashia
Pyrus pyrifolia—Nashi pear, Sha Li; tree species native to China, Japan, and Korea, also known as the Asian pear
Pyrus trilocularis
Pyrus ussuriensis—Siberian pear (also known as the Ussurian pear, Harbin pear, or Manchurian pear)
Pyrus xerophila

Cultivation

According to Pear Bureau Northwest, about 3,000 known varieties of pears are grown worldwide. The pear is normally propagated by grafting a selected variety onto a rootstock, which may be of a pear or quince variety. Quince rootstocks produce smaller trees, which is often desirable in commercial orchards or domestic gardens. For new varieties the flowers can be cross-bred to preserve or combine desirable traits. The fruit of the pear is produced on spurs, which appear on shoots more than one year old. There are four species which are primarily grown for edible fruit production: the European pear Pyrus communis subsp. communis, cultivated mainly in Europe and North America; the Chinese white pear (bai li) Pyrus × bretschneideri; the Chinese pear Pyrus ussuriensis; and the Nashi pear Pyrus pyrifolia (also known as Asian pear or apple pear), grown mainly in eastern Asia. There are thousands of cultivars of these species. A species grown in western China, P. sinkiangensis, and P. pashia, grown in southern China and south Asia, are also produced to a lesser degree. Other species are used as rootstocks for European and Asian pears and as ornamental trees. Pear wood is close-grained and has been used as a specialized timber for fine furniture and making the blocks for woodcuts. The Manchurian or Ussurian pear, Pyrus ussuriensis (which produces unpalatable fruit primarily used for canning), has been crossed with Pyrus communis to breed hardier pear cultivars. The Bradford pear (Pyrus calleryana 'Bradford') is widespread as an ornamental tree in North America, where it has become invasive in some regions. It is also used as a blight-resistant rootstock for Pyrus communis fruit orchards. The willow-leaved pear (Pyrus salicifolia) is grown for its silvery leaves, flowers, and "weeping" form.
Cultivars

The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Beth'
'Beurré Hardy'
'Beurré Superfin'
'Concorde'
'Conference'
'Doyenné du Comice'
'Joséphine de Malines'
The purely decorative cultivar P. salicifolia 'Pendula', with pendulous branches and silvery leaves, has also won the award.

Harvest

Summer and autumn cultivars of Pyrus communis, being climacteric fruits, are gathered before they are fully ripe, while they are still green, but snap off when lifted. Certain other pears, including Pyrus pyrifolia and P. × bretschneideri, have both climacteric and non-climacteric varieties.

Production

In 2022, world production of pears was 26 million tonnes, led by China with 73% of the total. About 48% of the Southern Hemisphere's pears are produced in the Patagonian valley of Río Negro in Argentina.

Storage

Pears may be stored at room temperature until ripe. Pears are ripe when the flesh around the stem gives to gentle pressure. Ripe pears are optimally stored refrigerated, uncovered, in a single layer, where they have a shelf life of 2 to 3 days. Pears ripen at room temperature, and ripening is accelerated by the gas ethylene: if pears are placed next to bananas in a fruit bowl, the ethylene emitted by the bananas causes the pears to ripen. Refrigeration will slow further ripening. According to Pear Bureau Northwest, most varieties show little color change as they ripen, though the skin on Bartlett pears changes from green to yellow as they ripen.

Uses

Cooking

Pears are consumed fresh, canned, as juice, and dried. The juice can also be used in jellies and jams, usually in combination with other fruits, including berries. Fermented pear juice is called perry or pear cider and is made in a way that is similar to how cider is made from apples. Perry can be distilled to produce an eau de vie de poire, a colorless, unsweetened fruit brandy. Pear purée is used to manufacture snack foods such as Fruit by the Foot and Fruit Roll-Ups. The culinary or cooking pear is green but dry and hard, and only edible after several hours of cooking. Two Dutch cultivars are Gieser Wildeman (a sweet variety) and Saint Remy (slightly sour).

Timber

Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture, and was used for making the carved blocks for woodcuts. It is also used for wood carving, and as firewood to produce aromatic smoke for smoking meat or tobacco. Pear wood is valued for kitchen spoons, scoops and stirrers, as it does not contaminate food with color, flavor or smell, and resists warping and splintering despite repeated soaking and drying cycles. Lincoln describes it as "a fairly tough, very stable wood... (used for) carving... brushbacks, umbrella handles, measuring instruments such as set squares and T-squares... recorders... violin and guitar fingerboards and piano keys... decorative veneering." Pearwood is the favored wood for architects' rulers because it does not warp. It is similar to the wood of its relative, the apple tree (Malus domestica), and is used for many of the same purposes.

Nutrition

Raw pear is 84% water and 15% carbohydrates, and contains negligible protein and fat. In a reference amount, raw pear supplies a modest amount of food energy, a moderate amount of dietary fiber, and no micronutrients in significant amounts.

Research

A 2019 review found preliminary evidence for the potential of pear consumption to favorably affect cardiovascular health.
Cultural references

Pears grow in the sublime orchard of Alcinous in the Odyssey (vii): "Therein grow trees, tall and luxuriant, pears and pomegranates and apple-trees with their bright fruit, and sweet figs, and luxuriant olives. Of these the fruit perishes not nor fails in winter or in summer, but lasts throughout the year." "A Partridge in a Pear Tree" is the first gift in the cumulative song "The Twelve Days of Christmas". The pear tree was an object of particular veneration (as was the walnut) in the tree worship of the Nakh peoples of the North Caucasus (see Vainakh mythology and Ingushetia), the best-known of the Vainakh peoples today being the Chechens of Chechnya. Pear and walnut trees were held to be the sacred abodes of beneficent spirits in pre-Islamic Chechen religion and, for this reason, it was forbidden to fell them.
Biology and health sciences
Rosales
null
24304
https://en.wikipedia.org/wiki/Password
Password
A password, sometimes called a passcode, is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to be memorized, but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical. Using the terminology of the NIST Digital Identity Guidelines, the secret is held by a party called the claimant, while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity. In general, a password is an arbitrary string of characters including letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called a personal identification number (PIN). Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called a passphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.

History

Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password or watchword, and would only allow a person or group to pass if they knew the password. Polybius describes the system for the distribution of watchwords in the Roman military as follows:

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it—takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next to him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.

Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example, in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—flash—which was presented as a challenge, and answered with the correct response—thunder. The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.
Passwords have been used with computers since the earliest days of computing. The Compatible Time-Sharing System (CTSS), an operating system introduced at MIT in 1961, was the first computer system to implement password login. CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy." In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks. In modern times, user names and passwords are commonly used by people during a log in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.

Choosing a secure and memorable password

The easier a password is for the owner to remember, the easier it generally is for an attacker to guess. However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system. Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters. In The Memorability and Security of Passwords, Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method, but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method. However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions that are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.
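The "128 times harder" figure can be made concrete with a back-of-the-envelope entropy estimate. This is a minimal sketch that assumes passwords are chosen uniformly at random, which real users rarely do; the function name is ours, for illustration only:

```python
from math import log2

def naive_entropy_bits(length: int, alphabet_size: int) -> float:
    """Upper-bound entropy of a uniformly random password: each of the
    `length` positions contributes log2(alphabet_size) bits."""
    return length * log2(alphabet_size)

# Adding mixed case to a 7-letter password at most doubles the choices
# per letter (26 -> 52), adding one bit per letter: 7 bits, a factor
# of 2**7 = 128 in the size of the search space.
print(f"{naive_entropy_bits(7, 26):.1f} bits")  # ~32.9 bits, lowercase only
print(f"{naive_entropy_bits(7, 52):.1f} bits")  # ~39.9 bits, mixed case
```

Since users tend to capitalise predictably (often only the first letter), the real gain is usually far below this uniform-choice upper bound, which is the point the passage above makes.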
In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):
The name of a pet, child, family member, or significant other
Anniversary dates and birthdays
Birthplace
Name of a favorite holiday
Something related to a favorite sports team
The word "password"

Alternatives to memorization

Traditional advice to memorize passwords and never write them down has become a challenge because of the sheer number of passwords users of computers and the internet are expected to maintain. One survey concluded that the average user has around 100 passwords. To manage the proliferation of passwords, some users employ the same password for multiple accounts, a dangerous practice since a data breach in one account could compromise the rest. Less risky alternatives include the use of password managers, single sign-on systems and simply keeping paper lists of less critical passwords. Such practices can reduce the number of passwords that must be memorized, such as the password manager's master password, to a more manageable number.

Factors in the security of a password system

The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes. Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them. Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token. Less extreme measures include extortion, rubber hose cryptanalysis, and side channel attacks. Some specific password management issues that must be considered when thinking about, choosing, and handling a password follow.

Rate at which an attacker can try guessed passwords

The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts, a technique known as throttling (sketched in the example below). In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords if the passwords have been well chosen and are not easily guessed. Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords, guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running and the strength of the algorithm used to create the hash.
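As a minimal sketch of the throttling idea described above (the class name, attempt threshold, and delay are illustrative assumptions, not any real system's API):

```python
import time

class Throttle:
    """Illustrative login throttling: after a small number of consecutive
    failures for an account, force a delay before the next attempt."""

    def __init__(self, max_free_attempts: int = 3, delay_seconds: float = 5.0):
        self.max_free_attempts = max_free_attempts
        self.delay_seconds = delay_seconds
        self.failures = {}  # username -> consecutive failure count

    def attempt(self, username: str, password_ok: bool) -> bool:
        count = self.failures.get(username, 0)
        if count >= self.max_free_attempts:
            time.sleep(self.delay_seconds)  # slows online guessing to a crawl
        if password_ok:
            self.failures[username] = 0     # reset on success
            return True
        self.failures[username] = count + 1
        return False
```

A real system would persist the counters and typically combine throttling with the cumulative guess limits described in the next section; note also that throttling only constrains online attacks, not offline cracking of a stolen hash file.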
Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high-rate guessing, known as password cracking. Lists of common passwords are widely available and can make password attacks very efficient. Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks, in a technique known as key stretching.

Limits on the number of password guesses

An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5), and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner. Attackers may conversely use knowledge of this mitigation to implement a denial of service attack against the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage via social engineering.

Form of stored passwords

Some computer systems store user passwords as plaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well. More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure do not store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function. Roger Needham invented the now-common approach of storing only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users. MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as PBKDF2. The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file. The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted.
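As a minimal sketch of the salted-and-hashed approach, here is an illustrative use of PBKDF2 via Python's standard hashlib; the function names and iteration count are assumptions for the example, not a prescription:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, hash) using PBKDF2-HMAC-SHA256. The random per-user
    salt keeps identical passwords from producing identical stored hashes,
    and the high iteration count is the key stretching described above."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 600_000) -> bool:
    """Recompute the hash from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```

The iteration count directly sets the cost of each offline guess; raising it slows attackers and legitimate verifications alike, so it is chosen as high as the login latency budget allows.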
If it is hashed but not salted, then it is vulnerable to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted, then an attacker who obtains the decryption key along with the file needs to do no cracking, while an attacker who fails to obtain the key cannot crack the passwords at all. Thus, of the common storage formats for passwords, only when passwords have been salted and hashed is cracking both necessary and possible. If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e., trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet. The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns. A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems. The crypt algorithm used a 12-bit salt value so that each user's hash was unique and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks. The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations. A poorly designed hash function can make attacks feasible even if a strong password is chosen. LM hash is a widely deployed and insecure example. Methods of verifying a password over a network Simple transmission of the password Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packeted data over the Internet, anyone able to watch the packets containing the logon information can snoop with a very low probability of detection. Email is sometimes used to distribute passwords, but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems. Using client-side encryption will only protect transmission from the mail handling system server to the client machine.
Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text. Transmission through encrypted channels The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use. Hash-based challenge–response methods There is a conflict between stored hashed-passwords and hash-based challenge–response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash. Zero-knowledge password proofs Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it. Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the un-hashed password is required to gain access. Procedures for changing passwords Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database and if the new password is given to a compromised employee, little is gained. Some websites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability. Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened). Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers. Password longevity "Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst. 
There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable. Because of these issues, there is some debate as to whether password aging is effective. Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse. Number of users per password Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also much less convenient to change because many people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Separate logins are also often used for accountability, for example to know who changed a piece of data. Password security architecture Common techniques used to improve the security of computer systems protected by a password include: Not displaying the password on the display screen as it is being entered or obscuring it as it is typed by using asterisks (*) or bullets (•). Allowing passwords of adequate length. (Some legacy operating systems, including early versions of Unix and Windows, limited passwords to an 8 character maximum, reducing security.) Requiring users to re-enter their password after a period of inactivity (a semi log-off policy). Enforcing a password policy to increase password strength and security. Assigning randomly chosen passwords. Requiring minimum password lengths. Some systems require characters from various character classes in a password—for example, "must have at least one uppercase and at least one lowercase letter". However, all-lowercase passwords are more secure per keystroke than mixed capitalization passwords. Employ a password blacklist to block the use of weak, easily guessed passwords Providing an alternative to keyboard entry (e.g., spoken passwords, or biometric identifiers). Requiring more than one authentication system, such as two-factor authentication (something a user has and something the user knows). Using encrypted tunnels or password-authenticated key agreement to prevent access to transmitted passwords via network attacks Limiting the number of allowed failures within a given time period (to prevent repeated password guessing). After the limit is reached, further attempts will fail (including correct password attempts) until the beginning of the next time period. However, this is vulnerable to a form of denial of service attack. Introducing a delay between password submission attempts to slow down automated password guessing programs. Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result. Password reuse It is common practice amongst computer users to reuse the same password on multiple sites. 
This presents a substantial security risk, because an attacker needs to compromise only a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimized by using mnemonic techniques, writing passwords down on paper, or using a password manager. It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts. Forbes has made a similar argument, advising readers not to change passwords as often as many "experts" recommend, because of the same limitations of human memory. Writing down passwords on paper Historically, many security experts asked people to memorize their passwords: "Never write down a password". More recently, many security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet. Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single master password. After death To facilitate estate administration, it is helpful for people to provide a mechanism for their passwords to be communicated to the persons who will administer their affairs in the event of their death. Should a record of accounts and passwords be prepared, care must be taken to ensure that the records are secure, to prevent theft or fraud. Multi-factor authentication Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code that must be entered in addition to a password. More sophisticated factors include such things as hardware tokens and biometric security. Password rotation Password rotation is a policy that is commonly implemented with the goal of enhancing computer security. In 2019, Microsoft stated that the practice is "ancient and obsolete". Password rules Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters), and prohibited elements (e.g., use of one's own name, date of birth, address, or telephone number). Some governments have national authentication frameworks that define requirements for user authentication to government services, including requirements for passwords. Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by the National Institute of Standards and Technology (NIST), authored by Bill Burr.
It originally proposed the practice of using numbers, obscure characters and capital letters, and updating regularly. In a 2017 article in The Wall Street Journal, Burr said he regretted these proposals and had made a mistake in recommending them. According to a 2017 rewrite of this NIST report, many websites have rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts. NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd". A user prevented from using the password "password" may simply choose "Password1" if required to include a number and uppercase letter. Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack. Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren't fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good." Pieris Tsokkis and Eliana Stavrou were able to identify some bad password construction strategies through their research and development of a password generator tool. They came up with eight categories of password construction strategies based on exposed password lists, password cracking tools, and online reports citing the most used passwords. These categories include user-related information, keyboard combinations and patterns, placement strategy, word processing, substitution, capitalization, appended dates, and combinations of the previous categories. Password cracking Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested. Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy. Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain, some of which use password design vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users. Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort.
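To make the mechanics of a dictionary attack concrete, the sketch below mounts a toy offline attack against a stolen salted PBKDF2 verifier. Everything here is a made-up illustration: the wordlist, the salt, and the iteration count are assumptions, and real cracking tools use optimized native code and vastly larger lists. The iteration count is exactly the key-stretching cost discussed earlier; raising it makes every guess proportionally more expensive.

    import hashlib

    def make_verifier(password, salt, iterations=100_000):
        # Salted, iterated hash of the kind a careful system would store.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

    salt = b"per-user-random-salt"                 # would be random in practice
    stolen_hash = make_verifier("letmein", salt)   # the verifier the attacker stole

    wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]
    for guess in wordlist:
        # Hash each candidate and compare with the stolen verifier.
        if make_verifier(guess, salt) == stolen_hash:
            print("cracked:", guess)               # prints: cracked: letmein
            break

Because the salt differs per user, the attacker must repeat this work for every account rather than reusing one precomputed table, which is precisely why salting prevents cracking efforts from scaling across all users.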
According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006. He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.) Incidents On 16 July 1998, CERT reported an incident where an attacker had found 186,126 encrypted passwords. At the time the attacker was discovered, 47,642 passwords had already been cracked. In September 2001, after the deaths of 658 of their 960 New York employees in the September 11 attacks, financial services firm Cantor Fitzgerald, with Microsoft's help, broke the passwords of deceased employees to gain access to files needed for servicing client accounts. Technicians used brute-force attacks, and interviewers contacted families to gather personalized information that might reduce the search time for weaker passwords. In December 2009, a major password breach of the Rockyou.com website occurred that led to the release of 32 million passwords. The hacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the Internet. Passwords were stored in cleartext in the database and were extracted through a SQL injection vulnerability. The Imperva Application Defense Center (ADC) did an analysis on the strength of the passwords. In June 2011, NATO (North Atlantic Treaty Organization) experienced a security breach that led to the public release of first and last names, usernames, and passwords for more than 11,000 registered users of their e-bookshop. The data was leaked as part of Operation AntiSec, a movement that includes Anonymous, LulzSec, as well as other hacking groups and individuals. The aim of AntiSec is to expose personal, sensitive, and restricted information to the world, using any means necessary. On 11 July 2011, Booz Allen Hamilton, a consulting firm that does work for the Pentagon, had their servers hacked by Anonymous and leaked the same day. "The leak, dubbed 'Military Meltdown Monday,' includes 90,000 logins of military personnel—including personnel from USCENTCOM, SOCOM, the Marine corps, various Air Force facilities, Homeland Security, State Department staff, and what looks like private sector contractors." The leaked passwords turned out to be hashed with SHA1, and were later cracked and analyzed by the ADC team at Imperva, revealing that even military personnel look for shortcuts and ways around the password requirements. On 5 June 2012, a security breach at LinkedIn resulted in 117 million stolen passwords and emails. Millions of the passwords were later posted on a Russian forum. A hacker named "Peace" later offered additional passwords for sale. LinkedIn undertook a mandatory reset of all compromised accounts. Alternatives to passwords for authentication The numerous ways in which permanent or semi-permanent passwords can be compromised have prompted the development of other techniques. Some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.
A 2012 paper examines why passwords have proved so hard to supplant (despite numerous predictions that they would soon be a thing of the past); in examining thirty representative proposed replacements with respect to security, usability and deployability, it concludes that "none even retains the full set of benefits that legacy passwords already provide." Single-use passwords. Having passwords that are only valid once makes many potential attacks ineffective. Most users find single-use passwords extremely inconvenient. They have, however, been widely implemented in personal online banking, where they are known as Transaction Authentication Numbers (TANs). As most home users only perform a small number of transactions each week, the single-use issue has not led to intolerable customer dissatisfaction in this case. Time-synchronized one-time passwords are similar in some ways to single-use passwords, but the value to be entered is displayed on a small (generally pocketable) item and changes every minute or so. Passwordless authentication, in which a user can log in to a computer system without entering (and having to remember) a password or any other knowledge-based secret. In most common implementations users are asked to enter their public identifier (username, phone number, email address, etc.) and then complete the authentication process by providing a secure proof of identity through a registered device or token. Most implementations rely on public-key cryptography infrastructure, where the public key is provided during registration to the authenticating service (remote server, application or website) while the private key is kept on a user's device (PC, smartphone or an external security token) and can be accessed only by providing a biometric signature or another authentication factor which is not knowledge-based. PassWindow one-time passwords are used as single-use passwords, but the dynamic characters to be entered are visible only when a user superimposes a unique printed visual key over a server-generated challenge image shown on the user's screen. Access controls based on public-key cryptography, e.g., ssh. The necessary keys are usually too large to memorize (but see the Passmaze proposal) and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even floppy disk. The private key may be stored with a cloud service provider, and activated by the use of a password or two-factor authentication. Biometric methods promise authentication based on unalterable personal characteristics, but have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example, the gummy fingerprint spoof demonstration, and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control as a compromised access token is necessarily insecure. Single sign-on technology is claimed to eliminate the need for having multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that private access control information passed among systems enabling single sign-on is secure against attack. As yet, no satisfactory standard has been developed. Envaulting technology is a password-free way to secure data on removable storage devices such as USB flash drives.
Instead of user passwords, access control is based on the user's access to a network resource. Non-text-based passwords, such as graphical passwords or mouse-movement based passwords. Graphical passwords are an alternative means of authentication for log-in intended to be used in place of conventional passwords; they use images, graphics or colours instead of letters, digits or special characters. One system requires users to select a series of faces as a password, utilizing the human brain's ability to recall faces easily. In some implementations the user is required to pick from a series of images in the correct sequence in order to gain access. Another graphical password solution creates a one-time password using a randomly generated grid of images. Each time the user is required to authenticate, they look for the images that fit their pre-chosen categories and enter the randomly generated alphanumeric character that appears in the image to form the one-time password. So far, graphical passwords are promising, but are not widely used. Studies on this subject have been made to determine their usability in the real world. While some believe that graphical passwords would be harder to crack, others suggest that people will be just as likely to pick common images or sequences as they are to pick common passwords. 2D Key (2-Dimensional Key) is a 2D matrix-like key input method whose key styles include multiline passphrases, crosswords, and ASCII/Unicode art, with optional textual semantic noise. It aims to create large passwords/keys beyond 128 bits in order to realize MePKC (Memorizable Public-Key Cryptography), public-key cryptography using a fully memorizable private key, building on current private-key management technologies such as encrypted private keys, split private keys, and roaming private keys. Cognitive passwords use question and answer cue/response pairs to verify identity. "The password is dead" "The password is dead" is a recurring idea in computer security. The reasons given often include reference to the usability as well as security problems of passwords. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by numerous people at least since 2004. Alternatives to passwords include biometrics, two-factor authentication or single sign-on, Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals. However, in spite of these predictions and efforts to replace them, passwords are still the dominant form of authentication on the web. In "The Persistence of Passwords", Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead. They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used." Following this, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security. Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, while every scheme does worse than passwords on deployability.
The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery."
PNG
Portable Network Graphics (PNG, officially pronounced "ping", though colloquially often spelled out as the letters "P-N-G") is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF)—unofficially, the initials PNG stood for the recursive acronym "PNG's not GIF". PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore, non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of chunks, encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083. PNG files have the ".png" file extension and the "image/png" MIME media type. PNG was published as an informational RFC 2083 in March 1997 and as an ISO/IEC 15948 standard in 2004. History and development The motivation for creating the PNG format was the announcement on 28 December 1994, that implementations of the Graphics Interchange Format (GIF) would have to pay royalties to Unisys due to its patent on the Lempel–Ziv–Welch (LZW) data compression algorithm used in GIF. This led to a flurry of criticism from Usenet users. One of them was Thomas Boutell, who on 4 January 1995 posted a precursory discussion thread on the Usenet newsgroup "comp.graphics" in which he devised a plan for a free alternative to GIF. Other users in that thread put forth many propositions that would later be part of the final file format. Oliver Fromme, author of the popular JPEG viewer QPEG, proposed the name PING (a recursive acronym for "PING is not GIF"), which eventually became PNG, as well as the .png extension. Other suggestions later implemented included the deflate compression algorithm and 24-bit color support, the lack of the latter in GIF also motivating the team to create their file format. The group would become known as the PNG Development Group, and as the discussion rapidly expanded, it later used a mailing list associated with a CompuServe forum. The full specification of PNG was released under the approval of W3C on 1 October 1996, and later as RFC 2083 on 15 January 1997. The specification was revised on 31 December 1998 as version 1.1, which addressed technical problems for gamma and color correction. Version 1.2, released on 11 August 1999, added the iTXt chunk as the specification's only change, and a reformatted version of 1.2 was released as a second edition of the W3C standard on 10 November 2003, and as an International Standard (ISO/IEC 15948:2004) on 3 March 2004. Although GIF allows for animation, it was initially decided that PNG should be a single-image format. In 2001, the developers of PNG published the Multiple-image Network Graphics (MNG) format, with support for animation. MNG achieved moderate application support, but not enough among mainstream web browsers and no usage among web site designers or publishers. In 2008, certain Mozilla developers published the Animated Portable Network Graphics (APNG) format with similar goals. APNG is a format that is natively supported by Gecko- and Presto-based web browsers and is also commonly used for thumbnails on Sony's PlayStation Portable system (using the normal PNG file extension).
In 2017, Chromium-based browsers adopted APNG support. In January 2020, Microsoft Edge became Chromium-based, thus inheriting support for APNG. With this, all major browsers now support APNG. PNG Working Group The original PNG specification was authored by an ad hoc group of computer graphics experts and enthusiasts. Discussions and decisions about the format were conducted by email. The original authors listed on RFC 2083 are: Editor: Thomas Boutell Contributing Editor: Tom Lane Authors (in alphabetical order by last name): Mark Adler, Thomas Boutell, Christian Brunschen, Adam M. Costello, Lee Daniel Crocker, Andreas Dilger, Oliver Fromme, Jean-loup Gailly, Chris Herborth, Aleks Jakulin, Neal Kettler, Tom Lane, Alexander Lehmann, Chris Lilley, Dave Martindale, Owen Mortensen, Keith S. Pickens, Robert P. Poole, Glenn Randers-Pehrson, Greg Roelofs, Willem van Schaik, Guy Schalnat, Paul Schmidt, Tim Wegner, Jeremy Wohl File format File header A PNG file starts with an eight-byte signature, whose values in hexadecimal are 89 50 4E 47 0D 0A 1A 0A. "Chunks" within the file After the header comes a series of chunks, each of which conveys certain information about the image. Chunks declare themselves as critical or ancillary, and a program encountering an ancillary chunk that it does not understand can safely ignore it. This chunk-based storage layer structure, similar in concept to a container format or to Amiga's IFF, is designed to allow the PNG format to be extended while maintaining compatibility with older versions—it provides forward compatibility, and this same file structure (with different signature and chunks) is used in the associated MNG, JNG, and APNG formats. A chunk consists of four parts: length (4 bytes, big-endian), chunk type/name (4 bytes), chunk data (length bytes) and CRC (cyclic redundancy code/checksum; 4 bytes). The CRC is a network-byte-order CRC-32 computed over the chunk type and chunk data, but not the length. Chunk types are given a four-letter case-sensitive ASCII type/name; compare FourCC. The case of the different letters in the name (bit 5 of the numeric value of the character) is a bit field that provides the decoder with some information on the nature of chunks it does not recognize. The case of the first letter indicates whether the chunk is critical or not. If the first letter is uppercase, the chunk is critical; if not, the chunk is ancillary. Critical chunks contain information that is necessary to read the file. If a decoder encounters a critical chunk it does not recognize, it must abort reading the file or supply the user with an appropriate warning. The case of the second letter indicates whether the chunk is "public" (either in the specification or the registry of special-purpose public chunks) or "private" (not standardized). Uppercase is public and lowercase is private. This ensures that public and private chunk names can never conflict with each other (although two private chunk names could conflict). The third letter must be uppercase to conform to the PNG specification. It is reserved for future expansion. Decoders should treat a chunk with a lower case third letter the same as any other unrecognized chunk. The case of the fourth letter indicates whether the chunk is safe to copy by editors that do not recognize it. If lowercase, the chunk may be safely copied regardless of the extent of modifications to the file. If uppercase, it may only be copied if the modifications have not touched any critical chunks.
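As a rough illustration of this layout, the sketch below walks a PNG's chunks and verifies each CRC. It is a minimal reader for illustration only, not a full validator: it assumes a well-formed file and does not, for instance, enforce chunk ordering or interpret any chunk contents.

    import struct
    import zlib

    PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"   # the eight-byte file signature

    def list_chunks(path):
        with open(path, "rb") as f:
            if f.read(8) != PNG_SIGNATURE:
                raise ValueError("not a PNG file")
            while True:
                # Each chunk: 4-byte big-endian length, 4-byte type name.
                length, ctype = struct.unpack(">I4s", f.read(8))
                data = f.read(length)
                (crc,) = struct.unpack(">I", f.read(4))
                # The CRC-32 covers the chunk type and data, not the length.
                if crc != zlib.crc32(ctype + data):
                    raise ValueError("CRC mismatch in chunk " + ctype.decode())
                critical = ctype[:1].isupper()   # case bit of the first letter
                print(ctype.decode("ascii"), length,
                      "critical" if critical else "ancillary")
                if ctype == b"IEND":             # last chunk of every PNG
                    break

Running this against any valid PNG would print a sequence such as IHDR, then ancillary chunks, then IDAT chunks, then IEND.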
Critical chunks A decoder must be able to interpret critical chunks to read and render a PNG file. IHDR must be the first chunk; it contains, in this order: the image's width (4 bytes), height (4 bytes), bit depth (1 byte, values 1, 2, 4, 8, or 16), color type (1 byte, values 0, 2, 3, 4, or 6), compression method (1 byte, value 0), filter method (1 byte, value 0), and interlace method (1 byte, value 0 for "no interlace" or 1 for "Adam7 interlace"), for 13 data bytes in total. As stated by the World Wide Web Consortium, bit depth is defined as "the number of bits per sample or per palette index (not per pixel)". PLTE contains the palette: a list of colors. IDAT contains the image, which may be split among multiple IDAT chunks. Such splitting slightly increases the file size, but makes it possible to generate a PNG in a streaming manner. The IDAT chunk contains the actual image data, which is the output stream of the compression algorithm. IEND marks the image end; the data field of the IEND chunk is empty (0 bytes). The PLTE chunk is essential for color type 3 (indexed color). It is optional for color types 2 and 6 (truecolor and truecolor with alpha) and it must not appear for color types 0 and 4 (grayscale and grayscale with alpha). Ancillary chunks Other image attributes that can be stored in PNG files include gamma values, background color, and textual metadata information. PNG also supports color management through the inclusion of ICC color profiles. bKGD gives the default background color. It is intended for use when there is no better choice available, such as in standalone image viewers (but not web browsers; see below for more details). cHRM gives the chromaticity coordinates of the display primaries and white point. cICP specifies the color space, transfer function and matrix coefficients as defined in ITU-T H.273. It is intended for use with HDR imagery without requiring a color profile. dSIG is for storing digital signatures. eXIf stores Exif metadata. gAMA specifies gamma. The gAMA chunk contains only 4 bytes, and its value represents the gamma value multiplied by 100,000; for example, the gamma value 1/3.4 calculates to 29411.7647059 ((1/3.4)*(100,000)) and is converted to an integer (29412) for storage. hIST can store the histogram, or total amount of each color in the image. iCCP contains an ICC color profile. iTXt contains a keyword and UTF-8 text, with encodings for possible compression and translations marked with a language tag. The Extensible Metadata Platform (XMP) uses this chunk with the keyword 'XML:com.adobe.xmp'. pHYs holds the intended pixel size (or pixel aspect ratio); the pHYs chunk contains "Pixels per unit, X axis" (4 bytes), "Pixels per unit, Y axis" (4 bytes), and "Unit specifier" (1 byte) for a total of 9 bytes. sBIT (significant bits) indicates the color-accuracy of the source data; this chunk contains a total of between 1 and 5 bytes, depending on the color type. sPLT suggests a palette to use if the full range of colors is unavailable. sRGB indicates that the standard sRGB color space is used; the sRGB chunk contains only 1 byte, which is used for "rendering intent" (4 values—0, 1, 2, and 3—are defined for rendering intent). sTER is a stereo-image indicator chunk for stereoscopic images. tEXt can store text that can be represented in ISO/IEC 8859-1, with one key-value pair for each chunk. The "key" must be between one and 79 characters long. The separator between key and value is a null character.
The "value" can be any length, including zero up to the maximum permissible chunk size minus the length of the keyword and separator. Neither "key" nor "value" can contain null character. Leading or trailing spaces are also disallowed. tIME stores the time that the image was last changed. tRNS contains transparency information. For indexed images, it stores alpha channel values for one or more palette entries. For truecolor and grayscale images, it stores a single pixel value that is to be regarded as fully transparent. zTXt contains compressed text (and a compression method marker) with the same limits as tEXt. The lowercase first letter in these chunks indicates that they are not needed for the PNG specification. The lowercase last letter in some chunks indicates that they are safe to copy, even if the application concerned does not understand them. Pixel format Pixels in PNG images are numbers that may be either indices of sample data in the palette or the sample data itself. The palette is a separate table contained in the PLTE chunk. Sample data for a single pixel consists of a tuple of between one and four numbers. Whether the pixel data represents palette indices or explicit sample values, the numbers are referred to as channels and every number in the image is encoded with an identical format. The permitted formats encode each number as an unsigned integer value using a fixed number of bits, referred to in the PNG specification as the bit depth. Notice that this is not the same as color depth, which is commonly used to refer to the total number of bits in each pixel, not each channel. The permitted bit depths are summarized in the table along with the total number of bits used for each pixel. The number of channels depends on whether the image is grayscale or color and whether it has an alpha channel. PNG allows the following combinations of channels, called the color type. The color type is specified as an 8-bit value however only the low three bits are used and, even then, only the five combinations listed above are permitted. So long as the color type is valid it can be considered as a bit field as summarized in the adjacent table: bit value 1: the image data stores palette indices. This is only valid in combination with bit value 2; bit value 2: the image samples contain three channels of data encoding trichromatic colors, otherwise the image samples contain one channel of data encoding relative luminance, bit value 4: the image samples also contain an alpha channel expressed as a linear measure of the opacity of the pixel. This is not valid in combination with bit value 1. With indexed color images, the palette always stores trichromatic colors at a depth of 8 bits per channel (24 bits per palette entry). Additionally, an optional list of 8-bit alpha values for the palette entries may be included; if not included, or if shorter than the palette, the remaining palette entries are assumed to be opaque. The palette must not have more entries than the image bit depth allows for, but it may have fewer (for example, if an image with 8-bit pixels only uses 90 colors then it does not need palette entries for all 256 colors). The palette must contain entries for all the pixel values present in the image. The standard allows indexed color PNGs to have 1, 2, 4 or 8 bits per pixel; grayscale images with no alpha channel may have 1, 2, 4, 8 or 16 bits per pixel. Everything else uses a bit depth per channel of either 8 or 16. The combinations this allows are given in the table above. 
The standard requires that decoders can read all supported color formats, but many image editors can only produce a small subset of them. Transparency of image PNG offers a variety of transparency options. With true-color and grayscale images either a single pixel value can be declared as transparent or an alpha channel can be added (enabling any percentage of partial transparency to be used). For paletted images, alpha values can be added to palette entries. The number of such values stored may be less than the total number of palette entries, in which case the remaining entries are considered fully opaque. The scanning of pixel values for binary transparency is supposed to be performed before any color reduction to avoid pixels becoming unintentionally transparent. This is most likely to pose an issue for systems that can decode 16-bits-per-channel images (as is required for compliance with the specification) but only output at 8 bits per channel (the norm for all but the highest end systems). Alpha storage can be "associated" ("premultiplied") or "unassociated", but PNG standardized on "unassociated" ("non-premultiplied") alpha, which means that imagery is not alpha encoded; the emissions represented in RGB are not the emissions at the pixel level. This means that the over operation will multiply the RGB emissions by the alpha, and cannot represent emission and occlusion properly. Compression PNG uses a two-stage compression process: pre-compression: filtering (prediction) compression: DEFLATE PNG uses DEFLATE, a non-patented lossless data compression algorithm involving a combination of LZ77 and Huffman coding. Permissively licensed DEFLATE implementations, such as zlib, are widely available. Compared to formats with lossy compression such as JPEG, choosing a compression setting higher than average delays processing, but often does not result in a significantly smaller file size. Filtering Before DEFLATE is applied, the data is transformed via a prediction method: a single filter method is used for the entire image, while for each image line, a filter type is chosen to transform the data to make it more efficiently compressible. The filter type used for a scanline is prepended to the scanline to enable inline decompression. There is only one filter method in the current PNG specification (denoted method 0), and thus in practice the only choice is which filter type to apply to each line. For this method, the filter predicts the value of each pixel based on the values of previous neighboring pixels, and subtracts the predicted color of the pixel from the actual value, as in DPCM. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above, since the differences from prediction will generally be clustered around 0, rather than spread over all possible image values. This is particularly important in relating separate rows, since DEFLATE has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes. There are five filter types for filter method 0; each type predicts the value of each byte (of the image data before filtering) based on the corresponding byte of the pixel to the left (A), the pixel above (B), and the pixel above and to the left (C) or some combination thereof, and encodes the difference between the predicted value and the actual value. 
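A minimal sketch of these five predictors, including the Paeth predictor as defined in the specification, follows; here a, b and c are the corresponding bytes of the left, above and upper-left pixels, taken as 0 where those neighbours fall outside the image.

    def predict(filter_type, a, b, c):
        if filter_type == 0:               # None
            return 0
        if filter_type == 1:               # Sub: predict from the left
            return a
        if filter_type == 2:               # Up: predict from above
            return b
        if filter_type == 3:               # Average of left and above
            return (a + b) // 2
        if filter_type == 4:               # Paeth
            p = a + b - c                  # initial estimate
            pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
            if pa <= pb and pa <= pc:      # ties favour a, then b
                return a
            return b if pb <= pc else c
        raise ValueError("unknown filter type")

    # A filtered byte is (raw - predict(...)) % 256; unfiltering adds the
    # prediction back, modulo 256.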
Filters are applied to byte values, not pixels; pixel values may be one or two bytes, or several values per byte, but never cross byte boundaries. The five filter types are None (0), Sub (1), Up (2), Average (3), and Paeth (4); the Paeth filter is based on an algorithm by Alan W. Paeth. Compare to the version of DPCM used in lossless JPEG, and to the discrete wavelet transform using 1 × 2, 2 × 1, or (for the Paeth predictor) 2 × 2 windows and Haar wavelets. Compression is further improved by choosing filter types adaptively on a line-by-line basis. This improvement, and a heuristic method of implementing it commonly used by PNG-writing software, were created by Lee Daniel Crocker, who tested the methods on many images during the creation of the format; the choice of filter is a component of file size optimization, as discussed below. If interlacing is used, each stage of the interlacing is filtered separately, meaning that the image can be progressively rendered as each stage is received; however, interlacing generally makes compression less effective. Interlacing PNG offers an optional 2-dimensional, 7-pass interlacing scheme—the Adam7 algorithm (a sketch of its fixed pass layout follows the Examples section below). This is more sophisticated than GIF's 1-dimensional, 4-pass scheme, and allows a clearer low-resolution image to be visible earlier in the transfer, particularly if interpolation algorithms such as bicubic interpolation are used. However, the 7-pass scheme tends to reduce the data's compressibility more than simpler schemes. Animation The core PNG format does not support animation. MNG is an extension to PNG that does; it was designed by members of the PNG Group. MNG shares PNG's basic structure and chunks, but it is significantly more complex and has a different file signature, which automatically renders it incompatible with standard PNG decoders. This means that most web browsers and applications either never supported MNG or dropped support for it. The complexity of MNG led to the proposal of APNG by developers at the Mozilla Foundation. It is based on PNG, supports animation and is simpler than MNG. APNG offers fallback to single-image display for PNG decoders that do not support APNG. Today, the APNG format is supported by all major web browsers. APNG is supported in Firefox 3.0 and up, Pale Moon (all versions), and Safari 8.0 and up. Chromium 59.0 added APNG support, followed by Google Chrome. Opera supported APNG in versions 10–12.1, but support lapsed in version 15 when it switched to the Blink rendering engine; support was re-added in Opera 46 (inherited from Chromium 59). Microsoft Edge has supported APNG since version 79.0, when it switched to a Chromium-based engine. The PNG Group decided in April 2007 not to embrace APNG. Several alternatives were under discussion, including ANG, aNIM/mPNG, "PNG in GIF" and its subset "RGBA in GIF". However, currently only APNG has widespread support. With the development of the Third Edition of the PNG Specification, now maintained by the PNG working group, APNG will finally be incorporated into the specification as an extension. Examples Displayed in the fashion of hex editors, with byte values shown in hex format on the left side, and on the right side their equivalent characters from ISO-8859-1, with unrecognized and control characters replaced with periods. Additionally, the PNG signature and individual chunks are marked with colors. Note that they are easy to identify because of their human-readable type names (in this example PNG, IHDR, IDAT, and IEND).
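As referenced under Interlacing above, the Adam7 scheme assigns each pixel to one of seven passes purely by its position within a tiled 8 × 8 grid. The layout below is the fixed pattern given in the PNG specification; the helper function is an illustrative sketch.

    # Adam7: each pixel's pass number is fixed by its position in this
    # 8x8 pattern, tiled across the whole image.
    ADAM7 = [
        [1, 6, 4, 6, 2, 6, 4, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [5, 6, 5, 6, 5, 6, 5, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [3, 6, 4, 6, 3, 6, 4, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [5, 6, 5, 6, 5, 6, 5, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
    ]

    def pass_number(x, y):
        # Return which of the seven passes delivers the pixel at (x, y).
        return ADAM7[y % 8][x % 8]

Pass 1 delivers one pixel per 8 × 8 block, so a very coarse preview of the whole image is available after only 1/64 of the pixel data has arrived.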
Advantages Reasons to use PNG: Portability: transmission is independent of the software and hardware platform. Completeness: truecolor, indexed-color, and grayscale images can all be represented. Serial coding and decoding: data streams can be generated and read serially, so images can be produced and displayed while the stream is being transmitted. Progressive presentation: a data stream can begin as an approximation of the entire image that improves progressively as more data is received. Robustness against transmission errors: errors in the data stream are detected reliably. Losslessness: filtering and compression preserve all information. Efficiency: progressive presentation, compression and filtering are all designed for efficient decoding and display. Compression: images can be compressed efficiently and consistently. Simplicity: the standard is easy to implement. Interchangeability: any PNG decoder that follows the standards can read all PNG data streams. Flexibility: future extensions and private additions are possible without affecting the previous point. Freedom from legal restrictions: the algorithms used are free and accessible. Comparison with other file formats Graphics Interchange Format (GIF) On small images, GIF can achieve greater compression than PNG (see the section on filesize, below). On most images, except for the above case, a GIF file is larger than an indexed PNG image. PNG gives a much wider range of transparency options than GIF, including alpha channel transparency. Whereas GIF is limited to 8-bit indexed color, PNG gives a much wider range of color depths, including 24-bit (8 bits per channel) and 48-bit (16 bits per channel) truecolor, allowing for greater color precision, smoother fades, etc. When an alpha channel is added, up to 64 bits per pixel (before compression) are possible. When converting an image from the PNG format to GIF, the image quality may suffer due to posterization if the PNG image has more than 256 colors. GIF intrinsically supports animated images. PNG supports animation only via unofficial extensions (see the section on animation, above). PNG images are less widely supported by older browsers. In particular, IE6 has limited support for PNG. JPEG The JPEG (Joint Photographic Experts Group) format can produce a smaller file than PNG for photographic (and photo-like) images, since JPEG uses a lossy encoding method specifically designed for photographic image data, which is typically dominated by soft, low-contrast transitions, and an amount of noise or similar irregular structures. Using PNG instead of a high-quality JPEG for such images would result in a large increase in file size with negligible gain in quality. In comparison, when storing images that contain text, line art, or graphics – images with sharp transitions and large areas of solid color – the PNG format can compress image data more than JPEG can. Additionally, PNG is lossless, while JPEG produces visual artifacts around high-contrast areas. (Such artifacts depend on the settings used in the JPEG compression; they can be quite noticeable when a low-quality [high-compression] setting is used.) Where an image contains both sharp transitions and photographic parts, a choice must be made between the two effects. JPEG does not support transparency.
JPEG's lossy compression also suffers from generation loss, where repeatedly decoding and re-encoding an image to save it again causes a loss of information each time, degrading the image. Because PNG is lossless, it is suitable for storing images to be edited. While PNG is reasonably efficient when compressing photographic images, there are lossless compression formats designed specifically for photographic images, lossless WebP and Adobe DNG (digital negative) for example. However, these formats are either not widely supported or are proprietary. An image can be stored losslessly and converted to JPEG format only for distribution, so that there is no generation loss. While the PNG specification does not explicitly include a standard for embedding Exif image data from sources such as digital cameras, the preferred method for embedding Exif data in a PNG is to use the non-critical ancillary chunk label eXIf. Early web browsers did not support PNG images; JPEG and GIF were the main image formats. JPEG was commonly used when exporting images containing gradients for web pages, because of GIF's limited color depth. However, JPEG compression causes a gradient to blur slightly. A PNG format reproduces a gradient as accurately as possible for a given bit depth, while keeping the file size small. PNG became the optimal choice for small gradient images as web browser support for the format improved. No images at all are needed to display gradients in modern browsers, as gradients can be created using CSS. JPEG-LS JPEG-LS is an image format by the Joint Photographic Experts Group, though far less widely known and supported than the other lossy JPEG format discussed above. It is directly comparable with PNG, and has a standard set of test images. On the Waterloo Repertoire ColorSet, a standard set of test images (unrelated to the JPEG-LS conformance test set), JPEG-LS generally performs better than PNG, by 10–15%, but on some images PNG performs substantially better, on the order of 50–75%. Thus, if both of these formats are options and file size is an important criterion, they should both be considered, depending on the image. JPEG XL JPEG XL is a newer format supporting both lossless and lossy compression, developed in part to replace lossless formats such as PNG, though it is so far much less widely supported. JPEG XL files can be more than 50% smaller than JPEG files, even when compressed losslessly, which can make them smaller than PNGs. It also supports high dynamic range, wide colour gamuts, and large colour depths. JPEG XL is also very efficient at decoding, and provides a smooth transition from the formats it intends to replace: existing JPEG files can be converted to it losslessly. It also excels at compressing without compromising on fidelity. TIFF Tag Image File Format (TIFF) is a format that incorporates an extremely wide range of options. While this makes TIFF useful as a generic format for interchange between professional image editing applications, it makes adding support for it to applications a much bigger task and so it has little support in applications not concerned with image manipulation (such as web browsers). The high level of extensibility also means that most applications provide only a subset of possible features, potentially creating user confusion and compatibility issues. The most common general-purpose, lossless compression algorithm used with TIFF is Lempel–Ziv–Welch (LZW). This compression technique, also used in GIF, was covered by patents until 2003. TIFF also supports the compression algorithm PNG uses (i.e.
Compression Tag 0x0008, 'Adobe-style') with moderate usage and support by applications. TIFF also offers special-purpose lossless compression algorithms like CCITT Group IV, which can compress bilevel images (e.g., faxes or black-and-white text) better than PNG's compression algorithm. PNG supports non-premultiplied alpha only, whereas TIFF also supports "associated" (premultiplied) alpha. WebP WebP is a format invented by Google that was intended to replace PNG, JPEG, and GIF. WebP files allow for both lossy and lossless compression, while PNG only allows for lossless compression. WebP also supports animation, something that only GIF files could previously accomplish. The main improvements of WebP over PNG, however, are the large reduction in file size and therefore faster loading times when embedded into websites. Google claims that lossless WebP images are 26% smaller than PNG files. WebP has received criticism for being incompatible with various image editing programs and social media websites, unlike PNG. WebP is also not supported across all web browsers, which may require web image hosters to create a fallback image to display to the user, negating the potential storage savings of WebP. AVIF AVIF is an image format developed by the Alliance for Open Media. AVIF was designed by the foundation to make up for the shortcomings of other image codecs, including PNG, GIF, and WebP. AVIF files are generally smaller than both WebP and PNG files. AVIF supports animation, which PNG does not, and offers better image quality at a given file size than PNG. However, like WebP, AVIF is supported across fewer browsers and applications than PNG. Specifically, AVIF is supported by the most used browsers, Microsoft Edge, Firefox, and Google Chrome, but requires an additional download for use with Microsoft Windows. Software support The official reference implementation of the PNG format is the programming library libpng. It is published as free software under the terms of a permissive free software license. Therefore, it is usually found as an important system library in free operating systems. Bitmap graphics editor support for PNG The PNG format is widely supported by graphics programs, including Adobe Photoshop, Corel's Photo-Paint and Paint Shop Pro, the GIMP, GraphicConverter, Helicon Filter, ImageMagick, Inkscape, IrfanView, Pixel image editor, Paint.NET and Xara Photo & Graphic Designer and many others (including online graphic design platforms such as Canva). Some programs bundled with popular operating systems which support PNG include Microsoft's Paint and Apple's Photos/iPhoto and Preview, with the GIMP also often being bundled with popular Linux distributions. Adobe Fireworks (formerly by Macromedia) uses PNG as its native file format, allowing other image editors and preview utilities to view the flattened image. However, Fireworks by default also stores metadata for layers, animation, vector data, text and effects. Such files should not be distributed directly. Fireworks can instead export the image as an optimized PNG without the extra metadata for use on web pages, etc. Web browser support for PNG PNG support first appeared in 1997, in Internet Explorer 4.0b1 (32-bit only for NT), and in Netscape 4.04. Despite calls by the Free Software Foundation and the World Wide Web Consortium (W3C), tools such as gif2png, and campaigns such as Burn All GIFs, PNG adoption on websites was fairly slow due to late and buggy support in Internet Explorer, particularly regarding transparency.
PNG-compatible browsers include Apple Safari, Google Chrome, Mozilla Firefox, Opera, Camino, Internet Explorer, Microsoft Edge and many others. For the complete comparison, see Comparison of web browsers (Image format support). Versions of Internet Explorer for Windows below 9.0 (released in 2011) in particular had numerous problems that prevented them from correctly rendering PNG images. 4.0 crashes on large PNG chunks. 4.0 does not include the functionality to view .png files, but there is a registry fix. 5.0 and 5.01 have broken OBJECT support. 5.01 prints palette images with black (or dark gray) backgrounds under Windows 98, sometimes with radically altered colors. 6.0 fails to display PNG images of 4097 or 4098 bytes in size. 6.0 cannot open a PNG file that contains one or more zero-length IDAT chunks. This issue was first fixed in security update 947864 (MS08-024). For more information, see this article in the Microsoft Knowledge Base: 947864 MS08-024: Cumulative Security Update for Internet Explorer. 6.0 sometimes completely loses the ability to display PNGs, but there are various fixes. 6.0 and below have broken alpha-channel transparency support (they display the default background color instead). 7.0 and below cannot combine 8-bit alpha transparency and element opacity (CSS – filter: Alpha (opacity=xx)) without filling partially transparent sections with black. 8.0 and below have inconsistent/broken gamma support. 8.0 and below don't have color-correction support. Operating system support for PNG icons PNG icons have been supported in most distributions of Linux since at least 1999, in desktop environments such as GNOME. Microsoft Windows support for PNG icons was introduced in 2006 with Windows Vista. PNG icons are supported in AmigaOS 4, AROS, macOS, iOS and MorphOS as well. In addition, Android makes extensive use of PNGs. File size and optimization software PNG file size can vary significantly depending on how it is encoded and compressed; this is discussed, and a number of tips are given, in PNG: The Definitive Guide. Compared to GIF Compared to GIF files, a PNG file with the same information (256 colors, no ancillary chunks/metadata), compressed by an effective compressor, is normally smaller than the GIF equivalent. Depending on the file and the compressor, PNG may range from somewhat smaller (10%) to significantly smaller (50%) to somewhat larger (5%), but is rarely significantly larger for large images. This is attributed to the performance of PNG's DEFLATE compared to GIF's LZW, and to the fact that the added precompression layer of PNG's predictive filters takes account of the 2-dimensional image structure to further compress files; as filtered data encodes differences between pixels, the filtered values tend to cluster closer to 0, rather than being spread across all possible values, and are thus more easily compressed by DEFLATE. However, some versions of Adobe Photoshop, CorelDRAW and MS Paint provide poor PNG compression, creating the impression that GIF is more efficient. File size factors PNG files vary in size due to a number of factors: color depth Color depth can range from 1 to 64 bits per pixel. ancillary chunks PNG supports metadata, which may be useful for editing but unnecessary for viewing, as on websites. interlacing As each pass of the Adam7 algorithm is separately filtered, this can increase file size. filter As a precompression stage, each line is filtered by a predictive filter, which can change from line to line.
As the ultimate DEFLATE step operates on the whole image's filtered data, one cannot optimize this row-by-row; the choice of filter for each row is thus potentially very variable, though heuristics exist. compression With additional computation, DEFLATE compressors can produce smaller files. There is thus a file size trade-off between, on the one hand, high color depth, maximal metadata (including color space information, together with information that does not affect display), interlacing, and fast compression, all of which yield large files, and, on the other, lower color depth, fewer or no ancillary chunks, no interlacing, and tuned but computationally intensive filtering and compression, which yield smaller files. For different purposes, different trade-offs are chosen: a maximal file may be best for archiving and editing, while a stripped-down file may be best for use on a website; similarly, fast but poor compression is preferred when repeatedly editing and saving a file, while slow but high compression is preferred when a file is stable, as when archiving or posting. Interlacing is a trade-off: it dramatically speeds up early rendering of large files (improves latency), but may increase file size (decrease throughput) for little gain, particularly for small files. Lossy PNG compression Although PNG is a lossless format, PNG encoders can preprocess image data in a lossy fashion to improve PNG compression. For example, quantizing a truecolor PNG to 256 colors allows the indexed color type to be used for a likely reduction in file size (see the sketch below). Image editing software Some programs are more efficient than others when saving PNG files; this relates to the implementation of PNG compression used by the program. Many graphics programs (such as Apple's Preview software) save PNGs with large amounts of metadata and color-correction data that are generally unnecessary for Web viewing. Unoptimized PNG files from Adobe Fireworks are also notorious for this, since they contain options to make the image editable in supported editors. CorelDRAW (at least version 11) also sometimes produces PNGs that cannot be opened by Internet Explorer (versions 6–8). Adobe Photoshop's performance on PNG files has improved in the CS Suite when using the Save For Web feature (which also allows explicit PNG/8 use). Adobe's Fireworks saves larger PNG files than many programs by default. This stems from the mechanics of its Save format: the images produced by Fireworks' save function include large, private chunks containing complete layer and vector information. This allows further lossless editing. When saved with the Export option, Fireworks' PNGs are competitive with those produced by other image editors, but are no longer editable as anything but flattened bitmaps. Fireworks is unable to save size-optimized vector-editable PNGs. Other notable examples of poor PNG compressors include: Microsoft's Paint for Windows XP Microsoft Picture It! Photo Premium 9 Poor compression increases the PNG file size but does not affect the image quality or compatibility of the file with other programs. When the color depth of a truecolor image is reduced to an 8-bit palette (as in GIF), the resulting image data is typically much smaller. Thus a truecolor PNG is typically larger than a color-reduced GIF, although PNG could store the color-reduced version as a palettized file of comparable size. Conversely, some tools, when saving images as PNGs, automatically save them as truecolor, even if the original data use only 8-bit color, thus bloating the file unnecessarily.
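The palette quantization just described can be illustrated with a short script. The following is a minimal sketch using the Pillow library (an assumption: Pillow is installed, and input.png / output.png are placeholder file names); it is not a description of any particular editor's save pipeline:

import os
from PIL import Image  # Pillow, assumed installed (pip install Pillow)

# Open a truecolor image; the file name is a placeholder.
img = Image.open("input.png").convert("RGB")

# Lossy preprocessing: quantize to at most 256 colors so the PNG can be
# written with the indexed (palette) color type instead of truecolor.
palettized = img.quantize(colors=256)

# optimize=True asks Pillow to spend extra effort on the DEFLATE step.
palettized.save("output.png", optimize=True)

print(os.path.getsize("input.png"), "->", os.path.getsize("output.png"))

For images with few distinct colors, the quantization step is effectively lossless and the size reduction comes only from the cheaper palette encoding; for photographic images it visibly discards color information.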
Both of the factors described above can lead to the misconception that PNG files are larger than equivalent GIF files. Optimizing tools Various tools are available for optimizing PNG files; they do this by (optionally) removing ancillary chunks; reducing color depth, either by using a palette (instead of RGB) if the image has 256 or fewer colors, or by using a smaller palette if the image has 2, 4, or 16 colors; (optionally) lossily discarding some of the data in the original image; optimizing the line-by-line filter choice; and optimizing the DEFLATE compression. Tool list pngcrush is the oldest of the popular PNG optimizers. It allows for multiple trials on filter selection and compression arguments, and finally chooses the smallest result. This working model is used in almost every PNG optimizer. advpng and the similar advdef utility in the AdvanceCOMP package recompress the PNG IDAT. Different DEFLATE implementations are applied depending on the selected compression level, trading between speed and file size: zlib at level 1, libdeflate at level 2, 7-zip's LZMA DEFLATE at level 3, and zopfli at level 4. pngout uses the author's own deflater (also used by the author's zip utility, kzip), while keeping all the usual color reduction and filtering facilities. However, pngout does not allow several trials on filters in a single run. It is therefore suggested to use its commercial GUI version, pngoutwin, to use pngout with a wrapper that automates the trials, or to use it purely as a re-deflater that keeps the existing filters line by line. zopflipng likewise has its own deflater, zopfli. It has all the optimizing features pngcrush has (including automated trials) while providing a very effective, but slow, deflater. Before zopflipng was available, a good practical way to optimize a PNG was to use a combination of two tools in sequence: one that optimizes filters (and removes ancillary chunks), and one that optimizes the DEFLATE compression. Although pngout offers both, only one type of filter can be specified in a single run, so it can be used with a wrapper tool, or combined with pngcrush while acting as a re-deflater, like advdef. Ancillary chunk removal For removing ancillary chunks, most PNG optimization tools have the ability to remove all color correction data from PNG files (gamma, white balance, ICC color profile, standard RGB color profile). This often results in much smaller file sizes. For example, the following command line options achieve this with pngcrush: pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB InputFile.png OutputFile.png Filter optimization pngcrush, pngout, and zopflipng all offer options applying one of the filter types 0–4 globally (using the same filter type for all lines) or with a "pseudo filter" (numbered 5), which for each line chooses one of the filter types 0–4 using an adaptive algorithm. zopflipng offers three different adaptive methods, including a brute-force search that attempts to optimize the filtering. pngout and zopflipng provide an option to preserve/reuse the line-by-line filter set present in the input image. pngcrush and zopflipng provide options to try different filter strategies in a single run and choose the best. The freeware command line version of pngout doesn't offer this, but the commercial version, pngoutwin, does. DEFLATE optimization Zopfli and the LZMA SDK provide DEFLATE implementations that can produce higher compression ratios than the zlib reference implementation at the cost of performance.
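The size/speed trade-off among DEFLATE settings can be felt even within a single library. The following rough sketch uses Python's standard zlib module on synthetic stand-in data; it is an illustration of the general trade-off only, not of what advdef or zopfli do internally:

import time
import zlib

# Synthetic, moderately repetitive data standing in for filtered image bytes.
data = bytes(range(256)) * 4096  # 1 MiB

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed)} bytes in {elapsed:.4f} s")

Dedicated optimizers push this trade-off much further by swapping in stronger deflaters, as described next.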
AdvanceCOMP's advpng and advdef can use either of these libraries to re-compress PNG files. Additionally, PNGOUT contains its own proprietary DEFLATE implementation. advpng doesn't have an option to apply filters and always uses filter 0 globally (leaving the image data unfiltered); therefore it should not be used where the image benefits significantly from filtering. By contrast, advdef from the same package doesn't deal with PNG structure and acts only as a re-deflater, retaining any existing filter settings. Icon optimization Since icons intended for Windows Vista and later versions may contain PNG subimages, the optimizations can be applied to them as well. At least one icon editor, Pixelformer, is able to perform a special optimization pass while saving ICO files, thereby reducing their sizes. FileOptimizer (mentioned above) can also handle ICO files. Icons for macOS may also contain PNG subimages, yet there isn't such tool available.
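To see which ancillary chunks a particular PNG actually carries before and after such optimization, a minimal chunk lister can be written with nothing but the standard library. The following sketch (the file name example.png is a placeholder) walks the chunk structure defined by the PNG specification:

import struct

def list_chunks(path):
    # Yield (chunk type, data length) pairs from a PNG file.
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            yield ctype.decode("ascii"), length
            f.seek(length + 4, 1)  # skip the chunk data and its CRC

# A chunk whose type starts with a lowercase letter is ancillary (e.g. gAMA, iCCP).
for ctype, length in list_chunks("example.png"):
    kind = "ancillary" if ctype[0].islower() else "critical"
    print(f"{ctype}: {length} bytes ({kind})")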
Technology
File formats
null
24314
https://en.wikipedia.org/wiki/Planar%20graph
Planar graph
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph, or a planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points. Every graph that can be drawn on a plane can be drawn on the sphere as well, and vice versa, by means of stereographic projection. Plane graphs can be encoded by combinatorial maps or rotation systems. An equivalence class of topologically equivalent drawings on the sphere, usually with additional assumptions such as the absence of isthmuses, is called a planar map. Although a plane graph has an external or unbounded face, none of the faces of a planar map has a particular status. Planar graphs generalize to graphs drawable on a surface of a given genus. In this terminology, planar graphs have genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics. Planarity criteria Kuratowski's and Wagner's theorems The Polish mathematician Kazimierz Kuratowski provided a characterization of planar graphs in terms of forbidden graphs, now known as Kuratowski's theorem: A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of the complete graph K5 or the complete bipartite graph K3,3 (the utility graph). A subdivision of a graph results from inserting vertices into edges (for example, replacing a single edge by a path of two edges through a new vertex) zero or more times. Instead of considering subdivisions, Wagner's theorem deals with minors: A finite graph is planar if and only if it does not have K5 or K3,3 as a minor. A minor of a graph results from taking a subgraph and repeatedly contracting an edge into a vertex, with each neighbor of the original end-vertices becoming a neighbor of the new vertex. Klaus Wagner asked more generally whether any minor-closed class of graphs is determined by a finite set of "forbidden minors". This is now the Robertson–Seymour theorem, proved in a long series of papers. In the language of this theorem, K5 and K3,3 are the forbidden minors for the class of finite planar graphs. Other criteria In practice, it is difficult to use Kuratowski's criterion to quickly decide whether a given graph is planar. However, there exist fast algorithms for this problem: for a graph with n vertices, it is possible to determine in O(n) time (linear time) whether the graph is planar or not (see planarity testing). For a simple, connected, planar graph with v vertices, e edges and f faces, the following simple conditions hold for v ≥ 3: Theorem 1. e ≤ 3v − 6; Theorem 2. If there are no cycles of length 3, then e ≤ 2v − 4; Theorem 3. f ≤ 2v − 4. In this sense, planar graphs are sparse graphs, in that they have only O(v) edges, asymptotically smaller than the maximum O(v^2). The graph K3,3, for example, has 6 vertices, 9 edges, and no cycles of length 3. Therefore, by Theorem 2, it cannot be planar, since 9 > 2 · 6 − 4 = 8. These theorems provide necessary conditions for planarity that are not sufficient conditions, and therefore can only be used to prove a graph is not planar, not that it is planar. If a graph satisfies these necessary conditions, so that neither theorem establishes non-planarity, other methods, listed below, may be used.
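Before turning to those further criteria, note that a linear-time planarity test of the kind mentioned above is readily available in software. The following minimal sketch uses the networkx library (an assumption: networkx is installed) to test the standard examples K5 and K3,3 alongside a planar grid:

import networkx as nx  # assumed installed (pip install networkx)

examples = [
    ("K5", nx.complete_graph(5)),
    ("K3,3", nx.complete_bipartite_graph(3, 3)),
    ("4x4 grid", nx.grid_2d_graph(4, 4)),  # planar, for contrast
]

for name, G in examples:
    is_planar, _ = nx.check_planarity(G)
    v, e = G.number_of_nodes(), G.number_of_edges()
    print(f"{name}: v = {v}, e = {e}, planar: {is_planar}")

# Theorem 2 applied by hand to K3,3: it is triangle-free with v = 6 and
# e = 9, and 9 > 2*6 - 4 = 8, so it cannot be planar.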
Whitney's planarity criterion gives a characterization based on the existence of an algebraic dual; Mac Lane's planarity criterion gives an algebraic characterization of finite planar graphs, via their cycle spaces; The Fraysseix–Rosenstiehl planarity criterion gives a characterization based on the existence of a bipartition of the cotree edges of a depth-first search tree. It is central to the left-right planarity testing algorithm; Schnyder's theorem gives a characterization of planarity in terms of partial order dimension; Colin de Verdière's planarity criterion gives a characterization based on the maximum multiplicity of the second eigenvalue of certain Schrödinger operators defined by the graph. The Hanani–Tutte theorem states that a graph is planar if and only if it has a drawing in which each independent pair of edges crosses an even number of times; it can be used to characterize the planar graphs via a system of equations modulo 2. Properties Euler's formula Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and v is the number of vertices, e is the number of edges and f is the number of faces (regions bounded by edges, including the outer, infinitely large region), then v − e + f = 2. As an illustration, in the butterfly graph, v = 5, e = 6 and f = 3, so v − e + f = 5 − 6 + 3 = 2. In general, if the property holds for all planar graphs of f faces, any change to the graph that creates an additional face while keeping the graph planar would keep v − e + f invariant. Since the property holds for all graphs with f = 2, by mathematical induction it holds for all cases. Euler's formula can also be proved as follows: if the graph isn't a tree, then remove an edge which completes a cycle. This lowers both e and f by one, leaving v − e + f constant. Repeat until the remaining graph is a tree; trees have e = v − 1 and f = 1, yielding v − e + f = 2, i.e., the Euler characteristic is 2. In a finite, connected, simple, planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces, so 3f ≤ 2e; using Euler's formula, one can then show that these graphs are sparse in the sense that e ≤ 3v − 6 if v ≥ 3. Euler's formula is also valid for convex polyhedra. This is no coincidence: every convex polyhedron can be turned into a connected, simple, planar graph by using the Schlegel diagram of the polyhedron, a perspective projection of the polyhedron onto a plane with the center of perspective chosen near the center of one of the polyhedron's faces. Not every planar graph corresponds to a convex polyhedron in this way: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra are precisely the finite 3-connected simple planar graphs. More generally, Euler's formula applies to any polyhedron whose faces are simple polygons that form a surface topologically equivalent to a sphere, regardless of its convexity. Average degree Connected planar graphs with more than one edge obey the inequality 2e ≥ 3f, because each face has at least three face-edge incidences and each edge contributes exactly two incidences. It follows via algebraic transformations of this inequality with Euler's formula that for finite planar graphs the average degree is strictly less than 6. Graphs with higher average degree cannot be planar. Coin graphs We say that two circles drawn in a plane kiss (or osculate) whenever they intersect in exactly one point.
A "coin graph" is a graph formed by a set of circles, no two of which have overlapping interiors, by making a vertex for each circle and an edge for each pair of circles that kiss. The circle packing theorem, first proved by Paul Koebe in 1936, states that a graph is planar if and only if it is a coin graph. This result provides an easy proof of Fáry's theorem, that every simple planar graph can be embedded in the plane in such a way that its edges are straight line segments that do not cross each other. If one places each vertex of the graph at the center of the corresponding circle in a coin graph representation, then the line segments between centers of kissing circles do not cross any of the other edges. Planar graph density The meshedness coefficient or density of a planar graph, or network, is the ratio of the number of bounded faces (the same as the circuit rank of the graph, by Mac Lane's planarity criterion) by its maximal possible values for a graph with vertices: The density obeys , with for a completely sparse planar graph (a tree), and for a completely dense (maximal) planar graph. Dual graph Given an embedding of a (not necessarily simple) connected graph in the plane without edge intersections, we construct the dual graph as follows: we choose one vertex in each face of (including the outer face) and for each edge in we introduce a new edge in connecting the two vertices in corresponding to the two faces in that meet at . Furthermore, this edge is drawn so that it crosses exactly once and that no other edge of or is intersected. Then is again the embedding of a (not necessarily simple) planar graph; it has as many edges as , as many vertices as has faces and as many faces as has vertices. The term "dual" is justified by the fact that ; here the equality is the equivalence of embeddings on the sphere. If is the planar graph corresponding to a convex polyhedron, then is the planar graph corresponding to the dual polyhedron. Duals are useful because many properties of the dual graph are related in simple ways to properties of the original graph, enabling results to be proven about graphs by examining their dual graphs. While the dual constructed for a particular embedding is unique (up to isomorphism), graphs may have different (i.e. non-isomorphic) duals, obtained from different (i.e. non-homeomorphic) embeddings. Families of planar graphs Maximal planar graphs A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. All faces (including the outer one) are then bounded by three edges, explaining the alternative term plane triangulation (which technically means a plane drawing of the graph). The alternative names "triangular graph" or "triangulated graph" have also been used, but are ambiguous, as they more commonly refer to the line graph of a complete graph and to the chordal graphs respectively. Every maximal planar graph on more than 3 vertices is at least 3-connected. If a maximal planar graph has vertices with , then it has precisely edges and faces. Apollonian networks are the maximal planar graphs formed by repeatedly splitting triangular faces into triples of smaller triangles. Equivalently, they are the planar 3-trees. Strangulated graphs are the graphs in which every peripheral cycle is a triangle. In a maximal planar graph (or more generally a polyhedral graph) the peripheral cycles are the faces, so maximal planar graphs are strangulated. 
The strangulated graphs also include the chordal graphs, and are exactly the graphs that can be formed by clique-sums (without deleting edges) of complete graphs and maximal planar graphs. Outerplanar graphs Outerplanar graphs are graphs with an embedding in the plane such that all vertices belong to the unbounded face of the embedding. Every outerplanar graph is planar, but the converse is not true: K4 is planar but not outerplanar. A theorem similar to Kuratowski's states that a finite graph is outerplanar if and only if it does not contain a subdivision of K4 or of K2,3. The above is a direct corollary of the fact that a graph G is outerplanar if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph. A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For k > 1, a planar embedding is k-outerplanar if removing the vertices on the outer face results in a (k − 1)-outerplanar embedding. A graph is k-outerplanar if it has a k-outerplanar embedding. Halin graphs A Halin graph is a graph formed from an undirected plane tree (with no degree-two nodes) by connecting its leaves into a cycle, in the order given by the plane embedding of the tree. Equivalently, it is a polyhedral graph in which one face is adjacent to all the others. Every Halin graph is planar. Like outerplanar graphs, Halin graphs have low treewidth, making many algorithmic problems on them more easily solved than in unrestricted planar graphs. Upward planar graphs An upward planar graph is a directed acyclic graph that can be drawn in the plane with its edges as non-crossing curves that are consistently oriented in an upward direction. Not every planar directed acyclic graph is upward planar, and it is NP-complete to test whether a given graph is upward planar. Convex planar graphs A planar graph is said to be convex if all of its faces (including the outer face) are convex polygons. Not all planar graphs have a convex embedding (e.g. the complete bipartite graph K2,4). A sufficient condition that a graph can be drawn convexly is that it is a subdivision of a 3-vertex-connected planar graph. Tutte's spring theorem even states that for simple 3-vertex-connected planar graphs the positions of the inner vertices can be chosen to be the averages of their neighbors. Word-representable planar graphs Word-representable planar graphs include triangle-free planar graphs and, more generally, 3-colourable planar graphs, as well as certain face subdivisions of triangular grid graphs, and certain triangulations of grid-covered cylinder graphs. Theorems Enumeration of planar graphs The asymptotic for the number of (labeled) planar graphs on n vertices is g · n^(−7/2) · γ^n · n!, where γ ≈ 27.22687 and g is a constant. Almost all planar graphs have an exponential number of automorphisms. The number of unlabeled (non-isomorphic) planar graphs on n vertices is between 27.2^n and 30.06^n. Other results The four color theorem states that every planar graph is 4-colorable (i.e., 4-partite). Fáry's theorem states that every simple planar graph admits a representation as a planar straight-line graph. A universal point set is a set of points such that every planar graph with n vertices has such an embedding with all vertices in the point set; there exist universal point sets of quadratic size, formed by taking a rectangular subset of the integer lattice.
Every simple outerplanar graph admits an embedding in the plane such that all vertices lie on a fixed circle and all edges are straight line segments that lie inside the disk and don't intersect, so n-vertex regular polygons are universal for outerplanar graphs. Scheinerman's conjecture (now a theorem) states that every planar graph can be represented as an intersection graph of line segments in the plane. The planar separator theorem states that every n-vertex planar graph can be partitioned into two subgraphs of size at most 2n/3 by the removal of O(√n) vertices. As a consequence, planar graphs also have treewidth and branch-width O(√n). The planar product structure theorem states that every planar graph is a subgraph of the strong graph product of a graph of treewidth at most 8 and a path. This result has been used to show that planar graphs have bounded queue number, bounded non-repetitive chromatic number, and universal graphs of near-linear size. It also has applications to vertex ranking and p-centered colouring of planar graphs. For two planar graphs with v vertices, it is possible to determine in time O(v) whether they are isomorphic or not (see also graph isomorphism problem). Any planar graph on n nodes has at most 8(n − 2) maximal cliques, which implies that the class of planar graphs is a class with few cliques. Generalizations An apex graph is a graph that may be made planar by the removal of one vertex, and a k-apex graph is a graph that may be made planar by the removal of at most k vertices. A 1-planar graph is a graph that may be drawn in the plane with at most one simple crossing per edge, and a k-planar graph is a graph that may be drawn with at most k simple crossings per edge. A map graph is a graph formed from a set of finitely many simply-connected interior-disjoint regions in the plane by connecting two regions when they share at least one boundary point. When at most three regions meet at a point, the result is a planar graph, but when four or more regions meet at a point, the result can be nonplanar (for example, if one thinks of a circle divided into sectors, with the sectors being the regions, then the corresponding map graph is the complete graph, as all the sectors have a common boundary point, the centre point). A toroidal graph is a graph that can be embedded without crossings on the torus. More generally, the genus of a graph is the minimum genus of a two-dimensional surface into which the graph may be embedded; planar graphs have genus zero and nonplanar toroidal graphs have genus one. Every graph can be embedded without crossings into some (orientable, connected) closed two-dimensional surface (sphere with handles) and thus the genus of a graph is well defined. Obviously, if the graph can be embedded without crossings into an (orientable, connected, closed) surface with genus g, it can be embedded without crossings into all (orientable, connected, closed) surfaces with greater or equal genus. There are also other concepts in graph theory that are called "X genus" with "X" some qualifier; in general these differ from the above-defined concept of "genus" without any qualifier. In particular, the non-orientable genus of a graph (defined using non-orientable surfaces) differs, for a general graph, from the genus of that graph (defined using orientable surfaces). Any graph may be embedded into three-dimensional space without crossings.
In fact, any graph can be drawn without crossings in a two-plane setup, where two planes are placed on top of each other and the edges are allowed to "jump up" and "drop down" from one plane to the other at any place (not just at the graph vertices), so that the edges can avoid intersections with other edges. This can be interpreted as saying that it is possible to make any electrical conductor network with a two-sided circuit board on which electrical connections between the sides can be made, as on typical real-life circuit boards: the connections on the top side of the board are made with pieces of wire, those on the bottom side with copper tracks constructed on the board itself, and connections between the sides are made by drilling holes, passing wires through them, and soldering the wires to the tracks. One can also interpret this as saying that in order to build any road network, one needs only bridges or only tunnels, not both (two levels are enough, three are not needed). In three dimensions, then, the question of drawing a graph without crossings is trivial. However, a three-dimensional analogue of the planar graphs is provided by the linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles are topologically linked with each other. In analogy to Kuratowski's and Wagner's characterizations of the planar graphs as being the graphs that do not contain K5 or K3,3 as a minor, the linklessly embeddable graphs may be characterized as the graphs that do not contain as a minor any of the seven graphs in the Petersen family. In analogy to the characterizations of the outerplanar and planar graphs as being the graphs with Colin de Verdière graph invariant at most two or three, the linklessly embeddable graphs are the graphs that have Colin de Verdière invariant at most four.
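Euler's formula and the density formula discussed earlier can be checked numerically from a planar embedding. The following sketch again assumes networkx is installed; count_faces is our own helper built on the library's half-edge traversal, not a networkx function:

import networkx as nx  # assumed installed (pip install networkx)

def count_faces(embedding):
    # Count faces of a connected planar embedding by walking half-edges.
    seen = set()
    faces = 0
    for half_edge in embedding.edges():  # each edge appears in both directions
        if half_edge not in seen:
            embedding.traverse_face(*half_edge, mark_half_edges=seen)
            faces += 1
    return faces

G = nx.grid_2d_graph(3, 3)  # a connected planar graph: v = 9, e = 12
is_planar, embedding = nx.check_planarity(G)
v, e = G.number_of_nodes(), G.number_of_edges()
f = count_faces(embedding)
print(v - e + f)                  # Euler's formula: 9 - 12 + 5 = 2
print((e - v + 1) / (2 * v - 5))  # meshedness D = 4/13, between 0 and 1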
Mathematics
Graph theory
null
24327
https://en.wikipedia.org/wiki/Prairie%20dog
Prairie dog
Prairie dogs (genus Cynomys) are herbivorous burrowing ground squirrels native to the grasslands of North America. There are five recognized species of prairie dog: black-tailed, white-tailed, Gunnison's, Utah, and Mexican prairie dogs. In Mexico, prairie dogs are found primarily in the northern states, which lie at the southern end of the Great Plains: northeastern Sonora, north and northeastern Chihuahua, northern Coahuila, northern Nuevo León, and northern Tamaulipas. In the United States, they range primarily to the west of the Mississippi River, though they have also been introduced in a few eastern locales. They are also found in the Canadian Prairies. Despite the name, they are not actually canines; prairie dogs, along with the marmots, chipmunks, and several other basal genera, belong to the ground squirrels (tribe Marmotini), part of the larger squirrel family (Sciuridae). Prairie dogs are considered a keystone species, with their mounds often being used by other species. Their mound-building encourages grass development and renewal of topsoil, with rich mineral and nutrient renewal in the soil, which can be crucial for soil quality and agriculture. They are extremely important in the food chain, forming a significant part of the diet of many animals such as the black-footed ferret, swift fox, golden eagle, red-tailed hawk, American badger, and coyote. Other species, such as the golden-mantled ground squirrel, mountain plover, and the burrowing owl, also rely on prairie dog burrows for nesting areas. Grazing species, such as plains bison, pronghorn, and mule deer, have shown a proclivity for grazing on the same land used by prairie dogs. Prairie dogs have some of the most complex systems of communication and social structures in the animal kingdom. Prairie dog habitat has been affected by direct removal by farmers and by the more obvious encroachment of urban development, both of which have greatly reduced their populations. The removal of prairie dogs "causes undesirable spread of brush", the cost of which to livestock range and soil quality often outweighs the benefits of removal. Other threats include disease. The prairie dog is protected in many areas to maintain local populations and ensure natural ecosystems. Etymology Prairie dogs are named for their habitat and warning call, which sounds similar to a dog's bark. The name was in use at least as early as 1774. The 1804 journals of the Lewis and Clark Expedition note that in September 1804, they "discovered a Village of an animal the French Call the Prairie Dog". Its genus, Cynomys, derives from the Greek for "dog mouse" (κυων kuōn, κυνος kunos – dog; μυς mus, μυός muos – mouse). The prairie dog is known by several indigenous names. The name wishtonwish was recorded by Lt. Zebulon Pike while on the Arkansas two years after Lewis and Clark's expedition. In Lakota, the word is pispíza or pìspíza. Classification and first identification The black-tailed prairie dog (Cynomys ludovicianus) was first described by Lewis and Clark in 1804. Lewis described it in more detail in 1806, calling it the "barking squirrel".
Order Rodentia; Suborder Sciuromorpha; Family Sciuridae (squirrels, chipmunks, marmots, and prairie dogs); Subfamily Xerinae; Genus Cynomys: Gunnison's prairie dog, Cynomys gunnisoni; white-tailed prairie dog, Cynomys leucurus; black-tailed prairie dog, Cynomys ludovicianus; Mexican prairie dog, Cynomys mexicanus; Utah prairie dog, Cynomys parvidens; plus about 14 other genera in the subfamily. Extant species Description Prairie dogs are stout-bodied rodents that, on average, are long, including the short tail, and weigh between . Sexual dimorphism in body mass in the prairie dog varies from 105 to 136% between the sexes. Among the species, black-tailed prairie dogs tend to be the least sexually dimorphic, and white-tailed prairie dogs tend to be the most sexually dimorphic. Sexual dimorphism peaks during weaning, when the females lose weight and the males start eating more, and is at its lowest when the females are pregnant, which is also when the males are depleted from breeding. Despite their name, a prairie dog skull has a condylobasal length of between 5.2 and 6.4 cm, far shorter than the skull of an actual dog, which is between 11.39 and 17.96 cm in length. The average lifespan of a prairie dog in the wild is 8 to 10 years. Ecology and behavior Diet Prairie dogs are chiefly herbivorous, although they occasionally eat insects. They feed primarily on grasses and small seeds. In the fall, they eat broadleaf forbs. In the winter, lactating and pregnant females supplement their diets with snow for extra water. They also will eat roots, seeds, fruit, buds, and grasses of various species. Black-tailed prairie dogs in South Dakota eat western bluegrass, blue grama, buffalo grass, sixweeks fescue, and tumblegrass, while Gunnison's prairie dogs eat rabbit brush, tumbleweeds, dandelions, saltbush, and cacti in addition to buffalo grass and blue grama. White-tailed prairie dogs have been observed to kill ground squirrels, a competing herbivore. Habitat and burrowing Prairie dogs live mainly at altitudes ranging from above sea level. The areas where they live can get as warm as in the summer and as cold as in the winter. As prairie dogs live in areas prone to environmental threats, including hailstorms, blizzards, and floods, as well as drought and prairie fires, burrows provide important protection. Burrows help prairie dogs control their body temperature (thermoregulation) as they are during the winter and in the summer. Prairie dog tunnel systems channel rainwater into the water table, which prevents runoff and erosion, and can also change the composition of the soil in a region by reversing soil compaction that can result from cattle grazing. Prairie dog burrows are long and below the ground. The entrance holes are generally in diameter. Prairie dog burrows can have up to six entrances. Sometimes, the entrances are simply flat holes in the ground, while at other times, they are surrounded by mounds of soil either left as piles or hard-packed. Some mounds, known as dome craters, can be as high as . Other mounds, known as rim craters, can be as high as . Dome craters and rim craters serve as observation posts used by the animals to watch for predators. They also protect the burrows from flooding. The holes also possibly provide ventilation, as the air enters through the dome crater and leaves through the rim crater, causing a breeze through the burrow. Prairie dog burrows contain chambers to provide certain functions. They have nursery chambers for their young, chambers for night, and chambers for the winter.
They also contain air chambers that may function to protect the burrow from flooding and as a listening post for predators. When hiding from predators, prairie dogs use less-deep chambers that are usually below the surface. Nursery chambers tend to be deeper, being below the surface. Social organization and spacing Prairie dogs are highly social animals. They live in large colonies or "towns", and collections of prairie dog families can span hundreds of acres. The prairie dog family groups are the most basic units of its society. Members of a family group inhabit the same territory. Family groups of black-tailed and Mexican prairie dogs are called "coteries", while "clans" describes family groups of white-tailed, Gunnison's, and Utah prairie dogs. Although these two family groups are similar, coteries tend to be more closely knit than clans. Members of a family group interact through oral contact or "kissing" and grooming one another. They do not perform these behaviors with prairie dogs from other family groups. A prairie dog town may contain 15–26 family groups, with subgroups within a town, called "wards", which are separated by a physical barrier. Family groups exist within these wards. Most prairie dog family groups are made up of one adult breeding male, two or three adult females, and one or two male offspring and one or two female offspring. Females remain in their natal groups for life and are thus the source of stability in the groups. Males leave their natal groups when they mature to find another family group to defend and breed in. Some family groups contain more breeding females than one male can control, and so have more than one breeding adult male. Among these multiple-male groups, some may contain males that have friendly relationships, but the majority contain males that have largely antagonistic relationships. In the former, the males tend to be related, while in the latter, they tend not to be related. Two or three groups of females may be controlled by one male. However, among these female groups, no friendly relationships exist. The typical prairie dog territory takes up . Territories have well-established borders that coincide with physical barriers such as rocks and trees. The resident male of a territory defends it, and antagonistic behavior occurs between two males of different families to defend their territories. These interactions may happen 20 times per day and last five minutes. When two prairie dogs encounter each other at the edges of their territories, they stare, make bluff charges, flare their tails, chatter their teeth, and sniff each other's perianal scent glands. When fighting, prairie dogs bite, kick, and ram each other. If a competitor is around their size or smaller, females participate in the fighting. Otherwise, if a competitor is sighted, the females signal for the resident male. Reproduction and parenting Prairie dog copulation occurs in the burrows, which reduces the risk of interruption by a competing male. They are also at less risk of predation. Behaviors that signal that a female is in estrus include underground consorting, self-licking of genitals, dust-bathing, and late entrances into the burrow at night. The licking of genitals may protect against sexually transmitted diseases and genital infections, while dust-bathing may protect against fleas and other parasites. Prairie dogs also have a mating call which consists of up to 25 barks with a 3- to 15-second pause between each one.
Females may try to increase their reproductive success by mating with males outside their family groups. When copulation is over, the male is no longer interested in the female sexually, but will prevent other males from mating with her by inserting copulatory plugs. For black-tailed prairie dogs, the resident male of the family group fathers all the offspring. Multiple paternity in litters seems to be more common in Utah and Gunnison's prairie dogs. Mother prairie dogs provide most of the care for the young. In addition to nursing the young, the mother also defends the nursery chamber and collects grass for the nest. Males play their part by defending the territories and maintaining the burrows. The young spend their first six weeks below the ground being nursed. They are then weaned and begin to surface from the burrow. By five months, they are fully grown. The subject of cooperative breeding in prairie dogs has been debated among biologists. Some argue prairie dogs will defend and feed young that are not theirs, and young seemingly sleep in a nursery chamber with other mothers; since most nursing occurs at night, this may be a case of communal nursing. Others suggest communal nursing occurs only when mothers mistake another female's young for their own. Infanticide is known to occur in prairie dogs. Males that take over a family group will kill the offspring of the previous male. This causes the mother to go into estrus sooner. However, most infanticide is done by close relatives. Lactating females will kill the offspring of a related female both to decrease competition for the female's offspring and for increased foraging area due to a decrease in territorial defense by the victimized mother. Supporters of the theory that prairie dogs are communal breeders state that another reason for this type of infanticide is so that the female can get a possible helper. With their own offspring gone, the victimized mother may help raise the young of other females. Anti-predator calls The prairie dog is well adapted to predators. Using its dichromatic color vision, it can detect predators from a great distance; it then alerts other prairie dogs of the danger with a special, high-pitched call. Constantine Slobodchikoff and others assert that prairie dogs use a sophisticated system of vocal communication to describe specific predators. According to them, prairie dog calls contain specific information as to what the predator is, how big it is and how fast it is approaching. These have been described as a form of grammar. According to Slobodchikoff, these calls, with their individuality in response to a specific predator, imply that prairie dogs have highly developed cognitive abilities. He also writes that prairie dogs have calls for things that are not predators to them. This is cited as evidence that the animals have a very descriptive language and have calls for any potential threat. Alarm response behavior varies according to the type of predator announced. If the alarm indicates a hawk diving toward the colony, all the prairie dogs in its flight path dive into their holes, while those outside the flight path stand and watch. If the alarm is for a human, all members of the colony immediately rush inside the burrows. For coyotes, the prairie dogs move to the entrance of a burrow and stand outside the entrance, observing the coyote, while those prairie dogs that were inside the burrows come out to stand and watch, as well.
For domestic dogs, the response is to observe, standing in place where they were when the alarm was sounded, again with the underground prairie dogs emerging to watch. Debate exists over whether the alarm calling of prairie dogs is selfish or altruistic. Prairie dogs may alert others to the presence of a predator so they can protect themselves, but the calls could be meant to cause confusion and panic in the groups and cause the others to be more conspicuous to the predator than the caller. Studies of black-tailed prairie dogs suggest that alarm-calling is a form of kin selection, as a prairie dog's call alerts both offspring and indirectly related kin, such as cousins, nephews, and nieces. Prairie dogs with kin close by called more often than those that did not have kin nearby. In addition, the caller may be trying to make itself more noticeable to the predator. Predators, though, seem to have difficulty determining which prairie dog is making the call due to its "ventriloquistic" nature. Perhaps the most striking of prairie dog communications is the territorial call or "jump-yip" display of the black-tailed prairie dog. A black-tailed prairie dog stretches the length of its body vertically and throws its forefeet into the air while making a call. A jump-yip from one prairie dog causes others nearby to do the same. Conservation status Ecologists consider the prairie dog to be a keystone species. They are an important prey species, being a primary food source for prairie predators such as the black-footed ferret, swift fox, golden eagle, red-tailed hawk, American badger, coyote, and ferruginous hawk. Other species, such as the golden-mantled ground squirrel, mountain plover, and the burrowing owl, also rely on prairie dog burrows for nesting areas. Even grazing species, such as plains bison, pronghorn, and mule deer, have shown a proclivity for grazing on the same land used by prairie dogs. Nevertheless, prairie dogs are often identified as pests and exterminated from agricultural properties because they are capable of damaging crops, as they clear the immediate area around their burrows of most vegetation. As a result, prairie dog habitat has been affected by direct removal by farmers, as well as the more obvious encroachment of urban development, which has greatly reduced their populations. The removal of prairie dogs "causes undesirable spread of brush", the costs of which to livestock range may outweigh the benefits of removal. Black-tailed prairie dogs comprise the largest remaining community. In spite of human encroachment, prairie dogs have adapted, continuing to dig burrows in open areas of western cities. One common concern, which led to the widespread extermination of prairie dog colonies, was that their digging activities could injure horses by fracturing their limbs. According to writer Fred Durso Jr., of E Magazine, though, "after years of asking ranchers this question, we have found not one example." Another concern is their susceptibility to bubonic plague. The U.S. Fish and Wildlife Service plans to distribute an oral vaccine it has developed by unmanned aircraft or drones. In captivity Until 2003, primarily black-tailed prairie dogs were collected from the wild for the exotic pet trade in Canada, the United States, Japan, and Europe. They were removed from their burrows each spring, as young pups, with a large vacuum device. They can be difficult to breed in captivity, but breed well in zoos. Removing them from the wild was a far more common method of supplying the market demand.
They can be difficult pets to care for, requiring regular attention and a very specific diet of grasses and hay. Each year, they go into a period called rut that can last for several months, in which their personalities can drastically change, often becoming defensive or even aggressive. Despite these demands, prairie dogs are very social animals and can come to treat humans as members of their colony. In mid-2003, due to cross-contamination at a Madison, Wisconsin-area pet swap from an unquarantined Gambian pouched rat imported from Ghana, several prairie dogs in captivity acquired monkeypox, and subsequently a few humans were also infected. This led the Centers for Disease Control and Prevention (CDC) and Food and Drug Administration (FDA) to issue a joint order banning the sale, trade, and transport within the United States of prairie dogs (with a few exceptions). The disease was never introduced to any wild populations. The European Union also banned importation of prairie dogs in response. All Cynomys species are classed as a "prohibited new organism" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing them from being imported into the country. Prairie dogs are also very susceptible to bubonic plague, and many wild colonies have been wiped out by it. In addition, in 2002, a large group of prairie dogs in captivity in Texas was found to have contracted tularemia. The prairie dog ban is frequently cited by the CDC as a successful response to the threat of zoonosis. Prairie dogs that were in captivity at the time of the ban in 2003 were allowed to be kept under a grandfather clause, but were not to be bought, traded, or sold, and transport was permitted only to and from a veterinarian under quarantine procedures. On 8 September 2008, the FDA and CDC rescinded the ban, making it once again legal to capture, sell, and transport prairie dogs. Although the federal ban has been lifted, several states still have in place their own bans on prairie dogs. The European Union has not lifted its ban on imports from the U.S. of animals captured in the wild. Major European prairie dog associations, such as the Italian Associazione Italiana Cani della Prateria, remain opposed to imports from the United States due to the high death rate of wild-caught animals. Several zoos in Europe have stable prairie dog colonies that generate enough surplus pups to meet internal EU demand, and several associations help owners put captive-born animals up for adoption. Prairie dogs in captivity may live up to 10 years.
For at least an hour I secretly watched the operations in this community. During that time the large dog I have mentioned received at least a dozen visits from his fellow-dogs, which would stop and chat with him a few moments, and then run off to their domiciles. All this while he never left his post for a moment, and I thought I could discover a gravity in his deportment not discernible in those by which he was surrounded. Far is it from me to say that the visits he received were upon business, or had anything to do with the local government of the village; but it certainly appeared so. If any animal has a system of laws regulating the body politic, it is certainly the prairie dog." From Josiah Gregg's journal, Commerce of the Prairies: "Of all the prairie animals, by far the most curious, and by no means the least celebrated, is the little prairie dog. ...The flesh, though often eaten by travelers, is not esteemed savory. It was denominated the 'barking squirrel', the 'prairie ground-squirrel', etc., by early explorers, with much more apparent propriety than the present established name. Its yelp, which resembles that of the little toy-dog, seems its only canine attribute. It rather appears to occupy a middle ground betwixt the rabbit and squirrel—like the former in feeding and burrowing—like the latter in frisking, flirting, sitting erect, and somewhat so in its barking. The prairie dog has been reckoned by some naturalists a species of the marmot (arctomys ludoviciana); yet it seems to possess scarce any other quality in common with this animal except that of burrowing. ...I have the concurrent testimony of several persons, who have been upon the Prairies in winter, that, like rabbits and squirrels, they issue from their holes every soft day; and therefore lay up no doubt a hoard of 'hay' (as there is rarely anything else to be found in the vicinity of their towns) for winter's use. A collection of their burrows has been termed by travelers a 'dog town,' which comprises from a dozen or so, to some thousands in the same vicinity; often covering an area of several square miles. They generally locate upon firm dry plains, coated with fine short grass, upon which they feed; for they are no doubt exclusively herbivorous. But even when tall coarse grass surrounds, they seem commonly to destroy this within their 'streets,' which are nearly always found 'paved' with a fine species suited to their palates. They must need but little water, if any at all, as their 'towns' are often, indeed generally, found in the midst of the most arid plains—unless we suppose they dig down to subterranean fountains. At least they evidently burrow remarkably deep. Attempts either to dig or drown them out of their holes have generally proved unsuccessful. Approaching a 'village,' the little dogs may be observed frisking about the 'streets'—passing from dwelling to dwelling apparently on visits—sometimes a few clustered together as though in council—here feeding upon the tender herbage—there cleansing their 'houses,' or brushing the little hillock about the door—yet all quiet. Upon seeing a stranger, however, each streaks it to its home, but is apt to stop at the entrance, and spread the general alarm by a succession of shrill yelps, usually sitting erect. Yet at the report of a gun or the too near approach of the visitor, they dart down and are seen no more till the cause of alarm seems to have disappeared. 
In culture In companies that use large numbers of cubicles in a common space, employees sometimes use the term "prairie dogging" to refer to the action of several people simultaneously looking over the walls of their cubicles in response to a noise or other distraction. This action is thought to resemble the startled response of a group of prairie dogs. The same term is also vulgar slang to refer to one who is on the verge of defecating (often involuntarily), with the implication that fecal matter has already begun partially exiting the anus. The Amarillo Sod Poodles, a minor league baseball team, take their name from "sod poodle", a nickname for the prairie dog.
Biology and health sciences
Rodents
Animals
24330
https://en.wikipedia.org/wiki/Porcupinefish
Porcupinefish
Porcupinefish are medium-to-large fish belonging to the family Diodontidae from the order Tetraodontiformes; they are also commonly called blowfish and, sometimes, balloonfish and globefish. The family includes about 18 species. They are sometimes collectively called pufferfish, not to be confused with the morphologically similar and closely related Tetraodontidae, which are more commonly given this name. They are found in shallow, temperate, and tropical seas worldwide. A few species are found much farther from shore, where large schools of thousands of individuals can occur. Taxonomy Extant genera The following genera are known: Allomycterus McCulloch, 1921 Chilomycterus Brisout de Barneville, 1846 Cyclichthys Kaup, 1855 Diodon Linnaeus, 1758 Dicotylichthys Kaup, 1855 Lophodiodon Fraser-Brunner, 1943 Tragulichthys Whitley, 1931 Fossil genera The following genera are known only from fossil remains: †Eodiodon Casier, 1952 (Late Eocene of Belgium) †Heptadiodon Bronn, 1855 (Early Eocene of Italy) †Prodiodon Ladanois, 1955 (Early Eocene of Italy) †Progymnodon Dames, 1883 (mid-late Eocene of the United States and Romania) †Pshekhadiodon Bannikov & Tyler, 1997 (Middle Eocene of the North Caucasus, Russia) †Zignodon Tyler & Santini, 2002 (Early Eocene of Italy) Characteristics Porcupinefish are generally slow-moving. They have the ability to inflate their bodies by swallowing water or air, thereby becoming rounder. This increase in size (almost double vertically) reduces the range of potential predators to those with much larger mouths. A second defense mechanism is provided by the sharp spines, which radiate outwards when the fish is inflated. They have upper and lower teeth that are fused into the shape of a parrot's beak; they use this beak to eat molluscs and sea urchins. Some species are poisonous, having tetrodotoxin in their internal organs, such as the ovaries and liver. This neurotoxin is at least 1,200 times more potent than cyanide. The poison is produced by several types of bacteria obtained from the fish's diet. As a result of these three defenses, porcupinefish have few predators, though adults are sometimes preyed upon by sharks and orcas. Juveniles are also preyed on by Lysiosquillina maculata, tuna, and dolphins. Relationship with humans Consumption Porcupinefish are eaten as food fish and are an exotic delicacy in Cebu, Philippines, where they are called tagotongan. However, they can be dangerous to consume, since they can cause tetrodotoxin poisoning. In popular culture The porcupinefish (as Diodon antennatus) is mentioned in Charles Darwin's famous account of his trip around the world, The Voyage of the Beagle. He noted how the fish can swim quite well when inflated, though the altered buoyancy requires them to do so upside down. Darwin also mentioned hearing that a fellow naturalist, Dr. Allen of Forres, had "frequently found a Diodon, floating alive and distended, in the stomach of the shark; and that on several occasions he has known it eat its way, not only through the coats of the stomach, but through the sides of the monster".
Biology and health sciences
Acanthomorpha
Animals
24350
https://en.wikipedia.org/wiki/Projective%20plane
Projective plane
In mathematics, a projective plane is a geometric structure that extends the concept of a plane. In the ordinary Euclidean plane, two lines typically intersect at a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect at exactly one point. Renaissance artists, in developing the techniques of drawing in perspective, laid the groundwork for this mathematical topic. The archetypical example is the real projective plane, also known as the extended Euclidean plane. This example, in slightly different guises, is important in algebraic geometry, topology and projective geometry where it may be denoted variously by PG(2, R), RP2, or P2(R), among other notations. There are many other projective planes, both infinite, such as the complex projective plane, and finite, such as the Fano plane. A projective plane is a 2-dimensional projective space. Not all projective planes can be embedded in 3-dimensional projective spaces; such embeddability is a consequence of a property known as Desargues' theorem, not shared by all projective planes. Definition A projective plane is a rank 2 incidence structure consisting of a set P of points, a set L of lines, and a symmetric relation between points and lines called incidence, having the following properties: Given any two distinct points, there is exactly one line incident with both of them. Given any two distinct lines, there is exactly one point incident with both of them. There are four points such that no line is incident with more than two of them. The second condition means that there are no parallel lines. The last condition excludes the so-called degenerate cases (see below). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point P is incident with line ℓ" is used instead of either "P is on ℓ" or "ℓ passes through P". It follows from the definition that the number of points incident with any given line in a projective plane is the same as the number of lines incident with any given point. If each line is incident with N + 1 points, the (possibly infinite) cardinal number N is called the order of the plane. Examples The extended Euclidean plane To turn the ordinary Euclidean plane into a projective plane, proceed as follows: To each parallel class of lines (a maximal set of mutually parallel lines) associate a single new point. That point is to be considered incident with each line in its class. The new points added are distinct from each other. These new points are called points at infinity. Add a new line, which is considered incident with all the points at infinity (and no other points). This line is called the line at infinity. The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane. The process outlined above, used to obtain it, is called "projective completion" or projectivization. This plane can also be constructed by starting from R3 viewed as a vector space; see below. Projective Moulton plane The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged.
Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative x-coordinates, but the rest of their points are replaced with the points of the line with the same y-intercept but twice the slope wherever their x-coordinate is positive. The Moulton plane has parallel classes of lines and is an affine plane. It can be projectivized, as in the previous example, to obtain the projective Moulton plane. Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane. A finite example This example has just thirteen points and thirteen lines. We label the points P1, ..., P13 and the lines m1, ..., m13. The incidence relation (which points are on which lines) can be given by the following incidence matrix. The rows are labelled by the points and the columns are labelled by the lines. A 1 in row i and column j means that the point Pi is on the line mj, while a 0 (which we represent here by a blank cell for ease of reading) means that they are not incident. The matrix is in Paige–Wexler normal form. {| class="wikitable" style="text-align:center;" |- ! ! m1 ! m2 !! m3 !! m4 ! m5 !! m6 !! m7 ! m8 !! m9 !! m10 ! m11!! m12!! m13 |- style="border-bottom:2px solid #999;" ! P1 | bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 ||   ||   ||   ||   ||   ||   ||   ||   ||   |- ! P2 | bgcolor="#9cf"|1 ||   ||   ||   || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 ||   ||   ||   ||   ||   ||   |- ! P3 | bgcolor="#9cf"|1 ||   || ||   ||   ||   ||   || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 ||   ||   ||   |- style="border-bottom:2px solid #999;" ! P4 | bgcolor="#9cf"|1 ||   ||   ||   ||   || ||  || ||  ||   || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 |- ! P5 |   || bgcolor="#9cf"|1 ||   ||   || bgcolor="#9cf"|1 || ||  || bgcolor="#9cf"|1 ||   ||   || bgcolor="#9cf"|1 ||   ||   |- ! P6 |   || bgcolor="#9cf"|1 ||   ||   ||   || bgcolor="#9cf"|1 ||  || || bgcolor="#9cf"|1 ||   ||   || bgcolor="#9cf"|1 ||   |- style="border-bottom:2px solid #999;" ! P7 | || bgcolor="#9cf"|1 || || || || || bgcolor="#9cf"|1 || || || bgcolor="#9cf"|1 || || || bgcolor="#9cf"|1 |- ! P8 | || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || || || || bgcolor="#9cf"|1 || || || || bgcolor="#9cf"|1 |- ! P9 | || || bgcolor="#9cf"|1 || || || bgcolor="#9cf"|1 || || || || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || || |- style="border-bottom:2px solid #999;" ! P10 | || || bgcolor="#9cf"|1 || || || || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || || || || bgcolor="#9cf"|1 || |- ! P11 | || || || bgcolor="#9cf"|1 || bgcolor="#9cf"|1 || || || || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || |- ! P12 | || || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || || || || || bgcolor="#9cf"|1 |- ! P13 | || || || bgcolor="#9cf"|1 || || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || || bgcolor="#9cf"|1 || || |} To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1s appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1s appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P1, P4, P5, and P8, for example, will satisfy the third condition. This example is known as the projective plane of order three. 
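The verification described above can be automated. The following sketch (Python, not part of the original article) checks the first two incidence axioms for an arbitrary 0-1 incidence matrix whose rows are points and whose columns are lines; the 13 × 13 matrix of the order-3 plane above could equally be supplied as input, here the 7 × 7 Fano plane is used as the example.

```python
from itertools import combinations

def is_projective_incidence(matrix):
    """Check the first two projective-plane axioms for a 0-1 incidence matrix.

    Rows are points, columns are lines.  Returns True when every pair of
    distinct points lies on exactly one common line and every pair of
    distinct lines meets in exactly one common point.
    """
    n_points = len(matrix)
    n_lines = len(matrix[0])

    # Every two rows (points) share exactly one column (line) with a 1.
    for r1, r2 in combinations(range(n_points), 2):
        common = sum(matrix[r1][c] and matrix[r2][c] for c in range(n_lines))
        if common != 1:
            return False

    # Every two columns (lines) share exactly one row (point) with a 1.
    for c1, c2 in combinations(range(n_lines), 2):
        common = sum(matrix[r][c1] and matrix[r][c2] for r in range(n_points))
        if common != 1:
            return False

    return True

# Example: an incidence matrix of the 7-point Fano plane, PG(2, 2).
fano = [
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]
print(is_projective_incidence(fano))  # True
```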
Vector space construction Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. In this construction, each "point" of the real projective plane is the one-dimensional subspace (a geometric line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a (geometric) plane through the origin in the 3-space. This idea can be generalized and made more precise as follows. Let K be any division ring (skewfield). Let K3 denote the set of all triples x = (x0, x1, x2) of elements of K (a Cartesian product viewed as a vector space). For any nonzero x in K3, the minimal subspace of K3 containing x (which may be visualized as all the vectors in a line through the origin) is the subset {kx : k in K} of K3. Similarly, let x and y be linearly independent elements of K3, meaning that kx + my = 0 implies that k = m = 0. The minimal subspace of K3 containing x and y (which may be visualized as all the vectors in a plane through the origin) is the subset {kx + my : k, m in K} of K3. This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing k and m and taking the multiples of the resulting vector. Different choices of k and m that are in the same ratio will give the same line. The projective plane over K, denoted PG(2, K) or KP2, has a set of points consisting of all the 1-dimensional subspaces in K3. A subset L of the points of PG(2, K) is a line in PG(2, K) if there exists a 2-dimensional subspace of K3 whose set of 1-dimensional subspaces is exactly L. Verifying that this construction produces a projective plane is usually left as a linear algebra exercise. An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set K3 \ {(0, 0, 0)} modulo the equivalence relation x ~ kx, for all k in K×. Lines in the projective plane are defined exactly as above. The coordinates (x0, x1, x2) of a point in PG(2, K) are called homogeneous coordinates. Each triple (x0, x1, x2) represents a well-defined point in PG(2, K), except for the triple (0, 0, 0), which represents no point. Each point in PG(2, K), however, is represented by many triples. If K is a topological space, then KP2 inherits a topology via the product, subspace, and quotient topologies. Classical examples The real projective plane RP2 arises when K is taken to be the real numbers, R. As a closed, non-orientable real 2-manifold, it serves as a fundamental example in topology. In this construction, consider the unit sphere centered at the origin in R3. Each of the R3 lines in this construction intersects the sphere at two antipodal points. Since the R3 line represents a point of RP2, we will obtain the same model of RP2 by identifying the antipodal points of the sphere. The lines of RP2 will be the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry. The complex projective plane CP2 arises when K is taken to be the complex numbers, C. It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes) serve as fundamental examples in algebraic geometry. The quaternionic projective plane HP2 is also of independent interest.
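For a finite field the homogeneous-coordinate construction can be made completely concrete. The sketch below (Python, not from the article, and restricted to a prime modulus p for simplicity) represents each point of PG(2, p) by the canonical representative of its 1-dimensional subspace, namely the triple scaled so that its first nonzero coordinate is 1.

```python
def pg2_points(p):
    """Points of PG(2, p) for a prime p, as normalized homogeneous triples.

    Each nonzero triple (x0, x1, x2) over GF(p) is scaled so that its first
    nonzero coordinate equals 1; each 1-dimensional subspace then has exactly
    one such representative.
    """
    points = []
    for x0 in range(p):
        for x1 in range(p):
            for x2 in range(p):
                if (x0, x1, x2) == (0, 0, 0):
                    continue  # (0, 0, 0) represents no point
                v = (x0, x1, x2)
                lead = next(c for c in v if c != 0)
                inv = pow(lead, p - 2, p)          # inverse of lead mod p (p prime)
                norm = tuple((c * inv) % p for c in v)
                if norm not in points:
                    points.append(norm)
    return points

# PG(2, 2) has 7 points and PG(2, 3) has 13: q^2 + q + 1 in general.
print(len(pg2_points(2)), len(pg2_points(3)))  # 7 13
```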
Finite field planes By Wedderburn's Theorem, a finite division ring must be commutative and so is a field. Thus, the finite examples of this construction are known as "field planes". Taking K to be the finite field of q = pn elements, with p prime, produces a projective plane of q2 + q + 1 points. The field planes are usually denoted by PG(2, q) where PG stands for projective geometry, the "2" is the dimension and q is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2, 2). The third example above is the projective plane PG(2, 3). The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the usual diagram of the Fano plane, the seven points are shown as small balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a collineation or symmetry of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group (PGL(3, 2)) has 168 elements. Desargues' theorem and Desarguesian planes The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above. These planes are called Desarguesian planes, named after Girard Desargues. The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that cannot be constructed in this manner are called non-Desarguesian planes, and the Moulton plane given above is an example of one. The PG(2, K) notation is reserved for the Desarguesian planes. When K is a field, a very common case, they are also known as field planes and if the field is a finite field they can be called Galois planes. Subplanes A subplane of a projective plane is a subset of its points and lines that is itself a projective plane with respect to the restriction of the incidence relation. Bruck proved the following theorem. Let Π be a finite projective plane of order N with a proper subplane Π0 of order M. Then either N = M2 or N ≥ M2 + M. A subplane Π0 of Π is a Baer subplane if every point of Π is incident with a line of Π0 and every line of Π is incident with a point of Π0. A finite Desarguesian projective plane of order q admits Baer subplanes (all necessarily Desarguesian) if and only if q is a square; in this case the order of the Baer subplanes is √q. In the finite Desarguesian planes PG(2, pn), the subplanes have orders which are the orders of the subfields of the finite field GF(pn), that is, pi where i is a divisor of n. In non-Desarguesian planes, however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order M in a plane of order N with M2 + M = N is an open question. If such subplanes existed there would be projective planes of composite (non-prime power) order. Fano subplanes A Fano subplane is a subplane isomorphic to PG(2, 2), the unique projective plane of order 2.
If one considers a quadrangle (a set of four points, no three collinear) in this plane, the points determine six of the lines of the plane. The remaining three points (called the diagonal points of the quadrangle) are the points where the lines that do not intersect at a point of the quadrangle meet. The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle). In finite Desarguesian planes, PG(2, q), Fano subplanes exist if and only if q is even (that is, a power of 2). The situation in non-Desarguesian planes is unsettled. They could exist in any non-Desarguesian plane of order greater than 6, and indeed, they have been found in all non-Desarguesian planes in which they have been looked for (in both odd and even orders). An open question, apparently due to Hanna Neumann though not published by her, is: Does every non-Desarguesian plane contain a Fano subplane? A theorem concerning Fano subplanes, due to Gleason, is: If every quadrangle in a finite projective plane has collinear diagonal points, then the plane is Desarguesian (of even order). Affine planes Projectivization of the Euclidean plane produced the real projective plane. The inverse operation—starting with a projective plane, remove one line and all the points incident with that line—produces an affine plane. Definition More formally an affine plane consists of a set of lines and a set of points, and a relation between points and lines called incidence, having the following properties: Given any two distinct points, there is exactly one line incident with both of them. Given any line l and any point P not incident with l, there is exactly one line incident with P that does not meet l. There are four points such that no line is incident with more than two of them. The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines". The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. The order of a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2, q) are denoted by AG(2, q). There is a projective plane of order N if and only if there is an affine plane of order N. When there is only one affine plane of order N there is only one projective plane of order N, but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well. Construction of projective planes from affine planes The affine plane K2 over K embeds into KP2 via the map which sends affine (non-homogeneous) coordinates (x1, x2) to homogeneous coordinates (1, x1, x2). The complement of the image is the set of points of the form (0, x1, x2). From the point of view of the embedding just given, these points are the points at infinity. They constitute a line in KP2—namely, the line arising from the plane x0 = 0 in K3—called the line at infinity.
The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, x1, x2) is where all lines of slope x2 / x1 intersect. Consider for example the two lines u = {(x, 0) : x in K} and y = {(x, 1) : x in K} in the affine plane K2. These lines have slope 0 and do not intersect. They can be regarded as subsets of KP2 via the embedding above, but these subsets are not lines in KP2. Add the point (0, 1, 0) to each subset; that is, let ū = {(1, x, 0) : x in K} ∪ {(0, 1, 0)} and ȳ = {(1, x, 1) : x in K} ∪ {(0, 1, 0)}. These are lines in KP2; ū arises from the plane x2 = 0 in K3, while ȳ arises from the plane x2 = x0. The projective lines ū and ȳ intersect at (0, 1, 0). In fact, all lines in K2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in KP2. The embedding of K2 into KP2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding which sends (x1, x2) to (x1, 1, x2) has as its complement those points of the form (x0, 0, x2), which are then regarded as points at infinity. When an affine plane does not have the form of K2 with K a division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra". Generalized coordinates One can construct a coordinate "ring"—a so-called planar ternary ring (not a genuine ring)—corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane (OP2), a projective plane over the octonions, is one of these because the octonions do not form a division ring. Conversely, given a planar ternary ring (R, T), a projective plane can be constructed (see below). The relationship is not one to one. A projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operator T can be used to produce two binary operators on the set R, by: a + b = T(a, 1, b), and a ⋅ b = T(a, b, 0). The ternary operator is linear if T(x, m, c) = x ⋅ m + c. When the set of coordinates of a projective plane actually forms a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring. Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring, while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a Pappian plane. Alternative, not necessarily associative, division algebras like the octonions correspond to Moufang planes. There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus' theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; a proof using only more "elementary" algebraic facts about division rings has also been given.
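As a small illustration of the linear ternary operator just described (a sketch, not from the article): over any field the operator T(x, m, c) = x ⋅ m + c recovers both field operations, and its geometric reading is that the point (x, y) lies on the line with slope m and intercept c exactly when y = T(x, m, c). The modulus 5 below is an arbitrary illustrative choice.

```python
# Linear planar ternary ring over GF(5), a minimal illustrative sketch.
P = 5  # any prime modulus gives a field GF(P)

def T(x, m, c):
    """Linear ternary operator T(x, m, c) = x*m + c over GF(P)."""
    return (x * m + c) % P

# The two binary operations recovered from T, as in the text:
def add(a, b):
    return T(a, 1, b)     # a + b = T(a, 1, b)

def mul(a, b):
    return T(a, b, 0)     # a . b = T(a, b, 0)

assert add(3, 4) == (3 + 4) % P
assert mul(3, 4) == (3 * 4) % P

# Geometric reading: (x, y) is on the line [m, c] exactly when y = T(x, m, c).
line_m_c = [(x, T(x, m=2, c=1)) for x in range(P)]
print(line_m_c)  # the 5 affine points of the line y = 2x + 1 over GF(5)
```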
To describe a finite projective plane of order N (≥ 2) using non-homogeneous coordinates and a planar ternary ring: Let one point be labelled (∞). Label N points, (r) where r = 0, ..., (N − 1). Label N2 points, (r, c) where r, c = 0, ..., (N − 1). On these points, construct the following lines: One line [∞] = { (∞), (0), ..., (N − 1)} N lines [c] = {(∞), (c, 0), ..., (c, N − 1)}, where c = 0, ..., (N − 1) N2 lines [r, c] = {(r) and the points (x, T(x, r, c)) }, where x, r, c = 0, ..., (N − 1) and T is the ternary operator of the planar ternary ring. For example, for N = 2 we can use the symbols {0, 1} associated with the finite field of order 2. The ternary operation defined by T(x, m, c) = x ⋅ m + c, with the operations on the right being the multiplication and addition in the field, yields the following: One line [∞] = { (∞), (0), (1)}, 2 lines [c] = {(∞), (c,0), (c,1) : c = 0, 1}, [0] = {(∞), (0,0), (0,1) } [1] = {(∞), (1,0), (1,1) } 4 lines [r, c]: (r) and the points (i, ir + c), where i = 0, 1 : r, c = 0, 1. [0,0]: {(0), (0,0), (1,0) } [0,1]: {(0), (0,1), (1,1) } [1,0]: {(1), (0,0), (1,1) } [1,1]: {(1), (0,1), (1,0) } Degenerate planes Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane. They are: the empty set; a single point, no lines; a single line, no points; a single point, a collection of lines, the point is incident with all of the lines; a single line, a collection of points, the points are all incident with the line; a point P incident with a line m, an arbitrary collection of lines all incident with P and an arbitrary collection of points all incident with m; a point P not incident with a line m, an arbitrary (can be empty) collection of lines all incident with P and all the points of intersection of these lines with m. These seven cases are not independent: the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way): 1) For any number of points P1, ..., Pn, and lines L1, ..., Lm, L1 = { P1, P2, ..., Pn} L2 = { P1 } L3 = { P1 } ... Lm = { P1 } 2) For any number of points P1, ..., Pn, and lines L1, ..., Ln, (same number of points as lines) L1 = { P2, P3, ..., Pn } L2 = { P1, P2 } L3 = { P1, P3 } ... Ln = { P1, Pn } Collineations A collineation of a projective plane is a bijective map of the plane to itself which maps points to points and lines to lines and preserves incidence, meaning that if σ is a bijection and point P is on line m, then Pσ is on mσ. If σ is a collineation of a projective plane, a point P with P = Pσ is called a fixed point of σ, and a line m with m = mσ is called a fixed line of σ. The points on a fixed line need not be fixed points, their images under σ are just constrained to lie on this line. The collection of fixed points and fixed lines of a collineation form a closed configuration, which is a system of points and lines that satisfy the first two but not necessarily the third condition in the definition of a projective plane.
Thus, the fixed point and fixed line structure for any collineation either form a projective plane by themselves, or a degenerate plane. Collineations whose fixed structure forms a plane are called planar collineations. Homography A homography (or projective transformation) of PG(2, K) is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates they can be represented by invertible 3 × 3 matrices over K which act on the points of PG(2, K) by y = M x, where x and y are points in K3 (vectors) and M is an invertible 3 × 3 matrix over K. Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices, called the projective linear group. Another type of collineation of PG(2, K) is induced by any automorphism of K; these are called automorphic collineations. If α is an automorphism of K, then the collineation given by (x0, x1, x2) → (x0α, x1α, x2α) is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2, K) are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations. Plane duality A projective plane is defined axiomatically as an incidence structure, in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines. As P and L are only sets one can interchange their roles and define a plane dual structure. By interchanging the role of "points" and "lines" in C = (P, L, I) we obtain the dual structure C* = (L, P, I*), where I* is the converse relation of I. In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as dualizing the statement. If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a statement of the proof "in C*." In the projective plane C, it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C* is also a projective plane, called the dual plane of C. If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2, K) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes and some that are, such as the Hughes planes. The Principle of plane duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C. Correlations A duality is a map from a projective plane to its dual plane (see above) which preserves incidence. That is, a duality σ will map points to lines and lines to points (Pσ is a line and mσ is a point) in such a way that if a point Q is on a line m (denoted by Q I m) then mσ I Qσ. A duality which is an isomorphism is called a correlation. If a correlation exists then the projective plane C is self-dual.
In the special case that the projective plane is of the PG(2, K) type, with K a division ring, a duality is called a reciprocity. These planes are always self-dual. By the fundamental theorem of projective geometry a reciprocity is the composition of an automorphism of K and a homography. If the automorphism involved is the identity, then the reciprocity is called a projective correlation. A correlation of order two (an involution) is called a polarity. If a correlation φ is not a polarity then φ2 is a nontrivial collineation. Finite projective planes It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer N ≥ 2 such that the plane has N2 + N + 1 points, N2 + N + 1 lines, N + 1 points on each line, and N + 1 lines through each point. The number N is called the order of the projective plane. The projective plane of order 2 is called the Fano plane.
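These counting relations can be checked directly against the examples given earlier. A short sketch (not part of the article) tabulating the counts implied by an order N:

```python
def projective_plane_counts(N):
    """Point/line counts of a finite projective plane of order N (N >= 2)."""
    return {
        "points": N * N + N + 1,
        "lines": N * N + N + 1,
        "points_per_line": N + 1,
        "lines_per_point": N + 1,
    }

# Order 2 is the Fano plane (7 points); order 3 is the 13-point example above.
for order in (2, 3, 4, 5):
    print(order, projective_plane_counts(order))
```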
Mathematics
Non-Euclidean geometry
null
24354
https://en.wikipedia.org/wiki/Pharmacology
Pharmacology
Pharmacology is the science of drugs and medications, including a substance's origin, composition, pharmacokinetics, pharmacodynamics, therapeutic use, and toxicology. More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. If substances have medicinal properties, they are considered pharmaceuticals. The field encompasses drug composition and properties, functions, sources, synthesis and drug design, molecular and cellular mechanisms, organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics, interactions, chemical biology, therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics. Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics discusses the interactions of chemicals with biological receptors, and pharmacokinetics discusses the absorption, distribution, metabolism, and excretion (ADME) of chemicals by biological systems. Pharmacology is not synonymous with pharmacy, and the two terms are frequently confused. Pharmacology, a biomedical science, deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in clinical settings, whether in a dispensing or clinical care role. The primary contrast between the two fields is thus between direct patient care (pharmacy practice) and the science-oriented research field driven by pharmacology. Etymology The word pharmacology is derived from the Greek word pharmakon, meaning "drug" or "poison", together with the Greek word logia, meaning "study of" or "knowledge of" (cf. the etymology of pharmacy). Pharmakon is related to pharmakos, the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion. The modern term pharmacon is used more broadly than the term drug because it includes endogenous substances, and biologically active substances which are not used as drugs. Typically it includes pharmacological agonists and antagonists, but also enzyme inhibitors (such as monoamine oxidase inhibitors). History The origins of clinical pharmacology date back to the Middle Ages, with pharmacognosy and Avicenna's The Canon of Medicine, Peter of Spain's Commentary on Isaac, and John of St Amand's Commentary on the Antedotary of Nicholas. Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias. Crude drugs have been used since prehistory as a preparation of substances from natural sources. However, the active ingredients of crude drugs are not purified and the substance is adulterated with other substances. Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese, Mongolian, Tibetan and Korean medicine. However, much of this has since been regarded as pseudoscience. Pharmacological substances known as entheogens may have spiritual and religious use and historical context.
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering. Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine, quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. The first pharmacology department was set up by Rudolf Buchheim in 1847, at University of Tartu, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. Subsequently, the first pharmacology department in England was set up in 1905 at University College London. Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph, and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. Modern pharmacologists use techniques from genetics, molecular biology, biochemistry, and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventive care, diagnostics, and ultimately personalized medicine. Divisions The discipline of pharmacology can be divided into many sub disciplines each with a specific focus. Systems of the body Pharmacology can also focus on specific systems comprising the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology, in the central and peripheral nervous systems; immunopharmacology in the immune system. Other divisions include cardiovascular, renal and endocrine pharmacology. Psychopharmacology is the study of the use of drugs that affect the psyche, mind and behavior (e.g. antidepressants) in treating mental disorders (e.g. depression). It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche. Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. 
Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity. Pharmacomicrobiomics is concerned with the interaction between drugs and the gut microbiome. Pharmacogenomics is the application of genomic technologies to drug discovery and further characterization of drugs related to an organism's entire genome. For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment. Clinical practice and drug discovery Pharmacology can be applied within clinical sciences. Clinical pharmacology is the application of pharmacological methods and principles in the study of drugs in humans. An example of this is posology, which is the study of dosage of medicines. Pharmacology is closely related to toxicology. Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemicals' adverse effects and risk assessment. Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy. Drug discovery Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development. Drug discovery starts with drug design, which is the inventive process of finding new drugs. In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. Drug discovery is related to pharmacoeconomics, the sub-discipline of health economics that considers the value of drugs. Pharmacoeconomics evaluates the cost and benefits of drugs in order to guide optimal healthcare resource allocation. The techniques used for the discovery, formulation, manufacturing and quality control of drugs are studied by pharmaceutical engineering, a branch of engineering. Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs. Development of medication is a vital concern to medicine, but also has strong economic and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States, the main body that regulates pharmaceuticals is the Food and Drug Administration; they enforce standards set by the United States Pharmacopoeia. In the European Union, the main body that regulates pharmaceuticals is the EMA, and they enforce standards set by the European Pharmacopoeia. The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure-activity relationship (SAR).
When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling. Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things: Carefully research the demand for their potential new product before spending an outlay of company funds. Obtain a patent on the new medicine preventing other companies from producing that medicine for a certain allocation of time. The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing. When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value. Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects. Wider contexts Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology concerns the variations of the effects of drugs in or between populations; it is the bridge between clinical pharmacology and epidemiology. Pharmacoenvironmentology or environmental pharmacology is the study of the effects of used pharmaceuticals and personal care products (PPCPs) on the environment after their elimination from the body. Human health and ecology are intimately related, so environmental pharmacology studies the effects of drugs and of pharmaceuticals and personal care products on the environment. Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology. Emerging fields Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light. The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. This is done to ultimately achieve control over when and where drugs are active, in a reversible manner, to prevent side effects and the release of drugs into the environment. Theory of pharmacology The study of chemicals requires intimate knowledge of the biological system affected. With the knowledge of cell biology and biochemistry increasing, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors, to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate cellular signaling pathways controlling cellular function). Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution), and pharmacodynamics describes the chemical's effect on the body (desired or toxic).
Systems, receptors and ligands Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems. The major systems studied in pharmacology can be categorised by their ligands and include acetylcholine, adrenaline, glutamate, GABA, dopamine, histamine, serotonin, cannabinoid and opioid. Molecular targets in pharmacology include receptors, enzymes and membrane transport proteins. Enzymes can be targeted with enzyme inhibitors. Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors, ligand gated ion channels and receptor tyrosine kinases. Network pharmacology is a subfield of pharmacology that combines principles from pharmacology, systems biology, and network analysis to study the complex interactions between drugs and targets (e.g., receptors or enzymes) in biological systems. The topology of a biochemical reaction network determines the shape of the drug dose-response curve as well as the type of drug-drug interactions, and can thus help in designing efficient and safe therapeutic strategies. Network pharmacology utilizes computational tools and network analysis algorithms to identify drug targets, predict drug-drug interactions, elucidate signaling pathways, and explore the polypharmacology of drugs. Pharmacodynamics Pharmacodynamics is the study of how a drug acts on the body, that is, the body's response to the drug. Pharmacodynamics theory often investigates the binding affinity of ligands to their receptors. Ligands can be agonists, partial agonists or antagonists at specific receptors in the body. Agonists bind to receptors and produce a biological response; a partial agonist produces a biological response lower than that of a full agonist; antagonists have affinity for a receptor but do not produce a biological response. The ability of a ligand to produce a biological response is termed efficacy; in a dose-response profile it is indicated as a percentage on the y-axis, where 100% is the maximal efficacy (all receptors are occupied). Binding affinity is the ability of a ligand to form a ligand-receptor complex, either through weak attractive forces (reversible) or a covalent bond (irreversible); because a ligand must bind its receptor before it can act, the response it produces depends on its binding affinity as well as its efficacy. Potency of a drug is the measure of its effectiveness: the EC50 is the concentration of a drug that produces 50% of its maximal response, and the lower this concentration, the higher the potency of the drug, so EC50 values can be used to compare the potencies of drugs. Medication is said to have a narrow or wide therapeutic index, a certain safety factor, or therapeutic window. This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin, some antiepileptics, aminoglycoside antibiotics). Most anti-cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors. The combined effect of drugs can be described with Loewe additivity, which is one of several common reference models. Other models include the Hill equation, Cheng-Prusoff equation and Schild regression.
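To make the dose-response ideas above concrete, the sketch below evaluates a standard Hill-type dose-response curve (a generic model, not data for any specific drug); the EC50 values, Hill coefficient and concentration units are illustrative assumptions.

```python
def hill_response(conc, ec50, emax=100.0, hill=1.0):
    """Fractional response (0-100 %) predicted by the Hill equation.

    conc  -- drug concentration (same units as ec50)
    ec50  -- concentration giving half-maximal response
    emax  -- maximal response (100 % by default)
    hill  -- Hill coefficient (steepness of the curve)
    """
    return emax * conc**hill / (ec50**hill + conc**hill)

# Two hypothetical drugs: the one with the lower EC50 is the more potent.
for conc in (0.1, 1.0, 10.0, 100.0):   # arbitrary concentration units
    a = hill_response(conc, ec50=1.0)   # drug A, EC50 = 1
    b = hill_response(conc, ec50=10.0)  # drug B, EC50 = 10
    print(f"conc={conc:6.1f}  A={a:5.1f}%  B={b:5.1f}%")
```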
Pharmacokinetics Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs. When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in L-ADME: Liberation – How is the API liberated from the medication (for solid oral forms, by disintegration into smaller particles, dispersion, or dissolution)? Absorption – How is the API absorbed (through the skin, the intestine, the oral mucosa)? Distribution – How does the API spread through the organism? Metabolism – Is the API converted chemically inside the body, and into which substances? Are these active (as well)? Could they be toxic? Excretion – How is the API excreted (through the bile, urine, breath, skin)? Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing. Pharmacokinetics is the movement of the drug in the body; it is usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of absorption, the extent of distribution, metabolism and elimination. The drug needs to have the appropriate molecular weight, polarity, etc. in order to be absorbed. The fraction of a drug that reaches the systemic circulation is termed bioavailability; it is estimated as the ratio of drug exposure (the area under the plasma concentration-time curve) after oral administration to that after intravenous administration (with intravenous dosing the first-pass effect is avoided, so no drug is lost before reaching the circulation). A drug must generally be lipophilic (lipid soluble) in order to pass through biological membranes, because biological membranes are made up of a lipid bilayer (phospholipids, etc.). Once the drug reaches the blood circulation it is distributed throughout the body, becoming more concentrated in highly perfused organs. Administration, drug policy and safety Drug policy In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements: The drug must be found to be effective against the disease for which it is seeking approval (where 'effective' means only that the drug performed better than placebo or competitors in at least two trials). The drug must meet safety criteria by being subject to animal and controlled human testing. Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome. The safety and effectiveness of prescription drugs in the U.S. are regulated by the federal Prescription Drug Marketing Act of 1987. The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar role in the UK. Medicare Part D is a prescription drug plan in the U.S. The Prescription Drug Marketing Act (PDMA) is an act related to drug policy. Prescription drugs are drugs regulated by legislation. Societies and education Societies and administration The International Union of Basic and Clinical Pharmacology, Federation of European Pharmacological Societies and European Association for Clinical Pharmacology and Therapeutics are organisations representing standardisation and regulation of clinical and scientific pharmacology.
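The absorption and elimination phases described under Pharmacokinetics above can be illustrated numerically with a standard one-compartment model with first-order absorption (the Bateman equation). This is a generic textbook model, and the parameter values below are arbitrary illustrations, not data for any real drug.

```python
import math

def plasma_concentration(t, dose, F, V, ka, ke):
    """One-compartment, first-order absorption model (Bateman equation).

    t    -- time after an oral dose (h)
    dose -- administered dose (mg)
    F    -- oral bioavailability (fraction reaching systemic circulation)
    V    -- apparent volume of distribution (L)
    ka   -- absorption rate constant (1/h)
    ke   -- elimination rate constant (1/h); half-life = ln(2)/ke
    """
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative parameters only: 500 mg oral dose, F = 0.8, V = 40 L,
# ka = 1.0 /h, ke = 0.1 /h (elimination half-life of about 6.9 h).
for t in (0.5, 1, 2, 4, 8, 12, 24):
    c = plasma_concentration(t, dose=500, F=0.8, V=40, ka=1.0, ke=0.1)
    print(f"t = {t:4.1f} h   C = {c:5.2f} mg/L")
```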
Systems for medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by the Food and Drug Administration; the Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act; Hong Kong Drug Registration, administered by the Pharmaceutical Service of the Department of Health (Hong Kong); and the National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (ATC, or ATC/DDD), administered by the World Health Organization; the Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan; and SNOMED, C axis. Ingredients of drugs have been categorised by Unique Ingredient Identifier. Education The study of pharmacology overlaps with biomedical sciences and is the study of the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries, and promote a better understanding of human physiology. Students of pharmacology must have a detailed working knowledge of aspects in physiology, pathology, and chemistry. They may also require knowledge of plants as sources of pharmacologically active compounds. Modern pharmacology is interdisciplinary and involves biophysical and computational sciences, and analytical chemistry. A pharmacist needs to be well-equipped with knowledge of pharmacology for application in pharmaceutical research or pharmacy practice in hospitals or commercial organisations selling to customers. Pharmacologists, however, usually work in a laboratory undertaking research or development of new products. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a medical school curriculum.
Biology and health sciences
Drugs and pharmacology
null
24373
https://en.wikipedia.org/wiki/Pain
Pain
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage." Pain motivates organisms to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease. Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. People in pain experience impaired concentration, working memory, mental flexibility, problem solving and information processing speed, and are more likely to experience irritability, depression, and anxiety. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, cognitive behavioral therapy, excitement, or distraction can affect pain's intensity or unpleasantness. Etymology First attested in English in 1297, the word peyn comes from the Old French peine, in turn from Latin poena meaning "punishment, penalty" (also meaning "torment, hardship, suffering" in Late Latin) and that from Greek ποινή (poine), generally meaning "price paid, penalty, punishment". Classification The International Association for the Study of Pain recommends using specific features to describe a patient's pain: Region of the body involved (e.g. abdomen, lower limbs) System whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal) Duration and pattern of occurrence Intensity Cause Chronic versus acute Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed. But some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer, and idiopathic pain, may persist for years. Pain that lasts a long time is called "chronic" or "persistent", and pain that resolves quickly is called "acute". Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time between onset and resolution; the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply "acute" to pain that lasts less than 30 days, "chronic" to pain of more than six months' duration, and "subacute" to pain that lasts from one to six months. A popular alternative definition of "chronic pain", involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as "cancer-related" or "benign." Allodynia Allodynia is pain experienced in response to a normally painless stimulus. It has no biological function and is classified by characteristics of the stimuli as cold, heat, touch, pressure or a pinprick. Phantom Phantom pain is pain felt in a part of the body that has been amputated, or from which the brain no longer receives signals. It is a type of neuropathic pain. The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. 
One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 67% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts of pain per day, or it may recur less often. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation. Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks, or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients. Mirror box therapy produces the illusion of movement and touch in a phantom limb, which in turn may cause a reduction in pain. Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten percent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief. Breakthrough Breakthrough pain is transitory pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients who often have background pain that is generally well-controlled by medications, but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl. Asymbolia and insensitivity The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury. Although unpleasantness is an essential part of the IASP definition of pain, it is possible in some patients to induce a state known as pain asymbolia, described as intense pain devoid of unpleasantness, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus. Insensitivity to pain may also result from abnormalities in the nervous system.
This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly healing foot ulcers as a result of decreased sensation. A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which include familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary for conducting pain nerve stimuli. Functional effects Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory capacity, mental flexibility, problem solving, and information processing speed. Pain is also associated with increased depression, anxiety, fear, and anger. On subsequent negative emotion Although pain is considered to be aversive and unpleasant and is therefore usually avoided, a meta-analysis that summarized and evaluated numerous studies from various psychological disciplines found a reduction in negative affect. Across studies, participants who were subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding which was also reflected in physiological parameters. A potential mechanism to explain this effect is provided by the opponent-process theory. Theory Historical Before the relatively recent discovery of neurons and their role in pain, various body functions were proposed to account for pain. There were several competing early theories of pain among the ancient Greeks: Hippocrates believed that it was due to an imbalance in vital fluids. In the 11th century, Avicenna theorized that there were a number of feeling senses, including touch, pain, and titillation. In 1644, René Descartes theorized that pain was a disturbance that passed along nerve fibers until the disturbance reached the brain. The work of Descartes and Avicenna prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but as an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed primarily by physiologists and physicians, and psychologists mostly backed the intensive theory. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse.
By the century's end, most physiology and psychology textbooks presented pain specificity as fact. Modern Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others (i.e., nociceptors) respond only to noxious, high-intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical, or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. So far, dozens of types of nociceptor ion channels have been identified, and their exact functions are still being determined. The pain signal travels from the periphery to the spinal cord along A-delta and C fibers. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain—often described as burning—carried by the C fibers. These A-delta and C fibers enter the spinal cord via Lissauer's tract and connect with spinal cord nerve fibers in the central gelatinous substance of the spinal cord. These spinal cord fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract. The neospinothalamic tract carries the fast, sharp A-delta signal to the ventral posterolateral nucleus of the thalamus. The paleospinothalamic tract carries the slow, dull C fiber pain signal. Some of the paleospinothalamic fibers peel off in the brain stem—connecting with the reticular formation or midbrain periaqueductal gray—and the remainder terminate in the intralaminar nuclei of the thalamus. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain), and pain that is distinctly located also activates the primary and secondary somatosensory cortex. Spinal cord fibers dedicated to carrying A-delta fiber pain signals and others that carry both A-delta and C fiber pain signals to the thalamus have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers and the much larger, more heavily myelinated A-beta fibers that carry touch, pressure, and vibration signals. Ronald Melzack and Patrick Wall introduced their gate control theory in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that the thin C and A-delta (pain) and large diameter A-beta (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that A-beta fiber signals acting on inhibitory cells in the dorsal horn can reduce the intensity of pain signals sent to the brain. 
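The conduction velocities above make the two-stage character of pain easy to quantify. As an illustrative calculation (the one-metre conduction path is an assumed round figure for a limb-to-spine distance, not a value from the text), taking mid-range velocities of 20 m/s for an A-delta fiber and 1 m/s for a C fiber:

$$t_{A\delta} = \frac{1\ \text{m}}{20\ \text{m/s}} = 0.05\ \text{s}, \qquad t_C = \frac{1\ \text{m}}{1\ \text{m/s}} = 1\ \text{s}$$

On this estimate, the sharp A-delta signal arrives roughly 0.95 s before the dull C-fiber signal, consistent with the sharp "first" pain and duller "second" pain described above.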
Three dimensions of pain In 1968, Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions: "sensory-discriminative" (sense of the intensity, location, quality, and duration of the pain), "affective-motivational" (unpleasantness and urge to escape the unpleasantness) and "cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction, and hypnotic suggestion). They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities may affect both sensory and affective experience, or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both the sensory-discriminative and affective-motivational dimensions of pain, while suggestion and placebos may modulate only the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed. (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435) Evolutionary and behavioral role Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals, and avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy. In The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins addresses the question of why pain should have the quality of being painful. He describes the alternative as a mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins argues that drives must compete with one another within living beings. The most "fit" creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors. This resemblance will not be perfect, however, because natural selection can be a poor designer. This may have maladaptive results such as supernormal stimuli. Pain, however, does not only wave a "red flag" within living beings but may also act as a warning sign and a call for help to other living beings. Especially in humans who readily helped each other in case of sickness or injury throughout their evolutionary history, pain might be shaped by natural selection to be a credible and convincing signal of the need for relief, help, and care. Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause) may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious. 
Thresholds In pain science, thresholds are measured by gradually increasing the intensity of a stimulus in a procedure called quantitative sensory testing, in which electrical, thermal (heat or cold), mechanical (pressure, touch, vibration), ischemic, or chemical stimuli are applied to the subject to evoke a response. The "pain perception threshold" is the point at which the subject begins to feel pain, and the "pain threshold intensity" is the stimulus intensity at which the stimulus begins to hurt. The "pain tolerance threshold" is reached when the subject acts to stop the pain. Assessment A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate pain severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating which words best describe their pain. Visual analogue scale The visual analogue scale is a common, reproducible tool in the assessment of pain and pain relief. The scale is a continuous line anchored by verbal descriptors, one for each extreme of pain, where a higher score indicates greater pain intensity. It is usually 10 cm in length, with no intermediate descriptors, so as to avoid scores clustering around a preferred numeric value. When applied as a pain descriptor, these anchors are often "no pain" and "worst imaginable pain". Cut-offs for pain classification have been recommended as no pain (0–4 mm), mild pain (5–44 mm), moderate pain (45–74 mm) and severe pain (75–100 mm); a minimal code sketch of this mapping appears below. Multidimensional pain inventory The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description. Assessment in non-verbal people Non-verbal people cannot use words to tell others that they are experiencing pain. However, they may be able to communicate through other means, such as blinking, pointing, or nodding. With a non-communicative person, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding (trying to protect part of the body from being bumped or touched) indicate pain, as do an increase or decrease in vocalizations, changes in routine behavior patterns and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary. Changes in behavior may be noticed by caregivers who are familiar with the person's normal behavior.
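The visual analogue scale cut-offs given above amount to a simple interval lookup. The following minimal Python sketch shows how a measured score in millimetres maps onto the four recommended categories; the function name and the validation step are illustrative assumptions, not part of any clinical standard or library.

```python
def classify_vas(score_mm: float) -> str:
    """Map a visual analogue scale reading (0-100 mm line) onto the
    recommended pain categories. Illustrative sketch only."""
    if not 0 <= score_mm <= 100:
        raise ValueError("VAS scores lie on a 0-100 mm line")
    if score_mm <= 4:
        return "no pain"        # 0-4 mm
    if score_mm <= 44:
        return "mild pain"      # 5-44 mm
    if score_mm <= 74:
        return "moderate pain"  # 45-74 mm
    return "severe pain"        # 75-100 mm

# Example: a mark measured 52 mm from the "no pain" anchor
print(classify_vas(52))  # moderate pain
```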
Infants do feel pain, but lack the language needed to report it, and so communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant which may not be obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than those carried to full term. Another approach, when pain is suspected, is to give the person treatment for pain, and then watch to see whether the suspected indicators of pain subside. Other reporting barriers The way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the same way that a younger person might. Their ability to recognize pain may be blunted by illness or the use of medication. Depression may also keep an older adult from reporting they are in pain. Decline in self-care may also indicate the older adult is experiencing pain. They may be reluctant to report pain because they do not want to be perceived as weak, or may feel it is impolite or shameful to complain, or they may feel the pain is a form of deserved punishment. Cultural barriers may also affect the likelihood of reporting pain. Patients may feel that certain treatments go against their religious beliefs. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction, and avoid pain treatment so as not to be prescribed potentially addictive drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while people from other cultures feel they should report pain immediately in order to receive relief. Gender can also be a perceived factor in reporting pain. Gender differences can be the result of social and cultural expectations, with, in some cultures, women expected to be more emotional and show pain, and men to be more stoic. As a result, female pain may be at a higher risk of being stigmatized, leading to less urgent treatment of women based on social expectations of their ability to accurately report it. This has been postulated to lead to extended emergency room wait times for women and frequent dismissal of their reports of pain. Diagnostic aid Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection. Physiological measurement Functional magnetic resonance imaging brain scanning has been used to measure pain, and correlates well with self-reported pain. Mechanisms Nociceptive Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be classed according to the site of origin and divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly localized pain. Examples include sprains and broken bones. Superficial somatic pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns. Neuropathic Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain. Some manifestations of neuropathic pain include: traumatic neuropathy, tic douloureux, painful diabetic neuropathy, and postherpetic neuralgia. Nociplastic Nociplastic pain is pain arising from altered nociception, despite there being no clear evidence of actual or threatened tissue damage, nor of disease or damage in the somatosensory system. Psychogenic Psychogenic pain, also called psychalgia or somatoform pain, is pain caused, increased or prolonged by mental, emotional or behavioral factors. Headaches, back pain and stomach pain are sometimes diagnosed as psychogenic. Those affected are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less real or hurtful than pain from any other source. People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points in the other direction, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved. Management Pain can be treated through a variety of methods. The most appropriate method depends upon the situation. Management of chronic pain can be difficult and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners.
Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, and accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end of life care. This neglect extends to all ages, from newborns to the medically frail elderly. In the US, African and Hispanic Americans are more likely than others to suffer unnecessarily while in the care of a physician; and women's pain is more likely to be undertreated than men's. The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a medical specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch warned that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain. Medication Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Ketamine can be used instead of opioids for short-term pain. Pain medications can cause paradoxical side effects, such as opioid-induced hyperalgesia (severe generalized pain caused by long-term opioid use). Sugar (sucrose), when taken by mouth, reduces pain in newborn babies undergoing some medical procedures (lancing of the heel, venipuncture, and intramuscular injections). Sugar does not remove pain from circumcision, and it is unknown if sugar reduces pain for other procedures. Sugar did not affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet liquid by mouth moderately reduces the rate and duration of crying caused by immunization injection in children between one and twelve months of age. Psychological Individuals with more social support experience less cancer pain, take less pain medication, report less labor pain, are less likely to use epidural anesthesia during childbirth, and are less likely to suffer chest pain after coronary artery bypass surgery. Suggestion can significantly affect pain intensity. About 35% of people report marked relief after receiving a saline injection they believed to be morphine. This placebo effect is more pronounced in people who are prone to anxiety, and so anxiety reduction may account for some of the effect, but it does not account for all of it. Placebos are more effective for intense pain than mild pain; and they produce progressively weaker effects with repeated administration. It is possible for many with chronic pain to become so absorbed in an activity or entertainment that the pain is no longer felt, or is greatly diminished. A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain under some conditions, though the number of patients enrolled in the studies was low, raising issues of statistical power to detect group differences, and most lacked credible controls for placebo or expectation.
The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions." Alternative medicine An analysis of the 13 highest quality studies of pain treatment with acupuncture, published in January 2009, concluded there was little difference in the effect of real, fake and no acupuncture. However, more recent reviews have found some benefit. Additionally, there is tentative evidence for a few herbal medicines. For chronic (long-term) lower back pain, spinal manipulation produces tiny, clinically insignificant, short-term improvements in pain and function, compared with sham therapy and other interventions. Spinal manipulation produces the same outcome as other treatments, such as general practitioner care, pain-relief drugs, physical therapy, and exercise, for acute (short-term) lower back pain. There has been some interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is inconclusive. The International Association for the Study of Pain (IASP) says that due to a lack of evidence from high quality research, it does not endorse the general use of cannabinoids to treat pain. Epidemiology Pain is the main reason for visiting an emergency department in more than 50% of cases, and is present in 30% of family practice visits. Several epidemiological studies have reported widely varying prevalence rates for chronic pain, ranging from 12 to 80% of the population. It becomes more common as people approach death. A study of 4,703 patients found that 26% had pain in the last two years of life, increasing to 46% in the last month. A survey of 6,636 children (0–18 years of age) found that, of the 5,424 respondents, 54% had experienced pain in the preceding three months. A quarter reported having experienced recurrent or continuous pain for three months or more, and a third of these reported frequent and intense pain. The intensity of chronic pain was higher for girls, and girls' reports of chronic pain increased markedly between ages 12 and 14. Society and culture Physical pain is a universal experience, and a strong motivator of human and animal behavior. As such, physical pain is used politically in relation to various issues such as pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance. The deliberate infliction of pain and the medical management of pain are both important aspects of biopower, a concept that encompasses the "set of mechanisms through which the basic biological features of the human species became the object of a political strategy". In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In Western societies, the intentional infliction of severe pain (torture) was principally used to extract confession prior to its abolition in the latter part of the 19th century. Torture as a means to punish the citizen has been reserved for offences posing a severe threat to the social fabric (for example, treason). 
The administration of torture on bodies othered by the cultural narrative, those regarded as not 'full members of society', saw a resurgence in the 20th century, possibly due to heightened warfare. Many cultures use painful ritual practices as a catalyst for psychological transformation. The use of pain to transition to a 'cleansed and purified' state is seen in religious self-flagellation practices (particularly those of Christianity and Islam), or personal catharsis in neo-primitive body suspension experiences. Beliefs about pain play an important role in sporting cultures. Pain may be viewed positively, exemplified by the 'no pain, no gain' attitude, with pain seen as an essential part of training. Sporting culture tends to normalise experiences of pain and injury and celebrate athletes who 'play hurt'. Pain has psychological, social, and physical dimensions, and is greatly influenced by cultural factors. Non-humans René Descartes argued that animals lack consciousness and therefore do not experience pain and suffering in the way that humans do. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, wrote that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. The ability of invertebrate species of animals, such as insects, to feel pain and suffering is unclear. Specialists believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, may also. The presence of pain in an animal cannot be determined directly, but it can be inferred through physical and behavioral reactions, such as paw withdrawal from various noxious mechanical stimuli in rodents. While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain, owing to the lack of any pain receptors, nerves, or a brain, and, by extension, the lack of consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not, are known for their "obvious sensory abilities". Nevertheless, no member of the plant kingdom feels pain, notwithstanding plants' abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since they lack any nervous system. The primary reason for this is that, unlike the members of the animal kingdom, whose evolutionary successes and failures are shaped by suffering, the evolution of plants is shaped simply by life and death.
Polar bear
The polar bear (Ursus maritimus) is a large bear native to the Arctic and nearby areas. It is closely related to the brown bear, and the two species can interbreed. The polar bear is the largest extant species of bear and land carnivore, with adult males weighing . The species is sexually dimorphic, as adult females are much smaller. The polar bear is white- or yellowish-furred with black skin and a thick layer of fat. It is more slender than the brown bear, with a narrower skull, longer neck and lower shoulder hump. Its teeth are sharper and more adapted to cutting meat. The paws are large and allow the bear to walk on ice and paddle in the water. Polar bears are both terrestrial and pagophilic (ice-living) and are considered marine mammals because of their dependence on marine ecosystems. They prefer the annual sea ice but live on land when the ice melts in the summer. They are mostly carnivorous and specialized for preying on seals, particularly ringed seals. Such prey is typically taken by ambush; the bear may stalk its prey on the ice or in the water, but will also wait at a breathing hole or ice edge for prey to swim by. The bear primarily feeds on the seal's energy-rich blubber. Other prey include walruses, beluga whales and some terrestrial animals. Polar bears are usually solitary but can be found in groups when on land. During the breeding season, male bears guard females and defend them from rivals. Mothers give birth to cubs in maternity dens during the winter. Young stay with their mother for up to two and a half years. The polar bear is considered a vulnerable species by the International Union for Conservation of Nature (IUCN) with an estimated total population of 22,000 to 31,000 individuals. Its biggest threats are climate change, pollution and energy development. Climate change has caused a decline in sea ice, giving the polar bear less access to its favoured prey and increasing the risk of malnutrition and starvation. Less sea ice also means that the bears must spend more time on land, increasing conflicts with people. Polar bears have been hunted, both by native and non-native peoples, for their coats, meat and other items. They have been kept in captivity in zoos and circuses and are prevalent in art, folklore, religion and modern culture. Naming The polar bear was given its common name by Thomas Pennant in A Synopsis of Quadrupeds (1771). It was known as the "white bear" in Europe between the 13th and 18th centuries, as well as "ice bear", "sea bear" and "Greenland bear". The Norse referred to it as ísbjǫrn ('ice bear') and hvítabjǫrn ('white bear'). The bear is called nanuq by the Inuit. The Netsilik cultures additionally have different names for bears based on certain factors, such as sex and age: these include adult males, single adult females, gestating females, newborns, large adolescents and dormant bears. The scientific name is Latin for 'sea bear'. Taxonomy Carl Linnaeus classified the polar bear as a type of brown bear (Ursus arctos), labelling it as Ursus maritimus albus-major, arcticus ('mostly-white sea bear, arctic') in the 1758 edition of his work Systema Naturae. Constantine John Phipps formally described the polar bear as a distinct species, Ursus maritimus, in 1774, following his 1773 voyage towards the North Pole. Because of its adaptations to a marine environment, some taxonomists, such as Theodore Knottnerus-Meyer, have placed the polar bear in its own genus, Thalarctos.
However, Ursus is widely considered to be the valid genus for the species on the basis of the fossil record and the fact that it can breed with the brown bear. Different subspecies have been proposed, including Ursus maritimus maritimus and U. m. marinus. However, these are not supported, and the polar bear is considered to be monotypic. One possible fossil subspecies, U. m. tyrannus, was posited in 1964 by Björn Kurtén, who reconstructed the subspecies from a single fragment of an ulna which was approximately 20 percent larger than expected for a polar bear. However, re-evaluation in the 21st century has indicated that the fragment likely comes from a giant brown bear. Evolution The polar bear is one of eight extant species in the bear family, Ursidae, and one of six extant species in the subfamily Ursinae. Fossils of polar bears are uncommon. The oldest known fossil is a 130,000- to 110,000-year-old jaw bone, found on Prince Charles Foreland, Norway, in 2004. Scientists in the 20th century surmised that polar bears directly descended from a population of brown bears, possibly in eastern Siberia or Alaska. Mitochondrial DNA studies in the 1990s and 2000s supported the status of the polar bear as a derivative of the brown bear, finding that some brown bear populations were more closely related to polar bears than to other brown bears, particularly the ABC Islands bears of Southeast Alaska. A 2010 study estimated that the polar bear lineage split from other brown bears around 150,000 years ago. More extensive genetic studies have refuted the idea that polar bears are directly descended from brown bears and found that the two species are separate sister lineages. The genetic similarities between polar bears and some brown bears were found to be the result of interbreeding. A 2012 study estimated the split between polar and brown bears as occurring around 600,000 years ago. A 2022 study estimated the divergence as occurring even earlier, at over one million years ago. Glaciation events over hundreds of thousands of years led to both the origin of polar bears and their subsequent interactions and hybridizations with brown bears. Studies in 2011 and 2012 concluded that gene flow went from brown bears to polar bears during hybridization. In particular, a 2011 study concluded that living polar bear populations derived their maternal lines from now-extinct Irish brown bears. Later studies have clarified that gene flow went from polar to brown bears rather than the reverse. Up to 9 percent of the genome of ABC bears was transferred from polar bears, while Irish bears had up to 21.5 percent polar bear origin. Mass hybridization between the two species appears to have stopped around 200,000 years ago. Modern hybrids are relatively rare in the wild. Analysis of the number of variations of gene copies in polar bears compared with brown bears and American black bears shows distinct adaptations. Polar bears have a less diverse array of olfactory receptor genes, a result of there being fewer odours in their Arctic habitat. With its carnivorous, high-fat diet, the species has fewer copies of the gene involved in making amylase, an enzyme that breaks down starch, and more selection for genes for fatty acid breakdown and a more efficient circulatory system. The polar bear's thicker coat is the result of more copies of genes involved in producing keratin proteins.
Characteristics The polar bear is the largest living species of bear and land carnivore, though some brown bear subspecies, like the Kodiak bear, can rival it in size. Males are generally long with a weight of . Females are smaller at with a weight of . Sexual dimorphism in the species is particularly high compared with most other mammals. Male polar bears also have proportionally larger heads than females. The weight of polar bears fluctuates during the year, as they can bulk up on fat and increase their mass by 50 percent. A fattened, pregnant female can weigh as much as . Adults may stand tall at the shoulder. The tail is long. The largest polar bear on record, reportedly weighing 1,002 kg (2,209 lb), was a male shot at Kotzebue Sound in northwestern Alaska in 1960. Compared with the brown bear, this species has a more slender build, with a narrower, flatter and smaller skull, a longer neck, and a lower shoulder hump. The snout profile is curved, resembling a "Roman nose". They have 34–42 teeth, including 12 incisors, 4 canines, 8–16 premolars and 10 molars. The teeth are adapted for a more carnivorous diet than that of the brown bear, having longer, sharper and more widely spaced canines, and smaller, more pointed cheek teeth (premolars and molars). The species has a large space or diastema between the canines and cheek teeth, which may allow it to better bite into prey. Since it normally preys on animals much smaller than itself, the polar bear does not have a particularly strong bite. Polar bears have large paws, with the front paws being broader than the back. The feet are hairier than in other bear species, providing warmth and friction when stepping on snow and sea ice. The claws are small but sharp and hooked, and are used both to snatch prey and climb onto ice. The coat consists of dense underfur around long and guard hairs around long. Males have long hairs on their forelegs, which are thought to signal their fitness to females. The outer surface of the hairs has a scaly appearance, and the guard hairs are hollow, which allows the animals to trap heat and float in the water. The transparent guard hairs forward-scatter ultraviolet light between the underfur and the skin, leading to a cycle of absorption and re-emission that keeps the animal warm. The fur appears white because of the backscatter of incident light and the absence of pigment. Polar bears gain a yellowish colouration as they are exposed more to the sun; this is reversed after they moult. The coat can also be grayish or brownish. Their light fur provides camouflage in their snowy environment. After emerging from the water, the bear can easily shake itself dry before the water freezes, since the hairs are resistant to tangling when wet. The skin, including the nose and lips, is black and absorbs heat. Polar bears have a thick layer of fat underneath the skin, which provides both warmth and energy. Polar bears maintain their core body temperature at about . Overheating is countered by a layer of highly vascularized striated muscle tissue and finely controlled blood vessels. Bears also cool off by entering the water. The eyes of a polar bear are close to the top of the head, which may allow them to stay out of the water when the animal is swimming at the surface. They are relatively small, which may be an adaptation against blowing snow and snow blindness. Polar bears are dichromats and lack the cone cells sensitive to medium (mainly green) wavelengths. They have many rod cells, which allow them to see at night.
The ears are small, allowing them to retain heat and not get frostbitten. They can hear best at frequencies of 11.2–22.5 kHz, a wider frequency range than expected given that their prey mostly makes low-frequency sounds. The nasal concha creates a large surface area, so more warm air can move through the nasal passages. Their olfactory system is also large and adapted for smelling prey over vast distances. The animal has reniculate kidneys, which filter out the salt in its food. Distribution and habitat Polar bears inhabit the Arctic and adjacent areas. Their range includes Greenland, Canada, Alaska, Russia and the Svalbard Archipelago of Norway. Polar bears have been recorded as close as from the North Pole. The southern limits of their range include James Bay and Newfoundland and Labrador in Canada and St. Matthew Island and the Pribilof Islands of Alaska. They are not permanent residents of Iceland but have been recorded visiting when sea ice allows them to reach it. As there has been minimal human encroachment on the bears' remote habitat, they can still be found in much of their original range, more of it than any other large land carnivore. Polar bears have been divided into at least 18 subpopulations, labelled East Greenland (EG), Barents Sea (BS), Kara Sea (KS), Laptev Sea (LVS), Chukchi Sea (CS), northern and southern Beaufort Sea (NBS and SBS), Viscount Melville (VM), M'Clintock Channel (MC), Gulf of Boothia (GB), Lancaster Sound (LS), Norwegian Bay (NB), Kane Basin (KB), Baffin Bay (BB), Davis Strait (DS), Foxe Basin (FB) and the western and southern Hudson Bay (WHB and SHB) populations. Bears in and around the Queen Elizabeth Islands have been proposed as a subpopulation, but this is not universally accepted. A 2022 study has suggested that the bears in southeast Greenland should be considered a distinct subpopulation based on their geographic isolation and genetics. Polar bear populations can also be divided into four gene clusters: Southern Canadian, Canadian Archipelago, Western Basin (northwestern Canada west to the Russian Far East) and Eastern Basin (Greenland east to Siberia). The polar bear is dependent enough on the ocean to be considered a marine mammal. It is pagophilic and mainly inhabits annual sea ice covering continental shelves and between islands of archipelagos. These areas, known as the "Arctic Ring of Life", have high biological productivity. The species tends to frequent areas where sea ice meets water, such as polynyas and leads, to hunt the seals that make up most of its diet. Polar bears travel in response to changes in ice cover throughout the year. They are forced onto land in summer when the sea ice disappears. Terrestrial habitats used by polar bears include forests, mountains, rocky areas, lakeshores and creeks. In the Chukchi and Beaufort seas, where the sea ice breaks off and floats north during the summer, polar bears generally stay on the ice, though a large portion of the population (15–40%) has been observed spending all summer on land since the 1980s. Some areas have thick multiyear ice that does not completely melt, on which the bears can stay all year, though this type of ice has fewer seals and allows for less productivity in the water. Behaviour and ecology Polar bears may travel areas as small as to as large as in a year, while drifting ice allows them to move further. Depending on ice conditions, a bear can travel an average of per day. These movements are powered by their energy-rich diet.
Polar bears move by walking and galloping and do not trot. Walking bears tilt their front paws towards each other. They can run at estimated speeds of up to but typically move at around . Polar bears are also capable swimmers and can swim at up to . One study found they can swim for an average of 3.4 days at a time and travel an average of . They can dive for as long as three minutes. When swimming, the broad front paws do the paddling, while the hind legs play a role in steering and diving. Most polar bears are active year-round. Hibernation occurs only among pregnant females. Non-hibernating bears typically have a normal 24-hour cycle even during days of all darkness or all sunlight, though cycles less than a day are more common during the former. The species is generally diurnal, being most active early in the day. Polar bears sleep close to eight hours a day on average. They will sleep in various positions, including curled up, sitting up, lying on one side, on the back with limbs spread, or on the belly with the rump elevated. On sea ice, polar bears snooze at pressure ridges where they dig on the sheltered side and lie down. After a snowstorm, a bear may rest under the snow for hours or days. On land, the bears may dig a resting spot on gravel or sand beaches. They will also sleep on rocky outcrops. In mountainous areas on the coast, mothers and subadults will sleep on slopes where they can better spot another bear coming. Adult males are less at risk from other bears and can sleep nearly anywhere. Social life Polar bears are typically solitary, aside from mothers with cubs and mating pairs. On land, they are found closer together and gather around food resources. Adult males, in particular, are more tolerant of each other in land environments and outside the breeding season. They have been recorded forming stable "alliances", travelling, resting and playing together. A dominance hierarchy exists among polar bears with the largest mature males ranking at the top. Adult females outrank subadults and adolescents and younger males outrank females of the same age. In addition, cubs with their mothers outrank those on their own. Females with dependent offspring tend to stay away from males, but are sometimes associated with other female–offspring units, creating "composite families". Polar bears are generally quiet but can produce various sounds. Chuffing, a soft pulsing call, is made by mother bears presumably to keep in contact with their young. During the breeding season, adult males will chuff at potential mates. Unlike other animals where chuffing is passed through the nostrils, in polar bears it is emitted through a partially open mouth. Cubs will cry for attention and produce humming noises while nursing. Teeth chops, jaw pops, blows, huffs, moans, growls and roars are heard in more hostile encounters. A polar bear visually communicates with its eyes, ears, nose and lips. Chemical communication can also be important: bears secrete their scent from their foot pads into their tracks, allowing individuals to keep track of one another. Diet and hunting The polar bear is a hypercarnivore, and the most carnivorous species of bear. It is an apex predator of the Arctic, preying on ice-living seals and consuming their energy-rich blubber. The most commonly taken species is the ringed seal, but they also prey on bearded seals and harp seals. Ringed seals are ideal prey as they are abundant and small enough to be overpowered by even small bears. 
Bearded seal adults are larger and are more likely to break free from an attacking bear, hence adult male bears are more successful in hunting them. Less common prey are hooded seals, spotted seals, ribbon seals and the more temperate-living harbour seals. Polar bears, mostly adult males, will occasionally hunt walruses both on land and ice. They mainly target young walruses, as adults, with their thick skin and long tusks, are too large and formidable. Besides seals, bears will prey on cetacean species such as beluga whales and narwhals, as well as reindeer, birds and their eggs, fish and marine invertebrates. They rarely eat plant material as their digestive system is too specialized for animal matter, though they have been recorded eating berries, moss, grass and seaweed. In their southern range, especially near Hudson Bay and James Bay, polar bears endure all summer without sea ice to hunt from and must subsist more on terrestrial foods. Fat reserves allow polar bears to survive for months without eating. Cannibalism is known to occur in the species. Polar bears hunt their prey in several different ways. When a bear spots a seal hauling out on the sea ice, it slowly stalks it with the head and neck lowered, possibly to make its dark nose and eyes less noticeable. As it gets closer, the bear crouches more and eventually charges at a high speed, attempting to catch the seal before it can escape into its ice hole. Some stalking bears need to move through water; traversing through water cavities in the ice when approaching the seal or swimming towards a seal on an ice floe. The polar bear can stay underwater with its nose exposed. When it gets close enough, the animal lunges from the water to attack. During a limited time in spring, polar bears will search for ringed seal pups in their birth lairs underneath the ice. Once a bear catches the scent of a hiding pup and pinpoints its location, it approaches the den quietly to not alert it. It uses its front feet to smash through the ice and then pokes its head in to catch the pup before it can escape. A ringed seal's lair can be more than below the surface of the ice and thus more massive bears are better equipped for breaking in. Some bears may simply stay still near a breathing hole or other spot near the water and wait for prey to come by. This can last hours and when a seal surfaces the bear will try to pull it out with its paws and claws. This tactic is the primary hunting method from winter to early spring. Bears hunt walrus groups by provoking them into stampeding and then look for young that have been crushed or separated from their mothers during the turmoil. There are reports of bears trying to kill or injure walruses by throwing rocks and pieces of ice on them. Belugas and narwhals are vulnerable to bear attacks when they are stranded in shallow water or stuck in isolated breathing holes in the ice. When stalking reindeer, polar bears will hide in vegetation before an ambush. On some occasions, bears may try to catch prey in open water, swimming underneath a seal or aquatic bird. Seals in particular, however, are more agile than bears in the water. Polar bears rely on raw power when trying to kill their prey, and will employ bites and paw swipes. They have the strength to pull a mid-sized seal out of the water or haul a beluga carcass for quite some distance. Polar bears only occasionally store food for later—burying it under snow—and only in the short term. 
Arctic foxes routinely follow polar bears and scavenge scraps from their kills. The bears usually tolerate them but will charge a fox that gets too close when they are feeding. Polar bears themselves will scavenge. Subadult bears will eat remains left behind by others. Females with cubs often abandon a carcass when they see an adult male approaching, though they are less likely to do so if they have not eaten in a long time. Whale carcasses are a valuable food source, particularly on land and after the sea ice melts, and attract several bears. In one area in northeastern Alaska, polar bears have been recorded competing with grizzly bears for whale carcasses. Despite their smaller size, grizzlies are more aggressive, and polar bears are likely to yield to them in confrontations. Polar bears will also scavenge at garbage dumps during ice-free periods. Reproduction and development Polar bear mating takes place on the sea ice and during spring, mostly between March and May. Males search for females in estrus and often travel in twisting paths, which reduces the chances of encountering other males while still allowing them to find females. The movements of females remain linear, and they travel more widely. The mating system can be labelled as female-defence polygyny, serial monogamy or promiscuity. Upon finding a female, a male will try to isolate and guard her. Courtship can be somewhat aggressive, and a male will pursue a female if she tries to run away. It can take days for the male to mate with the female; mating induces ovulation. After their first copulation, the couple bond. Undisturbed polar bear pairings typically last around two weeks, during which they will sleep together and mate multiple times. Competition for mates can be intense, and this has led to sexual selection for bigger males. Polar bear males often have scars from fighting. A male and female that have already bonded will flee together when another male arrives. A female mates with multiple males in a season, and a single litter can have more than one father. When the mating season ends, the female will build up more fat reserves to sustain both herself and her young. Sometime between August and October, the female constructs and enters a maternity den for winter. Depending on the area, maternity dens can be found in sea ice just off the coastline or further inland, and may be dug underneath snow, earth or a combination of both. The inside of these shelters can be around wide with a ceiling height of while the entrance may be long and wide. The temperature of a den can be much higher than that outside. Females hibernate and give birth to their cubs in the dens. Hibernating bears fast and internally recycle bodily waste. Polar bears experience delayed implantation: the fertilized embryo does not start development until the fall, between mid-September and mid-October. With delayed implantation, gestation in the species lasts seven to nine months, but actual pregnancy is only two months. Mother polar bears typically give birth to two cubs per litter. As with other bear species, newborn polar bears are tiny and altricial. The newborns have woolly hair and pink skin, with a weight of around . Their eyes remain closed for a month. The mother's fatty milk fuels their growth, and the cubs are kept warm both by the mother's body heat and the den. The mother emerges from the den between late February and early April, and her cubs are well-developed and capable of walking with her. At this time they weigh .
A polar bear family stays near the den for roughly two weeks; during this time the cubs will move and play around while the mother mostly rests. They eventually head out on the sea ice. Cubs under a year old stay close to their mother. When she hunts, they stay still and watch until she calls them back. Observing and imitating the mother helps the cubs hone their hunting skills. After their first year they become more independent and explore. At around two years old, they are capable of hunting on their own. The young suckle their mother as she is lying on her side or sitting on her rump. A lactating female cannot conceive and give birth, and cubs are weaned between two and two-and-a-half years. She may simply leave her weaned young, or they may be chased away by a courting male. Polar bears reach sexual maturity at around four years for females and six years for males. Females reach their adult size at 4 or 5 years of age, while males are fully grown at twice that age. Mortality Polar bears can live up to 30 years. The bear's long lifespan and ability to consistently produce young offset cub deaths in a population. Some cubs die in the dens or the womb if the female is not in good condition. Nevertheless, the female has a chance to produce a surviving litter the next spring if she can eat better in the coming year. Cubs will eventually starve if their mothers cannot kill enough prey. Cubs also face threats from wolves and adult male bears. Males kill cubs to bring their mother back into estrus, but also kill young outside the breeding season for food. A female and her cubs can flee from the slower male. If the male can get close to a cub, the mother may try to fight him off, sometimes at the cost of her life. Subadult bears, which are independent but not quite mature, have a particularly rough time, as they are not as successful hunters as adults. Even when they do succeed, their kill will likely be stolen by a larger bear. Hence subadults have to scavenge and are often underweight and at risk of starvation. At adulthood, polar bears have a high survival rate, though adult males suffer injuries from fights over mates. Polar bears are especially susceptible to Trichinella, a parasitic roundworm they contract through cannibalism. Conservation status In 2015, the IUCN Red List categorized the polar bear as vulnerable because of a "decline in area of occupancy, extent of occurrence and/or quality of habitat". It estimated the total population to be between 22,000 and 31,000; the current population trend is unknown. Threats to polar bear populations include climate change, pollution and energy development. In 2021, the IUCN/SSC Polar Bear Specialist Group labelled four subpopulations (Barents Sea, Chukchi Sea, Foxe Basin and Gulf of Boothia) as "likely stable", two (Kane Basin and M'Clintock Channel) as "likely increased" and three (Southern Beaufort Sea, Southern and Western Hudson Bay) as "likely decreased" over specific periods between the 1980s and 2010s. The remaining ten did not have enough data. A 2008 study predicted two-thirds of the world's polar bears may disappear by 2050, based on the reduction of sea ice, and only one population would likely survive in 50 years. A 2016 study projected a likely decline in polar bear numbers of more than 30 percent over three generations. The study concluded that declines of more than 50 percent are much less likely.
A 2012 review suggested that polar bears may become regionally extinct in southern areas by 2050 if trends continue, leaving the Canadian Archipelago and northern Greenland as strongholds. A 2020 study concluded that contemporary trends would lead to a majority of subpopulations disappearing by 2100, equaling 80 percent of the population, and that a moderate decrease in emissions is unlikely to stop the extirpation of some subpopulations within the same time period. The key danger from climate change is malnutrition or starvation due to habitat loss. Polar bears hunt seals on the sea ice, and rising temperatures cause the ice to melt earlier in the year, driving the bears to shore before they have built sufficient fat reserves to survive the period of scarce food in the late summer and early fall. Thinner sea ice tends to break more easily, which makes it more difficult for polar bears to access seals. Insufficient nourishment leads to lower reproductive rates in adult females and lower survival rates in cubs and juvenile bears. Lack of access to seals also causes bears to find food on land, which increases the risk of conflict with humans. A 2024 study concluded that greater consumption of terrestrial foods during the longer warm periods is unlikely to provide enough nourishment, increasing the risk of starvation during ice-free periods; subadult bears would be particularly vulnerable. Reduction in sea ice cover also forces bears to swim longer distances, which further depletes their energy stores and occasionally leads to drowning. Increased ice mobility may result in less stable sites for dens or longer distances for mothers travelling to and from dens on land. Thawing of permafrost would lead to more fire-prone roofs for bears denning underground. Less snow may affect insulation, while more rain could cause more cave-ins. The maximum corticosteroid-binding capacity of corticosteroid-binding globulin in polar bear serum correlates with stress in polar bears, and this has increased with climate warming. Disease-causing bacteria and parasites would flourish more readily in a warmer climate. Oil and gas development also affects polar bear habitat. The Chukchi Sea Planning Area of northwestern Alaska, which has had many drilling leases, was found to be an important site for non-denning female bears. Oil spills are also a risk: a 2018 study found that ten percent or less of prime bear habitat in the Chukchi Sea is vulnerable to a potential spill, but that a spill at full reach could impact nearly 40 percent of the polar bear population. Polar bears accumulate high levels of persistent organic pollutants such as polychlorinated biphenyls (PCBs) and chlorinated pesticides because of their position at the top of the ecological pyramid. Many of these chemicals have been internationally banned as a result of the recognition of their harm to the environment. Traces of them have slowly dwindled in polar bears but persist, and have even increased in some populations. Polar bears receive some legal protection in all the countries they inhabit. The species has been labelled as threatened under the US Endangered Species Act since 2008, while the Committee on the Status of Endangered Wildlife in Canada has listed it as a species of special concern since 1991. In 1973, the Agreement on the Conservation of Polar Bears was signed by all five nations with polar bear populations: Canada, Denmark (of which Greenland is an autonomous territory), Russia (then the USSR), Norway and the US.
This banned most harvesting of polar bears, allowed indigenous hunting using traditional methods, and promoted the preservation of bear habitat. The Convention on International Trade in Endangered Species of Wild Fauna and Flora lists the species under Appendix II, which allows regulated trade.
Relationship with humans
Polar bears have coexisted and interacted with circumpolar peoples for millennia. "White bears" are mentioned as commercial items in the Japanese book Nihon Shoki in the seventh century. It is not clear if these were polar bears or white-coloured brown bears. During the Middle Ages, Europeans considered white bears to be a novelty and were more familiar with brown- and black-coloured bears. The first known written account of the polar bear in its natural environment is found in the 13th-century anonymous Norwegian text Konungs skuggsjá, which mentions that "the white bear of Greenland wanders most of the time on the ice of the sea, hunting seals and whales and feeding on them" and says the bear is "as skillful a swimmer as any seal or whale". Over the next centuries, several European explorers would mention polar bears and describe their habits. Such accounts became more accurate after the Enlightenment, and both living and dead specimens were brought back. Nevertheless, some fanciful reports continued, including the idea that polar bears cover their noses during hunts. A relatively accurate drawing of a polar bear is found in Henry Ellis's work A Voyage to Hudson's Bay (1748). Polar bears were formally classified as a species by Constantine Phipps after his 1773 voyage to the Arctic. Accompanying him was a young Horatio Nelson, who was said to have wanted a polar bear coat for his father but failed in his hunt. In his 1785 edition of Histoire Naturelle, Comte de Buffon mentions and depicts a "sea bear", clearly a polar bear, and "land bears", likely brown and black bears. This helped promote ideas about speciation. Buffon also mentioned a "white bear of the forest", possibly a Kermode bear.
Exploitation
Polar bears were hunted as early as 8,000 years ago, as indicated by archaeological remains at Zhokhov Island in the East Siberian Sea. The oldest graphic depiction of a polar bear shows it being hunted by a man with three dogs. This rock art was among several petroglyphs found at Pegtymel in Siberia and dates from the fifth to eighth centuries. Before they had access to firearms, native people used lances, bows and arrows, and hunted in groups accompanied by dogs. Though hunting typically took place on foot, some people killed swimming bears from boats with a harpoon. Polar bears were sometimes killed in their dens. Killing a polar bear was considered a rite of passage for boys in some cultures. Native people respected the animal, and hunts were subject to strict rituals. Bears were harvested for their fur, meat, fat, tendons, bones and teeth. The fur was worn and slept on, while the bones and teeth were made into tools. For the Netsilik, the individual who finally killed the bear had the right to its fur, while the meat was shared among all in the party. Some people kept the cubs of slain bears. Norsemen in Greenland traded polar bear furs in the Middle Ages. Russia traded polar bear products as early as 1556, with Novaya Zemlya and Franz Josef Land being important commercial centres. Large-scale hunting of bears at Svalbard took place from at least the 18th century, when no fewer than 150 bears were killed each year by Russian explorers.
In the next century, more Norwegians were harvesting the bears on the island. From the 1870s to the 1970s, around 22,000 of the animals were hunted in total. Over 150,000 polar bears in total were either killed or captured in Russia and Svalbard from the 18th to the 20th century. In the Canadian Arctic, bears were harvested by commercial whalers, especially when they could not get enough whales. The Hudson's Bay Company is estimated to have sold 15,000 polar bear coats between the late 19th century and the early 20th century. In the mid-20th century, countries began to regulate polar bear harvesting, culminating in the 1973 agreement. Polar bear meat was commonly eaten as rations by explorers and sailors in the Arctic, to widely varying appraisal. Some have called it too coarse and strong-smelling to eat, while others have praised it as a "royal dish". The liver was known for being too toxic to eat, owing to the accumulation of vitamin A from the bears' prey. Polar bear fat was also used in lamps when other fuel was unavailable. Polar bear rugs were almost ubiquitous on the floors of Norwegian churches by the 13th and 14th centuries. In more modern times, classical Hollywood actors, notably Marilyn Monroe, would pose on bearskin rugs. Such images often had sexual connotations.
Conflicts
When the sea ice melts, polar bears, particularly subadults, come into conflict with humans over resources on land. They are attracted to the smell of human-made foods, particularly at garbage dumps, and may be shot when they encroach on private property. In Churchill, Manitoba, local authorities maintain a "polar bear jail" where nuisance bears are held until the sea ice freezes again. Climate change has increased conflicts between the two species. Over 50 polar bears swarmed a town in Novaya Zemlya in February 2019, leading local authorities to declare a state of emergency. From 1870 to 2014, there were an estimated 73 polar bear attacks on humans, which led to 20 deaths. The majority of attacks were by hungry males, typically subadults, while female attacks were usually in defence of the young. In comparison to brown and American black bears, attacks by polar bears more often occurred near and around where humans lived. This may be because the bears involved were desperate for food and thus more likely to seek out human settlements. As with the other two bear species, polar bears are unlikely to target more than two people at once. Though popularly thought of as the most dangerous bear, the polar bear is no more aggressive to humans than other species.
Captivity
The polar bear was long a particularly sought-after species for exotic animal collectors, since it was relatively rare, lived in remote areas and had a reputation as a ferocious beast. It is one of the few marine mammals that will reproduce well in captivity. Polar bears were originally kept only by royals and elites. The Tower of London received a polar bear as early as 1252, under King Henry III. In 1609, James VI and I of Scotland, England and Ireland was given two polar bear cubs by the sailor Jonas Poole, who had obtained them during a trip to Svalbard. At the end of the 17th century, Frederick I of Prussia housed polar bears in menageries with other wild animals. He had their claws and canines removed to allow them to perform mock fights safely. Around 1726, Catherine I of Russia gifted two polar bears to Augustus II the Strong of Poland, who desired them for his animal collection. Later, polar bears were displayed to the public in zoos and circuses.
In the early 19th century, the species was exhibited at the Exeter Exchange in London, as well as in menageries in Vienna and Paris. The first zoo in North America to exhibit a polar bear was the Philadelphia Zoo, in 1859. Polar bear exhibits were transformed by Carl Hagenbeck, who replaced cages and pits with settings that mimicked the animal's natural environment. In 1907, he revealed a complex panoramic structure at the Tierpark Hagenbeck Zoo in Hamburg consisting of exhibits made of artificial snow and ice separated by moats. Different polar animals were displayed on each platform, giving the illusion of them living together. Starting in 1975, Hellabrunn Zoo in Munich housed its polar bears in an exhibit which consisted of a glass barrier, a house, concrete platforms mimicking ice floes and a large pool. Inside the house were maternity dens and rooms for the staff to prepare and store the food. The exhibit was connected to an outdoor yard for extra room. Similar naturalistic and "immersive" exhibits were opened in the early 21st century, such as the "Arctic Ring of Life" at the Detroit Zoo and Ontario's Cochrane Polar Bear Habitat. Many zoos in Europe and North America have stopped keeping polar bears because of the size and costs of their complex exhibits. In North America, the population of polar bears in zoos peaked in 1975 at 229 animals and declined in the 21st century. Polar bears have been trained to perform in circuses. Bears in general, being large, powerful, easy to train and human-like in form, were widespread in circuses, and the white coat of polar bears made them particularly attractive. Circuses helped change the polar bear's image from a fearsome monster to something more comical. Performing polar bears were used in 1888 by Circus Krone in Germany and later, in 1904, by the Bostock and Wombwell Menagerie in England. Circus director Wilhelm Hagenbeck trained up to 75 polar bears to slide into a large tank through a chute. He began performing with them in 1908, and they had a particularly well-received show at the Hippodrome in London. Other circus tricks performed by polar bears involved tightropes, balls, roller skates and motorcycles. One of the most famous polar bear trainers in the second half of the twentieth century was the East German Ursula Böttcher, whose small stature contrasted with that of the large bears. Starting in the late 20th century, most polar bear acts were retired, and the use of these bears in circuses is now prohibited in the US. Several captive polar bears gained celebrity status in the late 20th and early 21st century, notably Knut of the Berlin Zoological Garden, who was rejected by his mother and had to be hand-reared by zookeepers. Another bear, Binky of the Alaska Zoo in Anchorage, became famous for attacking two visitors who got too close. Captive polar bears may pace back and forth, a stereotypical behaviour; in one study, they were recorded spending 14 percent of their days pacing. Gus of the Central Park Zoo was prescribed Prozac by a therapist for constantly swimming in his pool. To reduce stereotypical behaviours, zookeepers provide the bears with enrichment items to trigger their play behaviour. In sufficiently warm conditions, algae concentrated in the medulla of their fur's guard hairs may cause zoo polar bears to appear green.
Cultural significance
Polar bears have prominent roles in Inuit culture and religion. The deity Torngarsuk is sometimes imagined as a giant polar bear.
He resides underneath the sea floor in an underworld of the dead and has power over sea creatures. Kalaallit shamans would worship him through singing and dancing, and were expected to be taken by him to the sea and consumed if he considered them worthy. Polar bears were also associated with the goddess Nuliajuk, who was responsible for their creation, along with that of other sea creatures. It is believed that shamans could reach the Moon or the bottom of the ocean by riding on a guardian spirit in the form of a polar bear. Some folklore involves people turning into or disguising themselves as polar bears by donning their skins, or the reverse, with polar bears removing their skins. In Inuit astronomy, the Pleiades star cluster is conceived of as a polar bear trapped by dogs, while Orion's Belt, the Hyades and Aldebaran represent hunters, dogs and a wounded bear respectively. Nordic folklore and literature have also featured polar bears. In The Tale of Auðun of the West Fjords, written around 1275, a poor man named Auðun spends all his money on a polar bear in Greenland, but ends up wealthy after giving the bear to the king of Denmark. In the 14th-century manuscript Hauksbók, a man named Odd kills and eats a polar bear that killed his father and brother. In the story of The Grimsey Man and the Bear, a mother bear nurses and rescues a farmer stuck on an ice floe and is repaid with sheep meat. 18th-century Icelandic writings mention the legend of a "polar bear king". This beast was depicted as a polar bear with "ruddy cheeks" and a unicorn-like horn, which glows in the dark. The king could understand human speech and was considered to be very astute. Two Norwegian fairy tales, "East of the Sun and West of the Moon" and "White-Bear-King-Valemon", involve white bears turning into men and seducing women. Drawings of polar bears have been featured on maps of the northern regions. Possibly the earliest depiction of a polar bear on a map is on the Swedish Carta marina of 1539, which has a white bear on Iceland or "Islandia". A 1544 map of North America includes two polar bears near Quebec. Notable paintings featuring polar bears include François-Auguste Biard's Fighting Polar Bears (1839) and Edwin Landseer's Man Proposes, God Disposes (1864). Polar bears have also been filmed for cinema. An Inuit polar bear hunt was shot for the 1932 documentary Igloo, while the 1974 film The White Dawn staged a simulated stabbing of a trained bear for a scene. In the film The Big Show (1961), two characters are killed by a circus polar bear; the scenes were shot using animal trainers instead of the actors. In modern literature, polar bears have been characters in both children's fiction, like Hans Beer's Little Polar Bear and the Whales and Sakiasi Qaunaq's The Orphan and the Polar Bear, and fantasy novels, like Philip Pullman's His Dark Materials series. In radio, Mel Blanc provided the vocals for Jack Benny's pet polar bear Carmichael on The Jack Benny Program. The polar bear is featured on flags and coats of arms, like the coat of arms of Greenland, and in many advertisements, notably for Coca-Cola since 1922. As charismatic megafauna, polar bears have been used to raise awareness of the dangers of climate change. Aurora the polar bear is a giant marionette created by Greenpeace for climate protests. The World Wide Fund for Nature has sold plush polar bears as part of its "Arctic Home" campaign.
Photographs of polar bears have been featured in National Geographic and Time magazines, including images of bears standing on ice floes, while the climate change documentary and advocacy film An Inconvenient Truth (2006) includes an animated bear swimming. Automobile manufacturer Nissan used a polar bear in one of its commercials, in which the bear hugs a man for driving an electric car. To make a statement about global warming, a Copenhagen ice statue of a polar bear with a bronze skeleton was deliberately left to melt in the sun in 2009.
Biology and health sciences
Carnivora
null
24420
https://en.wikipedia.org/wiki/Punched%20card
Punched card
A punched card (also punch card or punched-card) is a piece of card stock that stores digital data using punched holes. Punched cards were once common in data processing and the control of automated machines. Punched cards were widely used in the 20th century, when unit record machines, organized into data processing systems, used punched cards for data input, output, and storage. The IBM 12-row/80-column punched card format came to dominate the industry. Many early digital computers used punched cards as the primary medium for input of both computer programs and data. Data can be entered onto a punched card using a keypunch. While punched cards are now obsolete as a storage medium, as of 2012 some voting machines still used punched cards to record votes. Punched cards also had a significant cultural impact in the 20th century.
History
The idea of control and data storage via punched holes was developed independently on several occasions in the modern period. In most cases there is no evidence that the later inventors were aware of the earlier work.
Precursors
Basile Bouchon developed the control of a loom by punched holes in paper tape in 1725. The design was improved by his assistant Jean-Baptiste Falcon and by Jacques Vaucanson. Although these improvements controlled the patterns woven, they still required an assistant to operate the mechanism. In 1804 Joseph Marie Jacquard demonstrated a mechanism to automate loom operation. A number of punched cards were linked into a chain of any length. Each card held the instructions for shedding (raising and lowering the warp) and selecting the shuttle for a single pass. Semyon Korsakov was reputedly the first to propose punched cards in informatics, for information storage and search. Korsakov announced his new method and machines in September 1832. Charles Babbage proposed the use of "Number Cards", "pierced with certain holes and stand[ing] opposite levers connected with a set of figure wheels ... advanced they push in those levers opposite to which there are no holes on the cards and thus transfer that number together with its sign" in his description of the Calculating Engine's Store. There is no evidence that he built a practical example. In 1881, Jules Carpentier developed a method of recording and playing back performances on a harmonium using punched cards. The system was called the Mélographe Répétiteur and "writes down ordinary music played on the keyboard dans le langage de Jacquard" ("in the language of Jacquard"), that is, as holes punched in a series of cards. By 1887 Carpentier had separated the mechanism into the Melograph, which recorded the player's key presses, and the Melotrope, which played the music.
20th century
At the end of the 1800s Herman Hollerith created a method for recording data on a medium that could then be read by a machine, developing punched card data processing technology for the 1890 U.S. census. His tabulating machines read and summarized data stored on punched cards, and they began to be used for government and commercial data processing. Initially, these electromechanical machines only counted holes, but by the 1920s they had units for carrying out basic arithmetic operations. Hollerith founded the Tabulating Machine Company (1896), which was one of four companies amalgamated via stock acquisition to form a fifth company, the Computing-Tabulating-Recording Company (CTR), in 1911, later renamed International Business Machines Corporation (IBM) in 1924.
Other companies entering the punched card business included The Tabulator Limited (Britain, 1902), Deutsche Hollerith-Maschinen Gesellschaft mbH (Dehomag) (Germany, 1911), Powers Accounting Machine Company (US, 1911), Remington Rand (US, 1927), and H.W. Egli Bull (France, 1931). These companies, and others, manufactured and marketed a variety of punched cards and unit record machines for creating, sorting, and tabulating punched cards, even after the development of electronic computers in the 1950s. Both IBM and Remington Rand tied punched card purchases to machine leases, a violation of the 1914 US Clayton Antitrust Act. In 1932, the US government took both to court on this issue. Remington Rand settled quickly. IBM viewed its business as providing a service, with the cards being part of the machine, and fought all the way to the Supreme Court, losing in 1936; the court ruled that IBM could only set card specifications. "By 1937... IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day." Punched cards were even used as legal documents, such as U.S. Government checks and savings bonds. During World War II punched card equipment was used by the Allies in some of their efforts to decrypt Axis communications. See, for example, Central Bureau in Australia. At Bletchley Park in England, "some 2 million punched cards a week were being produced, indicating the sheer scale of this part of the operation". In Nazi Germany, punched cards were used for the censuses of various regions and other purposes (see IBM and the Holocaust). Punched card technology developed into a powerful tool for business data-processing. By 1950 punched cards had become ubiquitous in industry and government. "Do not fold, spindle or mutilate," a warning that appeared on some punched cards distributed as documents, such as checks and utility bills, to be returned for processing, became a motto for the post-World War II era. In 1956 IBM signed a consent decree requiring, amongst other things, that IBM would by 1962 have no more than one-half of the punched card manufacturing capacity in the United States. Tom Watson Jr.'s decision to sign this decree (IBM saw the punched card provisions as its most significant point) completed the transfer of power to him from Thomas Watson Sr. The Univac UNITYPER introduced magnetic tape for data entry in the 1950s. During the 1960s, the punched card was gradually replaced as the primary means of data storage by magnetic tape, as better, more capable computers became available. Mohawk Data Sciences introduced a magnetic tape encoder in 1965, a system marketed as a keypunch replacement that was somewhat successful. Punched cards were still commonly used for entering both data and computer programs until the mid-1980s, when the combination of lower-cost magnetic disk storage and affordable interactive terminals on less expensive minicomputers made punched cards obsolete for these roles as well. However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards, the IBM 3270 for example, displayed 80 columns of text in text mode, for compatibility with existing software. Some programs still operate on the convention of 80 text columns, although fewer and fewer do as newer systems employ graphical user interfaces with variable-width type fonts.
Nomenclature
The terms punched card, punch card, and punchcard were all commonly used, as were IBM card and Hollerith card (after Herman Hollerith). IBM used "IBM card" or, later, "punched card" at first mention in its documentation and thereafter simply "card" or "cards". Specific formats were often indicated by the number of character positions available, e.g. 80-column card. A sequence of cards that is input to or output from some step in an application's processing is called a card deck or simply deck. The rectangular, round, or oval bits of paper punched out were called chad (or chads) or chips (in IBM usage). Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a field. The first card of a group of cards, containing fixed or indicative information for that group, is known as a master card. Cards that are not master cards are detail cards.
Formats
The Hollerith punched cards used for the 1890 U.S. census were blank. Following that, cards commonly had printing such that the row and column position of a hole could be easily seen. Printing could include having fields named and marked by vertical lines, logos, and more. "General purpose" layouts (see, for example, the IBM 5081 below) were also available. For applications requiring master cards to be separated from following detail cards, the respective cards had different upper corner diagonal cuts and thus could be separated by a sorter. Other cards typically had one upper corner diagonal cut so that cards not oriented correctly, or cards with different corner cuts, could be identified.
Hollerith's early cards
Herman Hollerith was awarded three patents in 1889 for electromechanical tabulating machines. These patents described both paper tape and rectangular cards as possible recording media. The card shown in the patent of January 8 was printed with a template and had hole positions arranged close to the edges so they could be reached by a railroad conductor's ticket punch, with the center reserved for written descriptions. Hollerith was originally inspired by railroad tickets that let the conductor encode a rough description of the passenger. When use of the ticket punch proved tiring and error-prone, Hollerith developed the pantograph "keyboard punch". It featured an enlarged diagram of the card, indicating the positions of the holes to be punched. A printed reading board could be placed under a card that was to be read manually. Hollerith envisioned a number of card sizes. In an article describing his proposed system for tabulating the 1890 U.S. census, Hollerith suggested that a card of Manila stock "would be sufficient to answer all ordinary purposes." The cards used in the 1890 census had round holes, 12 rows and 24 columns. A reading board for these cards can be seen at the Columbia University Computing History site. At some point, 3 1/4 by 7 3/8 inches (83 by 187 mm) became the standard card size. These are the dimensions of the then-current paper currency of 1862–1923. This size was needed in order to use available banking-type storage for the 60,000,000 punched cards to come nationwide. Hollerith's original system used an ad hoc coding system for each application, with groups of holes assigned specific meanings, e.g. sex or marital status. His tabulating machine had up to 40 counters, each with a dial divided into 100 divisions and two indicator hands: one stepped one unit with each counting pulse, while the other advanced one unit every time the first dial made a complete revolution.
This arrangement allowed a count up to 9,999. During a given tabulating run, counters were assigned specific holes or, using relay logic, combinations of holes. Later designs led to a card with ten rows, each row assigned a digit value, 0 through 9, and 45 columns. This card provided for fields to record multi-digit numbers that tabulators could sum, instead of simply counting cards. Hollerith's 45-column punched cards are illustrated in Comrie's The application of the Hollerith Tabulating Machine to Brown's Tables of the Moon.
IBM 80-column format and character codes
By the late 1920s, customers wanted to store more data on each punched card. Thomas J. Watson Sr., IBM's head, asked two of his top inventors, Clair D. Lake and J. Royden Pierce, to independently develop ways to increase data capacity without increasing the size of the punched card. Pierce wanted to keep round holes and 45 columns, but to allow each column to store more data; Lake suggested rectangular holes, which could be spaced more tightly, allowing 80 columns per punched card, thereby nearly doubling the capacity of the older format. Watson picked the latter solution, introduced as The IBM Card, in part because it was compatible with existing tabulator designs and in part because it could be protected by patents and give the company a distinctive advantage. This IBM card format, introduced in 1928, has rectangular holes, 80 columns, and 10 rows. Card size is 7 3/8 by 3 1/4 inches (187 by 83 mm). The cards are made of smooth stock, 0.007 inches (0.18 mm) thick. There are about 143 cards to the inch (56 to the centimetre). In 1964, IBM changed from square to round corners. Cards typically came in boxes of 2,000 or as continuous form cards; continuous form cards could be both pre-numbered and pre-punched for document control (checks, for example). Initially designed to record responses to yes–no questions, support for numeric, alphabetic and special characters was added through the use of columns and zones. The top three positions of a column are called zone punching positions: 12 (top), 11, and 0 (0 may be either a zone punch or a digit punch). For decimal data, the lower ten positions are called digit punching positions, 0 (top) through 9. An arithmetic sign can be specified for a decimal field by overpunching the field's rightmost column with a zone punch: 12 for plus, 11 for minus (CR). For pre-decimalization Pound sterling currency, a penny column represents the values zero through eleven: 10 (top), 11, then 0 through 9 as above. An arithmetic sign can be punched in the adjacent shilling column. Zone punches had other uses in processing, such as indicating a master card. The following diagram shows the basic character assignments (note: the 11 and 12 zones were also called the X and Y zones, respectively):

    _________________________________________
   /&-0123456789ABCDEFGHIJKLMNOPQR/STUVWXYZ
12| x           xxxxxxxxx
11|  x                   xxxxxxxxx
 0|   x                           xxxxxxxxx
 1|    x        x        x        x
 2|     x        x        x        x
 3|      x        x        x        x
 4|       x        x        x        x
 5|        x        x        x        x
 6|         x        x        x        x
 7|          x        x        x        x
 8|           x        x        x        x
 9|            x        x        x        x

In 1931, IBM began introducing upper-case letters and special characters (Powers-Samas had developed the first commercial alphabetic punched card representation in 1921). The 26 letters have two punches (zone [12,11,0] + digit [1–9]). The languages of Germany, Sweden, Denmark, Norway, Spain, Portugal and Finland require up to three additional letters; their punching is not shown here. Most special characters have two or three punches (zone [12, 11, 0, or none] + digit [2–7] + 8); a few special characters were exceptions: "&" is 12 only, "-" is 11 only, and "/" is 0 + 1. The space character has no punches.
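The zone-plus-digit scheme above maps directly onto a small lookup. The following sketch is illustrative only (the function name hollerith_punches is not any standard API) and covers just the basic character set shown in the diagram:

```python
# Illustrative encoder for the basic Hollerith/IBM character set shown
# in the diagram above: letters combine one zone punch (12, 11 or 0)
# with one digit punch; digits use a single punch; space has none.

def hollerith_punches(ch: str) -> list:
    """Return the punched row labels for one character (basic set only)."""
    if ch == " ":
        return []                          # space: no punches
    if ch == "&":
        return [12]                        # 12 zone only
    if ch == "-":
        return [11]                        # 11 zone only
    if ch == "/":
        return [0, 1]                      # 0 zone + 1 digit punch
    if ch.isdigit():
        return [int(ch)]                   # 0-9: a single digit punch
    if "A" <= ch <= "I":
        return [12, ord(ch) - ord("A") + 1]
    if "J" <= ch <= "R":
        return [11, ord(ch) - ord("J") + 1]
    if "S" <= ch <= "Z":
        return [0, ord(ch) - ord("S") + 2]
    raise ValueError(f"not in the basic character set: {ch!r}")

for ch in "IBM 704":
    print(ch, hollerith_punches(ch))       # e.g. "B" -> [12, 2]
```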
The information represented in a column by a combination of zones [12, 11, 0] and digits [0–9] is dependent on the use of that column. For example, the combination "12-1" is the letter "A" in an alphabetic column, a plus-signed digit "1" in a signed numeric column, or an unsigned digit "1" in a column where the "12" has some other use. The introduction of EBCDIC in 1964 defined columns with as many as six punches (zones [12,11,0,8,9] + digit [1–7]). IBM and other manufacturers used many different 80-column card character encodings. A 1969 American National Standard defined the punches for 128 characters and was named the Hollerith Punched Card Code (often referred to simply as Hollerith Card Code), honoring Hollerith. For some computer applications, binary formats were used, where each hole represented a single binary digit (or "bit"), every column (or row) was treated as a simple bit field, and every combination of holes was permitted. For example, on the IBM 701 and IBM 704, card data was read, using an IBM 711, into memory in row binary format: for each of the twelve rows of the card, 72 of the 80 columns, skipping the other eight, would be read into two 36-bit words, requiring 864 bits to store the whole card; a control panel was used to select the 72 columns to be read. Software would translate this data into the desired form. One convention was to use columns 1 through 72 for data, and columns 73 through 80 to sequentially number the cards. Such numbered cards could be sorted by machine so that, if a deck was dropped, the sorting machine could be used to arrange it back in order. This convention continued to be used in FORTRAN, even in later systems where the data in all 80 columns could be read. The IBM card readers 3504 and 3505 and the multifunction unit 3525 used a different encoding scheme for column binary data, also known as card image, in which each column, split into two groups of six rows (12–3 and 4–9), was encoded into two 8-bit bytes, the holes in each group represented by bits 2 to 7 (MSb-0 numbering; bits 0 and 1 unused) of successive bytes. This required 160 8-bit bytes, or 1280 bits, to store the whole card. As an aid to humans who had to deal with the punched cards, the IBM 026 and later 029 and 129 keypunch machines could print human-readable text above each of the 80 columns. As a prank, punched cards could be made in which every possible punch position had a hole. Such "lace cards" lacked structural strength and would frequently buckle and jam inside the machine. The IBM 80-column punched card format dominated the industry, such cards becoming known as just IBM cards, even though other companies made cards and equipment to process them. One of the most common punched card formats is the IBM 5081 card format, a general-purpose layout with no field divisions. This format has digits printed on it corresponding to the punch positions of the digits in each of the 80 columns. Other punched card vendors manufactured cards with this same layout and number.
IBM Stub card and Short card formats
Long cards were available with a scored stub on either end which, when torn off, left an 80-column card. The torn-off portion is called a stub card. 80-column cards were also available scored on either end, creating both a short card and a stub card when torn apart. Short cards can be processed by other IBM machines. A common length for stub cards was 51 columns. Stub cards were used in applications requiring tags, labels, or carbon copies.
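The 160-byte card image layout described above can be sketched as follows. This is a minimal illustration under the stated assumptions (a column is modelled as the set of its punched row labels, and the MSb-0 bit numbering is taken as given; the helper column_to_bytes is hypothetical, not part of any IBM software):

```python
# Sketch of "card image" (column binary) packing: each 12-row column
# becomes two bytes, rows 12-3 in the first and rows 4-9 in the
# second, with the six holes stored in bits 2-7 (MSb-0 numbering).

ROWS = [12, 11, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # top-to-bottom row order

def column_to_bytes(punched_rows):
    """Pack one column (a set of punched row labels) into two bytes."""
    bits = [1 if r in punched_rows else 0 for r in ROWS]
    hi, lo = bits[:6], bits[6:]                  # rows 12-3, then rows 4-9

    def pack(group):
        byte = 0
        for i, bit in enumerate(group):
            byte |= bit << (5 - i)               # bit 2 .. bit 7 of the byte
        return byte

    return pack(hi), pack(lo)

# A column punched 12 and 1 (the letter "A" in Hollerith coding):
print([f"{b:08b}" for b in column_to_bytes({12, 1})])
# 80 columns * 2 bytes each = 160 bytes per card, matching the text.
```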
IBM 40-column Port-A-Punch card format
According to the IBM Archive: IBM's Supplies Division introduced the Port-A-Punch in 1958 as a fast, accurate means of manually punching holes in specially scored IBM punched cards. Designed to fit in the pocket, Port-A-Punch made it possible to create punched card documents anywhere. The product was intended for "on-the-spot" recording operations—such as physical inventories, job tickets and statistical surveys—because it eliminated the need for preliminary writing or typing of source documents.
IBM 96-column format
In 1969, IBM introduced a new, smaller, round-hole, 96-column card format along with the IBM System/3 low-end business computer. These cards have tiny, 1 mm diameter circular holes, smaller than those in paper tape. Data is stored in 6-bit BCD, with three rows of 32 characters each, or in 8-bit EBCDIC. In the latter format, each column of the top tiers is combined with two punch rows from the bottom tier to form an 8-bit byte, and the middle tier is combined with two more punch rows, so that each card contains 64 bytes of 8-bit-per-byte binary-coded data. As in the 80-column card, readable text was printed in the top section of the card. There was also a fourth row of 32 characters that could be printed. This format was never widely used; it was IBM-only, and IBM did not support it on any equipment beyond the System/3, where it was quickly superseded by the 1973 IBM 3740 Data Entry System using 8-inch floppy disks. The format was, however, recycled in 1978, when IBM re-used the mechanism in its IBM 3624 ATMs as print-only receipt printers.
Powers/Remington Rand/UNIVAC 90-column format
The Powers/Remington Rand card format was initially the same as Hollerith's: 45 columns and round holes. In 1930, Remington Rand leap-frogged IBM's 80-column format of 1928 by coding two characters in each of the 45 columns, producing what is now commonly called the 90-column card. There are two sets of six rows across each card. The rows in each set are labeled 0, 1/2, 3/4, 5/6, 7/8 and 9. The even numbers in a pair are formed by combining that punch with a 9 punch (a short illustrative sketch of this digit coding appears below, after the format descriptions). Alphabetic and special characters use three or more punches.
Powers-Samas formats
The British Powers-Samas company used a variety of card formats for its unit record equipment. They began with 45 columns and round holes. Later, 36-, 40- and 65-column cards were provided. A 130-column card was also available, formed by dividing the card into two rows, each row with 65 columns and each character space with 5 punch positions. A 21-column card was comparable to the IBM Stub card.
Mark sense format
Mark sense (electrographic) cards, developed by Reynold B. Johnson at IBM, have printed ovals that could be marked with a special electrographic pencil. Cards would typically be punched with some initial information, such as the name and location of an inventory item. Information to be added, such as the quantity of the item on hand, would be marked in the ovals. Card punches with an option to detect mark sense cards could then punch the corresponding information into the card.
Aperture format
Aperture cards have a cut-out hole on the right side of the punched card. A piece of 35 mm microfilm containing a microform image is mounted in the hole. Aperture cards are used for engineering drawings from all engineering disciplines. Information about the drawing, for example the drawing number, is typically punched and printed on the remainder of the card.
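As noted under the 90-column format above, the digit coding can be derived from the stated pairing rule alone. A minimal sketch (the function name ninety_column_punches is hypothetical, and alphabetic codes, which use three or more punches, are deliberately not modelled):

```python
# Sketch of the 90-column digit coding described above: rows are
# labelled 0, 1/2, 3/4, 5/6, 7/8 and 9; the even digit of a pair adds
# a 9 punch to the pair's row.

PAIR_ROW = {1: "1/2", 2: "1/2", 3: "3/4", 4: "3/4",
            5: "5/6", 6: "5/6", 7: "7/8", 8: "7/8"}

def ninety_column_punches(digit: int) -> list:
    """Return the punched row labels for one decimal digit."""
    if digit == 0:
        return ["0"]
    if digit == 9:
        return ["9"]
    row = PAIR_ROW[digit]
    return [row] if digit % 2 == 1 else [row, "9"]   # even digit: add 9

for d in range(10):
    print(d, ninety_column_punches(d))
```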
Manufacturing
IBM's Fred M. Carroll developed a series of rotary presses that were used to produce punched cards, including a 1921 model that operated at 460 cards per minute (cpm). In 1936 he introduced a completely different press that operated at 850 cpm. Carroll's high-speed press, containing a printing cylinder, revolutionized the company's manufacturing of punched cards. It is estimated that between 1930 and 1950, the Carroll press accounted for as much as 25 percent of the company's profits. Discarded printing plates from these card presses, each printing plate the size of an IBM card and formed into a cylinder, often found use as desk pen/pencil holders, and even today are collectible IBM artifacts (every card layout had its own printing plate). In the mid-1930s a box of 1,000 cards cost $1.05.
Cultural impact
While punched cards have not been widely used for generations, the impact was so great for most of the 20th century that they still appear from time to time in popular culture. For example:
Accommodation of people's names: The Man Whose Name Wouldn't Fit.
Artist and architect Maya Lin in 2004 designed a public art installation at Ohio University, titled "Input", that looks like a punched card from the air.
Tucker Hall at the University of Missouri – Columbia features architecture that is rumored to be influenced by punched cards. Although there are only two rows of windows on the building, a rumor holds that their spacing and pattern spell out "M-I-Z beat k-U!" on a punched card, making reference to the university and state's rivalry with neighboring Kansas.
At the University of Wisconsin – Madison, the exterior windows of the Engineering Research Building were modeled after a punched card layout during its construction in 1966.
At the University of North Dakota in Grand Forks, a portion of the exterior of Gamble Hall (College of Business and Public Administration) has a series of light-colored bricks that resembles a punched card spelling out "University of North Dakota."
In the 1964–1965 Free Speech Movement, punched cards became a metaphor: "... symbol of the 'system'—first the registration system and then bureaucratic systems more generally ... a symbol of alienation ... Punched cards were the symbol of information machines, and so they became the symbolic point of attack. Punched cards, used for class registration, were first and foremost a symbol of uniformity. .... A student might feel 'he is one of out of 27,500 IBM cards' ... The president of the Undergraduate Association criticized the University as 'a machine ... IBM pattern of education.'... Robert Blaumer explicated the symbolism: he referred to the 'sense of impersonality... symbolized by the IBM technology.'" — Steven Lubar
A legacy of the 80-column punched card format is that a display of 80 characters per row was a common choice in the design of character-based terminals. As of September 2014, some character interface defaults, such as the command prompt window's width in Microsoft Windows, remain set at 80 columns, and some file formats, such as FITS, still use 80-character card images. The two-line element set format for tracking objects in Earth orbit is based on punched cards. In Arthur C. Clarke's early short story "Rescue Party", the alien explorers find a "... wonderful battery of almost human Hollerith analyzers and the five thousand million punched cards holding all that could be recorded on each man, woman and child on the planet".
Writing in 1946, Clarke, like almost all SF authors, had not then foreseen the development and eventual ubiquity of the computer. In "I.B.M.", the final track of her album This Is a Recording, comedian Lily Tomlin gives instructions that, if followed, would purportedly shrink the holes on a punch card (used by AT&T at the time for customer billing), making it unreadable.
Do Not Fold, Spindle or Mutilate
A common example of the requests often printed on punched cards that were to be individually handled, especially those intended for the public to use and return, is "Do Not Fold, Spindle or Mutilate" (in the UK, "Do not bend, spike, fold or mutilate"). Coined by Charles A. Phillips, it became a motto for the post–World War II era (even though many people had no idea what "spindle" meant) and was widely mocked and satirized. Some 1960s students at Berkeley wore buttons saying: "Do not fold, spindle or mutilate. I am a student". The motto was also used for a 1970 book by Doris Miles Disney, with a plot based around an early computer dating service, a 1971 made-for-TV movie based on that book, and a similarly titled 1967 Canadian short film, Do Not Fold, Staple, Spindle or Mutilate.
Standards
ANSI INCITS 21-1967 (R2002), Rectangular Holes in Twelve-Row Punched Cards (formerly ANSI X3.21-1967 (R1997)): specifies the size and location of rectangular holes in twelve-row punched cards.
ANSI X3.11-1990, American National Standard Specifications for General Purpose Paper Cards for Information Processing.
ANSI X3.26-1980 (R1991), Hollerith Punched Card Code.
ISO 1681:1973, Information processing – Unpunched paper cards – Specification.
ISO 6586:1980, Data processing – Implementation of the ISO 7-bit and 8-bit coded character sets on punched cards: defines ISO 7-bit and 8-bit character sets on punched cards, as well as the representation of 7-bit and 8-bit combinations on 12-row punched cards. Derived from, and compatible with, the Hollerith Code, ensuring compatibility with existing punched card files.
Punched card devices
Processing of punched cards was handled by a variety of machines, including:
Keypunches: machines with a keyboard that punched cards from operator-entered data.
Unit record equipment: machines that process data on punched cards, employed prior to the widespread use of digital computers; these include card sorters, tabulating machines and a variety of other machines.
Computer punched card readers: computer input devices used to read executable computer programs and data from punched cards under computer control. Card readers found in early computers could read up to 100 cards per minute, while traditional "high-speed" card readers could read about 1,000 cards per minute.
Computer card punches: computer output devices that punch holes in cards under computer control.
Voting machines: used into the 21st century.
Technology
Data storage
null
24438
https://en.wikipedia.org/wiki/PAL
PAL
Phase Alternating Line (PAL) is a colour encoding system for analogue television. It was one of three major analogue colour television standards, the others being NTSC and SECAM. In most countries it was broadcast at 625 lines, 50 fields (25 frames) per second, and associated with CCIR analogue broadcast television systems B, D, G, H, I or K. The articles on analogue broadcast television systems further describe frame rates, image resolution, and audio modulation. PAL video is composite video because luminance (luma, the monochrome image) and chrominance (chroma, the colour applied to the monochrome image) are transmitted together as one signal. A later evolution of the standard, PALplus, added support for widescreen broadcasts with no loss of vertical image resolution, while retaining compatibility with existing sets. Almost all of the countries using PAL are currently in the process of converting, or have already converted, their transmission standards to DVB, ISDB or DTMB. The PAL designation continues to be used in some non-broadcast contexts, especially regarding console video games.
Geographic reach
PAL was adopted by most European countries, by several African countries, by Argentina, Brazil, Paraguay and Uruguay, and by most of Asia-Pacific (including the Middle East and South Asia). Countries in those regions that did not adopt PAL were France, Francophone Africa, several ex-Soviet states, Japan, South Korea, Liberia, Myanmar, the Philippines, and Taiwan.
PAL region
With the introduction of home video releases and later digital sources (e.g. DVD-Video), the name "PAL" might be used to refer to digital formats, even though they use completely different colour encoding systems. For instance, 576i (576 interlaced lines) digital video with colour encoded as YCbCr, intended to be backward compatible and easily displayed on legacy PAL devices, is usually referred to as "PAL" (e.g. "PAL DVD"). Likewise, video game consoles outputting a 50 Hz signal might be labeled as "PAL", as opposed to 60 Hz on NTSC machines. These designations should not be confused with the analogue colour system itself.
History
In the 1950s, the Western European countries began plans to introduce colour television and were faced with the problem that the NTSC standard demonstrated several weaknesses, including colour tone shifting under poor transmission conditions, which became a major issue considering Europe's geographical and weather-related particularities. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of the PAL and SECAM standards. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second (50 hertz) and to find a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Hanover, West Germany. The format was patented by Telefunken in December 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union (EBU) on 3 January 1963. When asked why the system was named "PAL" and not "Bruch", the inventor answered that a "Bruch system" would probably not have sold very well ("Bruch" is the German word for "breakage"). The first broadcasts began in the United Kingdom in July 1967, followed by West Germany at the Berlin IFA on 25 August. The BBC channel that initially used the broadcast standard was BBC2, which had been the first UK TV service to introduce "625 lines" during 1964.
The Netherlands and Switzerland started PAL broadcasts by 1968, with Austria following the next year. The Telefunken PALcolour 708T was the first commercial PAL TV set; it was followed by the Loewe-Farbfernseher S 920 and F 900. Telefunken was later bought by the French electronics manufacturer Thomson. Thomson also bought the Compagnie Générale de Télévision, where Henri de France had developed SECAM, the first European standard for colour television. Thomson, now called Technicolor SA, also owns the RCA brand and licenses it to other companies; Radio Corporation of America, the originator of that brand, created the NTSC colour TV standard before Thomson became involved. The Soviets developed two further systems mixing concepts from PAL and SECAM, known as TRIPAL and NIIR, which never went beyond tests. In 1993, an evolution of PAL aimed at enhancing the format by allowing 16:9 aspect ratio broadcasts, while remaining compatible with existing television receivers, was introduced. Named PALplus, it was defined by ITU recommendation BT.1197-1. It was developed at the University of Dortmund in Germany, in cooperation with German terrestrial broadcasters and European and Japanese manufacturers. Adoption was limited to European countries. With the introduction of digital broadcasts and signal sources (e.g. DVDs, game consoles), the term PAL was used imprecisely to refer to the 625-line/50 Hz television system in general, to differentiate it from the 525-line/60 Hz system generally used with NTSC. For example, DVDs were labelled as PAL or NTSC (referring to the line count and frame rate) even though technically the discs carry neither a PAL nor an NTSC encoded signal. These devices would still have analogue outputs (e.g. composite video output) and would convert the digital signals (576i or 480i) to the analogue standards to assure compatibility. CCIR 625/50 and EIA 525/60 are the proper names for these (line count and field rate) standards; PAL and NTSC, on the other hand, are methods of encoding colour information in the signal.
Color decoding methods
The "PAL-D", "PAL-N", "PAL-H" and "PAL-K" designations in this section describe PAL decoding methods and are unrelated to broadcast systems with similar names. The Telefunken licence covered any decoding method that relied on the alternating subcarrier phase to reduce phase errors, described as "PAL-D" for "delay" and "PAL-N" for "new" or "chrominance lock". This excluded very basic PAL decoders that relied on the human eye to average out the odd/even line phase errors, and in the early 1970s some Japanese set manufacturers developed basic decoding systems to avoid paying royalties to Telefunken. These variations are known as "PAL-S" (for "simple" or "Volks-PAL"), operating without a delay line and suffering from the "Hanover bars" effect. An example of this solution is the Kuba Porta Color CK211P set. Another solution was to use a 1H analogue delay line to allow decoding of only the odd or even lines. For example, the chrominance on odd lines would be switched directly through to the decoder and also be stored in the delay line. Then, on even lines, the stored odd line would be decoded again. This method (known as "gated NTSC") was adopted by Sony on its 1970s Trinitron sets (KV-1300UB to KV-1330UB), and came in two versions: "PAL-H" and "PAL-K" (averaging over multiple lines). It effectively treated PAL as NTSC, suffering from hue errors and other problems inherent in NTSC, and required the addition of a manual hue control.
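Why the delay-line ("PAL-D") approach was worth the royalty can be shown numerically. The following is a minimal sketch, not a description of any actual receiver circuit: chroma is modelled as a complex phasor U + jV, a constant differential phase error phi is assumed, and averaging the current line with the switch-corrected previous line turns a hue error into a mild saturation loss.

```python
# Numerical sketch of delay-line (PAL-D style) averaging: a constant
# phase error phi rotates the chroma vector; averaging the current
# line with the (switch-corrected) previous line cancels the hue
# error, leaving only a saturation loss of cos(phi).
import cmath
import math

U, V = 0.3, 0.2                   # example chroma components (arbitrary)
phi = math.radians(10)            # assumed transmission phase error

line_a = (U + 1j * V) * cmath.exp(1j * phi)   # "NTSC" line
line_b = (U - 1j * V) * cmath.exp(1j * phi)   # phase-alternated line

u_avg = (line_a.real + line_b.real) / 2
v_avg = (line_a.imag - line_b.imag) / 2       # undo the V-switch, then average

print(f"decoded U = {u_avg:.4f}  (U*cos(phi) = {U * math.cos(phi):.4f})")
print(f"decoded V = {v_avg:.4f}  (V*cos(phi) = {V * math.cos(phi):.4f})")
# Hue (the V/U ratio) is preserved; saturation shrinks by cos(phi).
```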
Colour encoding
Most PAL systems encode the colour information using a variant of the Y′UV colour space. Y′ comprises the monochrome luma signal, while the three RGB colour channels are mixed down onto the two chrominance components, U and V. Like NTSC, PAL uses a quadrature amplitude modulated subcarrier carrying the chrominance information, added to the luma video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL 4.43, compared to 3.579545 MHz for NTSC 3.58. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers, 4.25000 and 4.40625 MHz. The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution. Lines where the colour phase is reversed compared to NTSC are often called PAL or phase-alternation lines, which justifies one of the expansions of the acronym, while the other lines are called NTSC lines. Early PAL receivers relied on the human eye to do that cancelling; however, this resulted in a comb-like effect known as Hanover bars on larger phase errors. Thus, most receivers now use a chrominance analogue delay line, which stores the received colour information on each line of display; an average of the colour information from the previous line and the current line is then used to drive the picture tube. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. A minor drawback is that the vertical colour resolution is poorer than the NTSC system's, but since the human eye also has a colour resolution that is much lower than its brightness resolution, this effect is not visible. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth (horizontal colour detail) reduced greatly compared to the luma signal. The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interference. Since the line frequency (number of lines per second) is 15625 Hz (625 lines × 50 Hz ÷ 2), the colour carrier frequency is calculated as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz. Here, 50 Hz is the field (refresh) frequency needed to create the illusion of motion, and 625 is the number of lines the PAL system supports. The original colour subcarrier is required by the colour decoder to recreate the colour difference signals. Since the carrier is not transmitted with the video information, it has to be generated locally in the receiver. In order that the phase of this locally generated signal can match the transmitted information, a 10-cycle burst of colour subcarrier is added to the video signal shortly after the line sync pulse, but before the picture information, during the so-called back porch. This colour burst is not actually in phase with the original colour subcarrier, but leads it by 45 degrees on the odd lines and lags it by 45 degrees on the even lines. This swinging burst enables the colour decoder circuitry to distinguish the phase of the vector which reverses every line.
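The subcarrier relation given above is easy to verify; the following few lines are purely a worked check of that arithmetic:

```python
# Checking the PAL subcarrier relation quoted above:
# f_sc = 283.75 cycles/line * line rate + 25 Hz offset.
line_rate = 625 * 50 / 2            # 15625 Hz (interlaced: 2 fields/frame)
f_sc = 283.75 * line_rate + 25
print(f_sc)                         # 4433618.75 Hz = 4.43361875 MHz
```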
PAL signal details
For PAL-B/G the signal has these characteristics (total horizontal sync time 12.05 μs): after 0.9 μs, a colourburst of 10 cycles is sent. Most rise/fall times fall within a tightly specified range. Amplitude is 100% for white level, 30% for black, and 0% for sync. The CVBS electrical amplitude is 1 Vpp into an impedance of 75 Ω. The total vertical sync time is 1.6 ms. As PAL is interlaced, every two fields are summed to make a complete picture frame.
Colorimetry
PAL colorimetry is defined by the ITU in Recommendation BT.470 in terms of CIE 1931 x,y coordinates. The assumed display gamma is defined as 2.8. The PAL-M system uses color primary and gamma values similar to NTSC. Color is encoded using the YUV color space. Luma (Y′) is derived from the gamma pre-corrected red, green, and blue (R′, G′, B′) primary signals:

Y′ = 0.299 R′ + 0.587 G′ + 0.114 B′

U and V are used to transmit chrominance:

U = 0.492 (B′ − Y′)
V = 0.877 (R′ − Y′)

Each has a typical bandwidth of 1.3 MHz.
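As a worked illustration of the weighting above (a sketch only; the inputs are assumed to be gamma pre-corrected values normalised to the 0..1 range, and the function name rgb_to_yuv is illustrative):

```python
# Sketch of the PAL luma/chroma weighting given above, applied to
# gamma pre-corrected R'G'B' values in the 0..1 range.
def rgb_to_yuv(r: float, g: float, b: float):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = 0.492 * (b - y)                     # blue colour difference
    v = 0.877 * (r - y)                     # red colour difference
    return y, u, v

print(rgb_to_yuv(1.0, 1.0, 1.0))   # white: luma 1.0, zero chroma
print(rgb_to_yuv(1.0, 0.0, 0.0))   # saturated red
```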
PAL-L The PAL-L (Phase Alternating Line with CCIR System L broadcast system) standard uses the same video system as PAL-B/G/H (625 lines, 50 Hz field rate, 15.625 kHz line rate), but with a larger 6 MHz video bandwidth rather than 5.5 MHz and moving the audio subcarrier to 6.5 MHz. An 8 MHz channel spacing is used for PAL-L, to maintain compatibility with System L channel spacings. PAL-N (Argentina, Paraguay and Uruguay) The PAL-N standard was created in Argentina, through Resolution No. 100 ME/76, which determined the creation of a study commission for a national color standard. The commission recommended using PAL under CCIR System N that Paraguay and Uruguay also used. It employs the 625 line/50 field per second waveform of PAL-B/G, D/K, H, and I, but on a 6 MHz channel with a chrominance subcarrier frequency of 3.582056 MHz (917/4*H) similar to NTSC (910/4*H). On the studio production level, standard PAL cameras and equipment were used, with video signals then transcoded to PAL-N for broadcast. This allows 625 line, 50 frames per second video to be broadcast in a 6 MHz channel, at some cost in horizontal resolution. PAL-M (Brazil) In Brazil, PAL is used in conjunction with the 525 line, 60 field/s CCIR System M, using (very nearly) the NTSC colour subcarrier frequency. Exact colour subcarrier frequency of PAL-M is 3.575611 MHz, or 227.25 times System M's horizontal scan frequency. Almost all other countries using system M use NTSC. The PAL colour system (either baseband or with any RF system, with the normal 4.43 MHz subcarrier unlike PAL-M) can also be applied to an NTSC-like 525-line picture to form what is often known as "PAL 60" (sometimes "PAL 60/525", "Quasi-PAL" or "Pseudo PAL"). PAL-M (a broadcast standard) however should not be confused with "PAL 60" (a video playback system—see below). Home devices Multisystem TVs PAL television receivers manufactured since the 1990s can typically decode all of the PAL variants except, in some cases PAL-M and PAL-N. Many such receivers can also receive Eastern European and Middle Eastern SECAM, though rarely French-broadcast SECAM (because France used a quasi-unique positive video modulation, system L) unless they are manufactured for the French market. They will correctly display plain (non-broadcast) CVBS or S-video SECAM signals. Many can also accept baseband NTSC-M, such as from a VCR or game console, and RF modulated NTSC with a PAL standard audio subcarrier (i.e., from a modulator), though not usually broadcast NTSC (as its 4.5 MHz audio subcarrier is not supported). Many sets also support NTSC with a 4.43 MHz color subcarrier (see PAL 60 on the next section). VHS and DVD players VHS tapes recorded from a PAL-N or a PAL-B/G, D/K, H, or I broadcast are indistinguishable because the downconverted subcarrier on the tape is the same. A VHS recorded off TV (or released) in Europe will play in colour on any PAL-N VCR and PAL-N TV in Argentina, Paraguay and Uruguay. Likewise, any tape recorded in Argentina, Paraguay or Uruguay off a PAL-N TV broadcast can be sent to anyone in European countries that use PAL (and Australia/New Zealand, etc.) and it will display in colour. This will also play back successfully in Russia and other SECAM countries, as the USSR mandated PAL compatibility in 1985—this has proved to be very convenient for video collectors. People in Argentina, Paraguay and Uruguay usually own TV sets that also display NTSC-M, in addition to PAL-N. DirecTV also conveniently broadcasts in NTSC-M for North, Central, and South America. 
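The subcarrier figures quoted for PAL-N and PAL-M above can be checked against the respective line rates, as in the sketch below. The extra 25 Hz term on the PAL-N line and the 1000/1001 factor on the System M line rate are assumptions made to reconcile the quoted numbers, not statements taken from this article.

```python
# Quick check of the PAL-N, PAL-M and NTSC subcarrier figures quoted above.

F_H_625 = 15_625.0                  # 625-line systems (PAL-B/G/N)
F_H_525 = 15_750.0 * 1000 / 1001    # 525-line System M, ~15734.27 Hz (assumed)

pal_n = (917 / 4) * F_H_625 + 25    # 3,582,056.25 Hz ~ the 3.582056 MHz quoted
ntsc  = (910 / 4) * F_H_525         # ~3,579,545 Hz (the NTSC 3.58 MHz carrier)
pal_m = 227.25 * F_H_525            # ~3,575,612 Hz ~ the 3.575611 MHz quoted

for name, f in [("PAL-N", pal_n), ("NTSC", ntsc), ("PAL-M", pal_m)]:
    print(f"{name}: {f:,.2f} Hz")
```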
Most DVD players sold in Argentina, Paraguay and Uruguay also play PAL discs—however, this is usually output in the European variant (colour subcarrier frequency 4.433618 MHz), so people who own a TV set which only works in PAL-N (plus NTSC-M in most cases) will have to watch those PAL DVD imports in black and white (unless the TV supports RGB SCART) as the colour subcarrier frequency in the TV set is the PAL-N variation, 3.582056 MHz. In the case that a VHS or DVD player works in PAL (and not in PAL-N) and the TV set works in PAL-N (and not in PAL), there are two options: images can be seen in black and white, or an inexpensive transcoder (PAL -> PAL-N) can be purchased in order to see the colours Some DVD players (usually lesser known brands) include an internal transcoder and the signal can be output in NTSC-M, with some video quality loss due to the standard conversion from a 625/50 PAL DVD to the NTSC-M 525/60 output format. A few DVD players sold in Argentina, Paraguay and Uruguay also allow a signal output of NTSC-M, PAL, or PAL-N. In that case, a PAL disc (imported from Europe) can be played back on a PAL-N TV because there are no field/line conversions, quality is generally excellent. Some special VHS video recorders are available which can allow viewers the flexibility of enjoying PAL-N recordings using a standard PAL (625/50 Hz) colour TV, or even through multi-system TV sets. Video recorders like Panasonic NV-W1E (AG-W1 for the US), AG-W2, AG-W3, NV-J700AM, Aiwa HV-M110S, HV-M1U, Samsung SV-4000W and SV-7000W feature a digital TV system conversion circuitry. PAL 60 Many 1990s-onwards videocassette recorders sold in Europe can play back NTSC tapes. When operating in this mode most of them do not output a true (625/50) PAL signal, but rather a hybrid consisting of the original NTSC line standard (525/60), with colour converted to PAL 4.43 MHz (instead of 3.58 as with NTSC and South American PAL variants and with the PAL-specific phase alternation of colour difference signal between the lines) — this is known as "PAL 60" (also "quasi-PAL" or "pseudo-PAL") with "60" standing for 60 Hz (for 525/30), instead of 50 Hz (for 625/25). Some video game consoles also output a signal in this mode. The Dreamcast pioneered PAL 60 with most of its games being able to play games at full speed like NTSC and without borders. Xbox and GameCube also support PAL 60 unlike PlayStation 2. The PlayStation 2 did not actually offer a true PAL 60 mode; while many PlayStation 2 games did offer a "PAL 60" mode as an option, the console would in fact generate an NTSC signal during 60 Hz operation. Most newer television sets can display a "PAL 60" signal correctly, but some will only do so (if at all) in black and white and/or with flickering/foldover at the bottom of the picture, or picture rolling (however, many old TV sets can display the picture properly by means of adjusting the V-Hold and V-Height knobs—assuming they have them). Some TV tuner cards or video capture cards will support this mode (although software/driver modification can be required and the manufacturers' specs may be unclear). Some DVD players offer a choice of PAL vs NTSC output for NTSC discs. PAL vs. NTSC PAL usually has 576 visible lines compared with 480 lines with NTSC, meaning that PAL has a 20% higher resolution, in fact it even has a higher resolution than Enhanced Definition standard (852x480). 
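The resolution claim at the end of the previous paragraph can be made concrete with a small calculation. The 720-pixel active width assumed for digitised 576-line PAL is the usual Rec. 601 sampling value and is an assumption, not something stated in this article.

```python
# Arithmetic behind the "20% higher resolution" and EDTV comparison above.

pal_lines, ntsc_lines = 576, 480
print(f"line-count advantage: {pal_lines / ntsc_lines - 1:.0%}")   # 20%

pal_pixels = 720 * 576      # 414,720 (Rec. 601 sampling width, assumed)
edtv_pixels = 852 * 480     # 408,960, the EDTV figure quoted above
print(pal_pixels, edtv_pixels, pal_pixels > edtv_pixels)
```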
Most TV output for PAL and NTSC use interlaced frames meaning that even lines update on one field and odd lines update on the next field. Interlacing frames gives a smoother motion with half the frame rate. NTSC is used with a frame rate of 60i or 30p whereas PAL generally uses 50i or 25p; both use a high enough frame rate to give the illusion of fluid motion. This is due to the fact that NTSC is generally used in countries with a utility frequency of 60 Hz and PAL in countries with 50 Hz, although there are many exceptions. Both PAL and NTSC have a higher frame rate than film which uses 24 frames per second. PAL has a closer frame rate to that of film, so most films are sped up 4% to play on PAL systems, shortening the runtime of the film and, without adjustment, slightly raising the pitch of the audio track. Film conversions for NTSC instead use 3:2 pull down to spread the 24 frames of film across 60 interlaced fields. This maintains the runtime of the film and preserves the original audio, but may cause worse interlacing artefacts during fast motion. NTSC receivers have a tint control to perform colour correction manually. If this is not adjusted correctly, the colours may be faulty. The PAL standard automatically cancels hue errors by phase reversal, so a tint control is unnecessary yet Saturation control can be more useful. Chrominance phase errors in the PAL system are cancelled out using a 1H delay line resulting in lower saturation, which is much less noticeable to the eye than NTSC hue errors. However, the alternation of colour information—Hanover bars—can lead to picture grain on pictures with extreme phase errors even in PAL systems, if decoder circuits are misaligned or use the simplified decoders of early designs (typically to overcome royalty restrictions). This effect will usually be observed when the transmission path is poor, typically in built up areas or where the terrain is unfavourable. The effect is more noticeable on UHF than VHF signals as VHF signals tend to be more robust. In most cases such extreme phase shifts do not occur. PAL and NTSC have slightly divergent colour spaces, but the colour decoder differences here are ignored. Outside of film and TV broadcasts, the differences between PAL and NTSC when used in the context of video games were quite dramatic. For comparison, the NTSC standard is 60 fields/30 frames per second while PAL is 50 fields/25 frames per second. To avoid timing problems or unfeasible code changes, games were slowed down by approximately 16.7%. This has led to games ported over to PAL regions being historically known for their inferior speed and frame rates compared to their NTSC counterparts, especially when they are not properly optimized for PAL standards. Full motion video rendered and encoded at 30 frames per second by the Japanese/US (NTSC) developers were often down-sampled to 25 frames per second or considered to be 50 frames per second video for PAL release—usually by means of 3:2 pull-down, resulting in motion judder. In addition, the increased resolution of PAL was often not utilised at all during conversion, creating a pseudo-letterbox effect with borders on the top and bottom of the screen, looking similar to 14:9 letterbox. This leaves the graphics with a slightly squashed look due to an incorrect aspect ratio caused by the borders. These practices were prevalent in previous generations, especially during the 8-bit and 16-bit era of video games where 2D graphics were the norm at that time. 
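The 4% speed-up, its effect on pitch and runtime, and the roughly 16.7% slow-down of PAL game ports all follow from simple ratios, sketched below for concreteness.

```python
# Arithmetic behind the 4% film speed-up and ~16.7% game slow-down figures
# above. The semitone conversion is the standard formula, included only to
# make "slightly raising the pitch" concrete.

import math

film_speedup = 25 / 24 - 1                        # ~0.042 -> the "4%" figure
pitch_shift_semitones = 12 * math.log2(25 / 24)   # ~0.71 of a semitone sharper
runtime_change = 24 / 25 - 1                      # a 2-hour film loses ~4.8 min
game_slowdown = 1 - 50 / 60                       # ~0.167 -> the "16.7%" figure

print(f"film speed-up: {film_speedup:.1%}, pitch: +{pitch_shift_semitones:.2f} semitones")
print(f"runtime change: {runtime_change:.1%}, PAL game slow-down: {game_slowdown:.1%}")
```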
The gameplay of many games with an emphasis on speed, such as the original Sonic the Hedgehog for the Sega Genesis (Mega Drive), suffered in their PAL incarnations. Starting with the sixth generation of video games, game consoles started to offer true 60 Hz modes in games ported to PAL regions. The Dreamcast was the first to offer a true "PAL 60" mode, with many games made for the system in PAL regions being closely on-par with their NTSC counterparts in terms of speed and frame rates using "PAL 60" modes. The Xbox and GameCube also featured "PAL 60" modes in games made for the region as well. The only lone exception was the PlayStation 2, where games ported over to PAL regions are oftentimes (but not always) running in 50 Hz modes. PAL region games supporting 60 Hz modes for the PlayStation 2 also requires a display with NTSC output unless RGB or component connections were used, since these allowed for colour outputs without the need for NTSC or PAL colour encoding. Otherwise, the games would display in monochrome on PAL-only displays. The problems usually associated with PAL region video games are not necessarily encountered in Brazil with the PAL-M standard used in that region, since its video system uses an identical number of visible lines and refresh rate as NTSC but with a slightly different colour encoding frequency based on PAL, modified for use with the CCIR System M broadcast television system. PAL vs. SECAM The SECAM patents predate those of PAL by several years (1956 vs. 1962). Its creator, Henri de France, in search of a response to known NTSC hue problems, came up with ideas that were to become fundamental to both European systems, namely: colour information on two successive TV lines is very similar and vertical resolution can be halved without serious impact on perceived visual quality more robust colour transmission can be achieved by spreading information on two TV lines instead of just one information from the two TV lines can be recombined using a delay line. SECAM applies those principles by transmitting alternately only one of the U and V components on each TV line, and getting the other from the delay line. QAM is not required, and frequency modulation of the subcarrier is used instead for additional robustness (sequential transmission of U and V was to be reused much later in Europe's last "analog" video systems: the MAC standards). SECAM is free of both hue and saturation errors. It is not sensitive to phase shifts between the colour burst and the chrominance signal, and for this reason was sometimes used in early attempts at colour video recording, where tape speed fluctuations could get the other systems into trouble. In the receiver, it did not require a quartz crystal (which was an expensive component at the time) and generally could do with lower accuracy delay lines and components. SECAM transmissions are more robust over longer distances than NTSC or PAL. However, owing to their FM nature, the colour signal remains present, although at reduced amplitude, even in monochrome portions of the image, thus being subject to stronger cross colour. One serious drawback for studio work is that the addition of two SECAM signals does not yield valid colour information, due to its use of frequency modulation. It was necessary to demodulate the FM and handle it as AM for proper mixing, before finally remodulating as FM, at the cost of some added complexity and signal degradation. 
In its later years, this was no longer a problem, due to the wider use of component and digital equipment. PAL can work without a delay line (PAL-S), but this configuration, sometimes referred to as "poor man's PAL", could not match SECAM in terms of picture quality. To compete with it at the same level, it had to make use of the main ideas outlined above, and as a consequence PAL had to pay licence fees to SECAM. Over the years, this contributed significantly to the estimated 500 million francs gathered by the SECAM patents (for an initial 100 million francs invested in research). Hence, PAL could be considered as a hybrid system, with its signal structure closer to NTSC, but its decoding borrowing much from SECAM. There were initial specifications to use colour with the French 819 line format (system E). However, "SECAM E" only ever existed in development phases. Actual deployment used the 625 line format. This made for easy interchange and conversion between PAL and SECAM in Europe. Conversion was often not even needed, as more and more receivers and VCRs became compliant with both standards, helped in this by the common decoding steps and components. When the SCART plug became standard, it could take RGB as an input, effectively bypassing all the colour coding formats' peculiarities. When it comes to home VCRs, all video standards use what is called "colour under" format. Colour is extracted from the high frequencies of the video spectrum, and moved to the lower part of the spectrum available from tape. Luma then uses what remains of it, above the colour frequency range. This is usually done by heterodyning for PAL (as well as NTSC). But the FM nature of colour in SECAM allows for a cheaper trick: division by 4 of the subcarrier frequency (and multiplication on replay). This became the standard for SECAM VHS recording in France. Most other countries kept using the same heterodyning process as for PAL or NTSC and this is known as MESECAM recording (as it was more convenient for some Middle East countries that used both PAL and SECAM broadcasts). Another difference in colour management is related to the proximity of successive tracks on the tape, which is a cause for chroma crosstalk in PAL. A cyclic sequence of 90° chroma phase shifts from one line to the next is used to overcome this problem. This is not needed in SECAM, as FM provides sufficient protection. Regarding early (analogue) videodiscs, the established Laserdisc standard supported only NTSC and PAL. However, a different optical disc format, the Thomson transmissive optical disc made a brief appearance on the market. At some point, it used a modified SECAM signal (single FM subcarrier at 3.6 MHz). The media's flexible and transmissive material allowed for direct access to both sides without flipping the disc, a concept that reappeared in multi-layered DVDs about fifteen years later. Countries and territories that are using or once used PAL Below are lists of countries and territories that used or once used the PAL system. Many of these have converted or are converting PAL to DVB-T (most countries), DVB-T2 (most countries), DTMB (China, Hong Kong and Macau) or ISDB-Tb (Sri Lanka, Maldives, Botswana, Brazil, Argentina, Paraguay and Uruguay). A legacy list of PAL users in 1998 is available on Recommendation ITU-R BT.470-6 - Conventional Television Systems, Appendix 1 to Annex 1. 
Using PAL B, D, G, H, K or I
(The individual country entries did not survive in this copy; only their notes remain, such as "used SECAM", "DVB-T introduction in assessment", "UHF only", "Simulcast in DVB-T", "along with SECAM", and "Two PAL-I analogue TV services operated by BFBS".)
PAL-M
(Simulcast in ISDB-Tb started on 2 December 2007. PAL broadcasting is in its final stages of abandonment, with the complete shutdown scheduled for 2025.)
PAL-N
(Simulcast in ISDB-Tb started on 28 August 2008. PAL broadcasting is in its final stages of abandonment, with the complete shutdown scheduled for 2025. Two further entries were simulcast in ISDB-Tb.)
Countries and territories that have ceased using PAL
The following countries and territories no longer use PAL for terrestrial broadcasts, and are in the process of converting from PAL to DVB-T/T2, DTMB or ISDB-T.
Technology
Broadcasting
null
24458
https://en.wikipedia.org/wiki/Polyvinyl%20chloride
Polyvinyl chloride
Polyvinyl chloride (alternatively: poly(vinyl chloride), colloquial: vinyl or polyvinyl; abbreviated: PVC) is the world's third-most widely produced synthetic polymer of plastic (after polyethylene and polypropylene). About 40 million tons of PVC are produced each year. PVC comes in rigid (sometimes abbreviated as RPVC) and flexible forms. Rigid PVC is used in construction for pipes, doors and windows. It is also used in making plastic bottles, packaging, and bank or membership cards. Adding plasticizers makes PVC softer and more flexible. It is used in plumbing, electrical cable insulation, flooring, signage, phonograph records, inflatable products, and in rubber substitutes. With cotton or linen, it is used in the production of canvas. Polyvinyl chloride is a white, brittle solid. It is soluble in ketones, chlorinated solvents, dimethylformamide, THF and DMAc. Discovery PVC was synthesized in 1872 by German chemist Eugen Baumann after extended investigation and experimentation. The polymer appeared as a white solid inside a flask of vinyl chloride that had been left on a shelf sheltered from sunlight for four weeks. In the early 20th century, the Russian chemist Ivan Ostromislensky and Fritz Klatte of the German chemical company Griesheim-Elektron both attempted to use PVC in commercial products, but difficulties in processing the rigid, sometimes brittle polymer thwarted their efforts. Waldo Semon and the B.F. Goodrich Company developed a method in 1926 to plasticize PVC by blending it with various additives, including the use of dibutyl phthalate by 1933. Production Polyvinyl chloride is produced by polymerization of the vinyl chloride monomer (VCM), as shown. About 80% of production involves suspension polymerization. Emulsion polymerization accounts for about 12%, and bulk polymerization accounts for 8%. Suspension polymerization produces particles with average diameters of 100–180 μm, whereas emulsion polymerization gives much smaller particles of average size around 0.2 μm. VCM and water are introduced into the reactor along with a polymerization initiator and other additives. The contents of the reaction vessel are pressurized and continually mixed to maintain the suspension and ensure a uniform particle size of the PVC resin. The reaction is exothermic and thus requires cooling. As the volume is reduced during the reaction (PVC is denser than VCM), water is continually added to the mixture to maintain the suspension. PVC may be manufactured from ethylene, which can be produced from either naphtha or ethane feedstock. Microstructure The polymers are linear and are strong. The monomers are mainly arranged head-to-tail, meaning that chloride is located on alternating carbon centres. PVC has mainly an atactic stereochemistry, which means that the relative stereochemistry of the chloride centres are random. Some degree of syndiotacticity of the chain gives a few percent crystallinity that is influential on the properties of the material. About 57% of the mass of PVC is chlorine. The presence of chloride groups gives the polymer very different properties from the structurally related material polyethylene. At 1.4 g/cm3, PVC's density is also higher than structurally related plastics such as polyethylene (0.88–0.96 g/cm3) and polymethylmethacrylate (1.18 g/cm3). Producers About half of the world's PVC production capacity is in China, despite the closure of many Chinese PVC plants due to issues complying with environmental regulations and poor capacities of scale. 
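The statement above that about 57% of the mass of PVC is chlorine can be checked from the repeat unit of the polymer, as in the following back-of-envelope sketch using standard atomic masses.

```python
# Back-of-envelope check of the "about 57% chlorine by mass" figure above,
# from the vinyl chloride repeat unit -CH2-CHCl- (C2H3Cl).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.45}   # g/mol

repeat_unit = {"C": 2, "H": 3, "Cl": 1}                 # -CH2-CHCl-
unit_mass = sum(ATOMIC_MASS[el] * n for el, n in repeat_unit.items())
cl_fraction = ATOMIC_MASS["Cl"] / unit_mass

print(f"repeat unit mass: {unit_mass:.2f} g/mol")       # ~62.50 g/mol
print(f"chlorine mass fraction: {cl_fraction:.1%}")     # ~56.7%, i.e. "about 57%"
```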
The largest single producer of PVC as of 2018 is Shin-Etsu Chemical of Japan, with a global share of around 30%. Additives The product of the polymerization process is unmodified PVC. Before PVC can be made into finished products, it always requires conversion into a compound by the incorporation of additives (but not necessarily all of the following) such as heat stabilizers, UV stabilizers, plasticizers, processing aids, impact modifiers, thermal modifiers, fillers, flame retardants, biocides, blowing agents and smoke suppressors, and, optionally, pigments. The choice of additives used for the PVC finished product is controlled by the cost performance requirements of the end use specification (underground pipe, window frames, intravenous tubing and flooring all have very different ingredients to suit their performance requirements). Previously, polychlorinated biphenyls (PCBs) were added to certain PVC products as flame retardants and stabilizers. Plasticizers Among the common plastics, PVC is unique in its acceptance of large amounts of plasticizer with gradual changes in physical properties from a rigid solid to a soft gel, and almost 90% of all plasticizer production is used in making flexible PVC. The majority is used in films and cable sheathing. Flexible PVC can consist of over 85% plasticizer by mass, however unplasticized PVC (UPVC) should not contain any. Phthalates The most common class of plasticizers used in PVC is phthalates, which are diesters of phthalic acid. Phthalates can be categorized as high and low, depending on their molecular weight. Low phthalates such as Bis(2-ethylhexyl) phthalate (DEHP) and Dibutyl phthalate (DBP) have increased health risks and are generally being phased out. High-molecular-weight phthalates such as diisononyl phthalate (DINP) and diisodecyl phthalate (DIDP) are generally considered safer. While DEHP has been medically approved for many years for use in medical devices, it was permanently banned for use in children's products in the US in 2008 by US Congress; the PVC-DEHP combination had proved to be very suitable for making blood bags because DEHP stabilizes red blood cells, minimizing hemolysis (red blood cell rupture). However, DEHP is coming under increasing pressure in Europe. The assessment of potential risks related to phthalates, and in particular the use of DEHP in PVC medical devices, was subject to scientific and policy review by the European Union authorities, and on 21 March 2010, a specific labeling requirement was introduced across the EU for all devices containing phthalates that are classified as CMR (carcinogenic, mutagenic or toxic to reproduction). The label aims to enable healthcare professionals to use this equipment safely, and, where needed, take appropriate precautionary measures for patients at risk of over-exposure Metal stabilizers BaZn stabilisers have successfully replaced cadmium-based stabilisers in Europe in many PVC semi-rigid and flexible applications. In Europe, particularly Belgium, there has been a commitment to eliminate the use of cadmium (previously used as a part component of heat stabilizers in window profiles) and phase out lead-based heat stabilizers (as used in pipe and profile areas) such as liquid autodiachromate and calcium polyhydrocummate by 2015. According to the final report of Vinyl 2010, cadmium was eliminated across Europe by 2007. The progressive substitution of lead-based stabilizers is also confirmed in the same document showing a reduction of 75% since 2000 and ongoing. 
This is confirmed by the corresponding growth in calcium-based stabilizers, which are increasingly used as an alternative to lead-based stabilizers, both within and outside Europe. Heat stabilizers Some of the most crucial additives are heat stabilizers. These agents minimize the loss of HCl, a degradation process that starts above 70 °C (158 °F) and is autocatalytic. Many diverse agents have been used, including, traditionally, derivatives of heavy metals (lead, cadmium). Metallic soaps (metal "salts" of fatty acids such as calcium stearate) are common in flexible PVC applications. Properties PVC is a thermoplastic polymer. Its properties are usually categorized based on rigid and flexible PVCs.
Physical sciences
Salts
null
24471
https://en.wikipedia.org/wiki/Phonograph
Phonograph
A phonograph, later called a gramophone (as a trademark since 1887, as a generic name in the UK since 1910), and since the 1940s a record player, or more recently a turntable, is a device for the mechanical and analogue reproduction of sound. The sound vibration waveforms are recorded as corresponding physical deviations of a helical or spiral groove engraved, etched, incised, or impressed into the surface of a rotating cylinder or disc, called a record. To recreate the sound, the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by it, faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated a diaphragm that produced sound waves coupled to the open air through a flaring horn, or directly to the listener's ears through stethoscope-type earphones. The phonograph was invented in 1877 by Thomas Edison; its use would rise the following year. Alexander Graham Bell's Volta Laboratory made several improvements in the 1880s and introduced the graphophone, including the use of wax-coated cardboard cylinders and a cutting stylus that moved from side to side in a zigzag groove around the record. In the 1890s, Emile Berliner initiated the transition from phonograph cylinders to flat discs with a spiral groove running from the periphery to near the centre, coining the term gramophone for disc record players, which is predominantly used in many languages. Later improvements through the years included modifications to the turntable and its drive system, stylus, pickup system, and the sound and equalization systems. The disc phonograph record was the dominant commercial audio distribution format throughout most of the 20th century, and phonographs became the first example of home audio that people owned and used at their residences. In the 1960s, the use of 8-track cartridges and cassette tapes were introduced as alternatives. By 1987, phonograph use had declined sharply due to the popularity of cassettes and the rise of the compact disc. However, records have undergone a revival since the late 2000s. Terminology The terminology used to describe record-playing devices is not uniform across the English-speaking world. In modern contexts, the playback device is often referred to as a "turntable", "record player", or "record changer". Each of these terms denotes distinct items. When integrated into a DJ setup with a mixer, turntables are colloquially known as "decks". In later versions of electric phonographs, commonly known since the 1940s as record players or turntables, the movements of the stylus are transformed into an electrical signal by a transducer. This signal is then converted back into sound through an amplifier and one or more loudspeakers. The term "phonograph", meaning "sound writing", originates from the Greek words φωνή (phonē, meaning 'sound' or 'voice') and γραφή (graphē, meaning 'writing'). Similarly, the terms "gramophone" and "graphophone" have roots in the Greek words γράμμα (gramma, meaning 'letter') and φωνή (phōnē, meaning 'voice'). In British English, "gramophone" may refer to any sound-reproducing machine that utilizes disc records. These were introduced and popularized in the UK by the Gramophone Company. Initially, "gramophone" was a proprietary trademark of the company, and any use of the name by competing disc record manufacturers was rigorously challenged in court. 
However, in 1910, an English court decision ruled that the term had become generic; United States In American English, "phonograph", properly specific to machines made by Edison, was sometimes used in a generic sense as early as the 1890s to include cylinder-playing machines made by others. But it was then considered strictly incorrect to apply it to Emile Berliner's Gramophone, a different machine that played nonrecordable discs (although Edison's original Phonograph patent included the use of discs.) Australia In Australian English, "record player" was the term; "turntable" was a more technical term; "gramophone" was restricted to the old mechanical (i.e., wind-up) players; and "phonograph" was used as in British English. The "phonograph" was first demonstrated in Australia on 14 June 1878 to a meeting of the Royal Society of Victoria by the Society's Honorary Secretary, Alex Sutherland who published "The Sounds of the Consonants, as Indicated by the Phonograph" in the Society's journal in November that year. On 8 August 1878 the phonograph was publicly demonstrated at the Society's annual conversazione, along with a range of other new inventions, including the microphone. Early history Phonautograph The phonautograph was invented on March 25, 1857, by Frenchman Édouard-Léon Scott de Martinville, an editor and typographer of manuscripts at a scientific publishing house in Paris. One day while editing Professor Longet's Traité de Physiologie, he happened upon that customer's engraved illustration of the anatomy of the human ear, and conceived of "the imprudent idea of photographing the word." In 1853 or 1854 (Scott cited both years) he began working on "le problème de la parole s'écrivant elle-même" ("the problem of speech writing itself"), aiming to build a device that could replicate the function of the human ear. Scott coated a plate of glass with a thin layer of lampblack. He then took an acoustic trumpet, and at its tapered end affixed a thin membrane that served as the analog to the eardrum. At the center of that membrane, he attached a rigid boar's bristle approximately a centimetre long, placed so that it just grazed the lampblack. As the glass plate was slid horizontally in a well formed groove at a speed of one meter per second, a person would speak into the trumpet, causing the membrane to vibrate and the stylus to trace figures that were scratched into the lampblack. On March 25, 1857, Scott received the French patent #17,897/31,470 for his device, which he called a phonautograph. The earliest known surviving recorded sound of a human voice was conducted on April 9, 1860, when Scott recorded someone singing the song "Au Clair de la Lune" ("By the Light of the Moon") on the device. However, the device was not designed to play back sounds, as Scott intended for people to read back the tracings, which he called phonautograms. This was not the first time someone had used a device to create direct tracings of the vibrations of sound-producing objects, as tuning forks had been used in this way by English physicist Thomas Young in 1807. By late 1857, with support from the Société d'encouragement pour l'industrie nationale, Scott's phonautograph was recording sounds with sufficient precision to be adopted by the scientific community, paving the way for the nascent science of acoustics. 
The device's true significance in the history of recorded sound was not fully realized prior to March 2008, when it was discovered and resurrected in a Paris patent office by First Sounds, an informal collaborative of American audio historians, recording engineers, and sound archivists founded to make the earliest sound recordings available to the public. The phonautograms were then digitally converted by scientists at the Lawrence Berkeley National Laboratory in California, who were able to play back the recorded sounds, something Scott had never conceived of. Prior to this point, the earliest known record of a human voice was thought to be an 1877 phonograph recording by Thomas Edison. The phonautograph would play a role in the development of the gramophone, whose inventor, Emile Berliner, worked with the phonautograph in the course of developing his own device. Paleophone Charles Cros, a French poet and amateur scientist, is the first person known to have made the conceptual leap from recording sound as a traced line to the theoretical possibility of reproducing the sound from the tracing and then to devising a definite method for accomplishing the reproduction. On April 30, 1877, he deposited a sealed envelope containing a summary of his ideas with the French Academy of Sciences, a standard procedure used by scientists and inventors to establish priority of conception of unpublished ideas in the event of any later dispute. An account of his invention was published on October 10, 1877, by which date Cros had devised a more direct procedure: the recording stylus could scribe its tracing through a thin coating of acid-resistant material on a metal surface and the surface could then be etched in an acid bath, producing the desired groove without the complication of an intermediate photographic procedure. The author of this article called the device a , but Cros himself favored the word , sometimes rendered in French as ('voice of the past'). Cros was a poet of meager means, not in a position to pay a machinist to build a working model, and largely content to bequeath his ideas to the public domain free of charge and let others reduce them to practice, but after the earliest reports of Edison's presumably independent invention crossed the Atlantic he had his sealed letter of April 30 opened and read at the December 3, 1877 meeting of the French Academy of Sciences, claiming due scientific credit for priority of conception. Throughout the first decade (1890–1900) of commercial production of the earliest crude disc records, the direct acid-etch method first invented by Cros was used to create the metal master discs, but Cros was not around to claim any credit or to witness the humble beginnings of the eventually rich phonographic library he had foreseen. He had died in 1888 at the age of 45. The early phonographs Thomas Edison conceived the principle of recording and reproducing sound between May and July 1877 as a byproduct of his efforts to "play back" recorded telegraph messages and to automate speech sounds for transmission by telephone. His first experiments were with waxed paper. 
He announced his invention of the first phonograph, a device for recording and replaying sound, on November 21, 1877 (early reports appear in Scientific American and several newspapers in the beginning of November, and an even earlier announcement of Edison working on a "talking-machine" can be found in the Chicago Daily Tribune on May 9), and he demonstrated the device for the first time on November 29 (it was patented on February 19, 1878, as US Patent 200,521). "In December, 1877, a young man came into the office of the Scientific American, and placed before the editors a small, simple machine about which few preliminary remarks were offered. The visitor without any ceremony whatever turned the crank, and to the astonishment of all present the machine said: 'Good morning. How do you do? How do you like the phonograph?' The machine thus spoke for itself, and made known the fact that it was the phonograph..."The music critic Herman Klein attended an early demonstration (1881–82) of a similar machine. On the early phonograph's reproductive capabilities he wrote in retrospect: "It sounded to my ear like someone singing about half a mile away, or talking at the other end of a big hall; but the effect was rather pleasant, save for a peculiar nasal quality wholly due to the mechanism, although there was little of the scratching that later was a prominent feature of the flat disc. Recording for that primitive machine was a comparatively simple matter. I had to keep my mouth about six inches away from the horn and remember not to make my voice too loud if I wanted anything approximating to a clear reproduction; that was all. When it was played over to me and I heard my own voice for the first time, one or two friends who were present said that it sounded rather like mine; others declared that they would never have recognised it. I daresay both opinions were correct." The Argus newspaper from Melbourne, Australia, reported on an 1878 demonstration at the Royal Society of Victoria, writing "There was a large attendance of ladies and gentlemen, who appeared greatly interested in the various scientific instruments exhibited. Among these the most interesting, perhaps, was the trial made by Mr. Sutherland with the phonograph, which was most amusing. Several trials were made, and were all more or less successful. 'Rule Britannia' was distinctly repeated, but great laughter was caused by the repetition of the convivial song of 'He's a jolly good fellow,' which sounded as if it was being sung by an old man of 80 with a cracked voice." Early machines Edison's early phonographs recorded onto a thin sheet of metal, normally tinfoil, which was temporarily wrapped around a helically grooved cylinder mounted on a correspondingly threaded rod supported by plain and threaded bearings. While the cylinder was rotated and slowly progressed along its axis, the airborne sound vibrated a diaphragm connected to a stylus that indented the foil into the cylinder's groove, thereby recording the vibrations as "hill-and-dale" variations of the depth of the indentation. Introduction of the disc record By 1890, record manufacturers had begun using a rudimentary duplication process to mass-produce their product. While the live performers recorded the master phonograph, up to ten tubes led to blank cylinders in other phonographs. Until this development, each record had to be custom-made. Before long, a more advanced pantograph-based process made it possible to simultaneously produce 90–150 copies of each record. 
However, as demand for certain records grew, popular artists still needed to re-record and re-re-record their songs. Reportedly, the medium's first major African-American star George Washington Johnson was obliged to perform his "The Laughing Song" (or the separate "The Whistling Coon") up to thousands of times in a studio during his recording career. Sometimes he would sing "The Laughing Song" more than fifty times in a day, at twenty cents per rendition. (The average price of a single cylinder in the mid-1890s was about fifty cents.) Oldest surviving recordings Lambert's lead cylinder recording for an experimental talking clock is often identified as the oldest surviving playable sound recording, although the evidence advanced for its early date is controversial. Wax phonograph cylinder recordings of Handel's choral music made on June 29, 1888, at The Crystal Palace in London were thought to be the oldest-known surviving musical recordings, until the recent playback by a group of American historians of a phonautograph recording of Au clair de la lune recorded on April 9, 1860. The 1860 phonautogram had not until then been played, as it was only a transcription of sound waves into graphic form on paper for visual study. Recently developed optical scanning and image processing techniques have given new life to early recordings by making it possible to play unusually delicate or physically unplayable media without physical contact. A recording made on a sheet of tinfoil at an 1878 demonstration of Edison's phonograph in St. Louis, Missouri, has been played back by optical scanning and digital analysis. A few other early tinfoil recordings are known to survive, including a slightly earlier one that is believed to preserve the voice of U.S. President Rutherford B. Hayes, but as of May 2014 they have not yet been scanned. These antique tinfoil recordings, which have typically been stored folded, are too fragile to be played back with a stylus without seriously damaging them. Edison's 1877 tinfoil recording of Mary Had a Little Lamb, not preserved, has been called the first instance of recorded verse. On the occasion of the 50th anniversary of the phonograph, Edison recounted reciting Mary Had a Little Lamb to test his first machine. The 1927 event was filmed by an early sound-on-film newsreel camera, and an audio clip from that film's soundtrack is sometimes mistakenly presented as the original 1877 recording. Wax cylinder recordings made by 19th-century media legends such as P. T. Barnum and Shakespearean actor Edwin Booth are amongst the earliest verified recordings by the famous that have survived to the present. Improvements at the Volta Laboratory Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D. C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax. Although Edison had invented the phonograph in 1877, the fame bestowed on him for this invention was not due to its efficiency. Recording with his tinfoil phonograph was too difficult to be practical, as the tinfoil tore easily, and even when the stylus was properly adjusted, its reproduction of sound was distorted, and good for only a few playbacks; nevertheless Edison had discovered the idea of sound recording. 
However immediately after his discovery he did not improve it, allegedly because of an agreement to spend the next five years developing the New York City electric light and power system. Volta's early challenge Meanwhile, Bell, a scientist and experimenter at heart, was looking for new worlds to conquer after having patented the telephone. According to Sumner Tainter, it was through Gardiner Green Hubbard that Bell took up the phonograph challenge. Bell had married Hubbard's daughter Mabel in 1879 while Hubbard was president of the Edison Speaking Phonograph Co., and his organization, which had purchased the Edison patent, was financially troubled because people did not want to buy a machine that seldom worked well and proved difficult for the average person to operate. Volta Graphophone The sound vibrations had been indented in the wax that had been applied to the Edison phonograph. The following was the text of one of their recordings: "There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy. I am a Graphophone and my mother was a phonograph." Most of the disc machines designed at the Volta Lab had their disc mounted on vertical turntables. The explanation is that in the early experiments, the turntable, with disc, was mounted on the shop lathe, along with the recording and reproducing heads. Later, when the complete models were built, most of them featured vertical turntables. One interesting exception was a horizontal seven inch turntable. The machine, although made in 1886, was a duplicate of one made earlier but taken to Europe by Chichester Bell. Tainter was granted on July 10, 1888. The playing arm is rigid, except for a pivoted vertical motion of 90 degrees to allow removal of the record or a return to starting position. While recording or playing, the record not only rotated, but moved laterally under the stylus, which thus described a spiral, recording 150 grooves to the inch. The basic distinction between the Edison's first phonograph patent and the Bell and Tainter patent of 1886 was the method of recording. Edison's method was to indent the sound waves on a piece of tin foil, while Bell and Tainter's invention called for cutting, or "engraving", the sound waves into a wax record with a sharp recording stylus. Graphophone commercialization In 1885, when the Volta Associates were sure that they had a number of practical inventions, they filed patent applications and began to seek out investors. The Volta Graphophone Company of Alexandria, Virginia, was created on January 6, 1886, and incorporated on February 3, 1886. It was formed to control the patents and to handle the commercial development of their sound recording and reproduction inventions, one of which became the first Dictaphone. After the Volta Associates gave several demonstrations in the City of Washington, businessmen from Philadelphia created the American Graphophone Company on March 28, 1887, in order to produce and sell the machines for the budding phonograph marketplace. The Volta Graphophone Company then merged with American Graphophone, which itself later evolved into Columbia Records. A coin-operated version of the Graphophone, , was developed by Tainter in 1893 to compete with nickel-in-the-slot entertainment phonograph demonstrated in 1889 by Louis T. Glass, manager of the Pacific Phonograph Company. 
The work of the Volta Associates laid the foundation for the successful use of dictating machines in business, because their wax recording process was practical and their machines were durable. But it would take several more years and the renewed efforts of Edison and the further improvements of Emile Berliner and many others, before the recording industry became a major factor in home entertainment. The technology quickly became popular abroad, where it was also used in new ways. In 1895, for example, Hungary became the first country to use phonographs to conduct folklore and ethnomusicological research, after which it became common practice in ethnography. Disc vs. cylinder as a recording medium Discs are not inherently better than cylinders at providing audio fidelity. Rather, the advantages of the format are seen in the manufacturing process: discs can be stamped, and the matrixes to stamp disc can be shipped to other printing plants for a global distribution of recordings; cylinders could not be stamped until 1901–1902, when the gold moulding process was introduced by Edison. Through experimentation, in 1892 Berliner began commercial production of his disc records and "gramophones". His "phonograph record" was the first disc record to be offered to the public. They were in diameter and recorded on one side only. Seven-inch (17.5 cm) records followed in 1895. Also in 1895 Berliner replaced the hard rubber used to make the discs with a shellac compound. Berliner's early records had poor sound quality, however. Work by Eldridge R. Johnson eventually improved the sound fidelity to a point where it was as good as the cylinder. Dominance of the disc record In the 1930s, vinyl (originally known as vinylite) was introduced as a record material for radio transcription discs, and for radio commercials. At that time, virtually no discs for home use were made from this material. Vinyl was used for the popular 78-rpm V-discs issued to US soldiers during World War II. This significantly reduced breakage during transport. The first commercial vinylite record was the set of five 12" discs "Prince Igor" (Asch Records album S-800, dubbed from Soviet masters in 1945). Victor began selling some home-use vinyl 78s in late 1945; but most 78s were made of a shellac compound until the 78-rpm format was completely phased out. (Shellac records were heavier and more brittle.) 33s and 45s were, however, made exclusively of vinyl, with the exception of some 45s manufactured out of polystyrene. First all-transistor phonograph In 1955, Philco developed and produced the world's first all-transistor phonograph models TPA-1 and TPA-2, which were announced in the June 28, 1955 edition of The Wall Street Journal. Philco started to sell these all-transistor phonographs in the fall of 1955, for the price of $59.95. The October 1955 issue of Radio & Television News magazine (page 41), had a full page detailed article on Philco's new consumer product. The all-transistor portable phonograph TPA-1 and TPA-2 models played only 45rpm records and used four 1.5 volt "D" batteries for their power supply. The "TPA" stands for "Transistor Phonograph Amplifier". Their circuitry used three Philco germanium PNP alloy-fused junction audio frequency transistors. After the 1956 season had ended, Philco decided to discontinue both models, for transistors were too expensive compared to vacuum tubes, but by 1961 a $49.95 ($ in ) portable, battery-powered radio-phonograph with seven transistors was available. 
Turntable designs There are presently three main phonograph designs: belt-drive, direct-drive, and idler-wheel. In a belt-drive turntable the motor is located off-center from the platter, either underneath it or entirely outside of it, and is connected to the platter or counter-platter by a drive belt made from elastomeric material. The direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic). In 1969, Matsushita released it as the Technics SP-10, the first direct-drive turntable on the market. The most influential direct-drive turntable was the Technics SL-1200, which, following the spread of turntablism in hip hop culture, became the most widely-used turntable in DJ culture for several decades. Arm systems In some high quality equipment the arm carrying the pickup, known as a tonearm, is manufactured separately from the motor and turntable unit. Companies specialising in the manufacture of tonearms include the English company SME. Cue lever More sophisticated turntables were (and still are) frequently manufactured so as to incorporate a "cue lever", a device that mechanically lowers the tonearm on to the record. It enables the user to locate an individual track more easily, to pause a record, and to avoid the risk of scratching the record, which may require practice to avoid when lowering the tonearm manually. Linear tracking Early developments in linear turntables were from Rek-O-Kut (portable lathe/phonograph) and Ortho-Sonic in the 1950s, and Acoustical in the early 1960s. These were eclipsed by more successful implementations of the concept from the late 1960s through the early 1980s. Pickup systems The pickup or cartridge is a transducer that converts mechanical vibrations from a stylus into an electrical signal. The electrical signal is amplified and converted into sound by one or more loudspeakers. Crystal and ceramic pickups that use the piezoelectric effect have largely been replaced by magnetic cartridges. The pickup includes a stylus with a small diamond or sapphire tip that runs in the record groove. The stylus eventually becomes worn by contact with the groove, and it is usually replaceable. Styli are classified as spherical or elliptical, although the tip is actually shaped as a half-sphere or a half-ellipsoid. Spherical styli are generally more robust than other types, but do not follow the groove as accurately, giving diminished high frequency response. Elliptical styli usually track the groove more accurately, with increased high frequency response and less distortion. For DJ use, the relative robustness of spherical styli make them generally preferred for back-cuing and scratching. There are a number of derivations of the basic elliptical type, including the Shibata or fine line stylus, which can more accurately reproduce high frequency information contained in the record groove. This is especially important for playback of quadraphonic recordings. Optical readout A few specialist laser turntables read the groove optically using a laser pickup. Since there is no physical contact with the record, no wear is incurred. However, this "no wear" advantage is debatable, since vinyl records have been tested to withstand even 1200 plays with no significant audio degradation, provided that it is played with a high quality cartridge and that the surfaces are clean. An alternative approach is to take a high-resolution photograph or scan of each side of the record and interpret the image of the grooves using computer software. 
An amateur attempt using a flatbed scanner lacked satisfactory fidelity. A professional system employed by the Library of Congress produces excellent quality. Stylus A development in stylus form came about by the attention to the CD-4 quadraphonic sound modulation process, which requires up to 50 kHz frequency response, with cartridges like Technics EPC-100CMK4 capable of playback on frequencies up to 100 kHz. This requires a stylus with a narrow side radius, such as . A narrow-profile elliptical stylus is able to read the higher frequencies (greater than 20 kHz), but at an increased wear, since the contact surface is narrower. For overcoming this problem, the Shibata stylus was invented around 1972 in Japan by Norio Shibata of JVC. The Shibata-designed stylus offers a greater contact surface with the groove, which in turn means less pressure over the vinyl surface and thus less wear. A positive side effect is that the greater contact surface also means the stylus reads sections of the vinyl that were not worn by the common spherical stylus. In a demonstration by JVC records worn after 500 plays at a relatively high 4.5 g tracking force with a spherical stylus, played perfectly with the Shibata profile. Other advanced stylus shapes appeared following the same goal of increasing contact surface, improving on the Shibata. Chronologically: "Hughes" Shibata variant (1975), "Ogura" (1978), Van den Hul (1982). Such a stylus may be marketed as "Hyperelliptical" (Shure), "Alliptic", "Fine Line" (Ortofon), "Line contact" (Audio Technica), "Polyhedron", "LAC", or "Stereohedron" (Stanton). A keel-shaped diamond stylus appeared as a byproduct of the invention of the CED Videodisc. This, together with laser-diamond-cutting technologies, made possible the "ridge" shaped stylus, such as the Namiki (1985) design, and Fritz Gyger (1989) design. This type of stylus is marketed as "MicroLine" (Audio technica), "Micro-Ridge" (Shure), or "Replicant" (Ortofon). To address the problem of steel needle wear upon records, which resulted in the cracking of the latter, RCA Victor devised unbreakable records in 1930, by mixing polyvinyl chloride with plasticisers, in a proprietary formula they called Victrolac, which was first used in 1931, in motion picture discs. Equalization Since the late 1950s, almost all phono input stages have used the RIAA equalization standard. Before settling on that standard, there were many different equalizations in use, including EMI, HMV, Columbia, Decca FFRR, NAB, Ortho, BBC transcription, etc. Recordings made using these other equalization schemes typically sound odd if they are played through a RIAA-equalized preamplifier. High-performance (so-called "multicurve disc") preamplifiers, which include multiple, selectable equalizations, are no longer commonly available. However, some vintage preamplifiers, such as the LEAK varislope series, are still obtainable and can be refurbished. Newer preamplifiers like the Esoteric Sound Re-Equalizer or the K-A-B MK2 Vintage Signal Processor are also available. Contemporary use and models Although largely replaced since the introduction of the compact disc in 1982, record albums still sold in small numbers throughout the 1980s and 1990s, but gradually sidelined in favor of CD players and tape decks in home audio environments. Record players continued to be manufactured and sold into the 21st century, although in small numbers and mainly for DJs. 
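The RIAA equalization standard mentioned in the Equalization section above is usually expressed through three time constants (3180 µs, 318 µs and 75 µs). The sketch below evaluates the resulting playback curve, normalised to 0 dB at 1 kHz; it is the textbook form of the curve, not code taken from any preamplifier described here.

```python
# Minimal sketch of the RIAA playback (de-emphasis) curve, normalised
# to 0 dB at 1 kHz. Time constants are the standard published values.

import math

T1, T2, T3 = 3180e-6, 318e-6, 75e-6   # seconds

def riaa_playback_db(freq_hz: float) -> float:
    w = 2 * math.pi * freq_hz
    mag = math.sqrt(1 + (w * T2) ** 2) / (
        math.sqrt(1 + (w * T1) ** 2) * math.sqrt(1 + (w * T3) ** 2))
    return 20 * math.log10(mag)

ref = riaa_playback_db(1000.0)
for f in (20, 100, 1000, 10_000, 20_000):
    print(f"{f:>6} Hz: {riaa_playback_db(f) - ref:+.1f} dB")
# roughly +19 dB of bass boost at 20 Hz and -20 dB of treble cut at 20 kHz
```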
Following a resurgence in sales of records since the late 2000s, an increasing number of turntables have been manufactured and sold. Notably, the Japanese company Panasonic brought back an advanced version of its well-known Technics SL-1200 at the 2016 Consumer Electronics Show, during which Sony also headlined a turntable, amid increasing interest in the format. Similarly, Audio-Technica revived its 1980s Sound Burger portable player in 2023. At the low end of the market, Crosley has been especially popular with its suitcase record players and has played a big part in the vinyl revival and its adoption among younger people and children in the 2010s.
New interest in records has led to the development of turntables with additional modern features. USB turntables have a built-in audio interface, which transfers the analog sound directly to the connected computer. Some USB turntables transfer the audio without equalization, but are sold with software that allows the EQ of the transferred audio file to be adjusted. There are also many turntables on the market designed to be plugged into a computer via a USB port for needle-dropping purposes. Modern turntables have also been released featuring Bluetooth technology to output a record's sound wirelessly through speakers. Sony has also released a high-end turntable with an analog-to-digital converter to convert the sound from a playing record into a 24-bit high-resolution audio file in DSD or WAV formats.
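The RIAA standard mentioned in the equalization section above is defined by three time constants (3180 µs, 318 µs and 75 µs). As a minimal illustrative sketch (not from the source; the function name and the normalization at 1 kHz are choices made here), the playback (de-emphasis) gain can be computed directly from those constants:

```python
import math

# RIAA playback (de-emphasis) time constants, in seconds.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_gain_db(f_hz: float) -> float:
    """Magnitude of the RIAA playback curve at f_hz, in dB,
    normalized so that the gain at 1 kHz is 0 dB."""
    def mag(f):
        w = 2 * math.pi * f
        num = abs(complex(1, w * T2))                      # zero at ~500 Hz
        den = abs(complex(1, w * T1)) * abs(complex(1, w * T3))  # poles at ~50 Hz and ~2122 Hz
        return num / den
    return 20 * math.log10(mag(f_hz) / mag(1000.0))

for f in (20, 100, 1000, 10000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_gain_db(f):+6.1f} dB")
```

Running this reproduces the familiar shape of the curve: roughly +19 dB of bass boost at 20 Hz and about -20 dB of treble cut at 20 kHz relative to 1 kHz, which is why recordings cut with other equalization schemes sound tonally odd through an RIAA-equalized preamplifier.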
Outline of physics
The following outline is provided as an overview of and topical guide to physics:
Physics – natural science that involves the study of matter and its motion through spacetime, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves.

What type of subject is physics?
Physics can be described as all of the following:
An academic discipline – one with academic departments, curricula and degrees; national and international societies; and specialized journals.
A scientific field (a branch of science) – a widely recognized category of specialized expertise within science, one that typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A physical science – one that studies non-living systems.
A biological science – one that studies the role of physical processes in living organisms. See Outline of biophysics.

Branches
Astronomy – studies the universe beyond Earth, including its formation and development, and the evolution, physics, chemistry, meteorology, and motion of celestial objects (such as galaxies, planets, etc.) and phenomena that originate outside the atmosphere of Earth (such as the cosmic background radiation).
Astrodynamics – application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft.
Astrometry – the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies.
Astrophysics – the study of the physical aspects of celestial objects.
Celestial mechanics – the branch of theoretical astronomy that deals with the calculation of the motions of celestial objects such as planets.
Extragalactic astronomy – the branch of astronomy concerned with objects outside our own Milky Way Galaxy.
Galactic astronomy – the study of our own Milky Way galaxy and all its contents.
Physical cosmology – the study of the largest-scale structures and dynamics of the universe, concerned with fundamental questions about its formation and evolution.
Planetary science – the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them.
Stellar astronomy – natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters, and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation).
Atmospheric physics – the study of the application of physics to the atmosphere.
Atomic, molecular, and optical physics – the study of how matter and light interact.
Optics – the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it.
Biophysics – interdisciplinary science that uses the methods of physics to study biological systems.
Neurophysics – branch of biophysics dealing with the nervous system.
Polymer physics – field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerization of polymers and monomers respectively.
Quantum biology – application of quantum mechanics to biological phenomena.
Chemical physics – the branch of physics that studies chemical processes from the point of view of physics.
Computational physics – study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists.
Condensed matter physics – the study of the physical properties of condensed phases of matter.
Electricity – the study of electrical phenomena.
Electromagnetism – branch of science concerned with the forces that occur between electrically charged particles.
Geophysics – the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods.
Magnetism – the study of physical phenomena that are mediated by magnetic fields.
Mathematical physics – application of mathematics to problems in physics, the development of mathematical methods for such applications, and the formulation of physical theories.
Mechanics – the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment.
Aerodynamics – study of the motion of air.
Biomechanics – the study of the structure and function of biological systems such as humans, animals, plants, organs, and cells using the methods of mechanics.
Classical mechanics – one of the two major sub-fields of mechanics, concerned with the set of physical laws describing the motion of bodies under the action of a system of forces.
Kinematics – branch of classical mechanics that describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion.
Homeokinetics – the physics of complex, self-organizing systems.
Continuum mechanics – the branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.
Dynamics – the study of the causes of motion and changes in motion.
Fluid mechanics – the study of fluids and the forces on them.
Fluid statics – study of fluids at rest.
Fluid kinematics – study of fluids in motion.
Fluid dynamics – study of the effect of forces on fluid motion.
Statics – the branch of mechanics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity.
Medical physics – the branch of physics that deals with the application of physics in medicine, such as imaging exams (NMR, PET scans, and so on), radiotherapy and nuclear medicine.
Statistical mechanics – the branch of physics which studies any physical system that has a large number of degrees of freedom.
Thermodynamics – the branch of physical science concerned with heat and its relation to other forms of energy and work.
Nuclear physics – field of physics that studies the building blocks and interactions of atomic nuclei.
Particle physics – the branch of physics that studies the properties and interactions of the fundamental constituents of matter and energy.
Psychophysics – quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they affect.
Plasma physics – the study of plasma, a state of matter similar to gas in which a certain portion of the particles are ionized.
Quantum physics – branch of physics dealing with physical phenomena where the action is on the order of the Planck constant.
Quantum field theory – the application of quantum theory to the study of fields (systems with infinite degrees of freedom).
Quantum information theory – the study of the information-processing capabilities afforded by quantum mechanics.
Quantum foundations – the discipline focusing on understanding the counterintuitive aspects of the theory, including trying to find physical principles underlying them, and proposing generalisations of quantum theory.
Quantum gravity – the search for an account of gravitation fully compatible with quantum theory.
Relativity – theory of physics which describes the relationship between space and time.
General relativity – a geometric, non-quantum theory of gravitation.
Special relativity – a theory that describes the propagation of matter and light at high speeds.

Other
Agrophysics – the study of physics applied to agroecosystems.
Soil physics – the study of soil physical properties and processes.
Cryogenics – the study of the production of very low temperatures (below −150 °C, −238 °F or 123 K) and the behavior of materials at those temperatures.
Econophysics – interdisciplinary research field applying theories and methods originally developed by physicists to solve problems in economics.
Materials physics – use of physics to describe materials in many different ways such as force, heat, light, and mechanics.
Vehicle dynamics – dynamics of vehicles, here assumed to be ground vehicles.
Philosophy of physics – deals with conceptual and interpretational issues in modern physics, many of which overlap with research done by certain kinds of theoretical physicists.

History
History of physics – history of the physical science that studies matter and its motion through space-time, and related concepts such as energy and force.
History of acoustics – history of the study of mechanical waves in solids, liquids, and gases (such as vibration and sound).
History of agrophysics – history of the study of physics applied to agroecosystems.
History of astrophysics – history of the study of the physical aspects of celestial objects.
History of astronomy – history of the study of the universe beyond Earth, including its formation and development, and the evolution, physics, chemistry, meteorology, and motion of celestial objects (such as galaxies, planets, etc.) and phenomena that originate outside the atmosphere of Earth (such as the cosmic background radiation).
History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft.
History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies.
History of cosmology – history of the discipline that deals with the nature of the Universe as a whole.
History of the Big Bang theory – origin of the universe.
History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe, concerned with fundamental questions about its formation and evolution.
History of planetary science – history of the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them.
History of stellar astronomy – history of the natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation).
History of atomic, molecular, and optical physics – history of the study of how matter and light interact.
History of biophysics – history of the study of physical processes relating to biology.
History of condensed matter physics – history of the study of the physical properties of condensed phases of matter.
History of econophysics – history of the interdisciplinary research field applying theories and methods originally developed by physicists in order to solve problems in economics.
History of electromagnetism – history of the branch of science concerned with the forces that occur between electrically charged particles.
History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods.
History of gravitational theory – the earliest physics theory, with application in daily life through cosmology.
History of mechanics – history of the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment.
History of biomechanics – history of the study of the structure and function of biological systems such as humans, animals, plants, organs, and cells by means of the methods of mechanics.
History of classical mechanics – history of one of the two major sub-fields of mechanics, concerned with the set of physical laws describing the motion of bodies under the action of a system of forces.
History of variational principles in physics – mathematical basis of classical and quantum mechanics.
History of fluid mechanics – history of the study of fluids and the forces on them.
History of quantum mechanics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant.
History of quantum field theory – modern branch of quantum theory.
History of string theory – branch of mathematics driven by open questions in quantum physics.
History of thermodynamics – history of the branch of physical science concerned with heat and its relation to other forms of energy and work.
History of nuclear physics – history of the field of physics that studies the building blocks and interactions of atomic nuclei.
History of nuclear fusion – mechanism powering stars and modern weapons of mass destruction.
History of electromagnetism – electricity, magnets, and light from radio waves to gamma rays.
History of Maxwell's equations – classical field equation of electromagnetism.
History of materials science – from stones to silicon, understanding and manipulating matter.
History of optics – history of the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it.
History of spectroscopy – measuring the response of materials to energy-dependent probes of light and matter.
History of subatomic physics – history of the branch of physics that studies the existence and interactions of particles that are the constituents of what is usually referred to as matter or radiation.
History of the periodic table – tabular summary of the relationship between elements.
History of psychophysics – history of the quantitative investigations of the relationship between physical stimuli and the sensations and perceptions they affect.
History of special relativity – history of the study of the relationship between space and time in the absence of gravity.
History of Lorentz transformations – deep dive into one mathematical aspect of special relativity.
History of general relativity – history of the non-quantum theory of gravity.
History of solid-state physics – history of the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy.
History of Solar System formation and evolution hypotheses – a history long enough to explain itself.
History of superconductivity – ultra-cold state of matter.

General concepts
Basic principles
Physics – branch of science that studies matter and its motion through space and time, along with related concepts such as energy and force. Physics is one of the "fundamental sciences" because the other natural sciences (like biology, geology etc.) deal with systems that seem to obey the laws of physics. According to physics, the physical laws of matter, energy and the fundamental forces of nature govern the interactions between particles and physical entities (such as planets, molecules, atoms or the subatomic particles). Some of the basic pursuits of physics, which include some of the most prominent developments in modern science in the last millennium, include:
Describing the nature, measuring and quantifying of bodies and their motion, dynamics, etc.
Newton's laws of motion
Mass, force and weight (mass versus weight)
Momentum and conservation of energy
Gravity, theories of gravity
Energy, work, and their relationship
Motion, position, and energy
Different forms of energy, their inter-conversion and the inevitable loss of energy in the form of heat (thermodynamics)
Energy conservation, conversion, and transfer
Energy sources – the transfer of energy from one source to work in another
Kinetic molecular theory
Phases of matter and phase transitions
Temperature and thermometers
Energy and heat
Heat flow: conduction, convection, and radiation
The four laws of thermodynamics
The principles of waves and sound
The principles of electricity, magnetism, and electromagnetism
The principles, sources, and properties of light

Basic quantities
Acceleration
Electric charge
Energy
Entropy
Force
Length
Mass
Matter
Momentum
Potential energy
Space
Temperature
Time
Velocity
Gravity, light, physical system, physical observation, physical quantity, physical state, physical unit, physical theory, physical experiment
Theoretical concepts: mass–energy equivalence, elementary particle, physical law, fundamental force, physical constant

Fundamental concepts
Causality
Symmetry
Action
Covariance
Space
Time
Oscillations and waves
Physical field
Physical interaction
Statistical ensemble
Quantum
Particle
Measurement

Measurement
SI units
Conversion of units
Length
Time
Mass
Density

Motion
Motion
Velocity
Speed
Acceleration
Constant acceleration
Newton's laws of motion

Overview
This is a list of the primary theories in physics, major subtopics, and concepts.

Lists
Index of physics articles
List of common physics notations
Lists of physics equations
List of important publications in physics
List of laws in science
List of letters used in mathematics and science
List of physicists
List of physics journals
List of scientific units named after people
Variables commonly used in physics
List of physics awards
Phospholipid
Phospholipids are a class of lipids whose molecule has a hydrophilic "head" containing a phosphate group and two hydrophobic "tails" derived from fatty acids, joined by an alcohol residue (usually a glycerol molecule). Marine phospholipids typically have omega-3 fatty acids EPA and DHA integrated as part of the phospholipid molecule. The phosphate group can be modified with simple organic molecules such as choline, ethanolamine or serine.
Phospholipids are a key component of all cell membranes. They can form lipid bilayers because of their amphiphilic characteristic. In eukaryotes, cell membranes also contain another class of lipid, sterol, interspersed among the phospholipids. The combination provides fluidity in two dimensions combined with mechanical strength against rupture. Purified phospholipids are produced commercially and have found applications in nanotechnology and materials science.
The first phospholipid identified as such in biological tissues was lecithin, or phosphatidylcholine, found in 1847 in the egg yolk of chickens by the French chemist and pharmacist Theodore Nicolas Gobley.

Phospholipids in biological membranes
Arrangement
The phospholipids are amphiphilic. The hydrophilic end usually contains a negatively charged phosphate group, and the hydrophobic end usually consists of two "tails" that are long fatty acid residues.
In aqueous solutions, phospholipids are driven by hydrophobic interactions, which result in the fatty acid tails aggregating to minimize interactions with the water molecules. The result is often a phospholipid bilayer: a membrane that consists of two layers of oppositely oriented phospholipid molecules, with their heads exposed to the liquid on both sides, and with the tails directed into the membrane. That is the dominant structural motif of the membranes of all cells and of some other biological structures, such as vesicles or virus coatings.
In biological membranes, the phospholipids often occur with other molecules (e.g., proteins, glycolipids, sterols) in a bilayer such as a cell membrane. Lipid bilayers occur when hydrophobic tails line up against one another, forming a membrane of hydrophilic heads on both sides facing the water.

Dynamics
These specific properties allow phospholipids to play an important role in the cell membrane. Their movement can be described by the fluid mosaic model, which describes the membrane as a mosaic of lipid molecules that act as a solvent for all the substances and proteins within it, so proteins and lipid molecules are then free to diffuse laterally through the lipid matrix and migrate over the membrane. Sterols contribute to membrane fluidity by hindering the packing together of phospholipids. However, this model has now been superseded, as through the study of lipid polymorphism it is now known that the behaviour of lipids under physiological (and other) conditions is not simple.
Main phospholipids
Diacylglyceride structures
See: Glycerophospholipid
Phosphatidic acid (phosphatidate) (PA)
Phosphatidylethanolamine (cephalin) (PE)
Phosphatidylcholine (lecithin) (PC)
Phosphatidylserine (PS)
Phosphoinositides:
Phosphatidylinositol (PI)
Phosphatidylinositol phosphate (PIP)
Phosphatidylinositol bisphosphate (PIP2)
Phosphatidylinositol trisphosphate (PIP3)

Phosphosphingolipids
See: Sphingolipid
Ceramide phosphorylcholine (Sphingomyelin) (SPH)
Ceramide phosphorylethanolamine (Sphingomyelin) (Cer-PE)
Ceramide phosphoryllipid

Applications
Phospholipids have been widely used to prepare liposomal, ethosomal and other nanoformulations of topical, oral and parenteral drugs for differing reasons such as improved bioavailability, reduced toxicity and increased permeability across membranes. Liposomes are often composed of phosphatidylcholine-enriched phospholipids and may also contain mixed phospholipid chains with surfactant properties. The ethosomal formulation of ketoconazole using phospholipids is a promising option for transdermal delivery in fungal infections. Advances in phospholipid research led to exploring these biomolecules and their conformations using lipidomics.

Simulations
Computational simulations of phospholipids are often performed using molecular dynamics with force fields such as GROMOS, CHARMM, or AMBER.

Characterization
Phospholipids are optically highly birefringent, i.e. their refractive index is different along their axis as opposed to perpendicular to it. Measurement of birefringence can be achieved using crossed polarisers in a microscope to obtain an image of e.g. vesicle walls, or using techniques such as dual polarisation interferometry to quantify lipid order or disruption in supported bilayers.

Analysis
There are no simple methods available for analysis of phospholipids, since the close range of polarity between different phospholipid species makes detection difficult. Oil chemists often use spectroscopy to determine total phosphorus abundance and then calculate the approximate mass of phospholipids based on the molecular weight of the expected fatty acid species. Modern lipid profiling employs more absolute methods of analysis, such as NMR spectroscopy (particularly 31P-NMR), while HPLC-ELSD provides relative values.

Phospholipid synthesis
Phospholipid synthesis occurs on the cytosolic side of the ER membrane, which is studded with proteins that act in synthesis (GPAT and LPAAT acyl transferases, phosphatase and choline phosphotransferase) and allocation (flippase and floppase). Eventually a vesicle will bud off from the ER containing phospholipids destined for the cytoplasmic cellular membrane on its exterior leaflet and phospholipids destined for the exoplasmic cellular membrane on its inner leaflet.

Sources
Common sources of industrially produced phospholipids are soya, rapeseed, sunflower, chicken eggs, bovine milk, fish eggs, etc. Phospholipids for gene delivery, such as distearoylphosphatidylcholine and dioleoyl-3-trimethylammonium propane, are produced synthetically. Each source has a unique profile of individual phospholipid species, as well as fatty acids, and consequently differing applications in food, nutrition, pharmaceuticals, cosmetics, and drug delivery.

In signal transduction
Some types of phospholipid can be split to produce products that function as second messengers in signal transduction.
Examples include phosphatidylinositol (4,5)-bisphosphate (PIP2), which can be split by the enzyme phospholipase C into inositol trisphosphate (IP3) and diacylglycerol (DAG); both carry out the functions of the Gq type of G protein in response to various stimuli and intervene in various processes, from long-term depression in neurons to leukocyte signal pathways started by chemokine receptors.
Phospholipids also intervene in prostaglandin signal pathways as the raw material used by lipase enzymes to produce the prostaglandin precursors. In plants they serve as the raw material to produce jasmonic acid, a plant hormone similar in structure to prostaglandins that mediates defensive responses against pathogens.

Food technology
Phospholipids can act as emulsifiers, enabling oils to form a colloid with water. Phospholipids are one of the components of lecithin, which is found in egg yolks as well as being extracted from soybeans, and is used as a food additive in many products and can be purchased as a dietary supplement. Lysolecithins are typically used for water–oil emulsions like margarine, due to their higher HLB ratio.

Phospholipid derivatives
Natural phospholipid derivatives: egg PC (egg lecithin), egg PG, soy PC, hydrogenated soy PC, sphingomyelin.
Synthetic phospholipid derivatives:
Phosphatidic acid (DMPA, DPPA, DSPA)
Phosphatidylcholine (DDPC, DLPC, DMPC, DPPC, DSPC, DOPC, POPC, DEPC)
Phosphatidylglycerol (DMPG, DPPG, DSPG, POPG)
Phosphatidylethanolamine (DMPE, DPPE, DSPE, DOPE)
Phosphatidylserine (DOPS)
PEG phospholipid (mPEG-phospholipid, polyglycerin-phospholipid, functionalized-phospholipid, terminal activated-phospholipid)
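To make the phosphorus-based estimate described in the Analysis section concrete, here is a minimal arithmetic sketch. The average molecular weight of 775 g/mol (and hence the conversion factor of roughly 25) is an illustrative assumption, not a value from the text; in practice the value is chosen to match the expected fatty acid species:

```python
ATOMIC_WEIGHT_P = 30.97  # g/mol, phosphorus

def estimate_phospholipid_mass(phosphorus_mg: float,
                               avg_phospholipid_mw: float = 775.0) -> float:
    """Estimate phospholipid mass (mg) from measured phosphorus (mg),
    assuming one phosphorus atom per phospholipid molecule and an
    assumed average molecular weight for the sample."""
    return phosphorus_mg * (avg_phospholipid_mw / ATOMIC_WEIGHT_P)

# Example: 2.0 mg of measured phosphorus implies roughly 50 mg of phospholipid.
print(round(estimate_phospholipid_mass(2.0), 1))
```

The point of the sketch is simply that each phospholipid carries one phosphorus atom, so total phosphorus scales linearly with phospholipid mass once an average molecular weight is assumed.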
Pencil
A pencil is a writing or drawing implement with a solid pigment core in a protective casing that reduces the risk of core breakage and keeps it from marking the user's hand.
Pencils create marks by physical abrasion, leaving a trail of solid core material that adheres to a sheet of paper or other surface. They are distinct from pens, which dispense liquid or gel ink onto the marked surface.
Most pencil cores are made of graphite powder mixed with a clay binder. Graphite pencils (traditionally known as "lead pencils") produce grey or black marks that are easily erased, but otherwise resistant to moisture, most solvents, ultraviolet radiation and natural aging. Other types of pencil cores, such as those of charcoal, are mainly used for drawing and sketching. Coloured pencils are sometimes used by teachers or editors to correct submitted texts, but are typically regarded as art supplies, especially those with cores made from wax-based binders that tend to smear when erasers are applied to them. Grease pencils have a softer, oily core that can leave marks on smooth surfaces such as glass or porcelain.
The most common pencil casing is thin wood, usually hexagonal in section, but sometimes cylindrical or triangular, permanently bonded to the core. Casings may be of other materials, such as plastic or paper. To use the pencil, the casing must be carved or peeled off to expose the working end of the core as a sharp point. Mechanical pencils have more elaborate casings which are not bonded to the core; instead, they support separate, mobile pigment cores that can be extended or retracted (usually through the casing's tip) as needed. These casings can be reloaded with new cores (usually graphite) as the previous ones are exhausted.

History
Camel hair
Pencil, from Old French pincel, from late Latin pincellus, a "little tail" (see penis), originally referred to an artist's fine brush of camel hair, also used for writing before modern lead or chalk pencils. Though the archetypal pencil was an artist's brush, the stylus, a thin metal stick used for scratching in papyrus or wax tablets, was used extensively by the Romans and for palm-leaf manuscripts.

Graphite deposit discoveries
As a technique for drawing, the closest predecessor to the pencil was silverpoint or leadpoint until, in 1565 (some sources say as early as 1500), a large deposit of graphite was discovered on the approach to Grey Knotts from the hamlet of Seathwaite in Borrowdale parish, Cumbria, England. This particular deposit of graphite was extremely pure and solid, and it could easily be sawn into sticks. It remains the only large-scale deposit of graphite ever found in this solid form. Chemistry was in its infancy and the substance was thought to be a form of lead. Consequently, it was called plumbago (Latin for "lead ore"). Because the black core of pencils is still referred to as "lead" (or "a lead"), many people have the misconception that the graphite in the pencil is the element lead, even though it never contained it. The words for pencil in German (Bleistift), Irish (peann luaidhe), Arabic (قلم رصاص qalam raṣāṣ), and some other languages literally mean lead pen.
The value of graphite would soon be realised to be enormous, mainly because it could be used to line the moulds for cannonballs; the mines were taken over by the Crown and were guarded. When sufficient stores of graphite had been accumulated, the mines were flooded to prevent theft until more was required.
The usefulness of graphite for pencils was discovered as well, but initially graphite for pencils had to be smuggled out of England. Because graphite is soft, it requires some form of encasement. Graphite sticks were initially wrapped in string or sheepskin for stability. England would enjoy a monopoly on the production of pencils until a method of reconstituting the graphite powder was found in 1662 in Germany. However, the distinctively square English pencils continued to be made with sticks cut from natural graphite into the 1860s. The town of Keswick, near the original findings of block graphite, still manufactures pencils, the factory also being the location of the Derwent Pencil Museum. The meaning of "graphite writing implement" apparently evolved late in the 16th century.

Wood encasement
Around 1560, an Italian couple named Simonio and Lyndiana Bernacotti made what are likely the first blueprints for the modern, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter, a superior technique was discovered: two wooden halves were carved, a graphite stick inserted, and the halves then glued together – essentially the same method in use to this day.

Graphite powder and clay
The first attempt to manufacture graphite sticks from powdered graphite was in Nuremberg, Germany, in 1662. It used a mixture of graphite, sulphur, and antimony.
English and German pencils were not available to the French during the Napoleonic Wars; France, under naval blockade imposed by Great Britain, was unable to import the pure graphite sticks from the British Grey Knotts mines – the only known source in the world. France was also unable to import the inferior German graphite pencil substitute. It took the efforts of an officer in Napoleon's army to change this. In 1795, Nicolas-Jacques Conté discovered a method of mixing powdered graphite with clay and forming the mixture into rods that were then fired in a kiln. By varying the ratio of graphite to clay, the hardness of the graphite rod could also be varied. This method of manufacture, which had been discovered earlier, in 1790, by the Austrian Joseph Hardtmuth, founder of Koh-I-Noor, remains in use. In 1802, the production of graphite leads from graphite and clay was patented by the Koh-I-Noor company in Vienna.
In England, pencils continued to be made from whole sawn graphite. Henry Bessemer's first successful invention (1838) was a method of compressing graphite powder into solid graphite, thus allowing the waste from sawing to be reused.

United States
American colonists imported pencils from Europe until after the American Revolution. Benjamin Franklin advertised pencils for sale in his Pennsylvania Gazette in 1729, and George Washington used a pencil when he surveyed the Ohio Country in 1762. William Munroe, a cabinetmaker in Concord, Massachusetts, made the first American wood pencils in 1812. This was not the only pencil-making occurring in Concord. According to Henry Petroski, transcendentalist philosopher Henry David Thoreau discovered how to make a good pencil out of inferior graphite using clay as the binder; this invention was prompted by his father's pencil factory in Concord, which employed graphite found in New Hampshire in 1821 by Charles Dunbar.
Munroe's method of making pencils was painstakingly slow, and in the neighbouring town of Acton, a pencil mill owner named Ebenezer Wood set out to automate the process at his own pencil mill located at Nashoba Brook. He used the first circular saw in pencil production. He constructed the first of the hexagon- and octagon-shaped wooden casings. Ebenezer did not patent his invention and shared his techniques with anyone. One of those was Eberhard Faber, who built a factory in New York and became the leader in pencil production.
Joseph Dixon, an inventor and entrepreneur involved with the Tantiusques graphite mine in Sturbridge, Massachusetts, developed a means to mass-produce pencils. By 1870, The Joseph Dixon Crucible Company was the world's largest dealer and consumer of graphite and later became the contemporary Dixon Ticonderoga pencil and art supplies company.
By the end of the nineteenth century, over 240,000 pencils were used each day in the US. The favoured timber for pencils was Red Cedar, as it was aromatic and did not splinter when sharpened. In the early twentieth century supplies of Red Cedar were dwindling, so pencil manufacturers were forced to recycle the wood from cedar fences and barns to maintain supply. One effect of this was that "during World War II rotary pencil sharpeners were outlawed in Britain because they wasted so much scarce lead and wood, and pencils had to be sharpened in the more conservative manner – with knives."
It was soon discovered that incense cedar, when dyed and perfumed to resemble Red Cedar, was a suitable alternative. Most pencils today are made from this timber, which is grown in managed forests. Over 14 billion pencils are manufactured worldwide annually. Less popular alternatives to cedar include basswood and alder. In Southeast Asia, the wood Jelutong may be used to create pencils (though the use of this rainforest species is controversial). Environmentalists prefer the use of Pulai – another wood native to the region – in pencil manufacturing.

Eraser attachment
On 30 March 1858, Hymen Lipman received the first patent for attaching an eraser to the end of a pencil. In 1862, Lipman sold his patent to Joseph Reckendorfer for $100,000; Reckendorfer went on to sue pencil manufacturer Faber-Castell for infringement. In Reckendorfer v. Faber (1875), the Supreme Court of the United States ruled against Reckendorfer, declaring the patent invalid.

Extenders
Historian Henry Petroski notes that while ever more efficient means of mass production of pencils have driven the replacement cost of a pencil down, before this people would continue to use even the stub of a pencil. For those who did not feel comfortable using a stub, pencil extenders were sold. These devices function something like a porte-crayon: the pencil stub can be inserted into the end of a shaft. Extenders were especially common among engineers and draftsmen, whose favorite pencils were priced dearly. The use of an extender also has the advantage that the pencil does not appreciably change its heft as it wears down. Artists use extenders to maximize the use of their colored pencils.

Types
By marking material
Graphite
Graphite pencils are the most common types of pencil, and are encased in wood. They are made of a mixture of clay and graphite and their darkness varies from light grey to black. Their composition allows for the smoothest strokes.
Solid
Solid graphite pencils are solid sticks of graphite and clay composite (as found in a 'graphite pencil'), about the diameter of a common pencil, which have no casing other than a wrapper or label. They are often called "woodless" pencils. They are used primarily for art purposes, as the lack of casing allows for covering larger spaces more easily, creating different effects, and providing greater economy since the entirety of the pencil is used. They are available in the same darkness range as wood-encased graphite pencils.

Liquid
Liquid graphite pencils are pencils that write like pens. The technology was first invented in 1955 by Scripto and Parker Pens. Scripto's liquid graphite formula came out about three months before Parker's liquid lead formula. To avoid a lengthy patent fight, the two companies agreed to share their formulas.

Charcoal
Charcoal pencils are made of charcoal and provide fuller blacks than graphite pencils, but tend to smudge easily and are more abrasive than graphite. Sepia-toned and white pencils are also available for duotone techniques.

Carbon pencils
Carbon pencils are generally made of a mixture of clay and lamp black, but are sometimes blended with charcoal or graphite depending on the darkness and manufacturer. They produce a fuller black than graphite pencils, are smoother than charcoal, and have minimal dust and smudging. They also blend very well, much like charcoal.

Colored
Colored pencils, or pencil crayons, have wax-like cores with pigment and other fillers. Several colors are sometimes blended together.

Grease
Grease pencils can write on virtually any surface (including glass, plastic, metal and photographs). The most commonly found grease pencils are encased in paper (Berol and Sanford Peel-off), but they can also be encased in wood (Staedtler Omnichrom).

Watercolor
Watercolor pencils are designed for use with watercolor techniques. Their cores can be diluted by water. The pencils can be used by themselves for sharp, bold lines. Strokes made by the pencil can also be saturated with water and spread with brushes.

By use
Carpentry
Carpenter's pencils are pencils that have two main properties: their shape prevents them from rolling, and their graphite is strong. The oldest surviving pencil is a German carpenter's pencil dating from the 17th century and now in the Faber-Castell collection.

Copying
Copying pencils (or indelible pencils) are graphite pencils with an added dye that creates an indelible mark. They were invented in the late 19th century for press copying and as a practical substitute for fountain pens. Their markings are often visually indistinguishable from those of standard graphite pencils, but when moistened their markings dissolve into a coloured ink, which is then pressed into another piece of paper. They were widely used until the mid-20th century, when ballpoint pens slowly replaced them. In Italy their use is still mandated by law for voting paper ballots in elections and referendums.

Eyeliner
Eyeliner pencils are used for make-up. Unlike traditional copying pencils, eyeliner pencils usually contain non-toxic dyes.

Erasable coloring
Unlike wax-based colored pencils, the erasable variants can be easily erased. Their main use is in sketching, where the objective is to create an outline using the same color that other media (such as wax pencils or watercolor paints) would fill, or when the objective is to scan the color sketch.
Some animators prefer erasable color pencils as opposed to graphite pencils because they do not smudge as easily, and the different colors allow for better separation of objects in the sketch. Copy-editors find them useful too, as their markings stand out more than those of graphite but can be erased.

Non-reproduction
Also known as non-photo blue pencils, the non-reproducing types make marks that are not reproducible by photocopiers (examples include "Copy-not" by Sanford and "Mars Non-photo" by Staedtler) or by whiteprint copiers (such as "Mars Non-Print" by Staedtler).

Stenography
Stenographer's pencils, also known as steno pencils, are expected to be very reliable, and their lead is break-proof. Nevertheless, steno pencils are sometimes sharpened at both ends to enhance reliability. They are round to avoid pressure pain during long texts.

Golf
Golf pencils are usually short (a common length is ) and very cheap. They are also known as library pencils, as many libraries offer them as disposable writing instruments.

By shape
Triangular (more accurately a Reuleaux triangle)
Hexagonal
Round
Bendable (flexible plastic)

By size
Typical
A standard, hexagonal "#2 pencil" is cut to a hexagonal height of , but the outer diameter is slightly larger (about ). A standard, "#2", hexagonal pencil is long.

Biggest
On 3 September 2007, Ashrita Furman unveiled his giant US$20,000 pencil – long, with over for the graphite centre – after three weeks of creation in August 2007 as a birthday gift for teacher Sri Chinmoy. It is longer than the pencil outside the Malaysia HQ of stationers Faber-Castell.

By manufacture
Mechanical
Mechanical pencils use mechanical methods to push lead through a hole at the end. These can be divided into two groups: with propelling pencils an internal mechanism is employed to push the lead out from an internal compartment, while clutch pencils merely hold the lead in place (the lead is extended by releasing it and allowing some external force, usually gravity, to pull it out of the body). The erasers (sometimes replaced by a sharpener on pencils with larger lead sizes) are also removable (and thus replaceable), and usually cover a place to store replacement leads. Mechanical pencils are popular for their longevity and the fact that they may never need sharpening. Lead types are based on grade and size, with standard sizes being , , , , , , , , and (ISO 9175-1); the size is available, but is not considered a standard ISO size.

Pop a Point
Pioneered by the Taiwanese stationery manufacturer Bensia Pioneer Industrial Corporation in the early 1970s, Pop a Point pencils are also known as Bensia pencils, stackable pencils or non-sharpening pencils. It is a type of pencil where many short pencil tips are housed in a cartridge-style plastic holder. A blunt tip is removed by pulling it from the writing end of the body and re-inserting it into the open-ended bottom of the body, thereby pushing a new tip to the top.

Plastic
Invented by Harold Grossman for the Empire Pencil Company in 1967, plastic pencils were subsequently improved upon by Arthur D. Little for Empire from 1969 through the early 1970s; the plastic pencil was commercialised by Empire as the "EPCON" pencil. These pencils were co-extruded, extruding a plasticised graphite mix within a wood-composite core.
Other aspects
By factory state: sharpened, unsharpened.
By casing material: wood, paper, plastic. The P&P Office Waste Paper Processor recycles paper into pencils.

Health
Residual graphite from a pencil stick is not poisonous, and graphite is harmless if consumed. Although lead has not been used for writing since antiquity, when it was used in implements such as Roman styli, lead poisoning from pencils was not uncommon. Until the middle of the 20th century the paint used for the outer coating could contain high concentrations of lead, and this could be ingested when the pencil was sucked or chewed.

Manufacture
The lead of the pencil is a mix of finely ground graphite and clay powders. Before the two substances are mixed, they are separately cleaned of foreign matter and dried in a manner that creates large square cakes. Once the cakes have fully dried, the graphite and the clay squares are mixed together using water. The amount of clay added to the graphite depends on the intended pencil hardness (lower proportions of clay make the core softer), and the amount of time spent on grinding the mixture determines the quality of the lead. The mixture is then shaped into long spaghetti-like strings, straightened, dried, cut, and then tempered in a kiln. The resulting strings are dipped in oil or molten wax, which seeps into the tiny holes of the material and allows for the smooth writing ability of the pencil. A juniper or incense-cedar plank with several long parallel grooves is cut to fashion a "slat", and the graphite/clay strings are inserted into the grooves. Another grooved plank is glued on top, and the whole assembly is then cut into individual pencils, which are then varnished or painted. Many pencils feature an eraser on the top, so the process is usually still considered incomplete at this point. Each pencil has a shoulder cut on one end to allow for a metal ferrule to be secured onto the wood. A rubber plug is then inserted into the ferrule for a functioning eraser on the end of the pencil.

Grading and classification
Graphite pencils are made of a mixture of clay and graphite and their darkness varies from black to light grey. A higher amount of clay added to the pencil makes it harder, leaving lighter marks. There is a wide range of grades available, mainly for artists who are interested in creating a full range of tones from light grey to black. Engineers prefer harder pencils, which allow for greater control in the shape of the lead. Manufacturers distinguish their pencils by grading them, but there is no common standard. Two pencils of the same grade but different manufacturers will not necessarily make a mark of identical tone nor have the same hardness. Most manufacturers, and almost all in Europe, designate their pencils with the letters H (commonly interpreted as "hardness") to B (commonly "blackness"), as well as F (usually taken to mean "fineness", although F pencils are no more fine or more easily sharpened than any other grade; the letter is also read as "firm" by many manufacturers). The standard writing pencil is graded HB. This designation, in the form "H. B.", was in use at least as early as 1814. Softer or harder pencil grades were described by a sequence of successive Bs or Hs, such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones (see the sketch below).
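As a concrete illustration of the scale just described, the following sketch (Python; the ranking function is illustrative, not any manufacturer's standard) orders grades along the conventional softest-to-hardest axis, with F placed between HB and H. As noted below, actual tone still varies between manufacturers:

```python
def grade_rank(grade: str) -> int:
    """Map a European-style pencil grade (e.g. '2B', 'HB', 'F', '3H')
    to an integer: negative = softer/blacker, positive = harder/lighter."""
    g = grade.strip().upper()
    if g == "HB":
        return 0
    if g == "F":
        return 1                      # conventionally between HB and H
    n = int(g[:-1]) if len(g) > 1 else 1
    return 2 * n if g.endswith("H") else -2 * n

pencils = ["2H", "9B", "F", "HB", "4B", "H", "6H", "B"]
print(sorted(pencils, key=grade_rank))   # softest/blackest first
# ['9B', '4B', 'B', 'HB', 'F', 'H', '2H', '6H']
```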
The Koh-i-Noor Hardtmuth pencil manufacturers claim to have first used the HB designations, with H standing for Hardtmuth, B for the company's location of Budějovice, and F for Franz Hardtmuth, who was responsible for technological improvements in pencil manufacture. As of 2021, manufacturers offer wide ranges of grades, from very soft, black-marking to very hard, light-marking pencils: Koh-i-noor offers twenty grades from 10H to 8B for its 1500 series; Mitsubishi Pencil offers twenty-two grades from 10H to 10B for its Hi-uni range; Derwent produces twenty grades from 9H to 9B for its graphic pencils; and Staedtler produces 24 grades from 10H to 12B for its Mars Lumograph pencils.
Numbers as grade designations were first used by Conté and later by John Thoreau, father of Henry David Thoreau, in the 19th century. Although Conté/Thoreau's equivalence table is widely accepted, not all manufacturers follow it; for example, Faber-Castell uses a different equivalence table in its Grip 2001 pencils: 1 = 2B, 2 = B, 2½ = HB, 3 = H, 4 = 2H.

Hardness test
Graded pencils can be used for a rapid test that provides relative ratings for a series of coated panels, but cannot be used to compare the pencil hardness of different coatings. This test defines a "pencil hardness" of a coating as the grade of the hardest pencil that does not permanently mark the coating when pressed firmly against it at a 45-degree angle. For standardized measurements, there are Mohs hardness testing pencils on the market.

External colour and shape
The majority of pencils made in the US are painted yellow. According to Henry Petroski, this tradition began in 1890 when the L. & C. Hardtmuth Company of Austria-Hungary introduced their Koh-I-Noor brand, named after the famous diamond. It was intended to be the world's best and most expensive pencil, with the end of each pencil dipped in 14-carat gold; at a time when most pencils were either painted in dark colours or not at all, the Koh-I-Noor was yellow. As well as simply being distinctive, the colour may have been inspired by the Austro-Hungarian flag; it was also suggestive of the Orient at a time when the best-quality graphite came from Siberia. Other companies then copied the yellow colour so that their pencils would be associated with this high-quality brand, and chose brand names with explicit Oriental references, such as Mikado (renamed Mirado) and Mongol.
Not all countries use yellow pencils. German and Brazilian pencils, for example, are often green, blue or black, based on the trademark colours of Faber-Castell, a major German stationery company which has plants in those countries. In southern European countries, pencils tend to be dark red or black with yellow lines, while in Australia they are red with black bands at one end. In India, the most common pencil colour scheme was dark red with black lines, and pencils with a large number of colour schemes are produced.
Pencils are commonly round, hexagonal, or sometimes triangular in section. Carpenters' pencils are typically oval or rectangular, so they cannot easily roll away during work.

Manufacturers
Prominent global manufacturers of wood-cased (including wood-free) pencils:
Pushdown automaton
In the theory of computation, a branch of theoretical computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack. Pushdown automata are used in theories about what can be computed by machines. They are more capable than finite-state machines but less capable than Turing machines (see below). Deterministic pushdown automata can recognize all deterministic context-free languages, while nondeterministic ones can recognize all context-free languages; the former are often used in parser design.
The term "pushdown" refers to the fact that the stack can be regarded as being "pushed down" like a tray dispenser at a cafeteria, since the operations never work on elements other than the top element. A stack automaton, by contrast, does allow access to and operations on deeper elements. Stack automata can recognize a strictly larger set of languages than pushdown automata. A nested stack automaton allows full access, and also allows stacked values to be entire sub-stacks rather than just single finite symbols.

Informal description
A finite-state machine just looks at the input signal and the current state: it has no stack to work with, and therefore is unable to access previous values of the input. It can only choose a new state, the result of following the transition. A pushdown automaton (PDA) differs from a finite-state machine in two ways:
It can use the top of the stack to decide which transition to take.
It can manipulate the stack as part of performing a transition.
A pushdown automaton reads a given input string from left to right. In each step, it chooses a transition by indexing a table by input symbol, current state, and the symbol at the top of the stack. A pushdown automaton can also manipulate the stack, as part of performing a transition. The manipulation can be to push a particular symbol to the top of the stack, or to pop off the top of the stack. The automaton can alternatively ignore the stack, and leave it as it is. Put together: given an input symbol, current state, and stack symbol, the automaton can follow a transition to another state, and optionally manipulate (push or pop) the stack.
If, in every situation, at most one such transition action is possible, then the automaton is called a deterministic pushdown automaton (DPDA). In general, if several actions are possible, then the automaton is called a general, or nondeterministic, PDA. A given input string may drive a nondeterministic pushdown automaton to one of several configuration sequences; if one of them leads to an accepting configuration after reading the complete input string, the latter is said to belong to the language accepted by the automaton.

Formal definition
We use standard formal language notation: $\Sigma^{*}$ denotes the set of finite-length strings over alphabet $\Sigma$, and $\varepsilon$ denotes the empty string.
A PDA is formally defined as a 7-tuple $M = (Q, \Sigma, \Gamma, \delta, q_{0}, Z, F)$, where
$Q$ is a finite set of states,
$\Sigma$ is a finite set which is called the input alphabet,
$\Gamma$ is a finite set which is called the stack alphabet,
$\delta$ is a finite subset of $Q \times (\Sigma \cup \{\varepsilon\}) \times \Gamma \times Q \times \Gamma^{*}$, the transition relation,
$q_{0} \in Q$ is the start state,
$Z \in \Gamma$ is the initial stack symbol, and
$F \subseteq Q$ is the set of accepting states.
An element $(p, a, A, q, \alpha) \in \delta$ is a transition of $M$. It has the intended meaning that $M$, in state $p \in Q$, on the input $a \in \Sigma \cup \{\varepsilon\}$ and with $A \in \Gamma$ as topmost stack symbol, may read $a$, change the state to $q$, and pop $A$, replacing it by pushing $\alpha \in \Gamma^{*}$. The $\Sigma \cup \{\varepsilon\}$ component of the transition relation is used to formalize that the PDA can either read a letter from the input, or proceed leaving the input untouched.
In many texts the transition relation is replaced by an (equivalent) formalization, where $\delta : Q \times (\Sigma \cup \{\varepsilon\}) \times \Gamma \to \mathcal{P}(Q \times \Gamma^{*})$ is the transition function, mapping into the finite subsets of $Q \times \Gamma^{*}$. Here $\delta(p, a, A)$ contains all possible actions in state $p$ with $A$ on the stack, while reading $a$ on the input. One writes, for example, $(q, BA) \in \delta(p, a, A)$ precisely when $(p, a, A, q, BA) \in \delta$. Note that finite in this definition is essential.

Computations
In order to formalize the semantics of the pushdown automaton a description of the current situation is introduced. Any 3-tuple $(p, w, \beta) \in Q \times \Sigma^{*} \times \Gamma^{*}$ is called an instantaneous description (ID) of $M$, which includes the current state, the part of the input tape that has not been read, and the contents of the stack (topmost symbol written first). The transition relation $\delta$ defines the step-relation $\vdash_{M}$ of $M$ on instantaneous descriptions. For instruction $(p, a, A, q, \alpha) \in \delta$ there exists a step $(p, ax, A\gamma) \vdash_{M} (q, x, \alpha\gamma)$, for every $x \in \Sigma^{*}$ and every $\gamma \in \Gamma^{*}$.
In general pushdown automata are nondeterministic, meaning that in a given instantaneous description there may be several possible steps. Any of these steps can be chosen in a computation. With the above definition, in each step always a single symbol (top of the stack) is popped, replacing it with as many symbols as necessary. As a consequence no step is defined when the stack is empty.
Computations of the pushdown automaton are sequences of steps. The computation starts in the initial state $q_{0}$ with the initial stack symbol $Z$ on the stack, and a string $w$ on the input tape, thus with initial description $(q_{0}, w, Z)$. There are two modes of accepting. The pushdown automaton either accepts by final state, which means that after reading its input the automaton reaches an accepting state (in $F$), or it accepts by empty stack, which means that after reading its input the automaton empties its stack. The first acceptance mode uses the internal memory (state), the second the external memory (stack).
Formally one defines
$L(M) = \{ w \in \Sigma^{*} \mid (q_{0}, w, Z) \vdash_{M}^{*} (q, \varepsilon, \gamma) \text{ with } q \in F \text{ and } \gamma \in \Gamma^{*} \}$ (final state)
$N(M) = \{ w \in \Sigma^{*} \mid (q_{0}, w, Z) \vdash_{M}^{*} (q, \varepsilon, \varepsilon) \text{ with } q \in Q \}$ (empty stack)
Here $\vdash_{M}^{*}$ represents the reflexive and transitive closure of the step relation $\vdash_{M}$, meaning any number of consecutive steps (zero, one or more).
For each single pushdown automaton these two languages need to have no relation: they may be equal but usually this is not the case. A specification of the automaton should also include the intended mode of acceptance. Taken over all pushdown automata both acceptance conditions define the same family of languages.
Theorem. For each pushdown automaton $M$ one may construct a pushdown automaton $M'$ such that $L(M) = N(M')$, and vice versa: for each pushdown automaton $M$ one may construct a pushdown automaton $M'$ such that $N(M) = L(M')$.

Example
The following is the formal description of the PDA which recognizes the language $\{0^{n}1^{n} \mid n \ge 0\}$ by final state: $M = (Q, \Sigma, \Gamma, \delta, q_{0}, Z, F)$, where
states: $Q = \{p, q, r\}$,
input alphabet: $\Sigma = \{0, 1\}$,
stack alphabet: $\Gamma = \{A, Z\}$,
start state: $q_{0} = p$,
start stack symbol: $Z$,
accepting states: $F = \{r\}$.
The transition relation $\delta$ consists of the following six instructions:
$(p, 0, Z, p, AZ)$, $(p, 0, A, p, AA)$, $(p, \varepsilon, Z, q, Z)$, $(p, \varepsilon, A, q, A)$, $(q, 1, A, q, \varepsilon)$, and $(q, \varepsilon, Z, r, Z)$.
In words, the first two instructions say that in state $p$, any time the symbol $0$ is read, one $A$ is pushed onto the stack. Pushing symbol $A$ on top of another $A$ is formalized as replacing top $A$ by $AA$ (and similarly for pushing symbol $A$ on top of a $Z$). The third and fourth instructions say that, at any moment, the automaton may move from state $p$ to state $q$. The fifth instruction says that in state $q$, for each symbol $1$ read, one $A$ is popped. Finally, the sixth instruction says that the machine may move from state $q$ to accepting state $r$ only when the stack consists of a single $Z$.
There seems to be no generally used representation for PDAs. Here we have depicted the instruction $(p, a, A, q, \alpha)$ by an edge from state $p$ to state $q$ labelled by $a; A/\alpha$ (read $a$; replace $A$ by $\alpha$).
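To make the example concrete, the following minimal sketch (Python; names such as DELTA and accepts are illustrative, not from the source) encodes the six instructions above as a transition relation and explores instantaneous descriptions breadth-first, accepting by final state:

```python
# Transitions: (state, input symbol or "" for epsilon, stack top) ->
#              list of (new state, string pushed in place of the popped top).
DELTA = {
    ("p", "0", "Z"): [("p", "AZ")],
    ("p", "0", "A"): [("p", "AA")],
    ("p", "",  "Z"): [("q", "Z")],
    ("p", "",  "A"): [("q", "A")],
    ("q", "1", "A"): [("q", "")],
    ("q", "",  "Z"): [("r", "Z")],
}

def accepts(w: str, start="p", stack="Z", final={"r"}) -> bool:
    """Breadth-first search over instantaneous descriptions
    (state, unread input, stack with topmost symbol first)."""
    seen, frontier = set(), {(start, w, stack)}
    while frontier:
        nxt = set()
        for state, rest, stk in frontier:
            if state in final and rest == "":
                return True
            if (state, rest, stk) in seen or not stk:
                continue            # no step is defined on an empty stack
            seen.add((state, rest, stk))
            top, below = stk[0], stk[1:]
            # epsilon moves (leave the input untouched)
            for q2, push in DELTA.get((state, "", top), []):
                nxt.add((q2, rest, push + below))
            # reading moves
            if rest:
                for q2, push in DELTA.get((state, rest[0], top), []):
                    nxt.add((q2, rest[1:], push + below))
        frontier = nxt
    return False

for s in ("", "01", "0011", "001", "10"):
    print(repr(s), accepts(s))   # True for 0^n 1^n, False otherwise
```

Because each instruction pops exactly one symbol and pushes a (possibly empty) string in its place, the empty-stack guard in the loop mirrors the formal definition, under which no step is defined on an empty stack.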
Explanation The following illustrates how the above PDA computes on different input strings; the subscript M on the step symbol ⊢ is omitted here. For example, on input 0011 an accepting computation is (p, 0011, Z) ⊢ (p, 011, AZ) ⊢ (p, 11, AAZ) ⊢ (q, 11, AAZ) ⊢ (q, 1, AZ) ⊢ (q, ε, Z) ⊢ (r, ε, Z), which ends in the accepting state r with the entire input read, whereas on input 011 no accepting configuration is reachable. Context-free languages Every context-free grammar can be transformed into an equivalent nondeterministic pushdown automaton. The derivation process of the grammar is simulated in a leftmost way. Where the grammar rewrites a nonterminal, the PDA takes the topmost nonterminal from its stack and replaces it by the right-hand part of a grammatical rule (expand). Where the grammar generates a terminal symbol, the PDA reads a symbol from input when it is the topmost symbol on the stack (match). In a sense the stack of the PDA contains the unprocessed data of the grammar, corresponding to a pre-order traversal of a derivation tree. Technically, given a context-free grammar, the PDA has a single state, 1, and its transition relation is constructed as follows: (1, γ) ∈ δ(1, ε, A) for each rule A → γ (expand), and (1, ε) ∈ δ(1, a, a) for each terminal symbol a (match); see the code sketch given below. The PDA accepts by empty stack. Its initial stack symbol is the grammar's start symbol. For a context-free grammar in Greibach normal form, defining (1,γ) ∈ δ(1,a,A) for each grammar rule A → aγ also yields an equivalent nondeterministic pushdown automaton. The converse, finding a grammar for a given PDA, is not that easy. The trick is to code two states of the PDA into the nonterminals of the grammar. Theorem. For each pushdown automaton M one may construct a context-free grammar G such that N(M) = L(G). The language of strings accepted by a deterministic pushdown automaton (DPDA) is called a deterministic context-free language. Not all context-free languages are deterministic. As a consequence, the DPDA is a strictly weaker variant of the PDA. Even for regular languages, there is a size explosion problem: for any recursive function f and for arbitrarily large integers n, there is a PDA of size n describing a regular language whose smallest DPDA has at least f(n) states. For many non-regular PDAs, any equivalent DPDA would require an unbounded number of states. A finite automaton with access to two stacks is a more powerful device, equivalent in power to a Turing machine. A linear bounded automaton is a device which is more powerful than a pushdown automaton but less so than a Turing machine. Turing machines A pushdown automaton is computationally equivalent to a 'restricted' Turing machine (TM) with two tapes, which is restricted in the following manner: on the first tape, the TM can only read the input and move from left to right (it cannot make changes); on the second tape, it can only 'push' and 'pop' data. Or equivalently, it can read, write and move left and right, with the restriction that the only action it can perform at each step is to either delete the left-most character in the string (pop) or add an extra character to the left of the left-most character in the string (push). That a PDA is weaker than a TM comes down to the fact that the procedure 'pop' deletes some data. In order to make a PDA as strong as a TM, we need to save somewhere the data lost through 'pop'. We can achieve this by introducing a second stack. In the two-tape TM model of the PDA from the last paragraph, this is equivalent to a TM with three tapes, where the first tape is the read-only input tape, and the second and third tapes are the 'push and pop' (stack) tapes. In order for such a PDA to simulate any given TM, we give the input of the PDA to the first tape, while keeping both the stacks empty. It then goes on to push all the input from the input tape to the first stack.
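The single-state grammar-to-PDA construction described above can be written out as a short program. The following Python sketch is only an illustration under the stated assumptions: names are hypothetical, the grammar is a demonstration grammar chosen so that the naive nondeterministic search terminates (it need not terminate for every grammar), and acceptance is tested by empty stack as in the construction.

from collections import deque

EPS = ""

def cfg_to_pda_transitions(rules, terminals):
    """Build the single-state PDA's transitions from a CFG.

    rules: list of (nonterminal, right-hand-side string), e.g. ("S", "0S1").
    Returns 3-tuples (input symbol or ε, popped symbol, pushed string); the single
    state 1 is left implicit. These encode (1, γ) ∈ δ(1, ε, A) for each rule A → γ
    (expand) and (1, ε) ∈ δ(1, a, a) for each terminal a (match)."""
    transitions = [(EPS, lhs, rhs) for lhs, rhs in rules]        # expand
    transitions += [(a, a, EPS) for a in terminals]              # match
    return transitions

def accepts_by_empty_stack(word, transitions, start_symbol):
    """Naive search over (unread input, stack) pairs; may diverge for some grammars."""
    start = (word, start_symbol)
    seen, queue = {start}, deque([start])
    while queue:
        unread, stack = queue.popleft()
        if unread == "" and stack == "":
            return True                          # empty stack after reading all input
        if not stack:
            continue
        top, rest = stack[0], stack[1:]
        for a, A, alpha in transitions:
            if A != top:
                continue
            if a == EPS:
                nxt = (unread, alpha + rest)
            elif unread.startswith(a):
                nxt = (unread[1:], alpha + rest)
            else:
                continue
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

if __name__ == "__main__":
    # Demonstration grammar: S → 0S1 | ε, generating {0^n 1^n | n ≥ 0}.
    delta = cfg_to_pda_transitions([("S", "0S1"), ("S", "")], terminals="01")
    for w in ["", "01", "0011", "001", "10"]:
        print(repr(w), accepts_by_empty_stack(w, delta, "S"))  # True, True, True, False, False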
Returning to the two-stack simulation: when the entire input has been transferred to the first stack, the machine proceeds like a normal TM, where moving right on the tape is the same as popping a symbol from the first stack and pushing a (possibly updated) symbol onto the second stack, and moving left corresponds to popping a symbol from the second stack and pushing a (possibly updated) symbol onto the first stack (a minimal code sketch of this two-stack tape representation is given at the end of this section). We hence have a PDA with two stacks that can simulate any TM. Generalization A generalized pushdown automaton (GPDA) is a PDA that writes an entire string of some known length to the stack or removes an entire string from the stack in one step. A GPDA is formally defined as a 6-tuple M = (Q, Σ, Γ, δ, q0, F), where Q, Σ, Γ, q0 and F are defined the same way as for a PDA, and δ: Q × (Σ ∪ {ε}) × Γ* → P(Q × Γ*) is the transition function. Computation rules for a GPDA are the same as for a PDA, except that the popped and pushed items are now strings instead of single symbols. GPDAs and PDAs are equivalent in that if a language is recognized by a PDA, it is also recognized by a GPDA and vice versa. One can formulate an analytic proof for the equivalence of GPDAs and PDAs using the following simulation: let δ(q1, w, x1x2…xm) contain (q2, y1y2…yn) as a transition of the GPDA, where w ∈ Σ ∪ {ε}, x1, …, xm ∈ Γ with m ≥ 1, and y1, …, yn ∈ Γ with n ≥ 0. For the PDA one constructs, through fresh intermediate states, a chain of transitions that pops x1, …, xm one symbol at a time (reading w on the first of these steps), with the last step replacing xm by the string y1y2…yn and entering state q2. Stack automata As a generalization of pushdown automata, Ginsburg, Greibach, and Harrison (1967) investigated stack automata, which may additionally step left or right in the input string (surrounded by special endmarker symbols to prevent slipping out), and step up or down in the stack in read-only mode. A stack automaton is called nonerasing if it never pops from the stack. The class of languages accepted by nondeterministic, nonerasing stack automata is NSPACE(n²), which is a superset of the context-sensitive languages. The class of languages accepted by deterministic, nonerasing stack automata is DSPACE(n⋅log(n)). Alternating pushdown automata An alternating pushdown automaton (APDA) is a pushdown automaton with a state set Q = Q∃ ∪ Q∀, where Q∃ ∩ Q∀ = ∅. States in Q∃ and Q∀ are called existential and universal, respectively. In an existential state an APDA nondeterministically chooses the next state and accepts if at least one of the resulting computations accepts. In a universal state the APDA moves to all next states and accepts if all the resulting computations accept. The model was introduced by Chandra, Kozen and Stockmeyer. Ladner, Lipton and Stockmeyer proved that this model is equivalent to EXPTIME, i.e. a language is accepted by some APDA if, and only if, it can be decided by an exponential-time algorithm. Aizikowitz and Kaminski introduced synchronized alternating pushdown automata (SAPDA) that are equivalent to conjunctive grammars in the same way as nondeterministic PDAs are equivalent to context-free grammars.
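The two-stack representation of a Turing machine tape mentioned above can be sketched in a few lines of code. This is only a minimal model of the tape under the stated assumptions, not of a full machine; the class and method names are illustrative. One stack holds the head cell and everything to its right, the other holds the cells to the left of the head, so that moving the head corresponds exactly to popping from one stack and pushing onto the other.

BLANK = "_"

class TwoStackTape:
    """A TM tape represented by two stacks, as described in the text."""

    def __init__(self, input_string):
        self.left = []                                          # cells left of the head, top = nearest
        self.right = list(reversed(input_string)) or [BLANK]    # top = current head cell

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        # pop the (possibly updated) head cell from the right stack, push it onto the left stack
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(BLANK)                            # extend the tape with a blank cell

    def move_left(self):
        # pop a symbol from the left stack and push it back onto the right stack
        self.right.append(self.left.pop() if self.left else BLANK)

if __name__ == "__main__":
    tape = TwoStackTape("abc")
    tape.write("X")          # overwrite 'a'
    tape.move_right()
    tape.move_right()
    tape.move_left()
    print(tape.read())       # prints 'b'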
Psychosis
Psychosis is a condition of the mind or psyche that results in difficulties determining what is real and what is not real. Symptoms may include delusions and hallucinations, among other features. Additional symptoms are disorganized thinking and incoherent speech and behavior that is inappropriate for a given situation. There may also be sleep problems, social withdrawal, lack of motivation, and difficulties carrying out daily activities. Psychosis can have serious adverse outcomes. Psychosis can have several different causes. These include mental illness, such as schizophrenia or schizoaffective disorder, bipolar disorder, sensory deprivation, Wernicke–Korsakoff syndrome or cerebral beriberi and in rare cases major depression (psychotic depression). Other causes include: trauma, sleep deprivation, some medical conditions, certain medications, and drugs such as alcohol, cannabis, hallucinogens, and stimulants. One type, known as postpartum psychosis, can occur after giving birth. The neurotransmitter dopamine is believed to play an important role. Acute psychosis is termed primary if it results from a psychiatric condition and secondary if it is caused by another medical condition or drugs. The diagnosis of a mental-health condition requires excluding other potential causes. Testing may be done to check for central nervous system diseases, toxins, or other health problems as a cause. Treatment may include antipsychotic medication, psychotherapy, and social support. Early treatment appears to improve outcomes. Medications appear to have a moderate effect. Outcomes depend on the underlying cause. In the United States about 3% of people develop psychosis at some point in their lives. The condition has been described since at least the 4th century BC by Hippocrates and possibly as early as 1500 BC in the Egyptian Ebers Papyrus. Signs and symptoms Hallucinations A hallucination is defined as sensory perception in the absence of external stimuli. Hallucinations are different from illusions and perceptual distortions, which are the misperception of external stimuli. Hallucinations may occur in any of the senses and take on almost any form. They may consist of simple sensations (such as lights, colors, sounds, tastes, or smells) or more detailed experiences (such as seeing and interacting with animals and people, hearing voices, and having complex tactile sensations). Hallucinations are generally characterized as being vivid and uncontrollable. Auditory hallucinations, particularly experiences of hearing voices, are the most common and often prominent feature of psychosis. Up to 15% of the general population may experience auditory hallucinations (though not all are due to psychosis). The prevalence of auditory hallucinations in patients with schizophrenia is generally put around 70%, but may go as high as 98%. Reported prevalence in bipolar disorder ranges between 11% and 68%. During the early 20th century, auditory hallucinations were second to visual hallucinations in frequency, but they are now the most common manifestation of schizophrenia, although rates vary between cultures and regions. Auditory hallucinations are most commonly intelligible voices. When voices are present, the average number has been estimated at three. Content, like frequency, differs significantly, especially across cultures and demographics. People who experience auditory hallucinations can frequently identify the loudness, location of origin, and may settle on identities for voices. 
Western cultures are associated with auditory experiences concerning religious content, frequently related to sin. Hallucinations may command a person to do something potentially dangerous when combined with delusions. So-called "minor hallucinations", such as extracampine hallucinations, or false perceptions of people or movement occurring outside of one's visual field, frequently occur in neurocognitive disorders, such as Parkinson's disease. Visual hallucinations occur in roughly a third of people with schizophrenia, although rates as high as 55% are reported. The prevalence in bipolar disorder is around 15%. Content commonly involves animate objects, although perceptual abnormalities such as changes in lighting, shading, streaks, or lines may be seen. Visual abnormalities may conflict with proprioceptive information, and visions may include experiences such as the ground tilting. Lilliputian hallucinations are less common in schizophrenia, and are more common in various types of encephalopathy, such as peduncular hallucinosis. A visceral hallucination, also called a cenesthetic hallucination, is characterized by visceral sensations in the absence of stimuli. Cenesthetic hallucinations may include sensations of burning, or re-arrangement of internal organs. Delusions Psychosis may involve delusional beliefs. A delusion is a fixed, false idiosyncratic belief, which does not change even when presented with incontrovertible evidence to the contrary. Delusions are context- and culture-dependent: a belief that inhibits critical functioning and is widely considered delusional in one population may be common (and even adaptive) in another, or in the same population at a later time. Since normative views may contradict available evidence, a belief need not contravene cultural standards in order to be considered delusional. Prevalence in schizophrenia is generally considered at least 90%, and around 50% in bipolar disorder. The DSM-5 characterizes certain delusions as "bizarre" if they are clearly implausible, or are incompatible with the surrounding cultural context. The concept of bizarre delusions has many criticisms, the most prominent being judging its presence is not highly reliable even among trained individuals. A delusion may involve diverse thematic content. The most common type is a persecutory delusion, in which a person believes that an entity seeks to harm them. Others include delusions of reference (the belief that some element of one's experience represents a deliberate and specific act by or message from some other entity), delusions of grandeur (the belief that one possesses special power or influence beyond one's actual limits), thought broadcasting (the belief that one's thoughts are audible) and thought insertion (the belief that one's thoughts are not one's own). A delusion may also involve misidentification of objects, persons, or environs that the afflicted should reasonably be able to recognize; such examples include Cotard's syndrome (the belief that oneself is partly or wholly dead) and clinical lycanthropy (the belief that oneself is or has transformed into an animal). The subject matter of delusions seems to reflect the current culture in a particular time and location. For example, in the US, during the early 1900s syphilis was a common topic, during the Second World War Germany, during the Cold War communists, and in recent years, technology has been a focus. 
Some psychologists, such as those who practice the Open Dialogue method, believe that the content of psychosis represents an underlying thought process that may, in part, be responsible for psychosis, though the accepted medical position is that psychosis is due to a brain disorder. Historically, Karl Jaspers classified psychotic delusions into primary and secondary types. Primary delusions are defined as arising suddenly and not being comprehensible in terms of normal mental processes, whereas secondary delusions are typically understood as being influenced by the person's background or current situation (e.g., ethnicity; also religious, superstitious, or political beliefs). Disorganization of speech/thought or behavior Disorganization is split into disorganized speech (or thought), and grossly disorganized motor behavior. Disorganized speech or thought, also called formal thought disorder, is disorganization of thinking that is inferred from speech. Characteristics of disorganized speech include rapidly switching topics, called derailment or loose association; switching to topics that are unrelated, called tangential thinking; incomprehensible speech, called word salad or incoherence. Disorganized motor behavior includes repetitive, odd, or sometimes purposeless movement. Disorganized motor behavior rarely includes catatonia, and although it was a historically prominent symptom, it is rarely seen today. Whether this is due to historically used treatments or the lack thereof is unknown. Catatonia describes a profoundly agitated state in which the experience of reality is generally considered impaired. There are two primary manifestations of catatonic behavior. The classic presentation is a person who does not move or interact with the world in any way while awake. This type of catatonia presents with waxy flexibility. Waxy flexibility is when someone physically moves part of a catatonic person's body and the person stays in the position even if it is bizarre and otherwise nonfunctional (such as moving a person's arm straight up in the air and the arm staying there). The other type of catatonia is more of an outward presentation of the profoundly agitated state described above. It involves excessive and purposeless motor behaviour, as well as an extreme mental preoccupation that prevents an intact experience of reality. An example is someone walking very fast in circles to the exclusion of anything else with a level of mental preoccupation (meaning not focused on anything relevant to the situation) that was not typical of the person prior to the symptom onset. In both types of catatonia, there is generally no reaction to anything that happens outside of them. It is important to distinguish catatonic agitation from severe bipolar mania, although someone could have both. Negative symptoms Negative symptoms include reduced emotional expression, decreased motivation (avolition), and reduced spontaneous speech (poverty of speech, alogia). Individuals with this condition lack interest and spontaneity, and have the inability to feel pleasure (anhedonia). Altered Behavioral Inhibition System functioning could possibly cause reduced sustained attention in psychosis and overall contribute to more negative reactions. Psychosis in adolescents Psychosis is rare in adolescents. Young people who have psychosis may have trouble connecting with the world around them and may experience hallucinations or delusions. 
Adolescents with psychosis may also have cognitive deficits that may make it harder for the youth to socialize and work. Potential impairments include reduced speed of mental processing, reduced ability to focus without getting distracted (limited attention span), and deficits in verbal memory. If an adolescent is experiencing psychosis, they most likely have comorbidity, meaning that they could have multiple mental illnesses. Because of this, it may be difficult to determine whether it is psychosis or autism spectrum disorder, social or generalized anxiety disorder, or obsessive-compulsive disorder. Causes The symptoms of psychosis may be caused by serious psychiatric disorders such as schizophrenia, a number of medical illnesses, and trauma. Psychosis may also be temporary or transient, and be caused by medications or substance use disorder (substance-induced psychosis). Normal states Brief hallucinations are not uncommon in those without any psychiatric disease, including healthy children. Causes or triggers include: Falling asleep and waking: hypnagogic and hypnopompic hallucinations Bereavement, in which hallucinations of a deceased loved one are common Severe sleep deprivation Extreme stress (see below) Abnormal brainwaves Abnormal brain networks TBI Trauma and stress Traumatic life events have been linked with an elevated risk of developing psychotic symptoms. Childhood trauma has specifically been shown to be a predictor of adolescent and adult psychosis. Individuals with psychotic symptoms are three times more likely to have experienced childhood trauma (e.g., physical or sexual abuse, physical or emotional neglect) than those in the general population. Increased individual vulnerability toward psychosis may interact with traumatic experiences, promoting an onset of future psychotic symptoms, particularly during sensitive developmental periods. Importantly, the relationship between traumatic life events and psychotic symptoms appears to be dose-dependent, in which multiple traumatic life events accumulate, compounding symptom expression and severity. However, acute, stressful events can also trigger brief psychotic episodes. Trauma prevention and early intervention may be an important target for decreasing the incidence of psychotic disorders and ameliorating their effects. A healthy person could become psychotic after as little as 15 minutes of being placed in an empty room with no light and sound, a phenomenon known as sensory deprivation. Neuroticism, a personality trait associated with vulnerability to stressors, is an independent predictor of the development of psychosis. Psychiatric disorders From a diagnostic standpoint, organic disorders were believed to be caused by physical illness affecting the brain (that is, psychiatric disorders secondary to other conditions) while functional disorders were considered disorders of the functioning of the mind in the absence of physical disorders (that is, primary psychological or psychiatric disorders). Subtle physical abnormalities have been found in illnesses traditionally considered functional, such as schizophrenia. The DSM-IV-TR avoids the functional/organic distinction, and instead lists traditional psychotic illnesses, psychosis due to general medical conditions, and substance-induced psychosis.
Primary psychiatric causes of psychosis include the following: schizophrenia mood disorders including psychotic depression and bipolar disorder in the manic and mixed episodes of bipolar I disorder and depressive episodes of both bipolar I and bipolar II schizoaffective disorder delusional disorder brief psychotic disorder schizophreniform disorder Psychotic symptoms may also be seen in: Personality disorders including Schizotypal personality disorder and borderline personality disorder Post-traumatic stress disorder obsessive–compulsive disorder dissociative identity disorder. Subtypes Subtypes of psychosis include: Postpartum psychosis, occurring shortly after giving birth, primarily associated with maternal bipolar disorder Monothematic delusions Myxedematous psychosis Stimulant psychosis Tardive psychosis Shared psychosis Cycloid psychosis Cycloid psychosis is typically an acute, self-limiting form of psychosis with psychotic and mood symptoms that progress from normal to full-blown, usually between a few hours to days, and not related to drug intake or brain injury. While proposed as a distinct entity, clinically separate from schizophrenia and affective disorders, cycloid psychosis is not formally acknowledged by current ICD or DSM criteria. Its unclear place in psychiatric nosology has likely contributed to the limited scientific investigation and literature on the topic. Postpartum psychosis Postpartum psychosis is a rare yet serious and debilitating form of psychosis. Symptoms range from fluctuating moods and insomnia to mood-incongruent delusions related to the individual or the infant. Women experiencing postpartum psychosis are at increased risk for suicide or infanticide. Many women who experience first-time psychosis from postpartum often have bipolar disorder, meaning they could experience an increase of psychotic episodes even after postpartum. Medical conditions A very large number of medical conditions can cause psychosis, sometimes called secondary psychosis. Examples include: disorders causing delirium (toxic psychosis), in which consciousness is disturbed neurodevelopmental disorders and chromosomal abnormalities, including velocardiofacial syndrome neurodegenerative disorders, such as Alzheimer's disease, dementia with Lewy bodies, and Parkinson's disease focal neurological disease, such as stroke, brain tumors, multiple sclerosis, and some forms of epilepsy malignancy (typically via masses in the brain, paraneoplastic syndromes) infectious and postinfectious syndromes, including infections causing delirium, viral encephalitis, HIV/AIDS, malaria, syphilis endocrine disease, such as hypothyroidism, hyperthyroidism, Cushing's syndrome, hypoparathyroidism and hyperparathyroidism; sex hormones also affect psychotic symptoms and sometimes giving birth can provoke psychosis, termed postpartum psychosis inborn errors of metabolism, such as Wilson's disease, porphyria, and homocysteinemia. 
nutritional deficiency, such as vitamin B12 deficiency other acquired metabolic disorders, including electrolyte disturbances such as hypocalcemia, hypernatremia, hyponatremia, hypokalemia, hypomagnesemia, hypermagnesemia, hypercalcemia, and hypophosphatemia, but also hypoglycemia, hypoxia, and failure of the liver or kidneys autoimmune and related disorders, such as systemic lupus erythematosus (lupus, SLE), sarcoidosis, Hashimoto's encephalopathy, anti-NMDA-receptor encephalitis, and non-celiac gluten sensitivity poisoning by a range of plants, fungi, metals, organic compounds, and a few animal toxins sleep disorders, such as in narcolepsy (in which REM sleep intrudes into wakefulness) parasitic diseases, such as neurocysticercosis Psychoactive drugs Various psychoactive substances (both legal and illegal) have been implicated in causing, exacerbating, or precipitating psychotic states or disorders in users, with varying levels of evidence. This may be upon intoxication for a more prolonged period after use, or upon withdrawal. Individuals who experience substance-induced psychosis tend to have a greater awareness of their psychosis and tend to have higher levels of suicidal thinking compared to those who have a primary psychotic illness. Drugs commonly alleged to induce psychotic symptoms include alcohol, cannabis, cocaine, amphetamines, cathinones, psychedelic drugs (such as LSD and psilocybin), κ-opioid receptor agonists (such as enadoline and salvinorin A) and NMDA receptor antagonists (such as phencyclidine and ketamine). Caffeine may worsen symptoms in those with schizophrenia and cause psychosis at very high doses in people without the condition. Cannabis and other illicit recreational drugs are often associated with psychosis in adolescents and cannabis use before 15 years old may increase the risk of psychosis in adulthood. Alcohol Approximately three percent of people with alcoholism experience psychosis during acute intoxication or withdrawal. Alcohol related psychosis may manifest itself through a kindling mechanism. The mechanism of alcohol-related psychosis is due to the long-term effects of alcohol consumption resulting in distortions to neuronal membranes, gene expression, as well as thiamine deficiency. It is possible that hazardous alcohol use via a kindling mechanism can cause the development of a chronic substance-induced psychotic disorder, i.e. schizophrenia. The effects of an alcohol-related psychosis include an increased risk of depression and suicide as well as causing psychosocial impairments. Delirium tremens, a symptom of chronic alcoholism that can appear in the acute withdrawal phase, shares many symptoms with alcohol-related psychosis suggesting a common mechanism. Cannabis According to current studies, cannabis use is associated with increased risk of psychotic disorders, and the more often cannabis is used the more likely a person is to develop a psychotic illness. Furthermore, people with a history of cannabis use develop psychotic symptoms earlier than those who have never used cannabis. Some debate exists regarding the causal relationship between cannabis use and psychosis with some studies suggesting that cannabis use hastens the onset of psychosis primarily in those with pre-existing vulnerability. Indeed, cannabis use plays an important role in the development of psychosis in vulnerable individuals, and cannabis use in adolescence should be discouraged. 
Some studies indicate that two active compounds in cannabis, tetrahydrocannabinol (THC) and cannabidiol (CBD), have opposite effects with respect to psychosis. While THC can induce psychotic symptoms in healthy individuals, limited evidence suggests that CBD may have antipsychotic effects. Methamphetamine Methamphetamine induces a psychosis in 26–46 percent of heavy users. Some of these people develop a long-lasting psychosis that can persist for longer than six months. Those who have had a short-lived psychosis from methamphetamine can have a relapse of the methamphetamine psychosis years later after a stressful event such as severe insomnia or a period of hazardous alcohol use, despite not relapsing back to methamphetamine. Individuals who have a long history of methamphetamine use and who have experienced psychosis in the past from methamphetamine use are highly likely to re-experience methamphetamine psychosis if drug use is recommenced. Methamphetamine-induced psychosis is likely gated by genetic vulnerability, which can produce long-term changes in brain neurochemistry following repetitive use. Psychedelics A 2024 study found that psychedelic use may potentially reduce, or have no effect on, psychotic symptoms in individuals with a personal or family history of psychotic disorders. A 2023 study found an interaction between lifetime psychedelic use and family history of psychosis or bipolar disorder on psychotic symptoms over the past two weeks. Psychotic symptoms were highest among individuals with both a family history of psychosis or bipolar disorder and lifetime psychedelic use, while they were lowest among those with lifetime psychedelic use but no family history of these disorders. Medication Administration, or sometimes withdrawal, of a large number of medications may provoke psychotic symptoms. Drugs that can induce psychosis experimentally or in a significant proportion of people include stimulants such as amphetamine and other sympathomimetics, dopamine agonists, ketamine, corticosteroids (often with mood changes in addition), and some anticonvulsants such as vigabatrin. Pathophysiology Neuroimaging The first brain image of an individual with psychosis was completed as far back as 1935 using a technique called pneumoencephalography (a painful and now obsolete procedure where cerebrospinal fluid is drained from around the brain and replaced with air to allow the structure of the brain to show up more clearly on an X-ray picture). Both first episode psychosis and high risk status are associated with reductions in grey matter volume (GMV). First episode psychotic and high risk populations are associated with similar but distinct abnormalities in GMV. Reductions in the right middle temporal gyrus, right superior temporal gyrus (STG), right parahippocampus, right hippocampus, right middle frontal gyrus, and left anterior cingulate cortex (ACC) are observed in high risk populations. Reductions in first episode psychosis span a region from the right STG to the right insula, left insula, and cerebellum, and are more severe in the right ACC, right STG, insula and cerebellum. Another meta-analysis reported bilateral reductions in insula, operculum, STG, medial frontal cortex, and ACC, but also reported increased GMV in the right lingual gyrus and left precentral gyrus.
The Kraepelinian dichotomy is made questionable by grey matter abnormalities in bipolar disorder and schizophrenia; schizophrenia is distinguishable from bipolar disorder in that regions of grey matter reduction are generally larger in magnitude, although adjusting for gender differences reduces the difference to the left dorsomedial prefrontal cortex and right dorsolateral prefrontal cortex. During attentional tasks, first episode psychosis is associated with hypoactivation in the right middle frontal gyrus, a region generally described as encompassing the dorsolateral prefrontal cortex (dlPFC). In congruence with studies on grey matter volume, hypoactivity in the right insula and right inferior parietal lobe is also reported. During cognitive tasks, hypoactivities in the right insula, dACC, and the left precuneus, as well as reduced deactivations in the right basal ganglia, right thalamus, right inferior frontal and left precentral gyri, are observed. These results are highly consistent and replicable, except possibly for the abnormalities of the right inferior frontal gyrus. Decreased grey matter volume in conjunction with bilateral hypoactivity is observed in the anterior insula, dorsal medial frontal cortex, and dorsal ACC. Decreased grey matter volume and bilateral hyperactivity are reported in the posterior insula, ventral medial frontal cortex, and ventral ACC. Hallucinations Studies during acute experiences of hallucinations demonstrate increased activity in primary or secondary sensory cortices. As auditory hallucinations are most common in psychosis, the most robust evidence exists for increased activity in the left middle temporal gyrus, left superior temporal gyrus, and left inferior frontal gyrus (i.e. Broca's area). Activity in the ventral striatum, hippocampus, and ACC is related to the lucidity of hallucinations, and indicates that activation or involvement of emotional circuitry is key to the impact of abnormal activity in sensory cortices. Together, these findings indicate that abnormal processing of internally generated sensory experiences, coupled with abnormal emotional processing, results in hallucinations. One proposed model involves a failure of feedforward networks from sensory cortices to the inferior frontal cortex, which normally cancel out sensory cortex activity during internally generated speech. The resulting disruption in expected and perceived speech is thought to produce lucid hallucinatory experiences. Delusions The two-factor model of delusions posits that dysfunction in both belief formation systems and belief evaluation systems is necessary for delusions. Dysfunction in evaluation systems localized to the right lateral prefrontal cortex, regardless of delusion content, is supported by neuroimaging studies and is congruent with its role in conflict monitoring in healthy persons. Abnormal activation and reduced volume are seen in people with delusions, as well as in disorders associated with delusions such as frontotemporal dementia, psychosis and Lewy body dementia. Furthermore, lesions to this region are associated with "jumping to conclusions", damage to this region is associated with post-stroke delusions, and hypometabolism in this region is associated with caudate strokes presenting with delusions. The aberrant salience model suggests that delusions are a result of people assigning excessive importance to irrelevant stimuli.
In support of this hypothesis, regions normally associated with the salience network demonstrate reduced grey matter in people with delusions, and the neurotransmitter dopamine, which is widely implicated in salience processing, is also widely implicated in psychotic disorders. Specific regions have been associated with specific types of delusions. The volume of the hippocampus and parahippocampus is related to paranoid delusions in Alzheimer's disease, and has been reported to be abnormal post mortem in one person with delusions. Capgras delusions have been associated with occipito-temporal damage, and may be related to failure to elicit normal emotions or memories in response to faces. Negative symptoms Psychosis is associated with abnormalities of the ventral striatum (VS), the part of the brain that is involved with the desire to naturally satisfy the body's needs. When high reports of negative symptoms were recorded, there were significant irregularities in the left VS. Anhedonia, the inability to feel pleasure, is a commonly reported symptom in psychosis; experiences of anhedonia are present in most people with schizophrenia. Anhedonia arises from a reduced ability to feel motivation and drive, both to engage in and to complete tasks and goals. Previous research has indicated a deficiency in the neural representation of goals and of the motivation to achieve them: when a reward is not present, a strong reaction is noted in the ventral striatum; reinforcement learning is intact when contingencies about stimulus-reward are implicit, but not when they require explicit neural processing; and reward prediction errors are the difference between the actual reward and the reward that was predicted. In most cases positive prediction errors are considered an abnormal occurrence. A positive prediction error response occurs when there is an increased activation in a brain region, typically the striatum, in response to unexpected rewards. A negative prediction error response occurs when there is a decreased activation in a region when predicted rewards do not occur. Anterior cingulate cortex (ACC) response, taken as an indicator of effort allocation, does not increase with reward or reward probability increase, and is associated with negative symptoms; deficits in dorsolateral prefrontal cortex (dlPFC) activity and failure to improve performance on cognitive tasks when offered monetary incentives are present; and dopamine-mediated functions are abnormal. Neurobiology Psychosis has traditionally been linked to the overactivity of the neurotransmitter dopamine, in particular to its effect in the mesolimbic pathway. The two major sources of evidence given to support this theory are that dopamine receptor D2 blocking drugs (i.e., antipsychotics) tend to reduce the intensity of psychotic symptoms, and that drugs that accentuate dopamine release, or inhibit its reuptake (such as amphetamines and cocaine), can trigger psychosis in some people (see stimulant psychosis). However, there is substantial evidence that dopaminergic overactivity does not fully explain psychosis, and that neurodegenerative pathophysiology plays a significant role. This is evidenced by the fact that psychosis commonly occurs in neurodegenerative diseases of the dopaminergic nervous system, such as Parkinson's disease, which involve reduced, rather than increased, dopaminergic activity. The endocannabinoid system is also implicated in psychosis.
This is evidenced by the propensity of CB1 receptor agonists such as THC to induce psychotic symptoms, and the efficacy of CB1 receptor antagonists such as CBD in ameliorating psychosis. NMDA receptor dysfunction has been proposed as a mechanism in psychosis. This theory is reinforced by the fact that dissociative NMDA receptor antagonists such as ketamine, PCP and dextromethorphan (at large overdoses) induce a psychotic state. The symptoms of dissociative intoxication are also considered to mirror the symptoms of schizophrenia, including negative symptoms. NMDA receptor antagonism, in addition to producing symptoms reminiscent of psychosis, mimics the neurophysiological aspects, such as reduction in the amplitude of P50, P300, and MMN evoked potentials. Hierarchical Bayesian neurocomputational models of sensory feedback, in agreement with neuroimaging literature, link NMDA receptor hypofunction to delusional or hallucinatory symptoms by proposing a failure of NMDA-mediated top-down predictions to adequately cancel out enhanced bottom-up AMPA-mediated prediction errors. Excessive prediction errors in response to stimuli that would normally not produce such a response are thought to stem from the conferral of excessive salience to otherwise mundane events. Dysfunction higher up in the hierarchy, where representation is more abstract, could result in delusions. The common finding of reduced GAD67 expression in psychotic disorders may explain enhanced AMPA-mediated signaling, caused by reduced GABAergic inhibition. The connection between dopamine and psychosis is generally believed to be complex. While dopamine receptor D2 suppresses adenylate cyclase activity, the D1 receptor increases it. If D2-blocking drugs are administered, the blocked dopamine spills over to the D1 receptors. The increased adenylate cyclase activity affects genetic expression in the nerve cell, which takes time. Hence antipsychotic drugs take a week or two to reduce the symptoms of psychosis. Moreover, newer and equally effective antipsychotic drugs actually block slightly less dopamine in the brain than older drugs, whilst also blocking 5-HT2A receptors, suggesting the 'dopamine hypothesis' may be oversimplified. Soyka and colleagues found no evidence of dopaminergic dysfunction in people with alcohol-induced psychosis, and Zoldan et al. reported moderately successful use of ondansetron, a 5-HT3 receptor antagonist, in the treatment of levodopa psychosis in Parkinson's disease patients. A review found an association between a first episode of psychosis and prediabetes. Prolonged or high-dose use of psychostimulants can alter normal functioning, making it similar to the manic phase of bipolar disorder. NMDA antagonists replicate some of the so-called "negative" symptoms like thought disorder in subanesthetic doses (doses insufficient to induce anesthesia), and catatonia in high doses. Psychostimulants, especially in one already prone to psychotic thinking, can cause some "positive" symptoms, such as delusional beliefs, particularly those persecutory in nature. Culture Cross-cultural studies into schizophrenia have found that individual experiences of psychosis and 'hearing voices' vary across cultures. In countries such as the United States, where there exists a predominantly biomedical understanding of the body, the mind and, in turn, mental health, subjects were found to report their hallucinations as having 'violent content' and to self-describe as 'crazy'.
This lived experience is at odds with the lived experience of subjects in Accra, Ghana, who describe the voices they hear as having 'spiritual meaning' and often report them as positive in nature, or subjects in Chennai, India, who describe their hallucinations as kin, family members or close friends offering guidance. These differences are attributed to 'social kindling', or how one's social context shapes how an individual interprets and experiences sensations such as hallucinations. This concept aligns with pre-existing cognitive theory such as reality modelling and is supported by recent research that demonstrates that individuals with psychosis can be taught to attend to their hallucinations differently, which in turn alters the hallucinations themselves. Such research creates pathways for social or community-based treatment, such as reality monitoring, for individuals with schizophrenia and other psychotic disorders, providing alternatives to, or supplementing, traditional pharmacologic management. Cross-cultural studies explore the way in which psychosis varies in different cultures, countries and religions. The cultural differences are based on the individual or shared illness narratives surrounding cultural meanings of illness experience. Countries such as India and Cambodia, as well as Muslim-majority countries, each have alternative epistemologies: knowledge systems that focus on the connections between mind, body, culture, nature, and society. In Muslim-majority countries, mental disorders such as psychosis or schizophrenia are often believed to be caused by jinn (spirits). Furthermore, those in Arab-Muslim societies perceive those who act differently from the social norm as "crazy" or as behaving abnormally. This differs from the lived experience of individuals in India, who attain their perspectives on mental health issues through a variety of spiritual and healing traditions. In Cambodia, hallucinations are linked with spirit visitation, a phenomenon termed "cultural kindling". These examples of differences are attributed to culture and the way it shapes conceptions of mental disorders. These cultural differences can be useful in bridging the gap between cultural understanding and psychiatric signs and symptoms. Diagnosis To make a diagnosis of a mental illness in someone with psychosis, other potential causes must be excluded. An initial assessment includes a comprehensive history and physical examination by a health care provider. Tests may be done to exclude substance use, medication, toxins, surgical complications, or other medical illnesses. A person with psychosis is referred to as psychotic. Delirium should be ruled out, which can be distinguished by visual hallucinations, acute onset and fluctuating level of consciousness, indicating other underlying factors, including medical illnesses. Excluding medical illnesses associated with psychosis is performed by using blood tests to measure: thyroid-stimulating hormone to exclude hypo- or hyperthyroidism, vitamin B12 serum and urinary MMA to rule out pernicious anemia or vitamin B12 deficiency, basic electrolytes and serum calcium to rule out a metabolic disturbance, full blood count including ESR to rule out a systemic infection or chronic disease, and serology to exclude syphilis or HIV infection. Other investigations include: EEG to exclude epilepsy, and an MRI or CT scan of the head to exclude brain lesions.
Because psychosis may be precipitated or exacerbated by common classes of medications, medication-induced psychosis should be ruled out, particularly for first-episode psychosis. Both substance- and medication-induced psychosis can be excluded to a high level of certainty, using toxicology screening. Because some dietary supplements may also induce psychosis or mania, but cannot be ruled out with laboratory tests, a psychotic individual's family, partner, or friends should be asked whether the patient is currently taking any dietary supplements. Common mistakes made when diagnosing people who are psychotic include: Not properly excluding delirium, Not appreciating medical abnormalities (e.g., vital signs), Not obtaining a medical history and family history, Indiscriminate screening without an organizing framework, Missing a toxic psychosis by not screening for substances and medications, Not asking their family or others about dietary supplements, Premature diagnostic closure, and Not revisiting or questioning the initial diagnostic impression of primary psychiatric disorder. Only after relevant and known causes of psychosis are excluded, a mental health clinician may make a psychiatric differential diagnosis using a person's family history, incorporating information from the person with psychosis, and information from family, friends, or significant others. Types of psychosis in psychiatric disorders may be established by formal rating scales. The Brief Psychiatric Rating Scale (BPRS) assesses the level of 18 symptom constructs of psychosis such as hostility, suspicion, hallucination, and grandiosity. It is based on the clinician's interview with the patient and observations of the patient's behavior over the previous 2–3 days. The patient's family can also answer questions on the behavior report. During the initial assessment and the follow-up, both positive and negative symptoms of psychosis can be assessed using the 30 item Positive and Negative Symptom Scale (PANSS). The DSM-5 characterizes disorders as psychotic or on the schizophrenia spectrum if they involve hallucinations, delusions, disorganized thinking, grossly disorganized motor behavior, or negative symptoms. The DSM-5 does not include psychosis as a definition in the glossary, although it defines "psychotic features", as well as "psychoticism" with respect to personality disorder. The ICD-10 has no specific definition of psychosis. The PSQ (Psychosis Screening Questionnaire) is the most common tool in detecting psychotic symptoms and it includes five root questions that assess the presence of PLE (mania, thought insertion, paranoia, strange experiences and perceptual disturbances) The different tools used to assess symptom severity include the Revised Behavior and Symptom Identification Scale (BASIS-R), a 24-item self-report instrument with six scales: psychosis, depression/functioning, interpersonal problems, alcohol/drug use, self-harm, and emotional lability. The Symptom Checklist-90-Revised (SCL-90-R), a 90-item self assessment tool that measures psychoticism and paranoid ideation in addition to seven other symptom scales. Finally, the Brief Symptom Inventory (BSI), a 53-item self-administered scale developed from the SCL-90-R. The BSI has good psychometric properties and is an acceptable brief alternative to the SCL-90-R. 
These seem to be the most accurate tools at the moment, but a 2007 study that focused on quantifying self-reports of auditory verbal hallucinations (AVH) in persons with psychosis suggests that the Hamilton Program for Schizophrenia Voices Questionnaire (HPSVQ) is also a potentially reliable and useful measure for specifically quantifying AVHs in relation to psychosis. Factor analysis of symptoms generally regarded as psychosis frequently yields a five-factor solution, albeit five factors that are distinct from the five domains defined by the DSM-5 to encompass psychotic or schizophrenia spectrum disorders. The five factors are frequently labeled as hallucinations, delusions, disorganization, excitement, and emotional distress. The DSM-5 emphasizes a psychotic spectrum, wherein the low end is characterized by schizoid personality disorder, and the high end is characterized by schizophrenia. Gouzoulis-Mayfrank et al. said that the pleasant or emotionally positive experiences that are common in psychosis, particularly in the early stages, are more easily overlooked in clinical practice than the negative experiences. Nev Jones and Mona Shattel wrote that there is less curiosity towards the complications, or towards the richness of the good things as well as the bad things. Prevention The evidence for the effectiveness of early interventions to prevent psychosis appeared inconclusive, although psychosis caused by drugs can be prevented. Whilst early intervention in those with a psychotic episode might improve short-term outcomes, little benefit was seen from these measures after five years. However, there is evidence that cognitive behavioral therapy (CBT) may reduce the risk of becoming psychotic in those at high risk, and in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT for people at risk of psychosis. Treatment The treatment of psychosis depends on the specific diagnosis (such as schizophrenia, bipolar disorder or substance intoxication). The first-line treatment for many psychotic disorders is antipsychotic medication, which can reduce the positive symptoms of psychosis in about 7 to 14 days. For youth or adolescents, treatment options include medications, psychological interventions, and social interventions. Medication The choice of which antipsychotic to use is based on benefits, risks, and costs. It is debatable whether, as a class, typical or atypical antipsychotics are better. Tentative evidence supports that amisulpride, olanzapine, risperidone and clozapine may be more effective for positive symptoms but result in more side effects. Typical antipsychotics have equal drop-out and symptom relapse rates to atypicals when used at low to moderate dosages. There is a good response in 40–50%, a partial response in 30–40%, and treatment resistance (failure of symptoms to respond satisfactorily after six weeks to two or three different antipsychotics) in 20% of people. Clozapine is an effective treatment for those who respond poorly to other drugs ("treatment-resistant" or "refractory" schizophrenia), but it has the potentially serious side effect of agranulocytosis (lowered white blood cell count) in less than 4% of people. Most people on antipsychotics get side effects.
People on typical antipsychotics tend to have a higher rate of extrapyramidal side effects, while some atypicals are associated with considerable weight gain, diabetes and risk of metabolic syndrome; this is most pronounced with olanzapine, while risperidone and quetiapine are also associated with weight gain. Risperidone has a similar rate of extrapyramidal symptoms to haloperidol. Psychotherapy Psychological treatments such as acceptance and commitment therapy (ACT) are possibly useful in the treatment of psychosis, helping people to focus more on what they can do in terms of valued life directions despite challenging symptomology. Metacognitive training (MCT) is associated with reduced delusions, hallucinations and negative symptoms as well as improved self-esteem and functioning in individuals with schizophrenia spectrum disorders. There are many psychosocial interventions that seek to treat the symptoms of psychosis: need adapted treatment, Open Dialogue, psychoanalysis/psychodynamic psychotherapy, major role therapy, soteria, psychosocial outpatient and inpatient treatment, milieu therapy, and cognitive behavioral therapy (CBT). In relation to the success of CBT for psychosis, a randomized controlled trial of a Web-based CBTp (Cognitive Behavioral Therapy for Psychosis) skills program named Coping With Voices (CWV) suggested that the program has promise for increasing access to CBTp. It was also associated with benefits in the management of distressing psychotic symptoms and improved social functioning. When CBT and the other psychosocial interventions are used without antipsychotic medications, they may be somewhat effective for some people, especially for CBT, need-adapted treatment, and soteria. Early intervention Early intervention in psychosis is based on the observation that identifying and treating someone in the early stages of a psychosis can improve their longer term outcome. This approach advocates the use of an intensive multi-disciplinary approach during what is known as the critical period, where intervention is the most effective, and prevents the long-term morbidity associated with chronic psychotic illness. Systematic reform Addressing systematic reform is essential to creating effective prevention as well as supporting treatments and recovery for those with psychosis. Waghorn et al. suggest that education interventions can be a building block to support those with psychosis to successfully participate in society. In their study they analyse the relationship between successful education attainment and psychosis. Findings suggest that proportionately more school-aged persons with psychosis discontinued their education, compared to those without psychosis. Waghorn et al. find that specialised supported education for those with psychotic disorders can help lead to successful education attainment. Additionally, future employment outcomes are related to such education attainment. Established approaches to supported education in the US include three basic models: self-contained classrooms, the onsite support model and the mobile support model. Each model includes the participation of mental health service staff or educational facility staff in the student's education arrangements. Potential benefits of specialised supported education found from this study include coordination with other service providers (e.g. income support, housing, etc.) to prevent disrupting education, providing specialised career counselling, and development of coping skills in the academic environment.
These examples provide beneficial ways for people with psychosis to finish studies successfully as well as counter future experiences of psychosis. History Etymology The word psychosis was introduced to the psychiatric literature in 1841 by Karl Friedrich Canstatt in his work Handbuch der Medizinischen Klinik. He used it as a shorthand for 'psychic neurosis'. At that time neurosis meant any disease of the nervous system, and Canstatt was thus referring to what was considered a psychological manifestation of brain disease. Ernst von Feuchtersleben is also widely credited as introducing the term in 1845, as an alternative to insanity and mania. The term stems from Modern Latin psychosis, "a giving soul or life to, animating, quickening", and that from Ancient Greek ψυχή (psychē), "soul", and the suffix -ωσις (-osis), in this case "abnormal condition". In its adjective form "psychotic", references to psychosis can be found in both clinical and non-clinical discussions. However, in a non-clinical context, "psychotic" is a nonspecific colloquialism used to mean "insane". Classification The word was also used to distinguish a condition considered a disorder of the mind, as opposed to neurosis, which was considered a disorder of the nervous system. The psychoses thus became the modern equivalent of the old notion of madness, and hence there was much debate on whether there was only one (unitary) or many forms of the new disease. One type of broad usage would later be narrowed down by Koch in 1891 to the 'psychopathic inferiorities', later renamed abnormal personalities by Schneider. The division of the major psychoses into manic depressive illness (now called bipolar disorder) and dementia praecox (now called schizophrenia) was made by Emil Kraepelin, who attempted to create a synthesis of the various mental disorders identified by 19th-century psychiatrists, by grouping diseases together based on classification of common symptoms. Kraepelin used the term 'manic depressive insanity' to describe the whole spectrum of mood disorders, in a far wider sense than it is usually used today. In Kraepelin's classification this would include 'unipolar' clinical depression, as well as bipolar disorder and other mood disorders such as cyclothymia. These are characterized by problems with mood control, and the psychotic episodes appear associated with disturbances in mood; patients often have periods of normal functioning between psychotic episodes even without medication. Schizophrenia is characterized by psychotic episodes that appear unrelated to disturbances in mood, and most non-medicated patients show signs of disturbance between psychotic episodes.
Results did, however, show a significant worsening of psychotic symptoms associated with the exclusion of medical treatment during coercive forms of exorcism. The medical teachings of the fourth-century BC philosopher and physician Hippocrates of Cos proposed a natural, rather than supernatural, cause of human illness. In Hippocrates' work, the Hippocratic corpus, a holistic explanation for health and disease was developed to include madness and other "diseases of the mind". Hippocrates espoused a theory of humoralism wherein disease results from a shifting balance in bodily fluids including blood, phlegm, black bile, and yellow bile. According to humoralism, each fluid or "humour" has temperamental or behavioral correlates. In the case of psychosis, symptoms were thought to be caused by an excess of both blood and yellow bile; thus, the proposed intervention for psychotic or manic behavior was bloodletting. The 18th-century physician, educator, and widely considered "founder of American psychiatry", Benjamin Rush, also prescribed bloodletting as a first-line treatment for psychosis. Although not a proponent of humoralism, Rush believed that active purging and bloodletting were efficacious corrections for disruptions in the circulatory system, a complication he believed was the primary cause of "insanity". Although Rush's treatment modalities are now considered antiquated and brutish, his contributions to psychiatry, namely the biological underpinnings of psychiatric phenomena including psychosis, have been invaluable to the field. In honor of such contributions, Benjamin Rush's image appears in the official seal of the American Psychiatric Association. Early 20th-century treatments for severe and persisting psychosis were characterized by an emphasis on shocking the nervous system. Such therapies included insulin shock therapy, cardiazol shock therapy, and electroconvulsive therapy. Despite considerable risk, shock therapy was considered highly efficacious in the treatment of psychosis, including schizophrenia. The acceptance of high-risk treatments led to more invasive medical interventions, including psychosurgery. In 1888, the Swiss psychiatrist Gottlieb Burckhardt performed the first medically sanctioned psychosurgery, in which portions of the cerebral cortex were excised. Although some patients showed improvement of symptoms and became more subdued, one patient died and several developed aphasia or seizure disorders. Burckhardt went on to publish his clinical outcomes in a scholarly paper. This procedure was met with criticism from the medical community, and his academic and surgical endeavors were largely ignored. In the late 1930s, Egas Moniz conceived the leucotomy (also known as the prefrontal lobotomy), in which the fibers connecting the frontal lobes to the rest of the brain were severed. Moniz's primary inspiration stemmed from a demonstration of neuroscientists John Fulton and Carlyle Jacobsen's 1935 experiment, in which two chimpanzees were given leucotomies and pre- and post-surgical behavior was compared. Prior to the leucotomy, the chimps engaged in typical behavior including throwing feces and fighting. After the procedure, both chimps were pacified and less violent. During the question-and-answer session, Moniz asked whether such a procedure could be extended to human subjects, a question that Fulton admitted was quite startling. Moniz went on to extend the controversial practice to humans with various psychotic disorders, an endeavor for which he received a Nobel Prize in 1949. 
Between the late 1930s and the early 1970s, the leucotomy was a widely accepted practice, often performed in non-sterile environments such as small outpatient clinics and patient homes. Psychosurgery remained standard practice until the discovery of antipsychotic pharmacology in the 1950s. The first clinical trial of antipsychotics (also commonly known as neuroleptics) for the treatment of psychosis took place in 1952. Chlorpromazine (brand name: Thorazine) passed clinical trials and became the first antipsychotic medication approved for the treatment of both acute and chronic psychosis. Although its mechanism of action was not discovered until 1963, the administration of chlorpromazine marked the advent of the dopamine antagonist, or first-generation antipsychotic. While clinical trials showed a high response rate for both acute psychosis and disorders with psychotic features, the side effects were particularly harsh and included high rates of extrapyramidal symptoms, among them the often irreversible tardive dyskinesia. With the advent of atypical antipsychotics (also known as second-generation antipsychotics) came a dopamine antagonist with a comparable response rate but a far different, though still extensive, side-effect profile that included a lower risk of Parkinsonian symptoms but a higher risk of cardiovascular disease. Atypical antipsychotics remain the first-line treatment for psychosis associated with various psychiatric and neurological disorders, including schizophrenia, bipolar disorder, major depressive disorder, anxiety disorders, dementia, and some autism spectrum disorders. Dopamine is now one of the primary neurotransmitters implicated in psychotic symptomatology. Blocking dopamine receptors (namely, the dopamine D2 receptors) and decreasing dopaminergic activity continues to be an effective but highly unrefined effect of antipsychotics, which are commonly used to treat psychosis. Recent pharmacological research suggests that the decrease in dopaminergic activity does not eradicate psychotic delusions or hallucinations, but rather attenuates the reward mechanisms involved in the development of delusional thinking, that is, connecting or finding meaningful relationships between unrelated stimuli or ideas. The author of this research acknowledges the importance of future investigation. Freud's former student Wilhelm Reich explored independent insights into the physical effects of neurotic and traumatic upbringing, and published an account of his holistic psychoanalytic treatment of a patient with schizophrenia. Through the incorporation of breathwork and insight work with the patient, a young woman, she achieved sufficient self-management skills to end the therapy. Lacan extended Freud's ideas to create a psychoanalytic model of psychosis based upon the concept of "foreclosure", the rejection of the symbolic concept of the father. Psychiatrist David Healy has criticised pharmaceutical companies for promoting simplified biological theories of mental illness that seem to imply the primacy of pharmaceutical treatments while ignoring social and developmental factors that are known to be important influences in the etiology of psychosis. Society and culture Symptoms of psychosis can also include visions or quasi-visual experiences, felt presences, alterations of time, alterations of space, or alterations of the spatiotemporal qualities of objects and things. 
While there are many overwhelmingly negative experiences of psychosis, some experiences of psychosis can be overwhelmingly positive and can be experienced as uplifting, as healing, or as difficult but meaningful. Jones and Shattell said that mutual dialogue in clinical practice would in theory allow the meaning and complexity of psychotic experiences to emerge. Disability The classification of psychosis as a social disability is a common occurrence. Psychosis is considered to be among the top 10 causes of social disability among adult men and women in developed countries. The traditional, negative narrative around disability has been shown to adversely influence employment and education for people experiencing psychosis. Social disability by way of social disconnection is a significant public health concern and is associated with a broad range of negative outcomes, including premature mortality. Social disconnection refers to the ongoing absence of family or social relationships with marginal participation in social activities. Research on psychosis has found that reduced participation in social networks not only negatively affects the individual on a physical and mental level, but also impairs the individual's ability to participate in the wider community through employment and education opportunities. Equal opportunity to participate in meaningful relationships with friends, family and partners, as well as engaging in social constructs such as employment, can provide significant physical and mental value to people's lives. Breaking the disability mindset around people experiencing psychosis is therefore imperative for their overall, long-term health and well-being, as well as for the contributions they are able to make to their immediate social connections and the wider community. Research Further research in the form of randomized controlled trials is needed to determine the effectiveness of treatment approaches for helping adolescents with psychosis. Across 10 randomized clinical trials, Early Intervention Services (EIS) for patients with early-phase schizophrenia spectrum disorders have generated promising outcomes. EIS are specifically intended to fulfill the needs of patients with early-phase psychosis. In addition, one meta-analysis consisting of four randomized clinical trials compared the efficacy of EIS with treatment as usual (TAU) for early-phase psychosis, finding EIS techniques superior to TAU. A study suggests that combining cognitive behavioral therapy (CBT) with SlowMo, an app that helps users notice their "unhelpful quick-thinking", might be more effective for treating paranoia in people with psychosis than CBT alone.
Paranoia
Paranoia is an instinct or thought process that is believed to be heavily influenced by anxiety, suspicion, or fear, often to the point of delusion and irrationality. Paranoid thinking typically includes persecutory beliefs, or beliefs of conspiracy concerning a perceived threat towards oneself (i.e., "Everyone is out to get me"). Paranoia is distinct from phobias, which also involve irrational fear, but usually no blame. Making false accusations and the general distrust of other people also frequently accompany paranoia. For example, a paranoid person might believe an incident was intentional when most people would view it as an accident or coincidence. Paranoia is a central symptom of psychosis. Signs and symptoms A common symptom of paranoia is attribution bias. These individuals typically have a biased perception of reality, often exhibiting more hostile beliefs than average. A paranoid person may view someone else's accidental behavior as though it is intentional or signifies a threat. An investigation of a non-clinical paranoid population found that characteristics such as feeling powerless and depressed, isolating oneself, and relinquishing activities were associated with more frequent paranoia. Some scientists have created different subtypes for the various symptoms of paranoia, including erotic, persecutory, litigious, and exalted. Paranoid individuals most commonly tend to be single, perhaps because paranoia results in difficulty with interpersonal relationships. Some researchers have arranged the types of paranoia by commonality: the least common types, at the very top of the hierarchy, are those involving more serious threats, while social anxiety sits at the bottom of this hierarchy as the most frequently exhibited level of paranoia. Causes Social and environmental Social circumstances appear to be highly influential on paranoid beliefs. According to a mental health survey distributed to residents of Ciudad Juárez, Chihuahua (in Mexico) and El Paso, Texas (in the United States), paranoid beliefs seem to be associated with feelings of powerlessness and victimization, enhanced by social situations. Paranoid symptoms were associated with an attitude of mistrust and an external locus of control. Citing research showing that women and those with lower socioeconomic status are more prone to an external locus of control, the researchers suggested that women may be especially affected by the effects of socioeconomic status on paranoia. Surveys have revealed that paranoia can develop from difficult parental relationships and untrustworthy environments; for instance, environments that are highly disciplinary, strict, and unstable could contribute to paranoia. Some sources have also noted that indulging and pampering the child could contribute to greater paranoia by disrupting the child's understanding of their relationship with the world. Experiences found to enhance or create paranoia included frequent disappointment, stress, and a sense of hopelessness. Discrimination has also been reported as a potential predictor of paranoid delusions; such reports indicate that paranoia seemed to appear more often in older patients who had experienced greater discrimination throughout their lives. Immigrants are more subject to some forms of psychosis than the general population, which may be related to more frequent experiences of discrimination and humiliation. Psychological Many more mood-based symptoms, for example grandiosity and guilt, may underlie functional paranoia. 
Colby (1981) defined paranoid cognition as "persecutory delusions and false beliefs whose propositional content clusters around ideas of being harassed, threatened, harmed, subjugated, persecuted, accused, mistreated, killed, wronged, tormented, disparaged, vilified, and so on, by malevolent others, either specific individuals or groups" (p. 518). Three components of paranoid cognition have been identified by Robins & Post: "a) suspicions without enough basis that others are exploiting, harming, or deceiving them; b) preoccupation with unjustified doubts about the loyalty, or trustworthiness, of friends or associates; c) reluctance to confide in others because of unwarranted fear that the information will be used maliciously against them" (1997, p. 3). Paranoid cognition has been conceptualized by clinical psychology almost exclusively in terms of psychodynamic constructs and dispositional variables. From this point of view, paranoid cognition is a manifestation of an intra-psychic conflict or disturbance. For instance, Colby (1981) suggested that the bias of blaming others for one's problems serves to alleviate the distress produced by the feeling of being humiliated, and helps to repudiate the belief that the self is to blame for such incompetence. This intra-psychic perspective emphasizes that the cause of paranoid cognitions lies inside the head of the person (the social perceiver), and dismisses the possibility that paranoid cognition may be related to the social context in which such cognitions are embedded. This point is extremely relevant because, in studying the origins of distrust and suspicion (two components of paranoid cognition), many researchers have accentuated the importance of social interaction, particularly when social interaction has gone awry. Moreover, a model of trust development points out that trust increases or decreases as a function of the cumulative history of interaction between two or more persons. Another relevant distinction can be discerned between "pathological and non-pathological forms of trust and distrust". According to Deutsch, the main difference is that non-pathological forms are flexible and responsive to changing circumstances, whereas pathological forms reflect exaggerated perceptual biases and judgmental predispositions that can arise and perpetuate themselves; they are reflexively caused errors similar to a self-fulfilling prophecy. It has been suggested that a "hierarchy" of paranoia exists, extending from mild social evaluative concerns, through ideas of social reference, to persecutory beliefs concerning mild, moderate, and severe threats. Physical A paranoid reaction may be caused by a decline in brain circulation as a result of high blood pressure or hardening of the arterial walls. Drug-induced paranoia, associated with cannabis and stimulants such as amphetamines or methamphetamine, has much in common with schizophrenic paranoia; the relationship has been under investigation since 2012. Drug-induced paranoia has a better prognosis than schizophrenic paranoia once the drug has been removed. For further information, see stimulant psychosis and substance-induced psychosis. Based on data obtained by the Dutch NEMESIS project in 2005, there was an association between impaired hearing and the onset of symptoms of psychosis, based on a five-year follow-up. Some older studies claimed that a state of paranoia could be produced in patients under a hypnotic state of deafness; this idea, however, generated much skepticism at the time. 
Diagnosis In the DSM-IV-TR, paranoia is diagnosed in the form of: paranoid personality disorder; paranoid schizophrenia (a subtype of schizophrenia); and the persecutory type of delusional disorder. According to clinical psychologist P. J. McKenna, "As a noun, paranoia denotes a disorder which has been argued in and out of existence, and whose clinical features, course, boundaries, and virtually every other aspect of which is controversial. Employed as an adjective, paranoid has become attached to a diverse set of presentations, from paranoid schizophrenia, through paranoid depression, to paranoid personality—not to mention a motley collection of paranoid 'psychoses', 'reactions', and 'states'—and this is to restrict discussion to functional disorders. Even when abbreviated down to the prefix para-, the term crops up causing trouble as the contentious but stubbornly persistent concept of paraphrenia". At least 50% of diagnosed cases of schizophrenia experience delusions of reference and delusions of persecution. Paranoid perceptions and behavior may be part of many mental illnesses, such as depression and dementia, but they are more prevalent in three mental disorders: paranoid schizophrenia, delusional disorder (persecutory type), and paranoid personality disorder. Treatment Paranoid delusions are often treated with antipsychotic medication, which exerts a medium effect size. Cognitive behavioral therapy (CBT) lessens paranoid delusions relative to control conditions, according to a meta-analysis. A meta-analysis of 43 studies reported that metacognitive training (MCT) reduces (paranoid) delusions at a medium to large effect size relative to control conditions. History The word paranoia comes from the Greek παράνοια (paránoia), "madness", and that from παρά (pará), "beside, by" and νόος (nóos), "mind". The term was used to describe a mental illness in which a delusional belief is the sole or most prominent feature. In this definition, the belief does not have to be persecutory to be classified as paranoid, so any number of delusional beliefs can be classified as paranoia. For example, a person who has the sole delusional belief that they are an important religious figure would be classified by Kraepelin as having "pure paranoia". The word "paranoia" derives from the Greek "para-noeo", meaning "derangement" or "departure from the normal". However, the term was used strictly, and other words, such as "insanity" or "crazy" (terms introduced by Aulus Cornelius Celsus), were used more generally. The term "paranoia" first appeared in the plays of the Greek tragedians, and was also used by philosophers such as Plato and Hippocrates. Nevertheless, in these uses the word "paranoia" was the equivalent of "delirium" or "high fever". Eventually, the term fell out of everyday use for two millennia. "Paranoia" was later revived in the writings of the nosologists, beginning with François Boissier de Sauvages (1759) in France and Rudolph August Vogel (1772). According to Michael Phelan, Padraig Wright, and Julian Stern (2000), paranoia and paraphrenia are debated entities that were detached from dementia praecox by Kraepelin, who explained paranoia as a continuous systematized delusion arising much later in life with no presence of either hallucinations or a deteriorating course, and paraphrenia as an identical syndrome to paranoia but with hallucinations. 
Even at the present time, a delusion need not be suspicious or fearful to be classified as paranoid. A person might be diagnosed with paranoid schizophrenia without delusions of persecution, simply because their delusions refer mainly to themselves. Relations to violence It is generally agreed that individuals with paranoid delusions have a tendency to take action based on their beliefs. More research is needed on the particular types of actions that are pursued based on paranoid delusions. Some researchers have made attempts to distinguish the different variations of actions brought on as a result of delusions. Wessely et al. (1993) did just this by studying individuals with delusions, of whom more than half had reportedly taken action or behaved as a result of these delusions. However, the overall actions were not of a violent nature in most of the informants. The authors note that other studies, such as one by Taylor (1985), have shown that violent behaviors were more common in certain types of paranoid individuals, mainly those considered to be offenders, such as prisoners. Other researchers have found associations between abusive experiences in childhood and the appearance of violent behaviors in psychotic individuals. This could be a result of their inability to cope with aggression as well as with other people, especially when constantly attending to potential threats in their environment. Attention to threat itself has been proposed as one of the major contributors to violent actions in paranoid people, although there has been much deliberation about this as well. Other studies have shown that there may be only certain types of delusions that promote violent behaviors; persecutory delusions seem to be one of these. Having resentful emotions towards others and an inability to understand what other people are feeling also seem to have an association with violence in paranoid individuals. This was based on a study of the theory of mind capabilities, in relation to empathy, of people with paranoid schizophrenia (one of the mental disorders that commonly exhibits paranoid symptoms). The results of this study revealed that although the violent patients were more successful at the higher-level theory of mind tasks, they were not as able to interpret others' emotions or claims. Paranoid social cognition Social psychological research has proposed a mild form of paranoid cognition, paranoid social cognition, that has its origins in social determinants more than in intra-psychic conflict. This perspective states that, in milder forms, paranoid cognitions may be very common among normal individuals. For instance, it is not unusual for people to exhibit self-centered thoughts in their daily lives, such as the sense that they are being talked about, suspicion about others' intentions, and assumptions of ill will or hostility (e.g., people may feel as if everything is going against them). According to Kramer (1998), these milder forms of paranoid cognition may be considered an adaptive response to cope with or make sense of a disturbing and threatening social environment. Paranoid cognition captures the idea that dysphoric self-consciousness may be related to the position that people occupy within a social system. This self-consciousness leads to a hypervigilant and ruminative mode of processing social information that ultimately stimulates a variety of paranoid-like forms of social misperception and misjudgment. 
This model identifies four components that are essential to understanding paranoid social cognition: situational antecedents, dysphoric self-consciousness, hypervigilance and rumination, and judgmental biases. Situational antecedents The situational antecedents are perceived social distinctiveness, perceived evaluative scrutiny, and uncertainty about social standing. Perceived social distinctiveness: According to social identity theory, people categorize themselves in terms of characteristics that make them unique or different from others under certain circumstances. Gender, ethnicity, age, or experience may become extremely relevant to explaining people's behavior when these attributes make them unique in a social group. A distinctive attribute may influence not only how people are perceived, but also the way they perceive themselves. Perceived evaluative scrutiny: According to this model, dysphoric self-consciousness may increase when people feel under moderate or intensive evaluative social scrutiny, such as when an asymmetric relationship is analyzed. For example, when asked about their relationships, doctoral students remembered more events that they interpreted as significant to the degree of trust in their advisors than their advisors did. This suggests that students pay more attention to their advisors than their advisors are motivated to pay attention to them. The students also spent more time ruminating about the behaviors, events, and the relationship in general. Uncertainty about social standing: Knowledge of one's social standing is another factor that may induce paranoid social cognition. Many researchers have argued that experiencing uncertainty about one's social position in a social system constitutes an adverse psychological state, one which people are highly motivated to reduce. Dysphoric self-consciousness This refers to an aversive form of heightened "public self-consciousness" characterized by the feeling that one is under intensive evaluation or scrutiny. Becoming self-conscious in this self-tormenting way increases the odds of interpreting others' behaviors in a self-referential way. Hypervigilance and rumination Self-consciousness was characterized as an aversive psychological state. According to this model, people experiencing self-consciousness will be highly motivated to reduce it, trying to make sense of what they are experiencing. These attempts promote hypervigilance and rumination in a circular relationship: more hypervigilance generates more rumination, whereupon more rumination generates more hypervigilance. Hypervigilance can be thought of as a way to appraise threatening social information, but in contrast to adaptive vigilance, hypervigilance produces elevated levels of arousal, fear, anxiety, and threat perception. Rumination is another possible response to threatening social information. Rumination can be related to paranoid social cognition because it can increase negative thinking about negative events and evoke a pessimistic explanatory style. Judgmental and cognitive biases Three main judgmental consequences have been identified: The sinister attribution error: This bias captures the tendency that social perceivers have to overattribute lack of trustworthiness to others. The overly personalistic construal of social interaction: Refers to the inclination that the paranoid perceiver has to interpret others' actions in a disproportionately self-referential way, increasing the belief that they are the target of others' thoughts and actions. 
A special kind of bias is the biased punctuation of social interaction, which entails an overperception of causal links among independent events. The exaggerated perception of conspiracy: Refers to the disposition that the paranoid perceiver has to overattribute social coherence and coordination to others' actions. Meta-analyses have confirmed that individuals with paranoia tend to jump to conclusions and are incorrigible in their judgements, even for delusion-neutral scenarios.
PH
In chemistry, pH, also referred to as acidity or basicity, historically denotes "potential of hydrogen" (or "power of hydrogen"). It is a logarithmic scale used to specify the acidity or basicity of aqueous solutions. Acidic solutions (solutions with higher concentrations of hydrogen (H+) ions) are measured to have lower pH values than basic or alkaline solutions. The pH scale is logarithmic and inversely indicates the activity of hydrogen ions in the solution: approximately, pH = −log10[H+], where [H+] is the equilibrium molar concentration of H+ (in M = mol/L) in the solution. At 25 °C (77 °F), solutions of which the pH is less than 7 are acidic, and solutions of which the pH is greater than 7 are basic. Solutions with a pH of 7 at 25 °C are neutral (i.e. they have the same concentration of H+ ions as OH− ions, the same as pure water). The neutral value of the pH depends on the temperature and is lower than 7 if the temperature increases above 25 °C. The pH range is commonly given as zero to 14, but a pH value can be less than 0 for very concentrated strong acids or greater than 14 for very concentrated strong bases. The pH scale is traceable to a set of standard solutions whose pH is established by international agreement. Primary pH standard values are determined using a concentration cell with transference by measuring the potential difference between a hydrogen electrode and a standard electrode such as the silver chloride electrode. The pH of aqueous solutions can be measured with a glass electrode and a pH meter or a color-changing indicator. Measurements of pH are important in chemistry, agronomy, medicine, water treatment, and many other applications. History In 1909, the Danish chemist Søren Peter Lauritz Sørensen introduced the concept of pH at the Carlsberg Laboratory, originally using the notation "pH•", with H• as a subscript to the lowercase p. The concept was later revised in 1924 to the modern pH to accommodate definitions and measurements in terms of electrochemical cells. Sørensen wrote: "For the sign p, I propose the name 'hydrogen ion exponent' and the symbol pH•. Then, for the hydrogen ion exponent (pH•) of a solution, the negative value of the Briggsian logarithm of the related hydrogen ion normality factor is to be understood." Sørensen did not explain why he used the letter p, and the exact meaning of the letter is still disputed. Sørensen described a way of measuring pH using potential differences, and the notation represents the negative power of 10 in the concentration of hydrogen ions. The letter p could stand for the French puissance, German Potenz, or Danish potens, all meaning "power", or it could mean "potential". All of these words start with the letter p in French, German, and Danish, which were the languages in which Sørensen published: Carlsberg Laboratory was French-speaking, German was the dominant language of scientific publishing, and Sørensen was Danish. He also used the letter q in much the same way elsewhere in the paper, and he might have arbitrarily labelled the test solution "p" and the reference solution "q"; these letters are often paired. Some literature sources suggest that "pH" stands for the Latin term pondus hydrogenii (quantity of hydrogen) or potentia hydrogenii (power of hydrogen), although this is not supported by Sørensen's writings. 
In modern chemistry, the p stands for "the negative decimal logarithm of", and is used in the term pKa for acid dissociation constants, so pH is "the negative decimal logarithm of H+ ion concentration", while pOH is "the negative decimal logarithm of OH− ion concentration". Bacteriologist Alice Catherine Evans, who influenced dairying and food safety, credited William Mansfield Clark and colleagues, including herself, with developing pH measuring methods in the 1910s, which had a wide influence on laboratory and industrial use thereafter. In her memoir, she does not mention how much, or how little, Clark and colleagues knew about Sørensen's work a few years prior. She said: "In these studies [of bacterial metabolism] Dr. Clark's attention was directed to the effect of acid on the growth of bacteria. He found that it is the intensity of the acid in terms of hydrogen-ion concentration that affects their growth. But existing methods of measuring acidity determined the quantity, not the intensity, of the acid. Next, with his collaborators, Dr. Clark developed accurate methods for measuring hydrogen-ion concentration. These methods replaced the inaccurate titration method of determining the acid content in use in biologic laboratories throughout the world. Also they were found to be applicable in many industrial and other processes in which they came into wide usage." The first electronic method for measuring pH was invented by Arnold Orville Beckman, a professor at the California Institute of Technology, in 1934. It was in response to a request from the local citrus grower Sunkist, which wanted a better method for quickly testing the pH of lemons they were picking from their nearby orchards. Definition pH The pH of a solution is defined as the decimal logarithm of the reciprocal of the hydrogen ion activity, aH+. Mathematically, pH is expressed as pH = −log10(aH+) = log10(1/aH+). For example, for a solution with a hydrogen ion activity of 10−5 (roughly, a hydrogen ion concentration of 10−5 mol/L), the pH of the solution is −log10(10−5) = 5. The concept of pH was developed because ion-selective electrodes, which are used to measure pH, respond to activity. The electrode potential, E, follows the Nernst equation for the hydrogen ion, which can be expressed as E = E0 + (RT/F) ln(aH+) = E0 − (2.303RT/F) pH, where E is a measured potential, E0 is the standard electrode potential, R is the molar gas constant, T is the thermodynamic temperature, and F is the Faraday constant. For H+, the number of electrons transferred is one. The electrode potential is proportional to pH when pH is defined in terms of activity. The precise measurement of pH is presented in International Standard ISO 31-8 as follows: A galvanic cell is set up to measure the electromotive force (e.m.f.) between a reference electrode and an electrode sensitive to the hydrogen ion activity when they are both immersed in the same aqueous solution. The reference electrode may be a silver chloride electrode or a calomel electrode, and the hydrogen-ion selective electrode is a standard hydrogen electrode. Firstly, the cell is filled with a solution of known hydrogen ion activity and the electromotive force, ES, is measured. Then the electromotive force, EX, of the same cell containing the solution of unknown pH is measured. The difference between the two measured electromotive force values is proportional to the difference in pH: pH(X) = pH(S) + (ES − EX)/z. This method of calibration avoids the need to know the standard electrode potential. The proportionality constant, 1/z, is ideally the reciprocal of the "Nernstian slope", z = 2.303RT/F (about 0.0592 V per pH unit at 25 °C). 
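As a minimal numerical illustration of the definition and of the two-cell calibration scheme above, the following Python sketch computes pH both directly from a hydrogen ion activity and from a pair of electromotive force readings. It assumes ideal Nernstian behavior; the function names and the example readings are illustrative assumptions, not standardized values.

import math

R = 8.31446   # molar gas constant, J/(mol*K)
F = 96485.33  # Faraday constant, C/mol

def nernstian_slope(T=298.15):
    """Ideal slope z = 2.303*R*T/F in volts per pH unit (~0.0592 V at 25 C)."""
    return math.log(10) * R * T / F

def ph_from_activity(a_h):
    """pH = -log10(a_H+)."""
    return -math.log10(a_h)

def ph_from_emf(ph_standard, e_standard, e_sample, T=298.15):
    """Two-cell calibration: pH(X) = pH(S) + (E_S - E_X)/z."""
    return ph_standard + (e_standard - e_sample) / nernstian_slope(T)

print(ph_from_activity(1e-5))                 # 5.0
# hypothetical readings: a standard buffer at pH 7.00 reads 0 V and the
# unknown sample reads 59.2 mV lower, so its pH comes out near 8.0
print(round(ph_from_emf(7.00, 0.0, -0.0592), 2))

Because the slope is divided out, any constant offset in the electrode (the unknown standard potential) cancels, which is exactly why the two-measurement scheme avoids needing E0.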
In practice, a glass electrode is used instead of the cumbersome hydrogen electrode. A combined glass electrode has a built-in reference electrode. It is calibrated against buffer solutions of known hydrogen ion (H+) activity proposed by the International Union of Pure and Applied Chemistry (IUPAC). Two or more buffer solutions are used in order to accommodate the fact that the "slope" may differ slightly from ideal. To calibrate the electrode, it is first immersed in a standard solution, and the reading on a pH meter is adjusted to be equal to the standard buffer's value. The reading from a second standard buffer solution is then adjusted, using the "slope" control, to be equal to the pH for that solution. Further details are given in the IUPAC recommendations. When more than two buffer solutions are used, the electrode is calibrated by fitting observed pH values to a straight line with respect to standard buffer values. Commercial standard buffer solutions usually come with information on the value at 25 °C and a correction factor to be applied for other temperatures. The pH scale is logarithmic and therefore pH is a dimensionless quantity. p[H] This was the original definition of Sørensen in 1909, which was superseded in favor of pH in 1924. [H] is the concentration of hydrogen ions, denoted [H+] in modern chemistry. More correctly, the thermodynamic activity of H+ in dilute solution should be replaced by [H+]/c0, where the standard state concentration c0 = 1 mol/L. This ratio is a pure number whose logarithm can be defined. It is possible to measure the concentration of hydrogen ions directly using an electrode calibrated in terms of hydrogen ion concentrations. One common method is to titrate a solution of known concentration of a strong acid with a solution of known concentration of a strong base in the presence of a relatively high concentration of background electrolyte. By knowing the concentrations of the acid and base, the concentration of hydrogen ions can be calculated and the measured potential can be correlated with concentrations. The calibration is usually carried out using a Gran plot. This procedure makes the activity of hydrogen ions equal to the numerical value of concentration. The glass electrode (and other ion-selective electrodes) should be calibrated in a medium similar to the one being investigated. For instance, if one wishes to measure the pH of a seawater sample, the electrode should be calibrated in a solution resembling seawater in its chemical composition. The difference between p[H] and pH is quite small, and it has been stated that pH = p[H] + 0.04. However, it is common practice to use the term "pH" for both types of measurement. pOH pOH is sometimes used as a measure of the concentration of hydroxide ions, OH−. By definition, pOH is the negative logarithm (to the base 10) of the hydroxide ion concentration (in mol/L). pOH values can be derived from pH measurements and vice versa. The concentration of hydroxide ions in water is related to the concentration of hydrogen ions by [OH−] = KW/[H+], where KW is the self-ionization constant of water. Taking logarithms, pOH = pKW − pH. So, at room temperature, pOH ≈ 14 − pH. However, this relationship is not strictly valid in other circumstances, such as in measurements of soil alkalinity. Measurement pH indicators pH can be measured using indicators, which change color depending on the pH of the solution they are in. By comparing the color of a test solution to a standard color chart, the pH can be estimated to the nearest whole number. 
For more precise measurements, the color can be measured using a colorimeter or spectrophotometer. A universal indicator is a mixture of several indicators that can provide a continuous color change over a range of pH values, typically from about pH 2 to pH 10. Universal indicator paper is made from absorbent paper that has been impregnated with a universal indicator. An alternative method of measuring pH is using an electronic pH meter, which directly measures the voltage difference between a pH-sensitive electrode and a reference electrode. Non-aqueous solutions pH values can be measured in non-aqueous solutions, but they are based on a different scale from aqueous pH values because the standard states used for calculating hydrogen ion concentrations (activities) are different. The hydrogen ion activity, aH+, is defined as aH+ = exp((μH+ − μ0H+)/RT), where μH+ is the chemical potential of the hydrogen ion, μ0H+ is its chemical potential in the chosen standard state, R is the molar gas constant and T is the thermodynamic temperature. Therefore, pH values on the different scales cannot be compared directly because of differences in the solvated proton ions, such as lyonium ions; comparison requires an intersolvental scale that involves the transfer activity coefficient of the hydronium/lyonium ion. pH is an example of an acidity function, but others can be defined. For example, the Hammett acidity function, H0, has been developed in connection with superacids. Unified absolute pH scale In 2010, a new approach to measuring pH was proposed, called the unified absolute pH scale. This approach allows a common reference standard to be used across different solutions, regardless of their pH range. The unified absolute pH scale is based on the absolute chemical potential of the proton, as defined by the Lewis acid–base theory. This scale applies to liquids, gases, and even solids. The advantages of the unified absolute pH scale include consistency, accuracy, and applicability to a wide range of sample types. It is precise and versatile because it serves as a common reference standard for pH measurements. However, implementation efforts, compatibility with existing data, complexity, and potential costs are some challenges. Extremes of pH measurements The measurement of pH can become difficult at extremely acidic or alkaline conditions, such as below pH 2.5 (ca. 0.003 mol/dm3 acid) or above pH 10.5 (above ca. 0.0003 mol/dm3 alkali). This is due to the breakdown of the Nernst equation in such conditions when using a glass electrode. Several factors contribute to this problem. First, liquid junction potentials may not be independent of pH. Second, the high ionic strength of concentrated solutions can affect the electrode potentials. At high pH, the glass electrode may be affected by "alkaline error", because the electrode becomes sensitive to the concentration of cations such as Na+ and K+ in the solution. To overcome these problems, specially constructed electrodes are available. Runoff from mines or mine tailings can produce some extremely low pH values, down to −3.6. Applications Pure water has a pH of 7 at 25 °C, meaning it is neutral. When an acid is dissolved in water, the pH will be less than 7, while a base, or alkali, will have a pH greater than 7. A strong acid, such as hydrochloric acid, at concentration 1 mol dm−3 has a pH of 0, while a strong alkali like sodium hydroxide, at the same concentration, has a pH of 14. 
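These endpoint values follow directly from the logarithmic definition. As a brief sketch (illustrative only), the following Python snippet computes the pH of strong acid and strong base solutions under the idealized assumptions of complete dissociation and pKw = 14:

import math

PKW = 14.0  # -log10(Kw) for water at 25 C

def ph_strong_acid(conc):
    """Assume complete dissociation: [H+] equals the acid concentration (mol/L)."""
    return -math.log10(conc)

def ph_strong_base(conc):
    """Assume complete dissociation: pOH = -log10([OH-]) and pH = pKw - pOH."""
    return PKW + math.log10(conc)

print(ph_strong_acid(1.0))    # 0.0  -> 1 M strong acid
print(ph_strong_acid(0.01))   # 2.0  -> a hundredfold dilution raises pH by 2
print(ph_strong_base(1.0))    # 14.0 -> 1 M strong base

This idealization ignores activity effects and the self-ionization of water, which matter at very high and very low concentrations, as discussed under pH calculations below.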
Since pH is a logarithmic scale, a difference of one in pH is equivalent to a tenfold difference in hydrogen ion concentration. Neutrality is not exactly 7 at 25 °C, but 7 serves as a good approximation in most cases. Neutrality occurs when the concentration of hydrogen ions ([H+]) equals the concentration of hydroxide ions ([OH−]), or when their activities are equal. Since the self-ionization of water fixes the product of these concentrations, [H+] × [OH−] = Kw, it can be seen that at neutrality [H+] = [OH−] = √Kw, or pH = pKw/2. pKw is approximately 14 but depends on ionic strength and temperature, and so the pH of neutrality does also. Pure water and a solution of NaCl in pure water are both neutral, since dissociation of water produces equal numbers of both ions. However, the pH of the neutral NaCl solution will be slightly different from that of neutral pure water because the hydrogen and hydroxide ions' activity is dependent on ionic strength, so Kw varies with ionic strength. When pure water is exposed to air, it becomes mildly acidic. This is because water absorbs carbon dioxide from the air, which is then slowly converted into bicarbonate and hydrogen ions (essentially creating carbonic acid). pH in soil The United States Department of Agriculture Natural Resources Conservation Service, formerly the Soil Conservation Service, classifies soil pH into a series of named ranges, from ultra acidic to very strongly alkaline. Topsoil pH is influenced by soil parent material, erosional effects, climate and vegetation. A recent map of topsoil pH in Europe shows alkaline soils in the Mediterranean region, Hungary, eastern Romania, and northern France. The Scandinavian countries, Portugal, Poland and northern Germany have more acidic soils. pH in plants Plants contain pH-dependent pigments that can be used as pH indicators, such as those found in hibiscus, red cabbage (anthocyanin), and grapes (red wine). Citrus fruits have acidic juice primarily due to the presence of citric acid, while other carboxylic acids can be found in various living systems. The protonation state of phosphate derivatives, including ATP, is pH-dependent. Hemoglobin, an oxygen-transport protein, is also affected by pH in a phenomenon known as the Root effect. pH in the ocean The pH of seawater plays an important role in the ocean's carbon cycle. There is evidence of ongoing ocean acidification (meaning a drop in pH value): between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO2) levels exceeding 410 ppm (in 2020). CO2 from the atmosphere is absorbed by the oceans. This produces carbonic acid (H2CO3), which dissociates into a bicarbonate ion (HCO3−) and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean. Three pH scales in oceanography The measurement of pH in seawater is complicated by the chemical properties of seawater, and three distinct pH scales exist in chemical oceanography. In practical terms, the three seawater pH scales differ in their pH values by up to 0.10, differences that are much larger than the accuracy of pH measurements typically required, in particular in relation to the ocean's carbonate system. Since it omits consideration of sulfate and fluoride ions, the free scale is significantly different from both the total and seawater scales. Because of the relative unimportance of the fluoride ion, the total and seawater scales differ only very slightly. 
As part of its operational definition of the pH scale, the IUPAC defines a series of buffer solutions across a range of pH values (often denoted with National Bureau of Standards (NBS) or National Institute of Standards and Technology (NIST) designation). These solutions have a relatively low ionic strength (≈ 0.1) compared to that of seawater (≈ 0.7), and, as a consequence, are not recommended for use in characterizing the pH of seawater, since the ionic strength differences cause changes in electrode potential. To resolve this problem, an alternative series of buffers based on artificial seawater was developed. This new series resolves the problem of ionic strength differences between samples and the buffers, and the new pH scale is referred to as the total scale, often denoted as pHT. The total scale was defined using a medium containing sulfate ions. These ions experience protonation, H+ + SO42− ⇌ HSO4−, such that the total scale includes the effect of both protons (free hydrogen ions) and hydrogen sulfate ions: [H+]T = [H+]F + [HSO4−]. An alternative scale, the free scale, often denoted pHF, omits this consideration and focuses solely on [H+]F, in principle making it a simpler representation of hydrogen ion concentration. Only [H+]T can be determined; therefore, [H+]F must be estimated using [SO42−] and the stability constant of HSO4−, KS: [H+]F = [H+]T − [HSO4−] = [H+]T (1 + [SO42−]/KS)−1. However, it is difficult to estimate KS in seawater, limiting the utility of the otherwise more straightforward free scale. Another scale, known as the seawater scale, often denoted pHSWS, takes account of a further protonation relationship between hydrogen ions and fluoride ions, H+ + F− ⇌ HF, resulting in the following expression: [H+]SWS = [H+]F + [HSO4−] + [HF]. However, the advantage of considering this additional complexity is dependent upon the abundance of fluoride in the medium. In seawater, for instance, sulfate ions occur at much greater concentrations (> 400 times) than those of fluoride. As a consequence, for most practical purposes, the difference between the total and seawater scales is very small. The following three equations summarize the three scales of pH: pHF = −log10[H+]F; pHT = −log10([H+]F + [HSO4−]) = −log10[H+]T; pHSWS = −log10([H+]F + [HSO4−] + [HF]) = −log10[H+]SWS. pH in food The pH level of food influences its flavor, texture, and shelf life. Acidic foods, such as citrus fruits, tomatoes, and vinegar, typically have a pH below 4.6 and a sharp, tangy taste, while basic foods taste bitter or soapy. Maintaining the appropriate pH in foods is essential for preventing the growth of harmful microorganisms. The alkalinity of vegetables such as spinach and kale can also influence their texture and color during cooking. The pH also influences the Maillard reaction, which is responsible for the browning of food during cooking, impacting both flavor and appearance. pH of various body fluids The approximate pH values of various body fluids and cellular compartments are as follows: gastric acid, 1.5–3.5; lysosomes, 4.5; human skin, 4.7; granules of chromaffin cells, 5.5; urine, 6.0; breast milk, 7.0–7.45; cytosol, 7.2; blood (natural pH), 7.34–7.45; cerebrospinal fluid (CSF), 7.5; mitochondrial matrix, 7.5; pancreas secretions, 8.1. In living organisms, the pH of various body fluids, cellular compartments, and organs is tightly regulated to maintain a state of acid-base balance known as acid–base homeostasis. 
Acidosis, defined by blood pH below 7.35, is the most common disorder of acid–base homeostasis and occurs when there is an excess of acid in the body. In contrast, alkalosis is characterized by excessively high blood pH. Blood pH is usually slightly basic, with a pH of 7.365, referred to as physiological pH in biology and medicine. Plaque formation in teeth can create a local acidic environment that results in tooth decay through demineralization. Enzymes and other proteins have an optimal pH range for function and can become inactivated or denatured outside this range. pH calculations When calculating the pH of a solution containing acids and/or bases, a chemical speciation calculation is used to determine the concentration of all chemical species present in the solution. The complexity of the procedure depends on the nature of the solution. Strong acids and bases are compounds that are almost completely dissociated in water, which simplifies the calculation. However, for weak acids a quadratic equation must be solved, and for weak bases a cubic equation is required. In general, a set of non-linear simultaneous equations must be solved. Water itself is a weak acid and a weak base, so its dissociation must be taken into account at high pH and low solute concentration (see Amphoterism). It dissociates according to the equilibrium H2O ⇌ H+ + OH−, with a dissociation constant defined as Kw = [H+][OH−], where [H+] stands for the concentration of the aqueous hydronium ion and [OH−] represents the concentration of the hydroxide ion. This equilibrium needs to be taken into account at high pH and when the solute concentration is extremely low. Strong acids and bases Strong acids and bases are compounds that are essentially fully dissociated in water. This means that in an acidic solution, the concentration of hydrogen ions (H+) can be considered equal to the concentration of the acid. Similarly, in a basic solution, the concentration of hydroxide ions (OH−) can be considered equal to the concentration of the base. The pH of a solution is defined as the negative logarithm of the concentration of H+, and the pOH is defined as the negative logarithm of the concentration of OH−. For example, the pH of a 0.01 M (moles per litre) solution of hydrochloric acid (HCl) is equal to 2 (pH = −log10(0.01)), while the pOH of a 0.01 M solution of sodium hydroxide (NaOH) is equal to 2 (pOH = −log10(0.01)), which corresponds to a pH of about 12. However, self-ionization of water must also be considered when concentrations of a strong acid or base are very low or very high. For instance, a 5.0×10−8 M solution of HCl would be expected to have a pH of 7.3 based on the above procedure, which is incorrect, as the solution is acidic and should have a pH of less than 7. In such cases, the system can be treated as a mixture of the acid or base and water, which is an amphoteric substance. By accounting for the self-ionization of water, the true pH of the solution can be calculated: treated as a mixture of HCl and water, the same 5.0×10−8 M solution has a pH of 6.89. The self-ionization equilibrium of solutions of sodium hydroxide at higher concentrations must also be considered. Weak acids and bases A weak acid or the conjugate acid of a weak base can be treated using the same formalism. Acid HA: HA ⇌ H+ + A−. Base A: HA+ ⇌ H+ + A. First, an acid dissociation constant is defined as Ka = [H][A]/[HA]. Electrical charges are omitted from subsequent equations for the sake of generality, and the value of Ka is assumed to have been determined by experiment. 
This being so, there are three unknown concentrations, [HA], [H+] and [A−], to determine by calculation. Two additional equations are needed. One way to provide them is to apply the law of mass conservation in terms of the two "reagents" H and A: CA = [A] + [HA] and CH = [H] + [HA], where C stands for the analytical concentration. In some texts, one mass-balance equation is replaced by an equation of charge balance. This is satisfactory for simple cases like this one, but is more difficult to apply to more complicated cases such as those below. Together with the equation defining Ka, there are now three equations in three unknowns. When an acid is dissolved in water, CA = CH = Ca, the concentration of the acid, so [A] = [H]. After some further algebraic manipulation, an equation in the hydrogen ion concentration may be obtained: [H]2 + Ka[H] − KaCa = 0. Solution of this quadratic equation gives the hydrogen ion concentration and hence p[H] or, more loosely, pH. This procedure is illustrated in an ICE table, which can also be used to calculate the pH when some additional (strong) acid or alkali has been added to the system, that is, when CA ≠ CH. For example, what is the pH of a 0.01 M solution of benzoic acid, pKa = 4.19? Step 1: Calculate Ka: Ka = 10−4.19 = 6.46×10−5. Step 2: Set up the quadratic equation: [H]2 + 6.46×10−5[H] − 6.46×10−7 = 0. Step 3: Solve the quadratic equation: [H] = 7.72×10−4, so pH = 3.11. For alkaline solutions, an additional term is added to the mass-balance equation for hydrogen. Since the addition of hydroxide reduces the hydrogen ion concentration, and the hydroxide ion concentration is constrained by the self-ionization equilibrium to be equal to Kw/[H], the resulting equation is: CH = [H] + [HA] − Kw/[H]. General method Some systems, such as those with polyprotic acids, are amenable to spreadsheet calculations. With three or more reagents, or when many complexes are formed with general formulae such as ApBqHr, the following general method can be used to calculate the pH of a solution. For example, with three reagents, each equilibrium pA + qB + rH ⇌ ApBqHr is characterized by an equilibrium constant, β: βpqr = [ApBqHr]/([A]p[B]q[H]r). Next, write down the mass-balance equations for each reagent: CA = [A] + Σ pβpqr[A]p[B]q[H]r; CB = [B] + Σ qβpqr[A]p[B]q[H]r; CH = [H] + Σ rβpqr[A]p[B]q[H]r. There are no approximations involved in these equations, except that each stability constant is defined as a quotient of concentrations, not activities. Much more complicated expressions are required if activities are to be used. There are three simultaneous equations in the three unknowns, [A], [B] and [H]. Because the equations are non-linear, and because concentrations may range over many powers of 10, the solution of these equations is not straightforward. However, many computer programs are available which can be used to perform these calculations. There may be more than three reagents. The calculation of hydrogen ion concentrations, using this approach, is a key element in the determination of equilibrium constants by potentiometric titration.
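The benzoic acid example above is easy to verify numerically. The following Python sketch (an illustrative aid; the function name is an assumption, not an established API) solves the quadratic [H]2 + Ka[H] − KaCa = 0 for a monoprotic weak acid, neglecting the self-ionization of water, and reproduces pH = 3.11 for 0.01 M benzoic acid:

import math

def weak_acid_ph(pka, ca):
    """pH of a monoprotic weak acid, neglecting water self-ionization.

    Solves [H]^2 + Ka*[H] - Ka*Ca = 0 for the positive root, where
    Ka = 10**(-pKa) and Ca is the analytical acid concentration (mol/L).
    """
    ka = 10.0 ** (-pka)
    # positive root of x^2 + b*x + c = 0 with b = Ka and c = -Ka*Ca
    h = (-ka + math.sqrt(ka * ka + 4.0 * ka * ca)) / 2.0
    return -math.log10(h)

print(round(weak_acid_ph(4.19, 0.01), 2))  # 3.11 -> 0.01 M benzoic acid

The approximation is adequate whenever the acid contributes far more H+ than the roughly 10−7 mol/L from water itself; for very dilute or very weak acids, the Kw term must be retained, which turns the quadratic into a cubic, as noted above.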
Pastel
A pastel is an art medium that consists of powdered pigment and a binder. It can exist in a variety of forms, including a stick, a square, a pebble, and a pan of color, among others. The pigments used in pastels are similar to those used to produce some other colored visual arts media, such as oil paints; the binder is of a neutral hue and low saturation. The color effect of pastels is closer to the natural dry pigments than that of any other process. Pastels have been used by artists since the Renaissance, and gained considerable popularity in the 18th century, when a number of notable artists made pastel their primary medium. An artwork made using pastels is called a pastel (or a pastel drawing or pastel painting). Pastel used as a verb means to produce an artwork with pastels; as an adjective it means pale in color. Pastel media Pastel sticks or crayons consist of powdered pigment combined with a binder. The exact composition and characteristics of an individual pastel stick depend on the type of pastel and the type and amount of binder used; they also vary by individual manufacturer. Dry pastels have historically used binders such as gum arabic and gum tragacanth. Methyl cellulose was introduced as a binder in the 20th century. Often a chalk or gypsum component is present. They are available in varying degrees of hardness, the softer varieties being wrapped in paper. Some pastel brands use pumice in the binder to abrade the paper and create more tooth. Dry pastel media can be subdivided as follows: Soft pastels: This is the most widely used form of pastel. The sticks have a higher portion of pigment and less binder. The drawing can be readily smudged and blended, but it results in a higher proportion of dust. Finished drawings made with soft pastels require protecting, either framing under glass or spraying with a fixative to prevent smudging, although fixatives may affect the color or texture of the drawing. Although possible, using hair spray as a fixative is not recommended, as it might not be pH neutral and might contain non-archival ingredients. White chalk may be used as a filler in producing pale and bright hues with greater luminosity. Pan pastels: These are formulated with a minimum of binder in flat compacts (similar to some makeup) and applied with special soft micropore sponge tools. No liquid is involved. A 21st-century invention, pan pastels can be used for the entire painting or in combination with soft and hard sticks. Hard pastels: These have a higher portion of binder and less pigment, producing a sharp drawing material that is useful for fine details. These can be used with other pastels for drawing outlines and adding accents. Hard pastels are traditionally used for the preliminary sketching out of a composition. However, the colors are less brilliant and are available in a restricted range in contrast to soft pastels. Pastel pencils: These are pencils with a pastel lead. They are useful for adding fine details. In addition, pastels using a different approach to manufacture have been developed: Oil pastels: These have a soft, buttery consistency and intense colors. They are dense and fill the grain of paper, and are slightly more difficult to blend than soft pastels, but do not require a fixative. They may be spread across the work surface by thinning with turpentine. Water-soluble pastels: These are similar to soft pastels, but contain a water-soluble component, such as polyethylene glycol. 
This allows the colors to be thinned out to an even, semi-transparent consistency using a water wash. Water-soluble pastels are made in a restricted range of hues in strong colors. They have the advantages of enabling easy blending and mixing of the hues, given their fluidity, as well as allowing a range of color tint effects depending upon the amount of water applied with a brush to the working surface. There has been some debate within art societies as to what exactly qualifies as a pastel. The Pastel Society within the UK (the oldest pastel society) states the following are acceptable media for its exhibitions: "Pastels, including Oil pastel, Charcoal, Pencil, Conté, Sanguine, or any dry media". The emphasis appears to be on "dry media" but the debate continues. Manufacture In order to create hard and soft pastels, pigments are ground into a paste with water and a gum binder and then rolled, pressed or extruded into sticks. The name pastel is derived from Medieval Latin "woad paste," from Late Latin "paste." The French word pastel first appeared in 1662. Most brands produce gradations of a color, the original pigment of which tends to be dark, from pure pigment to near-white by mixing in differing quantities of chalk. This mixing of pigments with chalks is the origin of the word pastel in reference to pale color as it is commonly used in cosmetic and fashion contexts. A pastel is made by letting the sticks move over an abrasive ground, leaving color on the grain of the painting surface. When fully covered with pastel, the work is called a pastel painting; when not, a pastel sketch or drawing. Pastel paintings, being made with a medium that has the highest pigment concentration of all, reflect light without darkening refraction, allowing for very saturated colors. Pastel supports Pastel supports need to provide a "tooth" for the pastel to adhere and hold the pigment in place. Supports include: laid paper (e.g. Ingres, Canson Mi Teintes) abrasive supports (e.g. with a surface of finely ground pumice, marble dust, or rottenstone) velour paper (e.g. Hannemühle Pastellpapier Velour) suitable for use with soft pastels is a composite of synthetic fibers attached to acid-free backing Protection of pastel paintings Pastels can be used to produce a permanent painting if the artist meets appropriate archival considerations. This means: Only pastels with lightfast pigments are used. As it is not protected by a binder the pigment in pastels is especially vulnerable to light. Pastel paintings made with pigments that change color or tone when exposed to light suffer comparable problems to gouache paintings using the same pigments. Works are done on an acid-free archival quality support. Historically some works have been executed on supports which are now extremely fragile and the support rather than the pigment needs to be protected under glass and away from light. Works are properly mounted and framed under glass so that the glass does not touch the artwork. This prevents the deterioration which is associated with environmental hazards such as air quality, humidity, mildew problems associated with condensation and smudging. Plexiglas is not used as it will create static electricity and dislodge the particles of pastel pigment. Some artists protect their finished pieces by spraying them with a fixative. A pastel fixative is an aerosol varnish which can be used to help stabilize the small charcoal or pastel particles on a painting or drawing. 
It cannot prevent smearing entirely without dulling and darkening the bright and fresh colors of pastels. The use of hairspray as a fixative is generally not recommended as it is not acid-free and therefore can degrade the artwork in the long term. Traditional fixatives will discolor eventually. For these reasons, some pastelists avoid the use of a fixative except in cases where the pastel has been overworked so much that the surface will no longer hold any more pastel. The fixative will restore the "tooth" and more pastel can be applied on top. It is the tooth of the painting surface that holds the pastels, not a fixative. Abrasive supports avoid or minimize the need to apply further fixative in this way. SpectraFix, a modern casein fixative available premixed in a pump misting bottle or as concentrate to be mixed with alcohol, is not toxic and does not darken or dull pastel colors. However, SpectraFix takes some practice to use because it's applied with a pump misting bottle instead of an aerosol spray can. It is easy to use too much SpectraFix and leave puddles of liquid that may dissolve passages of color; also it takes a little longer to dry than conventional spray fixatives between light layers. Glassine (paper) is used by artists to protect artwork which is being stored or transported. Some good quality books of pastel papers also include glassine to separate pages. Techniques Pastel techniques can be challenging since the medium is mixed and blended directly on the working surface, and unlike paint, colors cannot be tested on a palette before applying to the surface. Pastel errors cannot be covered the way a paint error can be painted out. Experimentation with the pastel medium on a small scale in order to learn various techniques gives the user a better command over a larger composition. Pastels have some techniques in common with painting, such as blending, masking, building up layers of color, adding accents and highlighting, and shading. Some techniques are characteristic of both pastels and sketching mediums such as charcoal and lead, for example, hatching and crosshatching, and gradation. Other techniques are particular to the pastel medium. Colored grounds: the use of a colored working surface to produce an effect such as a softening of the pastel hues, or a contrast Dry wash: coverage of a large area using the broad side of the pastel stick. A cotton ball, paper towel, or brush may be used to spread the pigment more thinly and evenly. Erasure: lifting of pigment from an area using a kneaded eraser or other tool Feathering Frottage Impasto: pastel applied thickly enough to produce a discernible texture or relief Pouncing Resist techniques Scraping out Scumbling Sfumato Sgraffito Stippling Textured grounds: the use of coarse or smooth paper texture to create an effect, a technique also often used in watercolor painting Wet brushing Health and safety hazards Pastels are a dry medium and produce a great deal of dust, which can cause respiratory irritation. More seriously, pastels might use the same pigments as artists' paints, many of which are toxic. For example, exposure to cadmium pigments, which are common and popular bright yellows, oranges, and reds, can lead to cadmium poisoning. Pastel artists, who use the pigments without a strong painting binder, are especially susceptible to such poisoning. For this reason, many modern pastels are made using substitutions for cadmium, chromium, and other toxic pigments, while retaining the traditional pigment names. 
All brands that have the AP Label by ASTM International are not considered toxic, and they might use extremely insoluble varieties of cadmium or cobalt pigments that will not be readily absorbed by the human body. Although less toxic when swallowed, they should still be treated with care. Pastel art in art history The manufacture of pastels originated in the 15th century. The pastel medium was mentioned by Leonardo da Vinci, who learned of it from the French artist Jean Perréal after that artist's arrival in Milan in 1499. Pastel was sometimes used as a medium for preparatory studies by 16th-century artists, notably Federico Barocci. The first French artist to specialize in pastel portraits was Joseph Vivien. During the 18th century the medium became fashionable for portrait painting, sometimes in a mixed technique with gouache. Pastel was an important medium for artists such as Jean-Baptiste Perronneau, Maurice Quentin de La Tour (who never painted in oils), and Rosalba Carriera. The pastel still life paintings and portraits of Jean-Baptiste-Siméon Chardin are much admired, as are the works of the Swiss-French artist Jean-Étienne Liotard. In 18th-century England the outstanding practitioner was John Russell. In Colonial America, John Singleton Copley used pastel occasionally for portraits. In France, pastel briefly became unpopular during and after the Revolution, as the medium was identified with the frivolity of the Ancien Régime. By the mid-19th century, French artists such as Eugène Delacroix and especially Jean-François Millet were again making significant use of pastel. Their countryman Édouard Manet painted a number of portraits in pastel on canvas, an unconventional ground for the medium. Edgar Degas was an innovator in pastel technique, and used it with an almost expressionist vigor after about 1885, when it became his primary medium. Odilon Redon produced a large body of works in pastel. James Abbott McNeill Whistler produced a quantity of pastels around 1880, including a body of work relating to Venice, and this probably contributed to a growing enthusiasm for the medium in the United States. In particular, he demonstrated how few strokes were required to evoke a place or an atmosphere. Mary Cassatt, an American artist active in France, introduced the Impressionists and pastel to her friends in Philadelphia and Washington. According to the Metropolitan Museum of Art's Time Line of Art History: Nineteenth Century American Drawings: On the East Coast of the United States, the Society of Painters in Pastel was founded in 1883 by William Merritt Chase, Robert Blum, and others. The Pastellists, led by Leon Dabo, was organized in New York in late 1910 and included among its ranks Everett Shinn and Arthur Bowen Davies. On the American West Coast the influential artist and teacher Pedro Joseph de Lemos, who served as Chief Administrator of the San Francisco Art Institute and Director of the Stanford University Museum and Art Gallery, popularized pastels in regional exhibitions. Beginning in 1919 de Lemos published a series of articles on "painting" with pastels, which included such notable innovations as allowing the intensity of light on the subject to determine the distinct color of laid paper and the use of special optics for making "night sketches" in both urban and rural settings. His night scenes, which were often called "dreamscapes" in the press, were influenced by French Symbolism, and especially Odilon Redon. 
Pastels have been favored by many modern and contemporary artists because of the medium's broad range of bright colors. Recent notable artists who have worked extensively in pastels include Fernando Botero, Francesco Clemente, Daniel Greene, Wolf Kahn, Paula Rego and R. B. Kitaj.
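The gradation process described under Manufacture above, in which a pure pigment is tinted toward near-white by adding increasing amounts of chalk, can be sketched numerically. The snippet below is purely illustrative: the linear RGB mixing model and the starting color are invented for this example and are not how manufacturers actually formulate their tint series.

```python
# Illustrative only: approximate a pastel gradation by mixing a pigment color
# with increasing amounts of white, as a crude stand-in for adding chalk.
# The RGB values and the linear mixing model are invented for this sketch.

def mix_with_white(rgb, white_fraction):
    """Linearly interpolate an RGB color toward white (255, 255, 255)."""
    return tuple(round(c + (255 - c) * white_fraction) for c in rgb)

ultramarine_ish = (18, 10, 143)   # made-up "pure pigment" color

# From pure pigment toward near-white, like a manufacturer's tint series
for step in range(5):
    fraction = step / 4            # 0.0, 0.25, 0.5, 0.75, 1.0
    print(f"{fraction:.2f} white -> RGB {mix_with_white(ultramarine_ish, fraction)}")
```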
Technology
Artist's tools
null
24543
https://en.wikipedia.org/wiki/Phenotype
Phenotype
In genetics, the phenotype () is the set of observable characteristics or traits of an organism. The term covers the organism's morphology (physical form and structure), its developmental processes, its biochemical and physiological properties, its behavior, and the products of behavior. An organism's phenotype results from two basic factors: the expression of an organism's genetic code (its genotype) and the influence of environmental factors. Both factors may interact, further affecting the phenotype. When two or more clearly different phenotypes exist in the same population of a species, the species is called polymorphic. A well-documented example of polymorphism is Labrador Retriever coloring; while the coat color depends on many genes, it is clearly seen in the environment as yellow, black, and brown. Richard Dawkins in 1978 and then again in his 1982 book The Extended Phenotype suggested that one can regard bird nests and other built structures such as caddisfly larva cases and beaver dams as "extended phenotypes". Wilhelm Johannsen proposed the genotype–phenotype distinction in 1911 to make clear the difference between an organism's hereditary material and what that hereditary material produces. The distinction resembles that proposed by August Weismann (1834–1914), who distinguished between germ plasm (heredity) and somatic cells (the body). More recently, in The Selfish Gene (1976), Dawkins distinguished these concepts as replicators and vehicles. Definition Despite its seemingly straightforward definition, the concept of the phenotype has hidden subtleties. It may seem that anything dependent on the genotype is a phenotype, including molecules such as RNA and proteins. Most molecules and structures coded by the genetic material are not visible in the appearance of an organism, yet they are observable (for example by Western blotting) and are thus part of the phenotype; human blood groups are an example. It may seem that this goes beyond the original intentions of the concept with its focus on the (living) organism in itself. Either way, the term phenotype includes inherent traits or characteristics that are observable or traits that can be made visible by some technical procedure. The term "phenotype" has sometimes been incorrectly used as a shorthand for the phenotypic difference between a mutant and its wild type, which would lead to the false statement that a "mutation has no phenotype". Behaviors and their consequences are also phenotypes, since behaviors are observable characteristics. Behavioral phenotypes include cognitive, personality, and behavioral patterns. Some behavioral phenotypes may characterize psychiatric disorders or syndromes. A phenome is the set of all traits expressed by a cell, tissue, organ, organism, or species. The term was first used by Davis in 1949, "We here propose the name phenome for the sum total of extragenic, non-autoreproductive portions of the cell, whether cytoplasmic or nuclear. The phenome would be the material basis of the phenotype, just as the genome is the material basis of the genotype." Although phenome has been in use for many years, the distinction between the use of phenome and phenotype is problematic. 
A proposed definition for both terms as the "physical totality of all traits of an organism or of one of its subsystems" was put forth by Mahner and Kary in 1997, who argue that although scientists tend to intuitively use these and related terms in a manner that does not impede research, the terms are not well defined and usage of the terms is not consistent. Some usages of the term suggest that the phenome of a given organism is best understood as a kind of matrix of data representing physical manifestation of phenotype. For example, discussions led by A. Varki among those who had used the term up to 2003 suggested the following definition: "The body of information describing an organism's phenotypes, under the influences of genetic and environmental factors". Another team of researchers characterize "the human phenome [as] a multidimensional search space with several neurobiological levels, spanning the proteome, cellular systems (e.g., signaling pathways), neural systems and cognitive and behavioural phenotypes." Plant biologists have started to explore the phenome in the study of plant physiology. In 2009, a research team demonstrated the feasibility of identifying genotype–phenotype associations using electronic health records (EHRs) linked to DNA biobanks. They called this method phenome-wide association study (PheWAS). Inspired by the evolution from genotype to genome to pan-genome, a concept of exploring the relationship ultimately among pan-phenome, pan-genome, and pan-envirome was proposed in 2023. Phenotypic variation Phenotypic variation (due to underlying heritable genetic variation) is a fundamental prerequisite for evolution by natural selection. It is the living organism as a whole that contributes (or not) to the next generation, so natural selection affects the genetic structure of a population indirectly via the contribution of phenotypes. Without phenotypic variation, there would be no evolution by natural selection. The interaction between genotype and phenotype has often been conceptualized by the following relationship: genotype (G) + environment (E) → phenotype (P) A more nuanced version of the relationship is: genotype (G) + environment (E) + genotype & environment interactions (GE) → phenotype (P) Genotypes often have much flexibility in the modification and expression of phenotypes; in many organisms these phenotypes are very different under varying environmental conditions. The plant Hieracium umbellatum is found growing in two different habitats in Sweden. One habitat is rocky, sea-side cliffs, where the plants are bushy with broad leaves and expanded inflorescences; the other is among sand dunes where the plants grow prostrate with narrow leaves and compact inflorescences. These habitats alternate along the coast of Sweden and the habitat that the seeds of Hieracium umbellatum land in, determine the phenotype that grows. An example of random variation in Drosophila flies is the number of ommatidia, which may vary (randomly) between left and right eyes in a single individual as much as they do between different genotypes overall, or between clones raised in different environments. The concept of phenotype can be extended to variations below the level of the gene that affect an organism's fitness. For example, silent mutations that do not change the corresponding amino acid sequence of a gene may change the frequency of guanine-cytosine base pairs (GC content). 
These base pairs have a higher thermal stability (melting point) than adenine-thymine, a property that might convey, among organisms living in high-temperature environments, a selective advantage on variants enriched in GC content. The extended phenotype Richard Dawkins described a phenotype that included all effects that a gene has on its surroundings, including other organisms, as an extended phenotype, arguing that "An animal's behavior tends to maximize the survival of the genes 'for' that behavior, whether or not those genes happen to be in the body of the particular animal performing it." For instance, an organism such as a beaver modifies its environment by building a beaver dam; this can be considered an expression of its genes, just as its incisor teeth are—which it uses to modify its environment. Similarly, when a bird feeds a brood parasite such as a cuckoo, it is unwittingly extending its phenotype; and when genes in an orchid affect orchid bee behavior to increase pollination, or when genes in a peacock affect the copulatory decisions of peahens, again, the phenotype is being extended. Genes are, in Dawkins's view, selected by their phenotypic effects. Other biologists broadly agree that the extended phenotype concept is relevant, but consider that its role is largely explanatory, rather than assisting in the design of experimental tests. Genes and phenotypes Phenotypes are determined by an interaction of genes and the environment, but the mechanism for each gene and phenotype is different. For instance, an albino phenotype may be caused by a mutation in the gene encoding tyrosinase which is a key enzyme in melanin formation. However, exposure to UV radiation can increase melanin production, hence the environment plays a role in this phenotype as well. For most complex phenotypes the precise genetic mechanism remains unknown. For instance, it is largely unclear how genes determine the shape of bones or the human ear. Gene expression plays a crucial role in determining the phenotypes of organisms. The level of gene expression can affect the phenotype of an organism. For example, if a gene that codes for a particular enzyme is expressed at high levels, the organism may produce more of that enzyme and exhibit a particular trait as a result. On the other hand, if the gene is expressed at low levels, the organism may produce less of the enzyme and exhibit a different trait. Gene expression is regulated at various levels and thus each level can affect certain phenotypes, including transcriptional and post-transcriptional regulation. Changes in the levels of gene expression can be influenced by a variety of factors, such as environmental conditions, genetic variations, and epigenetic modifications. These modifications can be influenced by environmental factors such as diet, stress, and exposure to toxins, and can have a significant impact on an individual's phenotype. Some phenotypes may be the result of changes in gene expression due to these factors, rather than changes in genotype. An experiment involving machine learning methods utilizing gene expressions measured from RNA sequencing found that they can contain enough signal to separate individuals in the context of phenotype prediction. Phenome and phenomics Although a phenotype is the ensemble of observable characteristics displayed by an organism, the word phenome is sometimes used to refer to a collection of traits, while the simultaneous study of such a collection is referred to as phenomics. 
Phenomics is an important field of study because it can be used to figure out which genomic variants affect phenotypes which then can be used to explain things like health, disease, and evolutionary fitness. Phenomics forms a large part of the Human Genome Project. Phenomics has applications in agriculture. For instance, genomic variations such as drought and heat resistance can be identified through phenomics to create more durable GMOs. Phenomics may be a stepping stone towards personalized medicine, particularly drug therapy. Once the phenomic database has acquired enough data, a person's phenomic information can be used to select specific drugs tailored to the individual. Large-scale phenotyping and genetic screens Large-scale genetic screens can identify the genes or mutations that affect the phenotype of an organism. Analyzing the phenotypes of mutant genes can also aid in determining gene function. Most genetic screens have used microorganisms, in which genes can be easily deleted. For instance, nearly all genes have been deleted in E. coli and many other bacteria, but also in several eukaryotic model organisms such as baker's yeast and fission yeast. Among other discoveries, such studies have revealed lists of essential genes . More recently, large-scale phenotypic screens have also been used in animals, e.g. to study lesser understood phenotypes such as behavior. In one screen, the role of mutations in mice were studied in areas such as learning and memory, circadian rhythmicity, vision, responses to stress and response to psychostimulants. This experiment involved the progeny of mice treated with ENU, or N-ethyl-N-nitrosourea, which is a potent mutagen that causes point mutations. The mice were phenotypically screened for alterations in the different behavioral domains in order to find the number of putative mutants (see table for details). Putative mutants are then tested for heritability in order to help determine the inheritance pattern as well as map out the mutations. Once they have been mapped out, cloned, and identified, it can be determined whether a mutation represents a new gene or not. These experiments showed that mutations in the rhodopsin gene affected vision and can even cause retinal degeneration in mice. The same amino acid change causes human familial blindness, showing how phenotyping in animals can inform medical diagnostics and possibly therapy. Evolutionary origin of phenotype The RNA world is the hypothesized pre-cellular stage in the evolutionary history of life on earth, in which self-replicating RNA molecules proliferated prior to the evolution of DNA and proteins. The folded three-dimensional physical structure of the first RNA molecule that possessed ribozyme activity promoting replication while avoiding destruction would have been the first phenotype, and the nucleotide sequence of the first self-replicating RNA molecule would have been the original genotype.
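The earlier point about silent mutations changing GC content without changing the encoded protein can be made concrete with a short sketch. The following fragment is illustrative only: the two six-base sequences and the tiny codon table are invented for the example, and a real analysis would use a full translation table.

```python
# Illustrative sketch: a synonymous (silent) mutation can alter GC content
# while leaving the encoded amino acid sequence unchanged.
# The sequences and the partial codon table below are invented for demonstration.

CODON_TABLE = {
    "GGT": "Gly", "GGC": "Gly",  # synonymous codons for glycine
    "CCA": "Pro",                # proline
}

def gc_content(seq: str) -> float:
    """Fraction of bases in seq that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def translate(seq: str) -> list:
    """Translate a DNA sequence (length divisible by 3) codon by codon."""
    return [CODON_TABLE[seq[i:i + 3]] for i in range(0, len(seq), 3)]

wild_type = "GGTCCA"   # Gly-Pro
mutant    = "GGCCCA"   # Gly-Pro as well; only the third base differs (T -> C)

assert translate(wild_type) == translate(mutant)  # same protein
print(f"wild type GC content: {gc_content(wild_type):.2f}")  # 0.67
print(f"mutant GC content:    {gc_content(mutant):.2f}")     # 0.83
```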
Biology and health sciences
Genetics
Biology
24544
https://en.wikipedia.org/wiki/Photosynthesis
Photosynthesis
Photosynthesis ( ) is a system of biological processes by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, typically from sunlight, into the chemical energy necessary to fuel their metabolism. Photosynthesis usually refers to oxygenic photosynthesis, a process that produces oxygen. Photosynthetic organisms store the chemical energy so produced within intracellular organic compounds (compounds containing carbon) like sugars, glycogen, cellulose and starches. To use this stored chemical energy, an organism's cells metabolize the organic compounds through cellular respiration. Photosynthesis plays a critical role in producing and maintaining the oxygen content of the Earth's atmosphere, and it supplies most of the biological energy necessary for complex life on Earth. Some bacteria also perform anoxygenic photosynthesis, which uses bacteriochlorophyll to split hydrogen sulfide as a reductant instead of water, producing sulfur instead of oxygen. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, where the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and power proton pumps to directly synthesize adenosine triphosphate (ATP), the "energy currency" of cells. Such archaeal photosynthesis might have been the earliest form of photosynthesis that evolved on Earth, as far back as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis). While the details may differ between species, the process always begins when light energy is absorbed by the reaction centers, proteins that contain photosynthetic pigments or chromophores. In plants, these pigments are chlorophylls (a porphyrin derivative that absorbs the red and blue spectrums of light, thus reflecting green) held inside chloroplasts, abundant in leaf cells. In bacteria, they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two important molecules that participate in energetic processes: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ATP. In plants, algae, and cyanobacteria, sugars are synthesized by a subsequent sequence of reactions called the Calvin cycle. In this process, atmospheric carbon dioxide is incorporated into already existing organic compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms like the reverse Krebs cycle are used to achieve the same end. The first photosynthetic organisms probably evolved early in the evolutionary history of life using reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. The average rate of energy captured by global photosynthesis is approximately 130 terawatts, which is about eight times the total power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91–104 Pg petagrams, or billions of metric tons), of carbon into biomass per year. 
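As a rough sanity check on the figures just quoted, the arithmetic can be written out explicitly. The world power-consumption value used below (about 16 TW) is an assumption inferred from the "about eight times" statement in the text, not a figure given there.

```python
# Back-of-envelope check of the global photosynthesis figures quoted above.
# Assumption (not from the text): world primary power consumption ~16 TW.

photosynthesis_power_tw = 130          # average rate captured by photosynthesis, TW
assumed_human_power_tw = 16            # assumed for illustration

print(f"ratio: {photosynthesis_power_tw / assumed_human_power_tw:.1f}x")  # ~8x

# Converting the quoted carbon flux: 1 petagram (Pg) = 1e15 g = 1e9 metric tons,
# so 91-104 Pg of carbon per year is 91-104 billion metric tons per year.
for pg in (91, 104):
    tonnes = pg * 1e15 / 1e6           # grams -> metric tons (1 t = 1e6 g)
    print(f"{pg} Pg = {tonnes:.2e} t = {tonnes / 1e9:.0f} billion metric tons")
```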
Photosynthesis was discovered in 1779 by Jan Ingenhousz, who showed that plants need light, not just soil and water. Overview Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon. In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen. Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism. Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments (cellular respiration in mitochondria). The general equation for photosynthesis as first proposed by Cornelis van Niel is: n CO2 + 2n H2A + photons → (CH2O)n + 2n A + n H2O, where H2A is the electron (hydrogen) donor. Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is: n CO2 + 2n H2O + photons → (CH2O)n + n O2 + n H2O. This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling n water molecules from each side gives the net equation: n CO2 + n H2O + photons → (CH2O)n + n O2. Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example some microbes use sunlight to oxidize arsenite to arsenate. The equation for this reaction is: CO2 + AsO3^3− + photons → AsO4^3− + CO (carbon monoxide, which is used to build other compounds in subsequent reactions). Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide. Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation. Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump.
This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis. Photosynthetic membranes and organelles In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb. In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system. Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected, which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors. These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex. Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place. Light-dependent reactions In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. 
In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen. The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is: Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms. Z scheme In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic. In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram "Z-scheme"). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with an H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends. The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction. Water photolysis Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. 
The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms. Light-independent reactions Calvin cycle In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars photosynthesis produces are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain. The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (five out of six molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch, and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids. Carbon concentrating mechanisms On land In hot and dry conditions, plants close their stomata to prevent water loss. 
Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and a decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions. Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity into the three-carbon 3-phosphoglyceric acid. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over sixty plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon-concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to C4 and a useful carbon-concentrating mechanism in its own right. Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which spatially separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants. Calcium-oxalate-accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g., water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of the oxalate oxidase reaction, can be neutralized by catalase. Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways.
However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere. In water Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved hydrocarbonate ions (HCO3−). Before the CO2 can diffuse out, RuBisCO concentrated within the carboxysome quickly sponges it up. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell, from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around RuBisCO. Order and kinetics The overall process of photosynthesis takes place in four stages. Efficiency Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%. Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) reemitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers. Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature, and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices. Scientists are studying photosynthesis in hopes of developing plants with increased yield. The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the light reaction creates ATP and NADPH energy molecules, which C3 plants can use for carbon fixation or photorespiration. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions. Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. An integrated chlorophyll fluorometer and gas exchange system can investigate both light and dark reactions when researchers use the two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2 and of ΔH2O using reliable methods. CO2 is commonly measured in μmol/(m2·s), parts per million, or volume per million; and H2O is commonly measured in mmol/(m2·s) or in mbar. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and "Ci" or intercellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations.
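To make the gas-exchange quantities described above concrete, here is a minimal sketch of how raw chamber readings might be turned into estimates of net assimilation (A) and transpiration (E). It is a simplified illustration under stated assumptions: the function names and example readings are invented, and real instruments apply further corrections (for example for water-vapor dilution and boundary-layer effects).

```python
# Simplified open-system gas-exchange calculation (illustrative only).
# Real instruments apply further corrections; all values below are invented.

def assimilation_rate(flow_mol_s, co2_in_umol_mol, co2_out_umol_mol, leaf_area_m2):
    """Net CO2 assimilation A in umol CO2 m-2 s-1 (dilution effects ignored)."""
    return flow_mol_s * (co2_in_umol_mol - co2_out_umol_mol) / leaf_area_m2

def transpiration_rate(flow_mol_s, h2o_out_mmol_mol, h2o_in_mmol_mol, leaf_area_m2):
    """Transpiration E in mmol H2O m-2 s-1 (dilution effects ignored)."""
    return flow_mol_s * (h2o_out_mmol_mol - h2o_in_mmol_mol) / leaf_area_m2

# Invented example readings for a small leaf chamber
flow = 0.0005        # air flow through the chamber, mol s-1 (about 500 umol s-1)
co2_in, co2_out = 400.0, 385.0    # umol CO2 per mol air entering / leaving
h2o_in, h2o_out = 10.0, 12.5      # mmol H2O per mol air entering / leaving
leaf_area = 0.0006   # m2 (6 cm2 of enclosed leaf)

A = assimilation_rate(flow, co2_in, co2_out, leaf_area)
E = transpiration_rate(flow, h2o_out, h2o_in, leaf_area)
print(f"A = {A:.1f} umol CO2 m-2 s-1")   # 12.5
print(f"E = {E:.2f} mmol H2O m-2 s-1")   # ~2.08
```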
Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response. Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC, the estimation of CO2 concentration at the site of carboxylation in the chloroplast, to replace Ci. CO2 concentration in the chloroplast becomes possible to estimate with the measurement of mesophyll conductance or gm using an integrated system. Photosynthesis measurement systems are not designed to directly measure the amount of light the leaf absorbs, but analysis of chlorophyll fluorescence, P700- and P515-absorbance, and gas exchange measurements reveal detailed information about, e.g., the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even wavelength dependency of the photosynthetic efficiency can be analyzed. A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure called a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time. Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks. Evolution Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago, though the first direct evidence of photosynthesis comes from thylakoid membranes preserved in 1.75-billion-year-old cherts. Oxygenic photosynthesis is the main source of oxygen in the Earth's atmosphere, and its earliest appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around two billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center. 
Symbiosis and the origin of chloroplasts Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges, and sea anemones. Scientists presume that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles. Photosynthetic eukaryotic lineages Symbiotic and kleptoplastic organisms excluded: The glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular) The cryptophytes—clade Cryptista (unicellular) The haptophytes—clade Haptista (unicellular) The dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular) The ochrophytes—clade Stramenopila (uni- and multicellular) The chlorarachniophytes and three species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular) The euglenids—clade Excavata (unicellular) Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lines by four. A nucleomorph, remnants of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga). Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae. 
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees. Photosynthetic prokaryotic lineages Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time. With a possible exception of Heimdallarchaeota, photosynthesis is not found in archaea. Haloarchaea are photoheterotrophic; they can absorb energy from the sun, but do not harvest carbon from the atmosphere and are therefore not photosynthetic. Instead of chlorophyll they use rhodopsins, which convert light-energy to ion gradients but cannot mediate electron transfer reactions. In bacteria eight photosynthetic lineages are currently known: Cyanobacteria, the only prokaryotes performing oxygenic photosynthesis and the only prokaryotes that contain two types of photosystems (type I (RCI), also known as Fe-S type, and type II (RCII), also known as quinone type). The seven remaining prokaryotes have anoxygenic photosynthesis and use versions of either type I or type II. Chlorobi (green sulfur bacteria) Type I Heliobacteria Type I Chloracidobacterium Type I Proteobacteria (purple sulfur bacteria and purple non-sulfur bacteria) Type II (see: Purple bacteria) Chloroflexota (green non-sulfur bacteria) Type II Gemmatimonadota Type II Eremiobacterota Type II Cyanobacteria and the evolution of photosynthesis The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae). The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. 
Experimental history
Discovery
Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century. Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil a plant was using and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, it pointed toward the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, long before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that a plant could restore the air the candle and the mouse had "injured."
In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours. In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which organisms use photosynthesis to produce food (such as glucose) was outlined.
Refinements
Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide.
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With red light alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing wavelengths up to 600 nm, the other up to 700 nm. The former is known as PSII, the latter as PSI. PSI contains only chlorophyll "a"; PSII contains primarily chlorophyll "a" together with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry.
Robert Hill thought that a complex of reactions consisted of an intermediate to cytochrome b6 (now a plastoquinone) and that another ran from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939.
He showed that isolated chloroplasts give off oxygen in the presence of non-biological electron acceptors such as iron oxalate, ferricyanide, or benzoquinone after exposure to light. In the Hill reaction:
2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2
A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water.
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) cycle.
Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain. Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by respiration. In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation. In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of 32P. Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" will absorb light of one wavelength and oxidize cytochrome f, while chlorophyll "a" (and other pigments) will absorb light of another wavelength and reduce this same oxidized cytochrome, showing that the two light reactions operate in series.
Development of the concept
In 1893, the American botanist Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while the other word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. The later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
C3 : C4 photosynthesis research
In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the alga Chlorella in a fraction of a second in light resulted in a three-carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber, where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, with the conclusion that all terrestrial plants have the same photosynthetic capacities, which are light-saturated at less than 50% of full sunlight. Later, in 1958–1963 at Cornell University, field-grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and not to be saturated at near full sunlight.
This higher rate in maize was almost double that observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass, and in the dicot amaranthus, leaf photosynthetic rates were around 38–40 μmol CO2·m−2·s−1, and the leaves have two types of green cells: an outer layer of mesophyll cells surrounding tightly packed chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, a very low CO2 compensation point, a high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates that never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986. These species were later termed C4 plants, as the first stable compound of CO2 fixation in light has four carbons (malate and aspartate). Other species that lack Kranz anatomy, such as cotton and sunflower, were termed C3 type, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in the measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants.
Factors
There are four main factors influencing photosynthesis and several corollary factors. The four main ones are:
Light irradiance and wavelength
Water absorption
Carbon dioxide concentration
Temperature
Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Light intensity (irradiance), wavelength and temperature
The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life. The radiation climate within plant communities is extremely variable, in both time and space.
In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation. At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance. At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased. These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature.
However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation: the light-dependent, temperature-independent "photochemical" stage and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, cyanobacteria have a light-harvesting complex called a phycobilisome. This complex is made up of a series of proteins with different pigments which surround the reaction center.
Carbon dioxide levels and photorespiration
As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy but does not produce sugars.
RuBisCO oxygenase activity is disadvantageous to plants for several reasons:
One product of oxygenase activity is phosphoglycolate (2 carbons) instead of 3-phosphoglycerate (3 carbons). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle.
Phosphoglycolate is quickly metabolized to glycolate, which is toxic to a plant at high concentration; it inhibits photosynthesis.
Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen.
A highly simplified summary is:
2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3
The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
Biology and health sciences
Biology
null
24553
https://en.wikipedia.org/wiki/Protein%20biosynthesis
Protein biosynthesis
Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes, but there are some distinct differences.
Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain.
Following translation, the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins.
Protein biosynthesis has a key role in disease, as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA-encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a premature stop codon, which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease.
Transcription
Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required, so the mature mRNA molecule is immediately produced by transcription.
Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double-helix structure composed of two complementary polynucleotide strands, held together by hydrogen bonds between the base pairs.
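The complementarity just described can be made concrete with a short sketch: given one strand, the partner strand is obtained by complementing each base (A with T, G with C) and reading in the reverse direction, because the strands are antiparallel. This is a minimal sketch only; the sequence is hypothetical.

```python
# A minimal sketch of DNA strand complementarity (hypothetical sequence).
# Each base pairs with its complement (A-T, G-C); because the strands are
# antiparallel, the partner strand is read in the reverse direction.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel complementary strand, written 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

coding_strand = "ATGGCTTGCAAT"          # hypothetical 5'->3' coding sequence
template_strand = reverse_complement(coding_strand)

print(template_strand)  # ATTGCAAGCCAT: the partner strand, read 5'->3'
```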
The helicase disrupts the hydrogen bonds, causing a region of DNA (corresponding to a gene) to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand.
Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary template DNA strand runs in the opposite direction, from 3' to 5'.
The enzyme RNA polymerase binds to the exposed template strand and reads the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5' to 3' direction by catalyzing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase, the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism, which allows it to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, RNA polymerase detaches and pre-mRNA synthesis is complete.
The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases guanine, cytosine, adenine and thymine (G, C, A and T); RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil, which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil.
Post-transcriptional modifications
Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are 3 key steps within post-transcriptional modifications:
Addition of a 5' cap to the 5' end of the pre-mRNA molecule
Addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule
Removal of introns via RNA splicing
The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation.
The purpose of the 5' cap is to prevent breakdown of mature mRNA molecules before translation; the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100–200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present.
This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons; introns are nucleotide sequences which do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule; therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus.
Translation
During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are either free-floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm.
Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5' to 3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70–80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNAs; each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid.
The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which is complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon, composed of the nucleotides AUG. The correct tRNA with the matching anticodon (the complementary three-nucleotide sequence UAC) binds to the mRNA within the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with the complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids.
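The decoding just described, reading the mRNA codon by codon from AUG and appending one amino acid per codon, can be summarized in a minimal sketch. The codon table below is a small hypothetical subset of the standard genetic code (a full table has 64 entries), and the mRNA sequence is invented for illustration.

```python
# A minimal sketch of ribosomal decoding: read the mRNA in triplets (codons)
# from the start codon AUG and append one amino acid per codon until a stop
# codon (UAA, UAG, or UGA, described below) is reached. Only a small subset
# of the standard genetic code is included; the mRNA is hypothetical.

GENETIC_CODE = {
    "AUG": "Met",  # start codon, methionine
    "UUU": "Phe", "GCU": "Ala", "GAA": "Glu", "GGU": "Gly",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna: str) -> list[str]:
    """Translate an mRNA string into a list of amino acids."""
    start = mrna.find("AUG")              # ribosome attaches at the start codon
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = GENETIC_CODE[mrna[i:i + 3]]
        if amino_acid is None:            # stop codon: release the chain
            break
        peptide.append(amino_acid)
    return peptide

print(translate("GGAUGGCUGAAUUUUAA"))  # ['Met', 'Ala', 'Glu', 'Phe']
```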
The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next complementary tRNA with the correct anticodon complementary to the third codon is selected, delivering the next amino acid to the ribosome, which is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule, adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule, forming a polysome; this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognize it, and a release factor induces the release of the complete polypeptide chain from the ribosome. Dr. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids. He was awarded the Nobel Prize in 1968, along with two other scientists, for his work.
Protein folding
Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain, i.e., a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function.
The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are known as the alpha helix and the beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide chain subunits, e.g. hemoglobin.
Post-translation events
There are events that follow protein biosynthesis, such as proteolysis and protein folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes.
Post-translational modifications
When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications.
There are over 200 known types of post-translational modification; these modifications can alter protein activity, the ability of the protein to interact with other proteins, and where the protein is found within the cell, e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude.
There are four key classes of post-translational modification:
Cleavage
Addition of chemical groups
Addition of complex molecules
Formation of intramolecular bonds
Cleavage
Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the protein's function; the protein can be inactivated or activated by the cleavage and can display new biological activities.
Addition of chemical groups
Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation.
Methylation is the reversible addition of a methyl group onto an amino acid, catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids; however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped around histones and held in place by other proteins and by interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed.
Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferases. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription.
The final prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid; this produces adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate.
Addition of complex molecules
Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be the most common post-translational modification.
In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferase enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein's binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins.
There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce a complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure.
Formation of covalent bonds
Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side-chain chemical groups containing a sulfur atom; these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum, catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm, as it is a reducing environment.
Role of protein synthesis in disease
Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein misfolding or malfunctioning. Mutations within a single gene have been identified as a cause of multiple diseases, known as single-gene disorders, including sickle cell disease.
Sickle cell disease
Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single-gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease.
Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits: two A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine.
This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low-oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated hemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels, creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death, which causes great pain to the individual.
Cancer
Cancers form as a result of gene mutations as well as improper protein translation. In addition to proliferating abnormally, cancer cells suppress the expression of anti-apoptotic or pro-apoptotic genes or proteins. Most cancer cells carry a mutation in the signaling protein Ras, which functions as an on/off signal transducer in cells. In cancer cells, the Ras protein becomes persistently active, thus promoting the proliferation of the cell due to the absence of any regulation. Additionally, most cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it. As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This then allows the cancer to enter its terminal stage, called metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body.
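The codon 6 change described above can be traced with a minimal sketch. Assuming the standard genetic code, the mRNA codon GAG (glutamic acid) becomes GUG (valine) after the single-base substitution; only the two relevant codons are tabulated here.

```python
# A sketch of the sickle cell missense mutation described above: in the
# mRNA, codon 6 of the hemoglobin B subunit changes from GAG (glutamic
# acid) to GUG (valine). Only the two relevant codons of the standard
# genetic code are tabulated.

CODONS = {"GAG": "glutamic acid", "GUG": "valine"}

normal_codon6 = "GAG"
mutant_codon6 = normal_codon6[0] + "U" + normal_codon6[2]  # middle base A -> U

print(CODONS[normal_codon6], "->", CODONS[mutant_codon6])
# glutamic acid -> valine: one base change alters one amino acid, changing
# the primary structure and hence the behavior of hemoglobin.
```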
Biology and health sciences
Proteins
Biology
24579
https://en.wikipedia.org/wiki/Pelvic%20inflammatory%20disease
Pelvic inflammatory disease
Pelvic inflammatory disease, also known as pelvic inflammatory disorder (PID), is an infection of the upper part of the female reproductive system, mainly the uterus, fallopian tubes, and ovaries, and inside of the pelvis. Often, there may be no symptoms. Signs and symptoms, when present, may include lower abdominal pain, vaginal discharge, fever, burning with urination, pain with sex, bleeding after sex, or irregular menstruation. Untreated PID can result in long-term complications including infertility, ectopic pregnancy, chronic pelvic pain, and cancer.
The disease is caused by bacteria that spread from the vagina and cervix. It has been reported that infections by Neisseria gonorrhoeae or Chlamydia trachomatis are present in 75 to 90 percent of cases. However, in the UK it is reported by the NHS that infections by Neisseria gonorrhoeae and Chlamydia trachomatis are responsible for only a quarter of PID cases. Often, multiple different bacteria are involved. Without treatment, about 10 percent of those with a chlamydial infection and 40 percent of those with a gonorrhea infection will develop PID. Risk factors are generally similar to those of sexually transmitted infections and include a high number of sexual partners and drug use. Vaginal douching may also increase the risk.
The diagnosis is typically based on the presenting signs and symptoms. It is recommended that the disease be considered in all women of childbearing age who have lower abdominal pain. A definitive diagnosis of PID is made by finding pus involving the fallopian tubes during surgery. Ultrasound may also be useful in diagnosis.
Efforts to prevent the disease include not having sex or having few sexual partners and using condoms. Screening women at risk for chlamydial infection followed by treatment decreases the risk of PID. If the diagnosis is suspected, treatment is typically advised. Treating a woman's sexual partners should also occur. In those with mild or moderate symptoms, a single injection of the antibiotic ceftriaxone along with two weeks of doxycycline and possibly metronidazole by mouth is recommended. For those who do not improve after three days or who have severe disease, intravenous antibiotics should be used.
Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people each year. A type of intrauterine device (IUD) known as the Dalkon shield led to increased rates of PID in the 1970s. Current IUDs are not associated with this problem after the first month.
Signs and symptoms
Symptoms in PID range from none to severe. If there are symptoms, fever, cervical motion tenderness, lower abdominal pain, new or different discharge, painful intercourse, uterine tenderness, adnexal tenderness, or irregular menstruation may be noted. Other complications include endometritis, salpingitis, tubo-ovarian abscess, pelvic peritonitis, periappendicitis, and perihepatitis.
Complications
PID can cause scarring inside the reproductive system, which can later cause serious complications, including chronic pelvic pain, infertility, ectopic pregnancy (the leading cause of pregnancy-related deaths in adult females), and other complications of pregnancy.
Occasionally, the infection can spread to the peritoneum, causing inflammation and the formation of scar tissue on the external surface of the liver (Fitz-Hugh–Curtis syndrome).
Cause
Chlamydia trachomatis and Neisseria gonorrhoeae are common causes of PID. However, PID can also be caused by other untreated infections, like bacterial vaginosis. Data suggest that PID is often polymicrobial. Isolated anaerobes and facultative microorganisms have been obtained from the upper genital tract. N. gonorrhoeae has been isolated from fallopian tubes, and facultative and anaerobic organisms have been recovered from endometrial tissues. The anatomical structure of the internal organs and tissues of the female reproductive tract provides a pathway for pathogens to ascend from the vagina to the pelvic cavity through the infundibulum. The disturbance of the naturally occurring vaginal microbiota associated with bacterial vaginosis increases the risk of PID.
N. gonorrhoeae and C. trachomatis are the most common organisms. The least common were infections caused exclusively by anaerobes and facultative organisms. Anaerobes and facultative bacteria were also isolated from 50 percent of the patients from whom Chlamydia and Neisseria were recovered; thus, anaerobes and facultative bacteria were present in the upper genital tract of nearly two-thirds of the PID patients. PCR and serological tests have associated extremely fastidious organisms with endometritis, PID, and tubal factor infertility. Microorganisms associated with PID are listed below. Cases of PID have developed in people who have stated they have never had sex.
Bacteria
Chlamydia trachomatis
Neisseria gonorrhoeae
Prevotella spp.
Streptococcus pyogenes
Prevotella bivia
Prevotella disiens
Bacteroides spp.
Peptostreptococcus asaccharolyticus
Peptostreptococcus anaerobius
Gardnerella vaginalis
Escherichia coli
Group B streptococcus
α-hemolytic streptococcus
Coagulase-negative staphylococcus
Atopobium vaginae
Acinetobacter spp.
Dialister spp.
Fusobacterium gonidiaformans
Gemella spp.
Leptotrichia spp.
Mogibacterium spp.
Porphyromonas spp.
Sphingomonas spp.
Veillonella spp.
Cutibacterium acnes
Mycoplasma genitalium
Mycoplasma hominis
Ureaplasma spp.
Diagnosis
Upon a pelvic examination, cervical motion, uterine, or adnexal tenderness may be found. Mucopurulent cervicitis and/or urethritis may be observed. In severe cases more testing may be required, such as laparoscopy, intra-abdominal bacteria sampling and culturing, or tissue biopsy. Laparoscopy can visualize "violin-string" adhesions, characteristic of Fitz-Hugh–Curtis perihepatitis, and other abscesses that may be present. Other imaging methods, such as ultrasonography, computed tomography (CT), and magnetic resonance imaging (MRI), can aid in diagnosis. Blood tests can also help identify the presence of infection: the erythrocyte sedimentation rate (ESR), the C-reactive protein (CRP) level, and chlamydial and gonococcal DNA probes. Nucleic acid amplification tests (NAATs), direct fluorescent antibody tests (DFA), and enzyme-linked immunosorbent assays (ELISA) are highly sensitive tests that can identify specific pathogens present. Serology testing for antibodies is not as useful, since the presence of the microorganisms in healthy people can confound interpreting the antibody titer levels, although antibody levels can indicate whether an infection is recent or long-term. Definitive criteria include histopathologic evidence of endometritis, thickened fluid-filled fallopian tubes, or laparoscopic findings.
Gram stain/smear becomes definitive in the identification of rare, atypical, and possibly more serious organisms. Two-thirds of patients with laparoscopic evidence of previous PID were not aware they had had PID, but even asymptomatic PID can cause serious harm. Laparoscopic identification is helpful in diagnosing tubal disease; a 65 percent to 90 percent positive predictive value exists in patients with presumed PID. Upon gynecologic ultrasound, a potential finding is a tubo-ovarian complex: edematous and dilated pelvic structures, as evidenced by vague margins, but without abscess formation.
Differential diagnosis
A number of other causes may produce similar symptoms, including appendicitis, ectopic pregnancy, hemorrhagic or ruptured ovarian cysts, ovarian torsion, endometriosis, gastroenteritis, peritonitis, and bacterial vaginosis, among others. Pelvic inflammatory disease is more likely to recur when there is a prior history of the infection, recent sexual contact, recent onset of menses, or an IUD (intrauterine device) in place, or if the partner has a sexually transmitted infection. Acute pelvic inflammatory disease is highly unlikely when recent intercourse has not taken place or an IUD is not being used. A sensitive serum pregnancy test is typically obtained to rule out ectopic pregnancy. Culdocentesis will differentiate hemoperitoneum (ruptured ectopic pregnancy or hemorrhagic cyst) from pelvic sepsis (salpingitis, ruptured pelvic abscess, or ruptured appendix).
Pelvic and vaginal ultrasounds are helpful in the diagnosis of PID. In the early stages of infection, the ultrasound may appear normal. As the disease progresses, nonspecific findings can include free pelvic fluid, endometrial thickening, and uterine cavity distension by fluid or gas. In some instances the borders of the uterus and ovaries appear indistinct. Enlarged ovaries accompanied by increased numbers of small cysts correlate with PID.
Laparoscopy is infrequently used to diagnose pelvic inflammatory disease, since it is not readily available. Moreover, it might not detect subtle inflammation of the fallopian tubes, and it fails to detect endometritis. Nevertheless, laparoscopy is conducted if the diagnosis is not certain or if the person has not responded to antibiotic therapy after 48 hours.
No single test has adequate sensitivity and specificity to diagnose pelvic inflammatory disease. A large multisite U.S. study found that cervical motion tenderness as a minimum clinical criterion increases the sensitivity of the CDC diagnostic criteria from 83 percent to 95 percent. However, even the modified 2002 CDC criteria do not identify women with subclinical disease.
Prevention
Regular testing for sexually transmitted infections is encouraged for prevention. The risk of contracting pelvic inflammatory disease can be reduced by the following:
Using barrier methods such as condoms; see human sexual behaviour for other listings.
Using latex condoms to prevent STIs that may go untreated.
Seeking medical attention when experiencing symptoms of PID.
Using hormonal combined contraceptive pills, which also help reduce the chances of PID by thickening the cervical mucosal plug and hence preventing the ascent of causative organisms from the lower genital tract.
Seeking medical attention after learning that a current or former sex partner has, or might have had, a sexually transmitted infection.
Getting an STI history from a current partner and strongly encouraging them to be tested and treated before intercourse.
Diligence in avoiding vaginal activity, particularly intercourse, after the end of a pregnancy (delivery, miscarriage, or abortion) or certain gynecological procedures, to ensure that the cervix closes.
Reducing the number of sexual partners, as in sexual monogamy.
Treatment
Treatment is often started without confirmation of infection because of the serious complications that may result from delayed treatment. Treatment depends on the infectious agent and generally involves the use of antibiotic therapy, although there is no clear evidence of which antibiotic regimen is more effective and safe in the management of PID. If there is no improvement within two to three days, the patient is typically advised to seek further medical attention. Hospitalization sometimes becomes necessary if there are other complications. Treating sexual partners for possible STIs can help in treatment and prevention. Treatment should be started without waiting for STI test results, and should not be delayed for longer than 2–3 days, as delay increases the risk of infertility.
For women with PID of mild to moderate severity, parenteral and oral therapies appear to be effective. It does not matter to their short- or long-term outcome whether antibiotics are administered to them as inpatients or outpatients. Typical regimens include cefoxitin or cefotetan plus doxycycline, and clindamycin plus gentamicin. An alternative parenteral regimen is ampicillin/sulbactam plus doxycycline. Erythromycin-based medications can also be used. A single study suggests superiority of azithromycin over doxycycline. Another alternative is to use a parenteral regimen with ceftriaxone or cefoxitin plus doxycycline. Clinical experience guides decisions regarding transition from parenteral to oral therapy, which usually can be initiated within 24–48 hours of clinical improvement.
When PID is caught early, there are treatments that can be utilized; however, these treatments will not undo any damage PID may have caused. In those with a previous PID diagnosis, exposure to another STI raises the risk of PID recurring. Early treatment cannot prevent the following:
Chronic abdominal pain
Infertility and/or ectopic pregnancies
Scar tissue within or outside the fallopian tubes
Prognosis
Early diagnosis and immediate treatment are vital in reducing the chances of later complications from PID. Delaying treatment for even a few days could greatly increase the chances of further complications. Even when the PID infection is cured, effects of the infection may be permanent or long-lasting. This makes early identification essential. A limitation is that diagnosis cannot be made from signs and symptoms alone, and the required diagnostic tests are invasive and not part of routine check-ups. Treatment resulting in cure is very important in the prevention of damage to the reproductive system. Around 20 percent of women with PID develop infertility. Even women who do not experience intense symptoms or are asymptomatic can become infertile. This can be caused by the formation of scar tissue due to one or more episodes of PID, and can lead to tubal blockage. Both of these increase the risk of being unable to get pregnant, and about 1 percent of cases result in an ectopic pregnancy. Chronic pelvic or abdominal pain develops after PID about 40 percent of the time.
Certain occurrences, such as pelvic surgery, the period immediately after childbirth (postpartum), miscarriage, or abortion, increase the risk of acquiring another infection leading to PID.
Epidemiology
Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. This is largely due to diagnostic tests being invasive and not included in routine check-ups, despite PID being the most common reason for admission to gynecological care. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people yearly. Rates are highest among teenagers and first-time mothers. PID causes over 100,000 women to become infertile in the US each year.
Prevalence
Records show that:
18 per 10,000 recorded discharges in the US followed a diagnosis of PID.
The prevalence of self-reported cases of PID for ages 18–44 was approximately 4.4%.
PID risk is higher in women with a previous STI diagnosis than in women with no previous STI diagnosis.
1.1% of women aged 16–46 in England and Wales are diagnosed with PID.
Despite indications of a general decrease in PID rates, there is an observed rise in the prevalence of gonorrhea and chlamydia. Accordingly, in order to decrease the prevalence of PID, one should test for gonorrhea and chlamydia. Two nationally representative probability surveys, the National Health and Nutrition Examination Survey (NHANES) and the National Survey of Family Growth (NSFG), surveyed women aged 18 to 44 from 2013 to 2014. The results: 2.5 million women have had a PID diagnosis in the past. The self-reported history decreased from 4.1% in 2013 to 3.6% in 2017. It is possible that increased screening at annual gynecologist appointments has led to earlier detection and prevention of PID. In white non-Hispanic women, the prevalence decreased from 4.9% to 3.9%, and in Hispanic women, the prevalence decreased from 5.3% to 3.7%. In black non-Hispanic women, the prevalence increased from 3.8% to 6.3%. The highest burden of PID recently is in black women and women living in the Southern United States, where there is a higher prevalence of STIs as well. Disparities between races could be due to lower socioeconomic status. Those with a lower income are less likely to get an annual gynecologist appointment or other preventative measures and are more likely to be uninsured.
Population at risk:
Those who are sexually active, have intact female reproductive organs, and are under the age of 25.
PID is rarely observed in females who have had a hysterectomy.
The overall age range is 18–44.
Those who have an STI that has gone untreated.
Women with more than one sexual partner.
Those with inconsistent condom use who are not in a mutually monogamous relationship.
Etiology of PID:
Untreated STI.
Multiple sexual partners.
Being sexually active under the age of 25.
Use of a douche, which damages the bacteria that live within the vagina.
A slight, not massive, increase in risk when using an IUD.
Biology and health sciences
Infectious diseases by site
Health
24598
https://en.wikipedia.org/wiki/Pointing%20device
Pointing device
A pointing device is a human interface device that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. Graphical user interfaces (GUI) and CAD systems allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. Common gestures are point and click and drag and drop.
While the most common pointing device by far is the mouse, many more devices have been developed. However, the term mouse is commonly used as a metaphor for devices that move a computer cursor. Fitts's law can be used to predict the speed with which users can use a pointing device.
Classification
To classify pointing devices, a number of features can be considered, for example the device's movement, control, positioning, or resistance. The following points provide an overview of the different classifications.
Direct vs. indirect input: In the case of a direct-input pointing device, the on-screen pointer is at the same physical position as the pointing device (e.g., finger on a touch screen, stylus on a tablet computer). An indirect-input pointing device is not at the same physical position as the pointer but translates its movement onto the screen (e.g., computer mouse, joystick, stylus on a graphics tablet).
Absolute vs. relative movement: An absolute-movement input device (e.g., stylus, finger on touch screen) provides a consistent mapping between a point in the input space (location/state of the input device) and a point in the output space (position of pointer on screen). A relative-movement input device (e.g., mouse, joystick) maps displacement in the input space to displacement in the output space. It therefore controls the relative position of the cursor compared to its initial position.
Isotonic vs. elastic vs. isometric: An isotonic pointing device is movable and measures its displacement (mouse, pen, human arm), whereas an isometric device is fixed and measures the force which acts on it (trackpoint, force-sensing touch screen). An elastic device increases its force resistance with displacement (joystick).
Position control vs. rate control: A position-control input device (e.g., mouse, finger on touch screen) directly changes the absolute or relative position of the on-screen pointer. A rate-control input device (e.g., trackpoint, joystick) changes the speed and direction of the movement of the on-screen pointer.
Translation vs. rotation: Another classification is the differentiation between whether the device is physically translated or rotated.
Degrees of freedom: Different pointing devices have different degrees of freedom (DOF). A computer mouse has two degrees of freedom, namely its movement on the x- and y-axis, whereas the Wiimote has six degrees of freedom: the x-, y- and z-axis for movement as well as for rotation.
Possible states: As mentioned later in this article, pointing devices have different possible states. Examples of these states are out of range, tracking, and dragging.
Examples
A computer mouse is an indirect, relative, isotonic, position-control, translational input device with two degrees of freedom (x, y position) and two states (tracking, dragging).
Examples
a computer mouse is an indirect, relative, isotonic, position-control, translational input device with two degrees of freedom (x, y position) and two states (tracking, dragging).
a touch screen is a direct, absolute, isometric, position-control input device with two or more degrees of freedom (x, y position and optionally pressure) and two states (out of range, dragging).
a joystick is an indirect, relative, elastic, rate-control, translational input device with two degrees of freedom (x, y angle) and two states (tracking, dragging).
a Wiimote is an indirect, relative, elastic, rate-control, translational input device with six degrees of freedom (x, y, z orientation and x, y, z position) and two or three states (tracking and dragging for orientation and position; out of range for position).

Buxton's taxonomy
Bill Buxton introduced a classification of pointing devices in the form of a table whose columns give the number of dimensions sensed and whose rows give the property that is sensed. The taxonomy is rooted in the human motor/sensory system and categorizes continuous manual input devices. Sub-rows distinguish devices operated through a mechanical intermediary such as a stylus (M) from touch-sensitive devices (T); sub-columns distinguish devices that use comparable motor control for their operation. The table is based on the original graphic of Bill Buxton's work on "Taxonomies of Input".

Buxton's Three-State Model
This model describes the different states that a pointing device can assume. The three common states described by Buxton are out of range, tracking and dragging. Not every pointing device can switch to all states.

Fitts' Law
Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. In other words, more time is needed to click a small button that is far from the cursor than to click a large button close to it. This makes it possible, in general, to predict the time needed for a selection movement to a given target.

Mathematical formulation
The common metric for the average time to complete the movement is, in Fitts's original formulation,

MT = a + b · ID = a + b · log2(2D / W)

where:
MT is the average time to complete the movement,
a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis,
ID is the index of difficulty,
D is the distance from the starting point to the center of the target, and
W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ±W/2 of the target's center.
This results in the interpretation that, as mentioned before, large and close targets can be reached faster than small, distant targets.
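As an illustration of the formulation above, the following Python sketch computes the predicted movement time for a large, close target and for a small, distant one. The constants a and b are made-up example values; in practice they are fitted empirically for a particular device and user population.

import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    # Fitts's original formulation: MT = a + b * log2(2D / W).
    # a and b are illustrative constants, not measured values.
    index_of_difficulty = math.log2(2 * distance / width)   # ID, in bits
    return a + b * index_of_difficulty

print(fitts_movement_time(distance=100, width=80))   # large, close target  -> short predicted time
print(fitts_movement_time(distance=800, width=10))   # small, distant target -> long predicted time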
Applying Fitts' Law in user interface design
As mentioned above, the size and distance of an object influence how quickly it can be selected, and this in turn affects the user experience. It is therefore important that Fitts' Law is considered when designing user interfaces. Some basic principles are listed below.

Interactive elements
Command buttons, for example, should be sized differently from non-interactive elements. Larger interactive objects are easier to select with any pointing device.

Edges and corners
Because the cursor gets pinned at the edges and corners of a graphical user interface, targets placed there can be acquired faster than other spots on the display.

Pop-up menus
They should support immediate selection of interactive elements in order to reduce the user's "travel time".

Options for selecting
Within menus such as dropdown menus or top-level navigation, the distance to an option increases the further the user has to go down the list. In pie menus, by contrast, the distance to the different options is always the same, and the target areas are larger.

Task bars
Operating a task bar requires a higher level of precision from the user, and thus more time. In general, task bars hinder movement through the interface.

Control-Display Gain
The Control-Display Gain (or CD gain) describes the relationship between movements of the device in the control space and the resulting movements of the pointer in the display space. For example, a hardware mouse moves at a different speed, and over a different distance, than the cursor on the screen. Even though these movements take place in two different spaces, they have to be measured in the same unit for the comparison to be meaningful (e.g. metres rather than pixels). The CD gain is the scale factor between the two movements:

CD gain = (pointer movement in display space) / (device movement in control space)

The CD gain settings can be adjusted in most cases. However, a compromise has to be found: with high gains it is easier to reach a distant target, whereas with low gains this takes longer; conversely, high gains make precise selection of targets harder, whereas low gains facilitate it. Microsoft Windows, macOS and the X Window System implement mechanisms that adapt the CD gain to the user's needs, e.g. the CD gain increases when the user's movement velocity increases (historically referred to as "mouse acceleration").
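The following Python sketch illustrates a velocity-dependent CD gain of the kind described above. The gain curve, thresholds and numbers are invented for illustration and do not correspond to the actual transfer function of any particular operating system.

def cd_gain(device_speed, low=0.05, high=0.5, min_gain=1.0, max_gain=8.0):
    # Illustrative velocity-dependent gain ("mouse acceleration"):
    # slow device movement -> low gain (precision), fast movement -> high gain (reach).
    if device_speed <= low:
        return min_gain
    if device_speed >= high:
        return max_gain
    t = (device_speed - low) / (high - low)        # position between the two thresholds, 0..1
    return min_gain + t * (max_gain - min_gain)    # simple linear ramp between the gains

def pointer_displacement(device_displacement, device_speed):
    # Device and pointer displacement are expressed in the same unit (e.g. metres),
    # so the CD gain is a dimensionless scale factor, as noted above.
    return cd_gain(device_speed) * device_displacement

print(pointer_displacement(0.001, device_speed=0.02))  # slow 1 mm move -> 1 mm of pointer travel
print(pointer_displacement(0.001, device_speed=0.80))  # fast 1 mm move -> 8 mm of pointer travel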
Common pointing devices

Motion-tracking pointing devices

Mouse
A mouse is a small handheld device that is pushed over a horizontal surface and moves the graphical pointer as it is slid across that smooth surface. The conventional roller-ball mouse uses a ball to create this action: the ball is in contact with two small shafts that are set at right angles to each other. As the ball moves, these shafts rotate, and the rotation is measured by sensors within the mouse. The distance and direction information from the sensors is then transmitted to the computer, and the computer moves the graphical pointer on the screen by following the movements of the mouse. Another common mouse is the optical mouse. This device is very similar to the conventional mouse but uses visible or infrared light instead of a roller ball to detect the changes in position. There is also the mini-mouse, a small egg-sized mouse for use with laptop computers; usually small enough to be used on a free area of the laptop body itself, it is typically optical, includes a retractable cord and connects through a USB port rather than relying on batteries.

Trackball
A trackball is a pointing device consisting of a ball housed in a socket containing sensors that detect rotation of the ball about two axes, similar to an upside-down mouse: as the user rolls the ball with a thumb, fingers or palm, the pointer on the screen moves accordingly. Trackballs are commonly used on CAD workstations for ease of use, where there may be no desk space on which to use a mouse. Some can clip onto the side of the keyboard and have buttons with the same functionality as mouse buttons. There are also wireless trackballs, which offer a wider range of ergonomic positions to the user.

Joystick
Isotonic joysticks are handheld sticks whose position the user can change freely, against a more or less constant force. With isometric joysticks, the user controls the stick by varying the amount of force with which they push, and the position of the stick remains more or less constant. Isometric joysticks are often cited as more difficult to use, due to the lack of tactile feedback that an actual moving joystick provides.

Pointing stick
A pointing stick is a small pressure-sensitive nub used like a joystick. It is usually found on laptops, embedded between the G, H, and B keys, and it operates by sensing the force applied by the user. The corresponding "mouse" buttons are commonly placed just below the space bar. Pointing sticks are also found on some mice and desktop keyboards.

Wii Remote
The Wii Remote, also known colloquially as the Wiimote, is the primary controller for Nintendo's Wii console. A main feature of the Wii Remote is its motion-sensing capability, which allows the user to interact with and manipulate items on screen via gesture recognition and pointing, through the use of accelerometer and optical-sensor technology.

Finger tracking
A finger-tracking device tracks fingers in 3D space, or close to a surface, without contact with a screen. Fingers are triangulated using technologies such as stereo cameras, time-of-flight sensors and lasers. Examples of finger-tracking pointing devices are LM3LABS' Ubiq'window and AirStrike.

Position-tracking pointing devices

Graphics tablet
A graphics tablet or digitizing tablet is a special tablet similar to a touchpad, but controlled with a pen or stylus that is held and used like a normal pen or pencil. The thumb usually controls the clicking via a two-way button on the top of the pen, or by tapping on the tablet's surface. A cursor (also called a puck) is similar to a mouse, except that it has a window with cross hairs for pinpoint placement and can have as many as 16 buttons. A pen (also called a stylus) looks like a simple ballpoint pen but uses an electronic head instead of ink. The tablet contains electronics that enable it to detect movement of the cursor or pen and translate the movements into digital signals that it sends to the computer. This is different from a mouse because each point on the tablet represents a point on the screen.
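Because each point on such a tablet corresponds to a point on the screen, the mapping is absolute rather than relative. The following Python sketch shows one possible version of such a mapping; the tablet and screen dimensions are arbitrary example values, not taken from any particular device.

def tablet_to_screen(tablet_xy, tablet_size=(0.30, 0.20), screen=(1920, 1080)):
    # Absolute mapping: a tablet position (in metres from the tablet's corner)
    # always lands on the same screen pixel, unlike a mouse's relative mapping.
    x = tablet_xy[0] / tablet_size[0] * (screen[0] - 1)
    y = tablet_xy[1] / tablet_size[1] * (screen[1] - 1)
    return (round(x), round(y))

print(tablet_to_screen((0.15, 0.10)))   # centre of the tablet -> (approximately) centre of the screen
print(tablet_to_screen((0.15, 0.10)))   # same tablet point -> same pixel, regardless of previous motion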
Stylus
A stylus is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet. The stylus is the primary input device for personal digital assistants, smartphones and some handheld gaming systems, such as the Nintendo DS, that require accurate input, although devices featuring multi-touch finger input with capacitive touchscreens have become more popular than stylus-driven devices in the smartphone market.

Touchpad
A touchpad or trackpad is a flat surface that can detect finger contact. It is a stationary pointing device, commonly used on laptop computers. At least one physical button normally comes with the touchpad, but the user can also generate a mouse click by tapping on the pad. Advanced features include pressure sensitivity and special gestures such as scrolling by moving one's finger along an edge. A touchpad uses a two-layer grid of electrodes to measure finger movement: one layer has vertical electrode strips that handle vertical movement, and the other layer has horizontal electrode strips that handle horizontal movement.

Touchscreen
A touchscreen is a device embedded into the screen of a TV monitor, or into the LCD screen of a laptop computer. Users interact with the device by physically pressing items shown on the screen, either with their fingers or with a helping tool such as a stylus. Several technologies can be used to detect touch. Resistive and capacitive touchscreens have conductive materials embedded in the glass and detect the position of the touch by measuring changes in electric current. Infrared touchscreens project a grid of infrared beams from the frame surrounding the screen and detect where an object interrupts the beams. Modern touchscreens can be used in conjunction with stylus pointing devices, while infrared-based screens do not require physical touch at all, but simply recognize the movement of hands and fingers within some minimum distance from the screen. Touchscreens became popular with the introduction of palmtop computers such as those sold by Palm, Inc., some high-end classes of laptop computers, smartphones such as those from HTC and the Apple iPhone, and the availability of standard touchscreen device drivers in the Symbian, Palm OS, Mac OS X, and Microsoft Windows operating systems.

Pressure-tracking pointing devices

Isometric Joystick
In contrast to a conventional, movable joystick, the stick itself does not move, or moves only very little, and is mounted in the device chassis. To move the pointer, the user applies force to the stick. Typical examples are found on notebook keyboards between the "G" and "H" keys. Applying pressure to the TrackPoint moves the cursor on the display.

Other devices
A light pen is a device similar to a touch screen, but it uses a special light-sensitive pen instead of the finger, which allows for more accurate screen input. As the tip of the light pen makes contact with the screen, it sends a signal back to the computer containing the coordinates of the pixels at that point. It can be used to draw on the computer screen or make menu selections, and it does not require a special touch screen because it works with any CRT display.
Light gun
Palm mouse – held in the palm and operated with only two buttons; the movements across the screen correspond to a feather touch, and pressure increases the speed of movement.
Footmouse (sometimes called a mole) – a mouse variant for those who do not wish to or cannot use the hands or the head; instead, it provides foot clicks.
Puck – similar to a mouse, but designed for absolute rather than relative positioning. It typically has a transparent plastic window with crosshairs for precise positioning and tracing. Pucks are most commonly used for tracing in CAD/CAM/CAE work.
Eye tracking devices – the cursor is controlled by the user's eye movements, allowing cursor manipulation without touch.
Finger-mouse – an extremely small mouse controlled by two fingers only; the user can hold it in any position.
Gyroscopic mouse – a gyroscope senses the movement of the mouse as it moves through the air. Users can operate a gyroscopic mouse when they have no room for a regular mouse or must give commands while standing up. This input device needs no cleaning and can have many extra buttons; in fact, some laptops doubling as TVs come with gyroscopic mice that resemble, and double as, remote controls with built-in LCD screens.
Steering wheel – can be thought of as a 1D pointing device; see also the steering wheel section of the game controller article.
Paddle – another 1D pointing device.
Jog dial – another 1D pointing device.
Yoke (aircraft)

Some high-degree-of-freedom input devices
3Dconnexion – six-degree-of-freedom controller.

Discrete pointing devices
Directional pad – a very simple keyboard.
Dance pad – used to point at gross locations in space with the feet.
Soap mouse – a handheld, position-based pointing device based on existing wireless optical mouse technology.
Laser pen – can be used in presentations as a pointing device.
Technology
User interface
null