https://en.wikipedia.org/wiki/Elastic%20collision
Elastic collision
In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net loss of kinetic energy into other forms such as heat, noise, or potential energy. During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute). Collisions of atoms are elastic, for example Rutherford backscattering. A useful special case of elastic collision is when the two bodies have equal mass, in which case they will simply exchange their momenta. The molecules—as distinct from atoms—of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules’ translational motion and their internal degrees of freedom with each collision. At any instant, half the collisions are, to a varying extent, inelastic collisions (the pair possesses less kinetic energy in their translational motions after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across the entire sample, molecular collisions can be regarded as essentially elastic as long as Planck's law forbids energy from being carried away by black-body photons. In the case of macroscopic bodies, perfectly elastic collisions are an ideal never fully realized, but approximated by the interactions of objects such as billiard balls. When considering energies, possible rotational energy before and/or after a collision may also play a role. 
Equations

One-dimensional Newtonian

In any collision without an external force, momentum is conserved; in an elastic collision, kinetic energy is conserved as well. Consider particles A and B with masses mA, mB, and velocities vA1, vB1 before collision and vA2, vB2 after collision. The conservation of momentum before and after the collision is expressed by:

mA·vA1 + mB·vB1 = mA·vA2 + mB·vB2

Likewise, the conservation of the total kinetic energy is expressed by:

(1/2)·mA·vA1² + (1/2)·mB·vB1² = (1/2)·mA·vA2² + (1/2)·mB·vB2²

These equations may be solved directly to find vA2 and vB2 when mA, mB, vA1, vB1 are known:

vA2 = ((mA − mB)·vA1 + 2·mB·vB1) / (mA + mB)
vB2 = ((mB − mA)·vB1 + 2·mA·vA1) / (mA + mB)

Alternatively the final velocity of a particle, v2 (vA2 or vB2), is expressed by:

v2 = (1 + e)·vCoM − e·v1

where: e is the coefficient of restitution (e = 1 for a perfectly elastic collision); vCoM = (mA·vA1 + mB·vB1) / (mA + mB) is the velocity of the center of mass of the system of two particles; v1 (vA1 or vB1) is the initial velocity of the particle.

If both masses are the same, we have a trivial solution:

vA2 = vB1,  vB2 = vA1

This simply corresponds to the bodies exchanging their initial velocities with each other. As can be expected, the solution is invariant under adding a constant to all velocities (Galilean relativity), which is like using a frame of reference with constant translational velocity. Indeed, to derive the equations, one may first change the frame of reference so that one of the known velocities is zero, determine the unknown velocities in the new frame of reference, and convert back to the original frame of reference.

Examples

Before collision: Ball A: mass = 3 kg, velocity = 4 m/s; Ball B: mass = 5 kg, velocity = 0 m/s.
After collision: Ball A: velocity = −1 m/s; Ball B: velocity = 3 m/s.

Another situation illustrates the case of equal mass, mA = mB, in which the bodies simply exchange velocities. In the limiting case where mA is much larger than mB, such as a ping-pong paddle hitting a ping-pong ball or an SUV hitting a trash can, the heavier mass hardly changes velocity, while the lighter mass bounces off, reversing its velocity plus approximately twice that of the heavy one.
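The one-dimensional solution above can be sketched numerically; the helper name elastic_1d is illustrative, not from the article. It reproduces the worked example with Balls A and B:

```python
# Minimal sketch of the 1D elastic-collision solution:
#   vA2 = ((mA - mB)*vA1 + 2*mB*vB1) / (mA + mB), and symmetrically for B.

def elastic_1d(m_a, v_a1, m_b, v_b1):
    """Return the final velocities (v_a2, v_b2) of a 1D elastic collision."""
    v_a2 = ((m_a - m_b) * v_a1 + 2 * m_b * v_b1) / (m_a + m_b)
    v_b2 = ((m_b - m_a) * v_b1 + 2 * m_a * v_a1) / (m_a + m_b)
    return v_a2, v_b2

# Worked example from the text: A (3 kg at 4 m/s) hits B (5 kg at rest).
v_a2, v_b2 = elastic_1d(3.0, 4.0, 5.0, 0.0)
print(v_a2, v_b2)  # -1.0 3.0
```

Both momentum (3·4 + 5·0 = 3·(−1) + 5·3 = 12 kg·m/s) and kinetic energy (24 J before and after) are conserved, as required.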
For a large initial speed of the incoming particle, its final speed is small if the masses are approximately the same: hitting a much lighter particle does not change the velocity much, while hitting a much heavier particle causes the fast particle to bounce back with high speed. This is why a neutron moderator (a medium which slows down fast neutrons, thereby turning them into thermal neutrons capable of sustaining a chain reaction) is a material full of atoms with light nuclei which do not easily absorb neutrons: the lightest nuclei have about the same mass as a neutron.

Derivation of solution

To derive the above equations for vA2 and vB2, rearrange the kinetic energy and momentum equations:

mA·(vA1² − vA2²) = mB·(vB2² − vB1²)
mA·(vA1 − vA2) = mB·(vB2 − vB1)

Dividing each side of the top equation by each side of the bottom equation, and using a² − b² = (a + b)·(a − b), gives:

vA1 + vA2 = vB1 + vB2,  that is,  vA1 − vB1 = −(vA2 − vB2)

That is, the relative velocity of one particle with respect to the other is reversed by the collision. Now the above formulas follow from solving a system of linear equations for vA2 and vB2, regarding mA, mB, vA1, vB1 as constants. Once vA2 is determined, vB2 can be found by symmetry.

Center of mass frame

With respect to the center of mass, both velocities are reversed by the collision: a heavy particle moves slowly toward the center of mass, and bounces back with the same low speed, and a light particle moves fast toward the center of mass, and bounces back with the same high speed. The velocity of the center of mass does not change by the collision. To see this, consider the position of the center of mass at time t before collision and time t′ after collision:

x̄(t) = (mA·xA(t) + mB·xB(t)) / (mA + mB)

Hence, the velocities of the center of mass before and after collision are:

v̄1 = (mA·vA1 + mB·vB1) / (mA + mB)
v̄2 = (mA·vA2 + mB·vB2) / (mA + mB)

The numerators of v̄1 and v̄2 are the total momenta before and after collision. Since momentum is conserved, we have v̄1 = v̄2.

One-dimensional relativistic

According to special relativity,

p = m·v / √(1 − v²/c²)

where p denotes the momentum of any particle with mass m, v denotes velocity, and c is the speed of light.
In the center of momentum frame, where the total momentum equals zero,

p1 = −p2
E = √(m1²·c⁴ + p1²·c²) + √(m2²·c⁴ + p2²·c²)

Here m1, m2 represent the rest masses of the two colliding bodies, u1, u2 represent their velocities before collision, v1, v2 their velocities after collision, p1, p2 their momenta, c is the speed of light in vacuum, and E denotes the total energy, the sum of rest energies and kinetic energies of the two bodies. Since the total energy and momentum of the system are conserved and the rest masses do not change, the momentum of each colliding body is determined by the rest masses of the colliding bodies, the total energy, and the total momentum. Relative to the center of momentum frame, the momentum of each colliding body does not change magnitude after collision, but reverses its direction of movement.

Compared with classical mechanics, which gives accurate results for macroscopic objects moving much slower than the speed of light, the total momentum of the two colliding bodies is frame-dependent. In the center of momentum frame, according to classical mechanics,

m1·u1 + m2·u2 = m1·v1 + m2·v2 = 0

This agrees with the relativistic calculation despite other differences. One of the postulates of special relativity states that the laws of physics, such as conservation of momentum, should be invariant in all inertial frames of reference. In a general inertial frame where the total momentum could be arbitrary, we can look at the two moving bodies as one system, of which the total momentum is p1 + p2, the total energy is E, and the velocity v̄ is the velocity of its center of mass. Relative to the center of momentum frame the total momentum equals zero. It can be shown that v̄ is given by:

v̄ = (p1 + p2)·c² / E

Now the velocities before the collision in the center of momentum frame, u1′ and u2′, are:

u1′ = (u1 − v̄) / (1 − u1·v̄/c²)
u2′ = (u2 − v̄) / (1 − u2·v̄/c²)

When u1 ≪ c and u2 ≪ c, the total momentum approaches m1·u1 + m2·u2 and v̄ approaches (m1·u1 + m2·u2)/(m1 + m2). Therefore, the classical calculation holds true when the speed of both colliding bodies is much lower than the speed of light (about 300,000 kilometres per second).
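The classical limit can be illustrated numerically. The sketch below restates the relativistic momentum formula p = m·v/√(1 − v²/c²) from the text (the helper name is illustrative) and shows it collapsing to the classical m·v at everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def p_relativistic(m, v):
    """Relativistic momentum p = m*v / sqrt(1 - v^2/c^2)."""
    return m * v / math.sqrt(1.0 - (v / C) ** 2)

# 1 kg moving at 1 km/s: the ratio to the classical momentum m*v
# differs from 1 by only about 5.6e-12, so the classical formulas apply.
m, v = 1.0, 1000.0
print(p_relativistic(m, v) / (m * v))  # ≈ 1 + 5.6e-12
```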
Relativistic derivation using hyperbolic functions

Using the so-called parameter of velocity s (usually called the rapidity), defined by v/c = tanh(s), relativistic energy and momentum are expressed as follows:

E = m·c²·cosh(s)
p = m·c·sinh(s)

The equations for the sums of energy and momentum of the colliding masses m1 and m2 (with velocities corresponding to the velocity parameters s1, s2 before and s3, s4 after the collision), after dividing by adequate powers of c, are:

m1·cosh(s1) + m2·cosh(s2) = m1·cosh(s3) + m2·cosh(s4)
m1·sinh(s1) + m2·sinh(s2) = m1·sinh(s3) + m2·sinh(s4)

and the dependent equation, the sum of the above equations:

m1·e^(s1) + m2·e^(s2) = m1·e^(s3) + m2·e^(s4)

Subtracting the squares of both sides of the "momentum" equation from those of the "energy" equation, using the identity cosh²(s) − sinh²(s) = 1, and then, for non-zero masses, applying the hyperbolic identity cosh(a − b) = cosh(a)·cosh(b) − sinh(a)·sinh(b), we get after simplifying:

cosh(s1 − s2) = cosh(s3 − s4)

As cosh is an even function, we get two solutions:

s1 − s2 = s3 − s4  or  s1 − s2 = −(s3 − s4)

The latter equation leads to the non-trivial solution: solving it for s2, substituting into the dependent equation, and then returning from the velocity parameters to the velocities themselves yields, after a long transformation, the final velocities. This solves the problem, first expressed by the parameters of velocity and then converted back to ordinary velocities.

Two-dimensional

For the case of two non-spinning colliding bodies in two dimensions, the motion of the bodies is determined by the three conservation laws of momentum, kinetic energy and angular momentum. The overall velocity of each body must be split into two perpendicular velocities: one tangent to the common normal surfaces of the colliding bodies at the point of contact, the other along the line of collision. Since the collision only imparts force along the line of collision, the velocities that are tangent to the point of collision do not change. The velocities along the line of collision can then be used in the same equations as a one-dimensional collision. The final velocities can then be calculated from the two new component velocities and will depend on the point of collision. Studies of two-dimensional collisions are conducted for many bodies in the framework of a two-dimensional gas.
In a center of momentum frame at any time the velocities of the two bodies are in opposite directions, with magnitudes inversely proportional to the masses. In an elastic collision these magnitudes do not change. The directions may change depending on the shapes of the bodies and the point of impact. For example, in the case of spheres the angle depends on the distance between the (parallel) paths of the centers of the two bodies. Any non-zero change of direction is possible: if this distance is zero the velocities are reversed in the collision; if it is close to the sum of the radii of the spheres the two bodies are only slightly deflected.

Assuming that the second particle is at rest before the collision, the angles of deflection of the two particles, θ1 and θ2, are related to the angle of deflection θ in the system of the center of mass by:

tan θ1 = (m2·sin θ) / (m1 + m2·cos θ),  θ2 = (π − θ)/2

The magnitudes of the velocities of the particles after the collision are:

v1′ = v1·√(m1² + m2² + 2·m1·m2·cos θ) / (m1 + m2)
v2′ = v1·(2·m1 / (m1 + m2))·sin(θ/2)

Two-dimensional collision with two moving objects

The final x and y velocity components of the first ball can be calculated as:

v1x′ = [(v1·cos(θ1 − φ)·(m1 − m2) + 2·m2·v2·cos(θ2 − φ)) / (m1 + m2)]·cos φ + v1·sin(θ1 − φ)·cos(φ + π/2)
v1y′ = [(v1·cos(θ1 − φ)·(m1 − m2) + 2·m2·v2·cos(θ2 − φ)) / (m1 + m2)]·sin φ + v1·sin(θ1 − φ)·sin(φ + π/2)

where v1 and v2 are the scalar sizes of the two original speeds of the objects, m1 and m2 are their masses, θ1 and θ2 are their movement angles (meaning moving directly down and to the right is either a −45° angle or a 315° angle), and lowercase phi (φ) is the contact angle. (To get the x and y velocities of the second ball, swap all the '1' subscripts with '2' subscripts.) This equation is derived from the fact that the interaction between the two bodies is easily calculated along the contact angle: the velocities of the objects can be calculated in one dimension by rotating the x and y axes to be parallel with the contact angle of the objects, and then rotating back to the original orientation to get the true x and y components of the velocities.
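The standard deflection-angle relations for a target initially at rest (tan θ1 = m2·sin θ / (m1 + m2·cos θ), θ2 = (π − θ)/2, with θ the center-of-mass deflection angle) can be sketched as follows; the function name is illustrative. For equal masses this reproduces the familiar billiard-ball result that the two particles separate at right angles:

```python
import math

def lab_angles(m1, m2, theta):
    """Laboratory-frame deflection angles for a target initially at rest.

    theta is the deflection angle in the center-of-mass system (radians).
    """
    theta1 = math.atan2(m2 * math.sin(theta), m1 + m2 * math.cos(theta))
    theta2 = (math.pi - theta) / 2.0
    return theta1, theta2

# Equal masses, 60 degree CoM deflection: theta1 = theta/2 = 30 degrees,
# theta2 = 60 degrees, so theta1 + theta2 = 90 degrees.
t1, t2 = lab_angles(1.0, 1.0, math.radians(60.0))
print(math.degrees(t1), math.degrees(t2))  # ≈ 30 and 60
```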
In an angle-free representation, the changed velocities are computed using the centers x1 and x2 at the time of contact as:

v1′ = v1 − (2·m2 / (m1 + m2)) · (⟨v1 − v2, x1 − x2⟩ / ‖x1 − x2‖²) · (x1 − x2)
v2′ = v2 − (2·m1 / (m1 + m2)) · (⟨v2 − v1, x2 − x1⟩ / ‖x2 − x1‖²) · (x2 − x1)

where the angle brackets indicate the inner product (or dot product) of two vectors.

Other conserved quantities

In the particular case of particles having equal masses, it can be verified by direct computation from the result above that the scalar product of the velocities before and after the collision is the same, that is ⟨v1′, v2′⟩ = ⟨v1, v2⟩. Although this product is not an additive invariant in the same way that momentum and kinetic energy are for elastic collisions, it seems that preservation of this quantity can nonetheless be used to derive higher-order conservation laws.

Derivation of the two-dimensional solution

The impulse during the collision on each particle is:

J1 = m1·(v1′ − v1),  J2 = m2·(v2′ − v2)

Conservation of momentum implies J1 = −J2 =: J. Since the force during the collision is perpendicular to both particles' surfaces at the contact point, the impulse is along the line parallel to Δx = x1 − x2, the relative vector between the particles' centers at collision time:

J = λ·Δx for some λ to be determined, so that

v1′ = v1 + (λ/m1)·Δx,  v2′ = v2 − (λ/m2)·Δx

Conservation of kinetic energy then requires:

λ·(2·⟨v1 − v2, Δx⟩ + λ·(1/m1 + 1/m2)·‖Δx‖²) = 0

The two solutions of this equation are λ = 0 and λ = −2·⟨v1 − v2, Δx⟩ / ((1/m1 + 1/m2)·‖Δx‖²), where λ = 0 corresponds to the trivial case of no collision. Substituting the non-trivial value of λ into the expressions for v1′ and v2′ gives the desired result above. Since all equations are in vector form, this derivation is valid also for three dimensions with spheres.
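The angle-free formulas translate directly into code. The sketch below (names are illustrative) uses plain tuples for 2D vectors to stay dependency-free:

```python
# Angle-free elastic collision of two disks/spheres:
#   v1' = v1 - (2*m2/(m1+m2)) * <v1-v2, x1-x2> / ||x1-x2||^2 * (x1-x2)
# and symmetrically for v2'.

def elastic_2d(m1, x1, v1, m2, x2, v2):
    """Post-collision velocities, given centers x1, x2 at the moment of contact."""
    dx = (x1[0] - x2[0], x1[1] - x2[1])   # x1 - x2
    dv = (v1[0] - v2[0], v1[1] - v2[1])   # v1 - v2
    d2 = dx[0] ** 2 + dx[1] ** 2          # ||x1 - x2||^2
    dot = dv[0] * dx[0] + dv[1] * dx[1]   # <v1 - v2, x1 - x2>
    s1 = 2 * m2 / (m1 + m2) * dot / d2
    s2 = 2 * m1 / (m1 + m2) * dot / d2
    v1p = (v1[0] - s1 * dx[0], v1[1] - s1 * dx[1])
    v2p = (v2[0] + s2 * dx[0], v2[1] + s2 * dx[1])
    return v1p, v2p

# Head-on hit with equal masses: the moving ball stops, the resting ball
# carries on with the original velocity, as in the 1D equal-mass case.
print(elastic_2d(1.0, (0.0, 0.0), (1.0, 0.0), 1.0, (1.0, 0.0), (0.0, 0.0)))
```

Note that ⟨v2 − v1, x2 − x1⟩ = ⟨v1 − v2, x1 − x2⟩, which is why the same dot product serves both updates with opposite signs.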
https://en.wikipedia.org/wiki/Inelastic%20collision
Inelastic collision
An inelastic collision, in contrast to an elastic collision, is a collision in which kinetic energy is not conserved due to the action of internal friction. In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms, causing a heating effect, and the bodies are deformed. The molecules of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules' translational motion and their internal degrees of freedom with each collision. At any one instant, half the collisions are – to a varying extent – inelastic (the pair possesses less kinetic energy after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across an entire sample, molecular collisions are elastic. Although inelastic collisions do not conserve kinetic energy, they do obey conservation of momentum. Simple ballistic pendulum problems obey the conservation of kinetic energy only when the block swings to its largest angle. In nuclear physics, an inelastic collision is one in which the incoming particle causes the nucleus it strikes to become excited or to break up. Deep inelastic scattering is a method of probing the structure of subatomic particles in much the same way as Rutherford probed the inside of the atom (see Rutherford scattering). Such experiments were performed on protons in the late 1960s using high-energy electrons at the Stanford Linear Accelerator (SLAC). As in Rutherford scattering, deep inelastic scattering of electrons by proton targets revealed that most of the incident electrons interact very little and pass straight through, with only a small number bouncing back. This indicates that the charge in the proton is concentrated in small lumps, reminiscent of Rutherford's discovery that the positive charge in an atom is concentrated at the nucleus. 
However, in the case of the proton, the evidence suggested three distinct concentrations of charge (quarks) and not one.

Formula

The formula for the velocities after a one-dimensional collision is:

va = (CR·mb·(ub − ua) + ma·ua + mb·ub) / (ma + mb)
vb = (CR·ma·(ua − ub) + ma·ua + mb·ub) / (ma + mb)

where
va is the final velocity of the first object after impact,
vb is the final velocity of the second object after impact,
ua is the initial velocity of the first object before impact,
ub is the initial velocity of the second object before impact,
ma is the mass of the first object,
mb is the mass of the second object, and
CR is the coefficient of restitution; if it is 1 we have an elastic collision; if it is 0 we have a perfectly inelastic collision (see below).

In a center of momentum frame the formulas reduce to:

va = −CR·ua
vb = −CR·ub

For two- and three-dimensional collisions the velocities in these formulas are the components perpendicular to the tangent line/plane at the point of contact. Assuming the objects are not rotating before or after the collision, the normal impulse is:

J = (ma·mb / (ma + mb))·(1 + CR)·⟨ua − ub, n⟩

where n is the unit normal vector. Assuming no friction, this gives the velocity updates:

va′ = ua − (J/ma)·n
vb′ = ub + (J/mb)·n

Perfectly inelastic collision

A perfectly inelastic collision occurs when the maximum amount of kinetic energy of a system is lost. In a perfectly inelastic collision, i.e., a zero coefficient of restitution, the colliding particles stick together. In such a collision, kinetic energy is lost by bonding the two bodies together. This bonding energy usually results in a maximum kinetic energy loss of the system. It is necessary to consider conservation of momentum. (Note: in the sliding block example above, momentum of the two-body system is only conserved if the surface has zero friction. With friction, momentum of the two bodies is transferred to the surface that the two bodies are sliding upon. Similarly, if there is air resistance, the momentum of the bodies can be transferred to the air.) The equation below holds true for the two-body (Body A, Body B) system collision in the example above:

ma·ua + mb·ub = (ma + mb)·v
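The standard one-dimensional restitution solution above can be sketched as follows (the helper name is illustrative). CR = 1 reproduces the elastic result and CR = 0 the perfectly inelastic one, and momentum is conserved for any CR:

```python
# 1D collision with coefficient of restitution cr:
#   va = (cr*mb*(ub - ua) + ma*ua + mb*ub) / (ma + mb), symmetrically for vb.

def collide_1d(m_a, u_a, m_b, u_b, cr):
    """Final velocities (v_a, v_b); cr=1 elastic, cr=0 perfectly inelastic."""
    p = m_a * u_a + m_b * u_b          # total momentum, conserved
    v_a = (cr * m_b * (u_b - u_a) + p) / (m_a + m_b)
    v_b = (cr * m_a * (u_a - u_b) + p) / (m_a + m_b)
    return v_a, v_b

print(collide_1d(3.0, 4.0, 5.0, 0.0, 1.0))  # (-1.0, 3.0): elastic limit
print(collide_1d(3.0, 4.0, 5.0, 0.0, 0.0))  # (1.5, 1.5): bodies move together
```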
In this example, momentum of the system is conserved because there is no friction between the sliding bodies and the surface:

ma·ua + mb·ub = (ma + mb)·v

where v is the final velocity, which is hence given by:

v = (ma·ua + mb·ub) / (ma + mb)

The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is. The change in kinetic energy is hence:

ΔKE = −(1/2)·μ·urel²

where μ = ma·mb / (ma + mb) is the reduced mass and urel = ua − ub is the relative velocity of the bodies before collision. With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation).

Partially inelastic collisions

Partially inelastic collisions are the most common form of collisions in the real world. In this type of collision, the objects involved in the collision do not stick, but some kinetic energy is still lost. Friction, sound and heat are some ways the kinetic energy can be lost through partially inelastic collisions.
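The kinetic-energy loss relation (one half times the reduced mass times the squared relative velocity) can be checked numerically; the masses and speeds below are illustrative, not from the article:

```python
# Perfectly inelastic collision: predicted KE loss is 0.5 * mu * urel^2.
m_a, u_a = 2.0, 3.0   # 2 kg moving at 3 m/s
m_b, u_b = 2.0, 0.0   # 2 kg at rest

v = (m_a * u_a + m_b * u_b) / (m_a + m_b)   # common final velocity
mu = m_a * m_b / (m_a + m_b)                # reduced mass
dke = 0.5 * mu * (u_a - u_b) ** 2           # predicted KE loss

ke_before = 0.5 * m_a * u_a ** 2 + 0.5 * m_b * u_b ** 2
ke_after = 0.5 * (m_a + m_b) * v ** 2
print(v, ke_before - ke_after, dke)  # 1.5 4.5 4.5
```

The directly computed loss (9 J before, 4.5 J after) matches the reduced-mass formula exactly.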
https://en.wikipedia.org/wiki/Printed%20circuit%20board
Printed circuit board
A printed circuit board (PCB), also called printed wiring board (PWB), is a laminated sandwich structure of conductive and insulating layers, each with a pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheet layers of copper laminated onto or between sheet layers of a non-conductive substrate. PCBs are used to connect or "wire" components to one another in an electronic circuit. Electrical components may be fixed to conductive pads on the outer layers, generally by soldering, which both electrically connects and mechanically fastens the components to the board. Another manufacturing process adds vias, metal-lined drilled holes that enable electrical interconnections between conductive layers, to boards with more than a single side. Printed circuit boards are used in nearly all electronic products today. Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Electronic design automation software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout has to be done only once. PCBs can also be made manually in small quantities, with reduced benefits. PCBs can be single-sided (one copper layer), double-sided (two copper layers on both sides of one substrate layer), or multi-layer (stacked layers of substrate with copper plating sandwiched between each and on the outside layers). Multi-layer PCBs provide much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components. 
The rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper planes was concurrent with the adoption of surface-mount technology. However, multilayer PCBs make repair, analysis, and field modification of circuits much more difficult and usually impractical. The world market for bare PCBs exceeded US$60.2 billion in 2014, and was estimated at $80.33 billion in 2024, forecast to be $96.57 billion for 2029, growing at 4.87% per annum.

History

Predecessors

Before the development of printed circuit boards, electrical and electronic circuits were wired point-to-point on a chassis. Typically, the chassis was a sheet metal frame or pan, sometimes with a wooden bottom. Components were attached to the chassis, usually by insulators when the connecting point on the chassis was metal, and then their leads were connected directly or with jumper wires by soldering, or sometimes using crimp connectors, wire connector lugs on screw terminals, or other methods. Circuits were large, bulky, heavy, and relatively fragile (even discounting the breakable glass envelopes of the vacuum tubes that were often included in the circuits), and production was labor-intensive, so the products were expensive. Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in the UK, and in the United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1925 patented a method of electroplating circuit patterns.
Predating the printed circuit invention, and similar in spirit, was John Sargrove's 1936–1947 Electronic Circuit Making Equipment (ECME) that sprayed metal onto a Bakelite plastic board. The ECME could produce three radio boards per minute.

Early PCBs

The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. In 1941 a multi-layer printed circuit was used in German magnetic influence naval mines. Around 1943 the United States began to use the technology on a large scale to make proximity fuzes for use in World War II. Such fuzes required an electronic circuit that could withstand being fired from a gun, and could be produced in quantity. The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process, which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers (IEEE) awarded Harry W. Rubinstein its Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate. Rubinstein was honored in 1984 by his alma mater, the University of Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors. This invention also represents a step in the development of integrated circuit technology, as not only wiring but also passive components were fabricated on the ceramic substrate.

Post-war developments

In 1948, the US released the invention for commercial use. Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army.
At around the same time in the UK work along similar lines was carried out by Geoffrey Dummer, then at the RRDE. Motorola was an early leader in bringing the process into consumer electronics, announcing in August 1952 the adoption of "plated circuits" in home radios after six years of research and a $1M investment. Motorola soon began using its trademarked term for the process, PLAcir, in its consumer radio advertisements. Hallicrafters released its first "foto-etch" printed circuit product, a clock-radio, on November 1, 1952. Even as circuit boards became available, the point-to-point chassis construction method remained in common use in industry (such as TV and hi-fi sets) into at least the late 1960s. Printed circuit boards were introduced to reduce the size, weight, and cost of parts of the circuitry. In 1960, a small consumer radio receiver might be built with all its circuitry on one circuit board, but a TV set would probably contain one or more circuit boards. Originally, every electronic component had wire leads, and a PCB had holes drilled for each wire of each component. The component leads were then inserted through the holes and soldered to the copper PCB traces. This method of assembly is called through-hole construction. In 1949, Moe Abramson and Stanislaus F. Danko of the United States Army Signal Corps developed the Auto-Sembly process in which component leads were inserted into a copper foil interconnection pattern and dip soldered. The patent they obtained in 1956 was assigned to the U.S. Army. With the development of board lamination and etching techniques, this concept evolved into the standard printed circuit board fabrication process in use today. Soldering could be done automatically by passing the board over a ripple, or wave, of molten solder in a wave-soldering machine. However, the wires and holes are inefficient since drilling holes is expensive and consumes drill bits and the protruding wires are cut off and discarded. 
From the 1980s onward, small surface mount parts have been used increasingly instead of through-hole components; this has led to smaller boards for a given functionality and lower production costs, but with some additional difficulty in servicing faulty boards. In the 1990s the use of multilayer surface boards became more frequent. As a result, size was further minimized and both flexible and rigid PCBs were incorporated in different devices. In 1995 PCB manufacturers began using microvia technology to produce High-Density Interconnect (HDI) PCBs.

Recent advances

Recent advances in 3D printing have meant that there are several new techniques in PCB creation. 3D-printed electronics (PEs) can be used to build up an item layer by layer, after which a liquid ink containing electronic functionality can be printed onto it. HDI (High Density Interconnect) technology allows for a denser design on the PCB and thus potentially smaller PCBs with more traces and components in a given area. As a result, the paths between components can be shorter. HDIs use blind/buried vias, or a combination that includes microvias. With multi-layer HDI PCBs the interconnection of several vias stacked on top of each other (stacked vias, instead of one deep buried via) can be made stronger, thus enhancing reliability in all conditions. The most common applications for HDI technology are computer and mobile phone components as well as medical equipment and military communication equipment. A 4-layer HDI microvia PCB is equivalent in quality to an 8-layer through-hole PCB, so HDI technology can reduce costs. HDI PCBs are often made using build-up film such as Ajinomoto Build-up Film, which is also used in the production of flip chip packages. Some PCBs have optical waveguides, similar to optical fibers, built on the PCB.

Composition

A basic PCB consists of a flat sheet of insulating material and a layer of copper foil, laminated to the substrate.
Chemical etching divides the copper into separate conducting lines called tracks or circuit traces, pads for connections, vias to pass connections between layers of copper, and features such as solid conductive areas for electromagnetic shielding or other purposes. The tracks function as wires fixed in place, and are insulated from each other by air and the board substrate material. The surface of a PCB may have a coating that protects the copper from corrosion and reduces the chances of solder shorts between traces or undesired electrical contact with stray bare wires. For its function in helping to prevent solder shorts, the coating is called solder resist or solder mask. The pattern to be etched into each copper layer of a PCB is called the "artwork". The etching is usually done using photoresist which is coated onto the PCB, then exposed to light projected in the pattern of the artwork. The resist material protects the copper from dissolution into the etching solution. The etched board is then cleaned. A PCB design can be mass-reproduced in a way similar to the way photographs can be mass-duplicated from film negatives using a photographic printer. FR-4 glass epoxy is the most common insulating substrate. Another substrate material is cotton paper impregnated with phenolic resin, often tan or brown. When a PCB has no components installed, it is less ambiguously called a printed wiring board (PWB) or etched wiring board. However, the term "printed wiring board" has fallen into disuse. A PCB populated with electronic components is called a printed circuit assembly (PCA), printed circuit board assembly or PCB assembly (PCBA). In informal usage, the term "printed circuit board" most commonly means "printed circuit assembly" (with components). The IPC preferred term for an assembled board is circuit card assembly (CCA), and for an assembled backplane it is backplane assembly. "Card" is another widely used informal term for a "printed circuit assembly". 
For example, an expansion card. A PCB may be printed with a legend identifying the components, test points, or identifying text. Originally, silkscreen printing was used for this purpose, but today other, finer quality printing methods are usually used. Normally the legend does not affect the function of a PCBA.

Layers

A printed circuit board can have multiple layers of copper, which almost always are arranged in pairs. The number of layers and the interconnection design between them (vias, PTHs) provide a general estimate of the board's complexity. Using more layers allows for more routing options and better control of signal integrity, but is also time-consuming and costly to manufacture. Likewise, the selection of vias for the board allows fine tuning of the board size, escaping of signals off complex ICs, routing, and long-term reliability, but is tightly coupled with production complexity and cost. One of the simplest boards to produce is the two-layer board. It has copper on both sides, referred to as the external layers; multi-layer boards sandwich additional internal layers of copper and insulation. After two-layer PCBs, the next step up is the four-layer. The four-layer board adds significantly more routing options in the internal layers as compared to the two-layer board, and often some portion of the internal layers is used as a ground plane or power plane, to achieve better signal integrity, higher signaling frequencies, lower EMI, and better power supply decoupling. In multi-layer boards, the layers of material are laminated together in an alternating sandwich: copper, substrate, copper, substrate, copper, etc.; each plane of copper is etched, and any internal vias (that will not extend to both outer surfaces of the finished multilayer board) are plated-through, before the layers are laminated together. Only the outer layers need be coated; the inner copper layers are protected by the adjacent substrate layers.
Component mounting "Through hole" components are mounted by their wire leads passing through the board and soldered to traces on the other side. "Surface mount" components are attached by their leads to copper traces on the same side of the board. A board may use both methods for mounting components. PCBs with only through-hole mounted components are now uncommon. Surface mounting is used for transistors, diodes, IC chips, resistors, and capacitors. Through-hole mounting may be used for some large components such as electrolytic capacitors and connectors. The first PCBs used through-hole technology, mounting electronic components by leads inserted through holes on one side of the board and soldered onto copper traces on the other side. Boards may be single-sided, with an unplated component side, or double-sided and more compact, with components soldered on both sides. Horizontal installation of through-hole parts with two axial leads (such as resistors, capacitors, and diodes) is done by bending the leads 90 degrees in the same direction, inserting the part in the board (often bending leads located on the back of the board in opposite directions to improve the part's mechanical strength), soldering the leads, and trimming off the ends. Leads may be soldered either manually or by a wave soldering machine. Surface-mount technology emerged in the 1960s, gained momentum in the early 1980s, and became widely used by the mid-1990s. Components were mechanically redesigned to have small metal tabs or end caps that could be soldered directly onto the PCB surface, instead of wire leads to pass through holes. Components became much smaller and component placement on both sides of the board became more common than with through-hole mounting, allowing much smaller PCB assemblies with much higher circuit densities. 
Surface mounting lends itself well to a high degree of automation, reducing labor costs and greatly increasing production rates compared with through-hole circuit boards. Components can be supplied mounted on carrier tapes. Surface mount components can be about one-quarter to one-tenth of the size and weight of through-hole components, and passive components are much cheaper. However, prices of semiconductor surface mount devices (SMDs) are determined more by the chip itself than the package, with little price advantage over larger packages, and some wire-ended components, such as 1N4148 small-signal switch diodes, are actually significantly cheaper than SMD equivalents. Electrical properties Each trace consists of a flat, narrow part of the copper foil that remains after etching. Its resistance, determined by its width, thickness, and length, must be sufficiently low for the current the conductor will carry. Power and ground traces may need to be wider than signal traces. In a multi-layer board one entire layer may be mostly solid copper to act as a ground plane for shielding and power return. For microwave circuits, transmission lines can be laid out in a planar form such as stripline or microstrip with carefully controlled dimensions to assure a consistent impedance. In radio-frequency and fast switching circuits the inductance and capacitance of the printed circuit board conductors become significant circuit elements, usually undesired; conversely, they can be used as a deliberate part of the circuit design, as in distributed-element filters, antennae, and fuses, obviating the need for additional discrete components. High-density interconnect (HDI) PCBs have tracks or vias with a width or diameter of under 152 micrometers. Materials Laminates Laminates are manufactured by curing layers of cloth or paper with thermoset resin under pressure and heat to form an integral final piece of uniform thickness. They can be up to in width and length. 
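The trace-resistance relation mentioned above (resistance set by width, thickness, and length) can be sketched numerically. This is a minimal illustration; the dimensions, constant, and helper name are assumptions for the example, not values from the text.

```python
# A rough sketch of the trace-resistance relation mentioned above:
# the DC resistance of a rectangular trace is R = rho * L / (w * t).
# The dimensions and the helper name are illustrative assumptions.
RHO_CU = 1.68e-8  # ohm*m, resistivity of copper near 20 degrees C

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """DC resistance in ohms of a trace with rectangular cross-section."""
    return rho * length_m / (width_m * thickness_m)

# A 100 mm long, 0.25 mm wide trace in 35 um (1 oz/ft^2) copper:
r_trace = trace_resistance(0.100, 0.25e-3, 35e-6)
print(f"{r_trace * 1000:.0f} milliohms")  # about 192 milliohms
```

Doubling the width or thickness halves the resistance, which is why power and ground traces are made wider, as noted above.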
Varying cloth weaves (threads per inch or cm), cloth thickness, and resin percentage are used to achieve the desired final thickness and dielectric characteristics. Available standard laminate thicknesses are listed in ANSI/IPC-D-275. The cloth or fiber material used, resin material, and the cloth to resin ratio determine the laminate's type designation (FR-4, CEM-1, G-10, etc.) and therefore the characteristics of the laminate produced. Important characteristics are the level to which the laminate is fire retardant, the dielectric constant (εr), the loss tangent (tan δ), the tensile strength, the shear strength, the glass transition temperature (Tg), and the Z-axis expansion coefficient (how much the thickness changes with temperature). There are quite a few different dielectrics that can be chosen to provide different insulating values depending on the requirements of the circuit. Some of these dielectrics are polytetrafluoroethylene (Teflon), FR-4, FR-1, CEM-1 or CEM-3. Well known pre-preg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy), FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (non-woven glass and epoxy), CEM-4 (woven glass and epoxy), CEM-5 (woven glass and polyester). Thermal expansion is an important consideration especially with ball grid array (BGA) and naked die technologies, and glass fiber offers the best dimensional stability. FR-4 is by far the most common material used today. The board stock with unetched copper on it is called "copper-clad laminate". With decreasing size of board features and increasing frequencies, small non-homogeneities like uneven distribution of fiberglass or other filler, thickness variations, and bubbles in the resin matrix, and the associated local variations in the dielectric constant, are gaining importance. 
Key substrate parameters The circuit-board substrates are usually dielectric composite materials. The composites contain a matrix (usually an epoxy resin) and a reinforcement (usually woven, sometimes non-woven, glass fibers, sometimes even paper), and in some cases a filler is added to the resin (e.g. ceramics; titanate ceramics can be used to increase the dielectric constant). The reinforcement type defines two major classes of materials: woven and non-woven. Woven reinforcements are cheaper, but the high dielectric constant of glass may not be favorable for many higher-frequency applications. The spatially non-homogeneous structure also introduces local variations in electrical parameters, due to the different resin/glass ratio at different areas of the weave pattern. Non-woven reinforcements, or materials with low or no reinforcement, are more expensive but more suitable for some RF/analog applications. The substrates are characterized by several key parameters, chiefly thermomechanical (glass transition temperature, tensile strength, shear strength, thermal expansion), electrical (dielectric constant, loss tangent, dielectric breakdown voltage, leakage current, tracking resistance...), and others (e.g. moisture absorption). At the glass transition temperature the resin in the composite softens and significantly increases thermal expansion; exceeding Tg then exerts mechanical overload on the board components - e.g. the joints and the vias. Below Tg the thermal expansion of the resin roughly matches that of copper and glass; above it, the expansion becomes significantly higher. As the reinforcement and copper confine the board along the plane, virtually all volume expansion projects to the thickness and stresses the plated-through holes. Repeated soldering or other exposure to higher temperatures can cause failure of the plating, especially with thicker boards; thick boards therefore require a matrix with a high Tg. The materials used determine the substrate's dielectric constant. 
This constant is also dependent on frequency, usually decreasing with frequency. As this constant determines the signal propagation speed, frequency dependence introduces phase distortion in wideband applications; as flat a dielectric constant vs. frequency characteristic as is achievable is important here. The impedance of transmission lines decreases with frequency; therefore, faster edges of signals reflect more than slower ones. Dielectric breakdown voltage determines the maximum voltage gradient the material can be subjected to before suffering a breakdown (conduction, or arcing, through the dielectric). Tracking resistance determines how the material resists high voltage electrical discharges creeping over the board surface. Loss tangent determines how much of the electromagnetic energy from the signals in the conductors is absorbed in the board material. This factor is important for high frequencies. Low-loss materials are more expensive. Choosing unnecessarily low-loss material is a common engineering error in high-frequency digital design; it increases the cost of the boards without a corresponding benefit. Signal degradation by loss tangent and dielectric constant can be easily assessed by an eye pattern. Moisture absorption occurs when the material is exposed to high humidity or water. Both the resin and the reinforcement may absorb water; water may also be drawn by capillary forces through voids in the materials and along the reinforcement. Epoxies of the FR-4 materials are not too susceptible, with absorption of only 0.15%. Teflon has very low absorption of 0.01%. Polyimides and cyanate esters, on the other hand, suffer from high water absorption. Absorbed water can lead to significant degradation of key parameters; it impairs tracking resistance, breakdown voltage, and dielectric parameters. Relative dielectric constant of water is about 73, compared to about 4 for common circuit board materials. 
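Since the dielectric constant sets the signal propagation speed, shifts in it (such as those caused by absorbed water) directly change timing. A minimal sketch, using the standard relation v = c / sqrt(εr) for a wave fully immersed in a dielectric; the material values are typical figures assumed for illustration, not taken from the text.

```python
# Sketch: the dielectric constant sets the signal propagation speed,
# v = c / sqrt(er) for a wave fully immersed in the dielectric. The er
# values below are typical assumed figures, not from the text.
import math

C = 299_792_458.0  # m/s, speed of light in vacuum

def propagation_speed(er):
    """Signal speed in a medium with relative dielectric constant er."""
    return C / math.sqrt(er)

for name, er in (("FR-4", 4.4), ("PTFE", 2.1), ("water", 73.0)):
    v = propagation_speed(er)
    print(f"{name} (er = {er}): {v / 1e8:.2f}e8 m/s")
```

The square-root dependence means even a modest rise in εr from moisture measurably slows signals, which is one reason the local dielectric-constant variations discussed above matter at high frequencies.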
Absorbed moisture can also vaporize on heating, as during soldering, and cause cracking and delamination, the same effect responsible for "popcorning" damage on wet packaging of electronic parts. Careful baking of the substrates may be required to dry them prior to soldering. Common substrates Often encountered materials: FR-2, phenolic paper or phenolic cotton paper, paper impregnated with a phenol formaldehyde resin. Common in consumer electronics with single-sided boards. Electrical properties inferior to FR-4. Poor arc resistance. Generally rated to 105 °C. FR-4, a woven fiberglass cloth impregnated with an epoxy resin. Low water absorption (up to about 0.15%), good insulation properties, good arc resistance. Very common. Several grades with somewhat different properties are available. Typically rated to 130 °C. Aluminum, or metal core board or insulated metal substrate (IMS), clad with thermally conductive thin dielectric - used for parts requiring significant cooling - power switches, LEDs. Consists of usually single, sometimes double layer thin circuit board based on e.g. FR-4, laminated on aluminum sheet metal, commonly 0.8, 1, 1.5, 2 or 3 mm thick. The thicker laminates sometimes also come with thicker copper metalization. Flexible substrates - can be a standalone copper-clad foil or can be laminated to a thin stiffener, e.g. 50-130 μm Kapton or UPILEX, a polyimide foil. Used for flexible printed circuits, in this form common in small form-factor consumer electronics or for flexible interconnects. Resistant to high temperatures. Pyralux, a polyimide-fluoropolymer composite foil. Copper layer can delaminate during soldering. Less-often encountered materials: FR-1, like FR-2, typically specified to 105 °C, some grades rated to 130 °C. Room-temperature punchable. Similar to cardboard. Poor moisture resistance. Low arc resistance. FR-3, cotton paper impregnated with epoxy. Typically rated to 105 °C. 
FR-5, woven fiberglass and epoxy, high strength at higher temperatures, typically specified to 170 °C. FR-6, matte glass and polyester. G-10, woven glass and epoxy - high insulation resistance, low moisture absorption, very high bond strength. Typically rated to 130 °C. G-11, woven glass and epoxy - high resistance to solvents, high flexural strength retention at high temperatures. Typically rated to 170 °C. CEM-1, cotton paper and epoxy. CEM-2, cotton paper and epoxy. CEM-3, non-woven glass and epoxy. CEM-4, woven glass and epoxy. CEM-5, woven glass and polyester. PTFE ("Teflon") - expensive, low dielectric loss, for high frequency applications, very low moisture absorption (0.01%), mechanically soft. Difficult to laminate, rarely used in multilayer applications. PTFE, ceramic filled - expensive, low dielectric loss, for high frequency applications. Varying ceramics/PTFE ratio allows adjusting dielectric constant and thermal expansion. RF-35, fiberglass-reinforced ceramics-filled PTFE. Relatively less expensive, good mechanical properties, good high-frequency properties. Alumina, a ceramic. Hard, brittle, very expensive, very high performance, good thermal conductivity. Polyimide, a high-temperature polymer. Expensive, high-performance. Higher water absorption (0.4%). Can be used from cryogenic temperatures to over 260 °C. Copper thickness Copper thickness of PCBs can be specified directly or as the weight of copper per area (in ounce per square foot) which is easier to measure. One ounce per square foot is 1.344 mils or 34 micrometers thickness (0.001344 inches). Heavy copper is a layer exceeding three ounces of copper per ft2, or approximately 4.2 mils (105 μm) (0.0042 inches) thick. Heavy copper layers are used for high current or to help dissipate heat. On the common FR-4 substrates, 1 oz copper per ft2 (35 μm) is the most common thickness; 2 oz (70 μm) and 0.5 oz (17.5 μm) thickness is often an option. 
Less common are 12 and 105 μm; 9 μm is sometimes available on some substrates. Flexible substrates typically have thinner metalization. Metal-core boards for high power devices commonly use thicker copper; 35 μm is usual but also 140 and 400 μm can be encountered. In the US, copper foil thickness is specified in units of ounces per square foot (oz/ft2), commonly referred to simply as ounce. Common thicknesses are 1/2 oz/ft2 (150 g/m2), 1 oz/ft2 (300 g/m2), 2 oz/ft2 (600 g/m2), and 3 oz/ft2 (900 g/m2). These work out to thicknesses of 17.05 μm (0.67 thou), 34.1 μm (1.34 thou), 68.2 μm (2.68 thou), and 102.3 μm (4.02 thou), respectively. 1/2 oz/ft2 foil is not widely used as a finished copper weight, but is used for outer layers when plating for through holes will increase the finished copper weight. Some PCB manufacturers refer to 1 oz/ft2 copper foil as having a thickness of 35 μm (may also be referred to as 35 μ, 35 micron, or 35 mic). 1/0 – denotes 1 oz/ft2 copper on one side, with no copper on the other side. 1/1 – denotes 1 oz/ft2 copper on both sides. H/0 or H/H – denotes 0.5 oz/ft2 copper on one or both sides, respectively. 2/0 or 2/2 – denotes 2 oz/ft2 copper on one or both sides, respectively. Manufacturing Printed circuit board manufacturing involves manufacturing bare printed circuit boards and then populating them with electronic components. In large-scale board manufacturing, multiple PCBs are grouped on a single panel for efficient processing. After assembly, they are separated (depaneled). Types Breakout boards A minimal PCB for a single component, used for prototyping, is called a breakout board. The purpose of a breakout board is to "break out" the leads of a component on separate terminals so that manual connections to them can be made easily. Breakout boards are especially used for surface-mount components or any components with fine lead pitch. 
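The oz/ft2-to-thickness figures quoted above follow from dividing areal mass density by copper's volumetric density. A quick sketch; the physical constants are standard values assumed here, not taken from the text.

```python
# Sketch: deriving foil thickness from copper "weight" in oz/ft^2,
# reproducing the conversion quoted above (1 oz/ft^2 is about 34 um).
# The constants are standard assumed values, not from the text.
OZ_G = 28.35         # grams per avoirdupois ounce
FT2_M2 = 0.092903    # square metres per square foot
CU_DENSITY = 8.96e6  # g/m^3, density of copper

def copper_thickness_um(oz_per_ft2):
    """Foil thickness in micrometres for a given copper weight."""
    mass_per_m2 = oz_per_ft2 * OZ_G / FT2_M2  # areal density, g/m^2
    return mass_per_m2 / CU_DENSITY * 1e6     # thickness in um

for w in (0.5, 1, 2, 3):
    print(f"{w} oz/ft^2 -> {copper_thickness_um(w):.1f} um")
```

With these constants the result for 1 oz/ft2 is about 34.1 μm, matching the 34.1 μm figure above; small differences from the rounded 35 μm convention come from rounding the constants.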
Advanced PCBs may contain components embedded in the substrate, such as capacitors and integrated circuits, to reduce the amount of space taken up by components on the surface of the PCB while improving electrical characteristics. Multiwire boards Multiwire is a patented technique of interconnection which uses machine-routed insulated wires embedded in a non-conducting matrix (often plastic resin). It was used during the 1980s and 1990s. Multiwire is still available through Hitachi. Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation of PCB design): Anywhere the designer needs a connection, the machine will draw a wire in a straight line from one location/pin to another. This led to very short design times (no complex algorithms to use even for high density designs) as well as reduced crosstalk (which is worse when wires run parallel to each other—which almost never happens in Multiwire), though the cost is too high to compete with cheaper PCB technologies when large quantities are needed. Corrections can be made to a Multiwire board layout more easily than to a PCB layout. Cordwood construction Cordwood construction can save significant space and was often used with wire-ended components in applications where space was at a premium (such as fuzes, missile guidance, and telemetry systems) and in high-speed computers, where short traces were important. In cordwood construction, axial-leaded components were mounted between two parallel planes. The name comes from the way axial-lead components (capacitors, resistors, coils, and diodes) are stacked in parallel rows and columns, like a stack of firewood. The components were either soldered together with jumper wire or they were connected to other components by thin nickel ribbon welded at right angles onto the component leads. 
To avoid shorting together different interconnection layers, thin insulating cards were placed between them. Perforations or holes in the cards allowed component leads to project through to the next interconnection layer. One disadvantage of this system was that special nickel-leaded components had to be used to allow reliable interconnecting welds to be made. Differential thermal expansion of the component could put pressure on the leads of the components and the PCB traces and cause mechanical damage (as was seen in several modules on the Apollo program). Additionally, components located in the interior are difficult to replace. Some versions of cordwood construction used soldered single-sided PCBs as the interconnection method (as pictured), allowing the use of normal-leaded components at the cost of being difficult to remove the boards or replace any component that is not at the edge. Before the advent of integrated circuits, this method allowed the highest possible component packing density; because of this, it was used by a number of computer vendors including Control Data Corporation. Uses Printed circuit boards have found uses beyond their typical role, in electronic and biomedical engineering, thanks to the versatility of their layers, especially the copper layer. PCB layers have been used to fabricate sensors, such as capacitive pressure sensors and accelerometers, actuators such as microvalves and microheaters, as well as platforms of sensors and actuators for Lab-on-a-chip (LoC), for example to perform polymerase chain reaction (PCR), and fuel cells, to name a few. Repair Manufacturers may not support component-level repair of printed circuit boards because of the relatively low cost to replace compared with the time and cost of troubleshooting to a component level. In board-level repair, the technician identifies the board (PCA) on which the fault resides and replaces it. 
This shift is economically efficient from a manufacturer's point of view but is also materially wasteful, as a circuit board with hundreds of functional components may be discarded and replaced due to the failure of one minor and inexpensive part, such as a resistor or capacitor, and this practice is a significant contributor to the problem of e-waste. Legislation In many countries (including all European Single Market participants, the United Kingdom, Turkey, and China), legislation restricts the use of lead, cadmium, and mercury in electrical equipment. PCBs sold in such countries must therefore use lead-free manufacturing processes and lead-free solder, and attached components must themselves be compliant. Safety Standard UL 796 covers component safety requirements for printed wiring boards for use as components in devices or appliances. Testing analyzes characteristics such as flammability, maximum operating temperature, electrical tracking, heat deflection, and direct support of live electrical parts.
https://en.wikipedia.org/wiki/Equations%20of%20motion
Equations of motion
In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice is generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions of the differential equations describing that motion. Types There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term dynamics refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations. However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t). A differential equation of motion, usually identified as some physical law (for example, F = ma), and applying definitions of physical quantities, is used to set up an equation to solve a kinematics problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a set of solutions. 
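The idea that fixing the arbitrary constants (the initial values) picks out one particular trajectory can be sketched numerically. This is a minimal illustration; the oscillator, integrator, and step size below are assumed for the example, not taken from the text.

```python
# A minimal numerical sketch of the idea above: an equation of motion
# is a second-order ODE, and fixing the initial values r(0) and v(0)
# picks out one particular solution. The oscillator, integrator, and
# step size are illustrative assumptions, not taken from the text.
import math

def solve_motion(accel, r0, v0, dt=1e-4, steps=10_000):
    """Integrate d2r/dt2 = accel(r, v, t) from the initial conditions
    using the semi-implicit Euler method; returns the final (r, v)."""
    r, v, t = r0, v0, 0.0
    for _ in range(steps):
        v += accel(r, v, t) * dt
        r += v * dt
        t += dt
    return r, v

# Simple harmonic oscillator a = -omega**2 * r with omega = 2*pi,
# so the period is exactly 1 s; integrating for steps*dt = 1 s
# should return the system close to its initial state r = 1, v = 0.
omega = 2 * math.pi
r, v = solve_motion(lambda r, v, t: -omega**2 * r, r0=1.0, v0=0.0)
print(r, v)
```

Changing r0 or v0 yields a different particular solution of the same differential equation, which is exactly the role the initial conditions play in the formal statement.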
A particular solution can be obtained by setting the initial values, which fixes the values of the constants. Stated formally, in general, an equation of motion is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), its acceleration (the second derivative of r, a = d²r/dt²), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second-order ordinary differential equation (ODE) in r, M[r(t), ṙ(t), r̈(t), t] = 0, where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0, r(0) and ṙ(0). The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity. Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly, so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions. History Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years. 
Medieval scholars in the thirteenth century — for example at the relatively new universities in Oxford and Paris — drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics. At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity was not used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definitions of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that acceleration would be negative during ascent. 
Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation s = ½gt² in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics. Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis of projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution. The term "inertia" was used by Kepler, who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.) Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope. Galileo was also interested in the laws of the pendulum, his first observations of which were made as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp, lit and left swinging, and he timed its swings against his own pulse. To him the period appeared the same, even after the motion had greatly diminished; he had discovered the isochronism of the pendulum. 
More careful experiments carried out by him later, and described in his Discourses, revealed the period of oscillation varies with the square root of its length but is independent of the mass of the pendulum. Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones. Later the equations of motion also appeared in electrodynamics: when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations. However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields. Kinematic equations for one particle Kinematic quantities From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = dr/dt and acceleration a = dv/dt = d²r/dt² have these general, coordinate-independent definitions. Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. 
Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature. The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θn̂, angular velocity ω = dθ/dt, and angular acceleration α = dω/dt, where n̂ is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis. The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω: v = ω × r, where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body. Uniform acceleration The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below. Constant translational acceleration in a straight line These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) – only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one: v = v₀ + at [1], r = r₀ + v₀t + ½at² [2], r = r₀ + ½(v₀ + v)t [3], where: r₀ is the particle's initial position, r is the particle's final position, v₀ is the particle's initial velocity, v is the particle's final velocity, a is the particle's acceleration, and t is the time interval. Equations [1] and [2] are from integrating the definitions of velocity and acceleration, subject to the initial conditions r(0) = r₀ and v(0) = v₀; in magnitudes, v = v₀ + at and s = v₀t + ½at². Equation [3] involves the average velocity ½(v₀ + v). 
Intuitively, the velocity increases linearly, so the average velocity multiplied by time is the distance traveled while increasing the velocity from v₀ to v, as can be illustrated graphically by plotting velocity against time as a straight line graph. Algebraically, it follows from solving [1] for a = (v − v₀)/t and substituting into [2] then simplifying to get r = r₀ + ½(v₀ + v)t, or in magnitudes s = ½(v₀ + v)t. From [3], substituting for t in [1]: v² = v₀² + 2a·(r − r₀) [4]. From [3], substituting into [2]: r = r₀ + vt − ½at² [5]. Usually only the first 4 are needed, the fifth is optional. Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two. In some programs, such as the IGCSE Physics and IB DP Physics programs (international programs but especially popular in the UK and Europe), the same formulae would be written with a different set of preferred variables. There s replaces r − r₀ and u replaces v₀. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement, u = initial velocity, v = final velocity, a = acceleration, t = time. In these variables, the equations of motion would be written v = u + at, s = ut + ½at², s = ½(u + v)t, v² = u² + 2as, and s = vt − ½at². Constant linear acceleration in any direction The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case, although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows: v² = v·v = (v₀ + at)·(v₀ + at) = v₀² + 2t(a·v₀) + a²t² = v₀² + 2a·(v₀t + ½at²) = v₀² + 2a·(r − r₀). Applications Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial velocity u, one can calculate how high the ball will travel before it begins to fall. 
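Before working that example, the mutual consistency of the SUVAT relations can be checked numerically. A minimal sketch; the sample values u = 3 m/s, a = 2 m/s², t = 4 s are arbitrary, not from the text.

```python
# A numerical cross-check of the SUVAT relations listed above
# (constant acceleration in a straight line); the sample values
# are arbitrary choices, not taken from the text.

def suvat(u, a, t):
    """Return (s, v) after time t from initial speed u and constant a."""
    v = u + a * t                  # v = u + at
    s = u * t + 0.5 * a * t ** 2   # s = ut + (1/2)at^2
    return s, v

u, a, t = 3.0, 2.0, 4.0
s, v = suvat(u, a, t)
# The remaining relations hold automatically:
assert abs(s - 0.5 * (u + v) * t) < 1e-9           # s = (u + v)t/2
assert abs(v ** 2 - (u ** 2 + 2 * a * s)) < 1e-9   # v^2 = u^2 + 2as
assert abs(s - (v * t - 0.5 * a * t ** 2)) < 1e-9  # s = vt - (1/2)at^2
print(s, v)  # 28.0 11.0
```

This reflects the point above that each equation contains four of the five variables: any three given quantities determine the other two.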
The acceleration is local acceleration of gravity . While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing to measure up from the ground, the acceleration must be in fact , since the force of gravity acts downwards and therefore also the acceleration on the ball due to it. At the highest point, the ball will be at rest: therefore . Using equation [4] in the set above, we have: Substituting and cancelling minus signs gives: Constant circular acceleration The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary, where is the constant angular acceleration, is the angular velocity, is the initial angular velocity, is the angle turned through (angular displacement), is the initial angle, and is the time taken to rotate from the initial state to the final state. General planar motion These are the kinematic equations for a particle traversing a path in a plane, described by position . They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity and angular acceleration . These are instantaneous quantities which change with time. The position of the particle is where and are the polar unit vectors. Differentiating with respect to time gives the velocity with radial component and an additional component due to the rotation. Differentiating with respect to time again obtains the acceleration which breaks into the radial acceleration , centripetal acceleration , Coriolis acceleration , and angular acceleration . Special cases of motion described by these equations are summarized qualitatively in the table below. 
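The thrown-ball example above reduces to a maximum height of u²/(2g). A small sketch (the launch speed of 14 m/s is an illustrative assumption):

```python
g = 9.81  # standard gravity, m/s^2

def max_height(u):
    """Maximum height of a ball thrown straight up with initial speed u.
    From v^2 = u^2 - 2 g s with v = 0 at the highest point: s = u^2 / (2 g)."""
    return u**2 / (2 * g)

h = max_height(14.0)  # illustrative launch speed of 14 m/s
```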
Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration. General 3D motions In 3D space, the equations in spherical coordinates with corresponding unit vectors , and , the position, velocity, and acceleration generalize respectively to In the case of a constant this reduces to the planar equations above. Dynamic equations of motion Newtonian mechanics The first general equation of motion developed was Newton's second law of motion. In its most general form it states the rate of change of momentum of an object equals the force acting on it, The force in the equation is not the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as since is a constant in Newtonian mechanics. Newton's second law applies to point-like particles, and to all points in a rigid body. They also apply to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system. It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex. 
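Because Newton's second law gives the acceleration as a = F/m, the motion can be recovered numerically when no closed-form solution is available. A minimal sketch using a semi-implicit Euler step and an illustrative constant force (all numbers are assumptions for the example):

```python
def integrate_newton(m, force, x0, v0, dt, steps):
    """Semi-implicit Euler integration of m * dv/dt = F(x, v, t), dx/dt = v."""
    x, v, t = x0, v0, 0.0
    for _ in range(steps):
        a = force(x, v, t) / m   # Newton's second law: a = F / m
        v += a * dt
        x += v * dt
        t += dt
    return x, v

# Constant force F = 2 N on m = 1 kg starting from rest:
# the exact solution is v(t) = 2 t and x(t) = t^2.
x, v = integrate_newton(1.0, lambda x, v, t: 2.0, 0.0, 0.0, 1e-3, 1000)
```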
The momentum form is preferable since this is readily generalized to more complex systems, such as special and general relativity (see four-momentum). It can also be used with the momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces. For a number of particles (see many body problem), the equation of motion for one particle influenced by other particles is where is the momentum of particle , is the force on particle by particle , and is the resultant external force due to any agent not part of system. Particle does not exert a force on itself. Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation. Newton's second law for rotation takes a similar form to the translational case, by equating the torque acting on the body to the rate of change of its angular momentum . Analogous to mass times acceleration, the moment of inertia tensor depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity, Again, these equations apply to point like particles, or at each point of a rigid body. Likewise, for a number of particles, the equation of motion for one particle is where is the angular momentum of particle , the torque on particle by particle , and is resultant external torque (due to any agent not part of system). Particle does not exert a torque on itself. 
Applications Some examples of Newton's law include describing the motion of a simple pendulum, and a damped, sinusoidally driven harmonic oscillator, For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For two examples, a ball of mass thrown in the air, in air currents (such as wind) described by a vector field of resistive forces , where is the gravitational constant, the mass of the Earth, and is the acceleration of the projectile due to the air currents at position and time . The classical -body problem for particles each interacting with each other due to gravity is a set of nonlinear coupled second order ODEs, where labels the quantities (mass, position, etc.) associated with each particle. Analytical mechanics Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has degrees of freedom, then one can use a set of generalized coordinates , to define the configuration of the system. They can be in the form of arc lengths or angles. They are a considerable simplification to describe motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities The Euler–Lagrange equations are where the Lagrangian is a function of the configuration and its time rate of change (and possibly time ) Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled second order ODEs in the coordinates are obtained. 
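The gravitational equation of motion above can be integrated numerically; the sketch below follows a single test mass around the Earth under d²r/dt² = −GMr/|r|³ (the orbit radius and 1 s step size are illustrative assumptions):

```python
import math

G = 6.674e-11  # gravitational constant, N m^2 / kg^2
M = 5.972e24   # mass of the Earth, kg

def orbit_step(r, v, dt):
    """One semi-implicit Euler step for a test mass under Newtonian gravity,
    d^2 r/dt^2 = -G M r / |r|^3, in two dimensions."""
    d = math.hypot(r[0], r[1])
    a = (-G * M * r[0] / d**3, -G * M * r[1] / d**3)
    v = (v[0] + a[0] * dt, v[1] + a[1] * dt)
    r = (r[0] + v[0] * dt, r[1] + v[1] * dt)
    return r, v

# Start on a circular orbit, where |v| = sqrt(G M / R):
R = 6.771e6                                 # illustrative radius (~400 km altitude), m
r, v = (R, 0.0), (0.0, math.sqrt(G * M / R))
for _ in range(1000):
    r, v = orbit_step(r, v, 1.0)            # 1 s steps
```

For a circular initial condition the integrated radius should stay close to R, which is a simple sanity check on the integrator.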
Hamilton's equations are where the Hamiltonian is a function of the configuration and conjugate "generalized" momenta in which is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time , Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled first order ODEs in the coordinates and momenta are obtained. The Hamilton–Jacobi equation is where is Hamilton's principal function, also called the classical action is a functional of . In this case, the momenta are given by Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE, in variables. The action allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether. All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action stating the path the system takes through the configuration space is the one with the least action . Electrodynamics In electrodynamics, the force on a charged particle of charge is the Lorentz force: Combining with Newton's second law gives a first order differential equation of motion, in terms of position of the particle: or its momentum: The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass and charge : where and are the electromagnetic scalar and vector potential fields. 
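Hamilton's equations, introduced above, can be illustrated with the one-dimensional harmonic oscillator, H = p²/(2m) + kq²/2, for which dq/dt = ∂H/∂p = p/m and dp/dt = −∂H/∂q = −kq (the mass, stiffness, and step size below are illustrative assumptions):

```python
def hamilton_step(q, p, dt, m=1.0, k=1.0):
    """One symplectic-Euler step of Hamilton's equations
    dq/dt = dH/dp, dp/dt = -dH/dq for H = p^2/(2m) + k q^2 / 2."""
    p = p - k * q * dt    # dp/dt = -dH/dq = -k q
    q = q + p / m * dt    # dq/dt =  dH/dp = p / m
    return q, p

q, p = 1.0, 0.0
H0 = p**2 / 2 + q**2 / 2        # initial energy
for _ in range(10000):
    q, p = hamilton_step(q, p, 0.01)
H = p**2 / 2 + q**2 / 2         # energy after integration
```

A symplectic step keeps the energy error bounded over long runs, which is why this splitting is preferred over a naive Euler step for Hamiltonian systems.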
The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by: instead of just , implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation. Alternatively the Hamiltonian (and substituting into the equations): can derive the Lorentz force equation. General relativity Geodesic equation of motion The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor , the metric provides the notion of arc length (see line element for details). The differential arc length is given by: and the geodesic equation is a second-order differential equation in the coordinates. The general solution is a family of geodesics: where is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system). Given the mass-energy distribution provided by the stress–energy tensor , the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field - because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation: where is the separation vector between two geodesics, (not just ) is the covariant derivative, and is the Riemann curvature tensor, containing the Christoffel symbols. 
In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field. For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has the solutions of straight lines. This is also the limiting case when masses move according to Newton's law of gravity. Spinning objects In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field. Analogues for waves and fields Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified. Sometimes in the following contexts, the wave or field equations are also called "equations of motion". Field equations Equations that describe the spatial dependence and time evolution of fields are called field equations. These include Maxwell's equations for the electromagnetic field, Poisson's equation for Newtonian gravitational or electrostatic field potentials, the Einstein field equation for gravitation (Newton's law of gravity is a special case for weak gravitational fields and low velocities of particles). 
This terminology is not universal: for example, although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead. Wave equations Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves. From the classical equations of motion and field equations, mechanical, gravitational-wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is: where is any mechanical or electromagnetic field amplitude, say: the transverse or longitudinal displacement of a vibrating rod, wire, cable, membrane etc., the fluctuating pressure of a medium, sound pressure, the electric fields or , or the magnetic fields or , the voltage or current in an alternating current circuit, and is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing by . There are other linear and nonlinear wave equations for very specific applications; see for example the Korteweg–de Vries equation. Quantum theory In quantum theory, the wave and field concepts both appear. In quantum mechanics the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form: where is the wavefunction of the system, is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. 
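The general linear wave equation discussed above can be solved numerically with central differences. A one-dimensional sketch with fixed (u = 0) ends; the grid size and time step are illustrative choices satisfying the usual stability condition c·dt/dx ≤ 1:

```python
import math

def wave_step(u_prev, u, c, dx, dt):
    """One leapfrog step of the 1D wave equation d^2u/dt^2 = c^2 d^2u/dx^2
    with fixed (u = 0) ends, using central differences in space and time."""
    r2 = (c * dt / dx) ** 2
    nxt = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        nxt[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u, nxt

# Standing-wave mode sin(pi x) on [0, 1], released from rest:
# the exact solution is sin(pi x) cos(pi c t).
n, c = 101, 1.0
dx = 1.0 / (n - 1)
dt = 0.5 * dx / c                       # CFL number 0.5
u0 = [math.sin(math.pi * i * dx) for i in range(n)]
# First step uses du/dt = 0 at t = 0:
u1 = [u0[i] + 0.5 * (c * dt / dx) ** 2 * (u0[i + 1] - 2 * u0[i] + u0[i - 1])
      if 0 < i < n - 1 else 0.0 for i in range(n)]
u_prev, u = u0, u1
for _ in range(200):
    u_prev, u = wave_step(u_prev, u, c, dx, dt)
```

The fixed ends realize the boundary conditions that select standing rather than traveling waves, as described in the text.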
The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that the Planck constant becomes zero. To compare to measurements, operators for observables must be applied to the quantum wavefunction according to the experiment performed, leading to either wave-like or particle-like results. Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance: the Heisenberg equation of motion resembles the time evolution of classical observables as functions of position, momentum, and time, if one replaces dynamical observables by their quantum operators and the classical Poisson bracket by the commutator; the phase space formulation closely follows classical Hamiltonian mechanics, placing position and momentum on equal footing; the Feynman path integral formulation extends the principle of least action to quantum mechanics and field theory, placing emphasis on the use of Lagrangians rather than Hamiltonians.
https://en.wikipedia.org/wiki/Kinematics
Kinematics
Kinematics is a subfield of physics and mathematics, developed in classical mechanics, that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of both applied and pure mathematics since it can be studied without considering the mass of a body or the forces acting upon it. A kinematics problem begins by describing the geometry of the system and declaring the initial conditions of any known values of position, velocity and/or acceleration of points within the system. Then, using arguments from geometry, the position, velocity and acceleration of any unknown parts of the system can be determined. The study of how forces act on bodies falls within kinetics, not kinematics. For further details, see analytical dynamics. Kinematics is used in astrophysics to describe the motion of celestial bodies and collections of such bodies. In mechanical engineering, robotics, and biomechanics, kinematics is used to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the human skeleton. Geometric transformations, also called rigid transformations, are used to describe the movement of components in a mechanical system, simplifying the derivation of the equations of motion. They are also central to dynamic analysis. Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism and, working in reverse, using kinematic synthesis to design a mechanism for a desired range of motion. In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system or mechanism. Etymology The term kinematic is the English version of A.M. 
Ampère's cinématique, which he constructed from the Greek kinema ("movement, motion"), itself derived from kinein ("to move"). Kinematic and cinématique are related to the French word cinéma, but neither are directly derived from it. However, they do share a root word in common, as cinéma came from the shortened form of cinématographe, "motion picture projector and camera", once again from the Greek word for movement and from the Greek grapho ("to write"). Kinematics of a particle trajectory in a non-rotating frame of reference Particle kinematics is the study of the trajectory of particles. The position of a particle is defined as the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south from your home, where the coordinate frame is centered at your home, such that east is in the direction of the x-axis and north is in the direction of the y-axis, then the coordinate vector to the base of the tower is r = (0 m, −50 m, 0 m). If the tower is 50 m high, and this height is measured along the z-axis, then the coordinate vector to the top of the tower is r = (0 m, −50 m, 50 m). In the most general case, a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to move within a plane, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without being described with respect to a reference frame. The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position vector can be expressed as where , , and are the Cartesian coordinates and , and are the unit vectors along the , , and coordinate axes, respectively. The magnitude of the position vector gives the distance between the point and the origin. 
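The tower example above can be written out directly: with the coordinate vector to the top of the tower r = (0 m, −50 m, 50 m), the distance from the origin is its magnitude.

```python
import math

# Coordinate vector to the top of the tower from the text:
# 50 m south (negative y) and 50 m high (positive z).
r = (0.0, -50.0, 50.0)

# The magnitude of the position vector is the distance from the origin:
distance = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)   # 50 * sqrt(2) m

# Direction cosines give a quantitative measure of direction:
cosines = tuple(c / distance for c in r)
```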
The direction cosines of the position vector provide a quantitative measure of direction. In general, an object's position vector will depend on the frame of reference; different frames will lead to different values for the position vector. The trajectory of a particle is a vector function of time, , which defines the curve traced by the moving particle, given by where , , and describe each coordinate of the particle's position as a function of time. Velocity and speed The velocity of a particle is a vector quantity that describes the direction as well as the magnitude of motion of the particle. More mathematically, the rate of change of the position vector of a point with respect to time is the velocity of the point. Consider the ratio formed by dividing the difference of two positions of a particle (displacement) by the time interval. This ratio is called the average velocity over that time interval and is defined aswhere is the displacement vector during the time interval . In the limit that the time interval approaches zero, the average velocity approaches the instantaneous velocity, defined as the time derivative of the position vector, Thus, a particle's velocity is the time rate of change of its position. Furthermore, this velocity is tangent to the particle's trajectory at every position along its path. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants. The speed of an object is the magnitude of its velocity. It is a scalar quantity: where is the arc-length measured along the trajectory of the particle. This arc-length must always increase as the particle moves. Hence, is non-negative, which implies that speed is also non-negative. Acceleration The velocity vector can change in magnitude and in direction or both at once. 
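The limiting process that takes the average velocity to the instantaneous velocity can be illustrated numerically; the trajectory below is an assumed example, not from the text:

```python
def position(t):
    """Illustrative planar trajectory r(t) = (t^2, 3 t) -- an assumption."""
    return (t**2, 3.0 * t)

def average_velocity(t, dt):
    """Displacement over a time interval, divided by the interval."""
    r1, r2 = position(t), position(t + dt)
    return ((r2[0] - r1[0]) / dt, (r2[1] - r1[1]) / dt)

# As dt -> 0 the average velocity approaches the instantaneous
# velocity dr/dt = (2 t, 3); at t = 1 that is (2, 3).
v = average_velocity(1.0, 1e-6)
speed = (v[0]**2 + v[1]**2) ** 0.5   # speed is the magnitude of velocity
```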
Hence, the acceleration accounts for both the rate of change of the magnitude of the velocity vector and the rate of change of direction of that vector. The same reasoning used with respect to the position of a particle to define velocity, can be applied to the velocity to define acceleration. The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio. where Δv is the average velocity and Δt is the time interval. The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative, Alternatively, Thus, acceleration is the first derivative of the velocity vector and the second derivative of the position vector of that particle. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants. The magnitude of the acceleration of an object is the magnitude |a| of its acceleration vector. It is a scalar quantity: Relative position vector A relative position vector is a vector that defines the position of one point relative to another. It is the difference in position of the two points. The position of one point A relative to another point B is simply the difference between their positions which is the difference between the components of their position vectors. If point A has position components and point B has position components then the position of point A relative to point B is the difference between their components: Relative velocity The velocity of one point relative to another is simply the difference between their velocities which is the difference between the components of their velocities. 
If point A has velocity components and point B has velocity components then the velocity of point A relative to point B is the difference between their components: Alternatively, this same result could be obtained by computing the time derivative of the relative position vector rB/A. Relative acceleration The acceleration of one point C relative to another point B is simply the difference between their accelerations. which is the difference between the components of their accelerations. If point C has acceleration components and point B has acceleration components then the acceleration of point C relative to point B is the difference between their components: Alternatively, this same result could be obtained by computing the second time derivative of the relative position vector rB/A. Assuming that the initial conditions of the position, , and velocity at time are known, the first integration yields the velocity of the particle as a function of time. A second integration yields its path (trajectory), Additional relations between displacement, velocity, acceleration, and time can be derived. Since the acceleration is constant, can be substituted into the above equation to give: A relationship between velocity, position and acceleration without explicit time dependence can be had by solving the average acceleration for time and substituting and simplifying where denotes the dot product, which is appropriate as the products are scalars rather than vectors. 
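Relative position, velocity, and acceleration are all component-wise differences, as a short sketch makes explicit (the coordinates are illustrative):

```python
def relative(a, b):
    """Component-wise difference: the kinematic state of A relative to B."""
    return tuple(x - y for x, y in zip(a, b))

# Positions and velocities of two points A and B (illustrative numbers):
rA, rB = (3.0, 1.0, 0.0), (1.0, -2.0, 0.0)
vA, vB = (2.0, 0.0, 0.0), (0.5, 1.0, 0.0)

r_AB = relative(rA, rB)   # position of A relative to B
v_AB = relative(vA, vB)   # velocity of A relative to B: time derivative of r_AB
```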
The dot product can be replaced by the cosine of the angle between the vectors (see Geometric interpretation of the dot product for more details) and the vectors by their magnitudes, in which case: In the case of the acceleration always being in the direction of the motion, the angle between the vectors is 0, so , and This can be simplified using the notation for the magnitudes of the vectors where can be any curvilinear path taken as the constant tangential acceleration is applied along that path, so This reduces the parametric equations of motion of the particle to a Cartesian relationship of speed versus position. This relation is useful when time is unknown. We also know that the displacement is the area under a velocity–time graph, which can be found by adding the top area and the bottom area. The bottom area is a rectangle, and the area of a rectangle is the product of its width and its height; here the width is the time interval and the height is the initial velocity, so the bottom area is . The top area is a triangle, and the area of a triangle is half the product of its base and its height; here the base is the time interval and the height is the change in velocity, so the top area is . Adding the two areas results in the equation . This equation is applicable when the final velocity is unknown. Particle trajectories in cylindrical-polar coordinates It is often convenient to formulate the trajectory of a particle r(t) = (x(t), y(t), z(t)) using polar coordinates in the X–Y plane. In this case, its velocity and acceleration take a convenient form. Recall that the trajectory of a particle P is defined by its coordinate vector r measured in a fixed reference frame F. As the particle moves, its coordinate vector r(t) traces its trajectory, which is a curve in space, given by: where x̂, ŷ, and ẑ are the unit vectors along the x, y and z axes of the reference frame F, respectively. 
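The rectangle-plus-triangle decomposition of the velocity–time graph described above can be written out directly (the numbers are illustrative):

```python
def displacement_from_graph(u, a, t):
    """Area under the velocity-time graph for constant acceleration:
    a rectangle of height u plus a triangle of height (v - u) = a t."""
    rectangle = u * t              # bottom area: width t, height u
    triangle = 0.5 * t * (a * t)   # top area: base t, height a t
    return rectangle + triangle

s = displacement_from_graph(3.0, 2.0, 4.0)
# The two areas together reproduce s = u t + (1/2) a t^2:
assert s == 3.0 * 4.0 + 0.5 * 2.0 * 4.0**2
```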
Consider a particle P that moves only on the surface of a circular cylinder r(t) = constant, it is possible to align the z axis of the fixed frame F with the axis of the cylinder. Then, the angle θ around this axis in the x–y plane can be used to define the trajectory as, where the constant distance from the center is denoted as r, and θ(t) is a function of time. The cylindrical coordinates for r(t) can be simplified by introducing the radial and tangential unit vectors, and their time derivatives from elementary calculus: Using this notation, r(t) takes the form, In general, the trajectory r(t) is not constrained to lie on a circular cylinder, so the radius R varies with time and the trajectory of the particle in cylindrical-polar coordinates becomes: Where r, θ, and z might be continuously differentiable functions of time and the function notation is dropped for simplicity. The velocity vector vP is the time derivative of the trajectory r(t), which yields: Similarly, the acceleration aP, which is the time derivative of the velocity vP, is given by: The term acts toward the center of curvature of the path at that point on the path, is commonly called the centripetal acceleration. The term is called the Coriolis acceleration. Constant radius If the trajectory of the particle is constrained to lie on a cylinder, then the radius r is constant and the velocity and acceleration vectors simplify. The velocity of vP is the time derivative of the trajectory r(t), Planar circular trajectories A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the z axis: where r and z0 are constants. In this case, the velocity vP is given by: where is the angular velocity of the unit vector around the z axis of the cylinder. The acceleration aP of the particle P is now given by: The components are called, respectively, the radial and tangential components of acceleration. 
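For the planar circular trajectory above, the radial (centripetal) component of acceleration has magnitude rω² and points toward the center. This can be checked by differentiating the trajectory numerically (the radius, angular rate, and step size are illustrative):

```python
import math

def circular_position(R, omega, t):
    """Planar circular trajectory with constant radius R and angular velocity omega."""
    return (R * math.cos(omega * t), R * math.sin(omega * t))

def second_derivative(f, t, h=1e-4):
    """Central-difference second derivative of a 2D vector-valued function."""
    a, b, c = f(t - h), f(t), f(t + h)
    return tuple((a[i] - 2 * b[i] + c[i]) / h**2 for i in range(2))

R, omega, t = 2.0, 3.0, 0.7   # illustrative values
r = circular_position(R, omega, t)
acc = second_derivative(lambda s: circular_position(R, omega, s), t)

# Centripetal acceleration: a = -omega^2 r, of magnitude R omega^2.
mag = math.hypot(acc[0], acc[1])
```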
The notation for angular velocity and angular acceleration is often defined as so the radial and tangential acceleration components for circular trajectories are also written as Point trajectories in a body moving in the plane The movement of components of a mechanical system are analyzed by attaching a reference frame to each part and determining how the various reference frames move relative to each other. If the structural stiffness of the parts are sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This reduces the description of the motion of the various parts of a complicated mechanical system to a problem of describing the geometry of each part and geometric association of each part relative to other parts. Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways—more technically, it is the study of invariants under a set of transformations. These transformations can cause the displacement of the triangle in the plane, while leaving the vertex angle and the distances between vertices unchanged. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry. The coordinates of points in a plane are two-dimensional vectors in R2 (two dimensional space). Rigid transformations are those that preserve the distance between any two points. The set of rigid transformations in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n). Displacements and motion The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other. The rigid transformation, or displacement, of M relative to F defines the relative position of the two components. 
A displacement consists of the combination of a rotation and a translation. The set of all displacements of M relative to F is called the configuration space of M. A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of M relative to F. The motion of a body consists of a continuous set of rotations and translations. Matrix representation The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3×3 matrix known as a homogeneous transform. The 3×3 homogeneous transform is constructed from a 2×2 rotation matrix A(φ) and the 2×1 translation vector d = (dx, dy), as: These homogeneous transforms perform rigid transformations on the points in the plane z = 1, that is, on points with coordinates r = (x, y, 1). In particular, let r define the coordinates of points in a reference frame M coincident with a fixed frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M are given by: Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered a subset of R3, translations become affine linear transformations. Pure translation If a rigid body moves so that its reference frame M does not rotate (θ = 0) relative to the fixed frame F, the motion is called pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d(t) of the origin of M, that is: Thus, for bodies in pure translation, the velocity and acceleration of every point P in the body are given by: where the dot denotes the derivative with respect to time and vO and aO are the velocity and acceleration, respectively, of the origin of the moving frame M. 
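The 3×3 homogeneous transform described above, built from a 2×2 rotation A(φ) and a translation d = (dx, dy) acting on points r = (x, y, 1), can be sketched directly (the rotation angle and translation are illustrative):

```python
import math

def homogeneous_transform(phi, dx, dy):
    """3x3 homogeneous transform: 2x2 rotation A(phi) plus translation d = (dx, dy),
    acting on points of the plane z = 1 written as r = (x, y, 1)."""
    return [[math.cos(phi), -math.sin(phi), dx],
            [math.sin(phi),  math.cos(phi), dy],
            [0.0,            0.0,           1.0]]

def apply(T, r):
    """Matrix-vector product T r."""
    return tuple(sum(T[i][j] * r[j] for j in range(3)) for i in range(3))

# Rotate by 90 degrees about the origin, then translate by (1, 0):
T = homogeneous_transform(math.pi / 2, 1.0, 0.0)
p = apply(T, (1.0, 0.0, 1.0))   # (1, 0) rotates to (0, 1), then shifts to (1, 1)
```

Embedding the plane as z = 1 is exactly the projective-geometry device mentioned in the text: it makes the (non-linear) translation part of a single linear map.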
Recall the coordinate vector p in M is constant, so its derivative is zero. Rotation of a body around a fixed axis Rotational or angular kinematics is the description of the rotation of an object. In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience. Position This allows the description of a rotation as the angular position of a planar reference frame M relative to a fixed F about this shared z-axis. Coordinates p = (x, y) in M are related to coordinates P = (X, Y) in F by the matrix equation: P(t) = [A(t)]p, where [A(t)] = [[cos θ(t), −sin θ(t)], [sin θ(t), cos θ(t)]] is the rotation matrix that defines the angular position of M relative to F as a function of time. Velocity If the point p does not move in M, its velocity in F is given by vP = dP/dt = [dA/dt]p. It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t), vP = [dA/dt][A(t)]⁻¹P(t) = [Ω]P(t), where the matrix [Ω] = [[0, −ω], [ω, 0]] is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative of the angle θ, that is: ω = dθ/dt. Acceleration The acceleration of P(t) in F is obtained as the time derivative of the velocity, aP = [dΩ/dt]P(t) + [Ω](dP/dt), which becomes aP = [dΩ/dt]P(t) + [Ω][Ω]P(t), where [dΩ/dt] = [[0, −α], [α, 0]] is the angular acceleration matrix of M on F, and α = d²θ/dt². The description of rotation then involves these three quantities: Angular position: the oriented distance from a selected origin on the rotational axis to a point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or, equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule).
Angular velocity: the angular velocity ω is the rate at which the angular position θ changes with respect to time t: ω = dθ/dt. The angular velocity is represented in Figure 1 by a vector Ω pointing along the axis of rotation with magnitude ω and sense determined by the direction of rotation as given by the right-hand rule. Angular acceleration: the magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t: α = dω/dt. The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges: ωf = ωi + αt, θf = θi + ωit + ½αt², θf = θi + ½(ωf + ωi)t, and ωf² = ωi² + 2α(θf − θi). Here θi and θf are, respectively, the initial and final angular positions, ωi and ωf are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector. Point trajectories in a body moving in three dimensions Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three-dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations. Position In order to define these formulas, the movement of a component B of a mechanical system is defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous transformation [T(t)] = [A(t), d(t)]. If p is the coordinates of a point P in B measured in the moving reference frame M, then the trajectory of this point traced in F is given by: P(t) = [T(t)]p = [A(t)]p + d(t). This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context.
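The trajectory formula P(t) = [A(t)]p + d(t), and the fact that a rotation matrix's transpose is its inverse, can be checked numerically; a sketch in Python with my own helper names:

```python
import math

def A(theta):  # rotation about the z-axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

p = [1.0, 0.5, 2.0]               # coordinates fixed in the moving frame M
theta, d = 0.9, [3.0, -1.0, 4.0]  # pose of M relative to F at some instant

# Trajectory point in F: P = A(theta) p + d
P = [a + b for a, b in zip(matvec(A(theta), p), d)]
# Inversion p = A^T (P - d), using A^T A = I for a rotation matrix
p_back = matvec(transpose(A(theta)), [a - b for a, b in zip(P, d)])
print(p_back)  # ≈ [1.0, 0.5, 2.0], recovering p
```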
This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as: p = [A(t)]ᵀ(P(t) − d(t)). This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is: [A(t)]ᵀ[A(t)] = I. Velocity The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this position vector, vP = [dA/dt]p + dd/dt (because p is constant, its derivative is zero). This formula can be modified to obtain the velocity of P by operating on its trajectory P(t) measured in the fixed frame F. Substituting the inverse transform for p into the velocity equation yields: vP = [dA/dt][A(t)]ᵀ(P(t) − d(t)) + dd/dt = [S]P(t). The matrix [S] is given by: [S] = [[Ω, vO − [Ω]d], [0, 0]], operating on the homogeneous coordinates P = (X, Y, Z, 1), where [Ω] = [dA/dt][A(t)]ᵀ is the angular velocity matrix. Multiplying by the operator [S], the formula for the velocity vP takes the form: vP = ω × RP/O + vO, where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω]; the vector RP/O = P − d(t) is the position of P relative to the origin O of the moving frame M; and vO = dd/dt is the velocity of the origin O. Acceleration The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity vector: aP = d(vP)/dt. This equation can be expanded firstly by computing d(ω × RP/O)/dt = α × RP/O + ω × (ω × RP/O) and d(vO)/dt = aO. The formula for the acceleration aP can now be obtained as: aP = α × RP/O + ω × (ω × RP/O) + aO, where α is the angular acceleration vector obtained from the derivative of the angular velocity vector; RP/O is the relative position vector (the position of P relative to the origin O of the moving frame M); and aO is the acceleration of the origin of the moving frame M. Kinematic constraints Kinematic constraints are constraints on the movement of components of a mechanical system.
Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. The following are some common examples. Kinematic coupling A kinematic coupling exactly constrains all 6 degrees of freedom. Rolling without slipping An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass: vG = ω × rG/A. For the case of an object that does not tip or turn, this reduces to v = rω. Inextensible cord This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero. A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord. An equilibrium problem (i.e. not kinematic) of this type is the catenary. Kinematic pairs Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs which were said to have line contact between the two links and lower pairs that have area contact between the links. J. Phillips shows that there are many ways to construct pairs that do not fit this simple classification.
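As a numeric sketch of the rolling-without-slipping condition described above (variable names are my own), the center-of-mass velocity equals the angular velocity crossed with the vector from the contact point to the center of mass:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# A wheel of radius 1 spinning at 2 rad/s about -z (clockwise seen from +z)
# rolls along the +x direction on the plane y = 0.
w = [0.0, 0.0, -2.0]   # angular velocity vector
r = [0.0, 1.0, 0.0]    # from the contact point up to the center of mass
v_cm = cross(w, r)
print(v_cm)  # ≈ (2, 0, 0): speed omega * R = 2, directed along +x
```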
Lower pair A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body and a corresponding point, line or plane in the fixed solid body. There are the following cases: A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body to maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom, which is pure rotation about the axis of the hinge. A prismatic joint, or slider, requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. This degree of freedom is the distance of the slide along the line. A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. The position of the moving body is defined by both the rotation about and slide along the axis. A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom. A planar joint requires that a plane in the moving body maintain contact with a plane in the fixed body. This joint has three degrees of freedom. Higher pairs Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body.
For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears is a cam joint. Kinematic chains Rigid bodies ("links") connected by kinematic pairs ("joints") are known as kinematic chains. Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as type synthesis in machine design. Examples The planar one degree-of-freedom linkages assembled from N links and j hinges or sliding joints are: N = 2, j = 1 : a two-bar linkage that is the lever; N = 4, j = 4 : the four-bar linkage; N = 6, j = 7 : a six-bar linkage. This must have two links ("ternary links") that support three joints. There are two distinct topologies that depend on how the two ternary linkages are connected. In the Watt topology, the two ternary links have a common joint; in the Stephenson topology, the two ternary links do not have a common joint and are connected by binary links. N = 8, j = 10 : eight-bar linkage with 16 different topologies; N = 10, j = 13 : ten-bar linkage with 230 different topologies; N = 12, j = 16 : twelve-bar linkage with 6,856 topologies. For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm", Mechanism and Machine Theory #41, pp. 1021–1030 (2006).
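The examples above are consistent with the planar form of the mobility formula (the Chebychev–Grübler–Kutzbach criterion), M = 3(N − 1) − 2j for N links joined by j one-degree-of-freedom hinges or sliders; a quick check:

```python
def planar_mobility(N, j):
    """Degree of freedom of a planar kinematic chain with one-DOF joints."""
    return 3 * (N - 1) - 2 * j

# Each linkage listed above has exactly one degree of freedom.
for N, j in [(2, 1), (4, 4), (6, 7), (8, 10), (10, 13), (12, 16)]:
    assert planar_mobility(N, j) == 1
```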
https://en.wikipedia.org/wiki/Angular%20displacement
Angular displacement
The angular displacement (symbol θ, ϑ, or φ) – also called angle of rotation, rotational displacement, or rotary displacement – of a physical body is the angle (in units of radians, degrees, turns, etc.) through which the body rotates (revolves or spins) around a centre or axis of rotation. Angular displacement may be signed, indicating the sense of rotation (e.g., clockwise); it may also be greater (in absolute value) than a full turn. Context When a body rotates about its axis, the motion cannot simply be analyzed as that of a particle, since in circular motion it undergoes a changing velocity and acceleration at every instant. When dealing with the rotation of a body, it becomes simpler to consider the body itself to be rigid. A body is generally considered rigid when the separations between all the particles remain constant throughout the body's motion, so for example parts of its mass are not flying off. In reality, all bodies are deformable; however, this effect is minimal and negligible. Example In the example illustrated to the right (or above in some mobile versions), a particle or body P is at a fixed distance r from the origin, O, rotating counterclockwise. It then becomes convenient to represent the position of particle P in terms of its polar coordinates (r, θ). In this particular example, the value of θ is changing, while the value of the radius remains the same. (In rectangular coordinates (x, y) both x and y vary with time.) As the particle moves along the circle, it travels an arc length s, which is related to the angular position through the relationship: s = rθ. Definition and units Angular displacement may be expressed in radians or degrees.
Using radians provides a very simple relationship between distance traveled around the circle (circular arc length s) and the distance r from the centre (radius): θ = s/r. For example, if a body rotates 360° around a circle of radius r, the angular displacement is given by the distance traveled around the circumference, which is 2πr, divided by the radius: θ = 2πr/r, which easily simplifies to: θ = 2π. Therefore, 1 revolution is 2π radians. The above definition is part of the International System of Quantities (ISQ), formalized in the international standard ISO 80000-3 (Space and time), and adopted in the International System of Units (SI). Angular displacement may be signed, indicating the sense of rotation (e.g., clockwise); it may also be greater (in absolute value) than a full turn. In the ISQ/SI, angular displacement is used to define the number of revolutions, N = θ/(2π rad), a ratio-type quantity of dimension one. In three dimensions In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of the Euler's rotation theorem; the magnitude specifies the rotation in radians about that axis (using the right-hand rule to determine direction). This entity is called an axis-angle. Despite having direction and magnitude, angular displacement is not a vector because it does not obey the commutative law for addition. Nevertheless, when dealing with infinitesimal rotations, second order infinitesimals can be discarded and in this case commutativity appears. Rotation matrices Several ways to describe rotations exist, like rotation matrices or Euler angles. See charts on SO(3) for others. Given that any frame in the space can be described by a rotation matrix, the displacement among them can also be described by a rotation matrix. Given two rotation matrices A0 and Af describing the two frames, the angular displacement matrix between them can be obtained as ΔA = Af A0⁻¹ = Af A0ᵀ.
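For planar rotations this can be sketched numerically (helper names are my own): the angular displacement matrix ΔA = Af A0ᵀ between frames at angles θ0 and θf is itself a rotation by θf − θ0:

```python
import math

def A(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

theta0, thetaf = 0.4, 1.1
dA = matmul(A(thetaf), transpose(A(theta0)))  # angular displacement matrix
expected = A(thetaf - theta0)
print(dA)  # equals the rotation by thetaf - theta0 = 0.7
```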
When this product is computed with a very small difference between the two frames, we obtain a matrix close to the identity. In the limit, we will have an infinitesimal rotation matrix. Infinitesimal rotation matrices
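To first order, such a near-identity displacement matrix is the identity plus dθ times a skew-symmetric generator; a small sketch (plain-Python matrices, names my own):

```python
import math

def A(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

dtheta = 1e-5
dA = matmul(A(0.7 + dtheta), transpose(A(0.7)))
# First-order approximation: I + dtheta * [[0, -1], [1, 0]]
approx = [[1.0, -dtheta], [dtheta, 1.0]]
err = max(abs(dA[i][j] - approx[i][j]) for i in range(2) for j in range(2))
print(err)  # on the order of dtheta**2, i.e. roughly 5e-11 here
```

Discarding the second-order error is exactly the step that makes infinitesimal rotations commute.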
https://en.wikipedia.org/wiki/Angular%20velocity
Angular velocity
In physics, angular velocity (symbol ω, the lowercase Greek letter omega), also known as the angular frequency vector, is a pseudovector representation of how the angular position or orientation of an object changes with time, i.e. how quickly an object rotates (spins or revolves) around an axis of rotation and how fast the axis itself changes direction. The magnitude of the pseudovector, ω, represents the angular speed (or angular frequency), the angular rate at which the object rotates (spins or revolves). The pseudovector direction is normal to the instantaneous plane of rotation or angular displacement. There are two types of angular velocity: Orbital angular velocity refers to how fast a point object revolves about a fixed origin, i.e. the time rate of change of its angular position relative to the origin. Spin angular velocity refers to how fast a rigid body rotates with respect to its center of rotation and is independent of the choice of origin, in contrast to orbital angular velocity. Angular velocity has dimension of angle per unit time; this is analogous to linear velocity, with angle replacing distance, with time in common. The SI unit of angular velocity is radians per second, although degrees per second (°/s) is also common. The radian is a dimensionless quantity, thus the SI units of angular velocity are dimensionally equivalent to reciprocal seconds, s−1, although rad/s is preferable to avoid confusion with rotation velocity in units of hertz (also equivalent to s−1). The sense of angular velocity is conventionally specified by the right-hand rule, implying clockwise rotations (as viewed on the plane of rotation); negation (multiplication by −1) leaves the magnitude unchanged but flips the axis in the opposite direction.
For example, a geostationary satellite, which completes one orbit per day above the equator (360 degrees per 24 hours), has angular velocity magnitude (angular speed) ω = 360°/24 h = 15°/h (or 2π rad/24 h ≈ 0.26 rad/h) and angular velocity direction (a unit vector) parallel to Earth's rotation axis (in the geocentric coordinate system). If angle is measured in radians, the linear velocity is the radius times the angular velocity, v = rω. With orbital radius 42,000 km from the Earth's center, the satellite's tangential speed through space is thus v = 42,000 km × 0.26/h ≈ 11,000 km/h. The angular velocity is positive since the satellite travels prograde with the Earth's rotation (the same direction as the rotation of Earth). Geosynchronous satellites actually orbit based on a sidereal day which is 23h 56m 04s, but 24h is assumed in this example for simplicity. Orbital angular velocity of a point particle Particle in two dimensions In the simplest case of circular motion at radius r, with position given by the angular displacement φ(t) from the x-axis, the orbital angular velocity is the rate of change of angle with respect to time: ω = dφ/dt. If φ is measured in radians, the arc-length from the positive x-axis around the circle to the particle is ℓ = rφ, and the linear velocity is v = dℓ/dt = rω, so that ω = v/r. In the general case of a particle moving in the plane, the orbital angular velocity is the rate at which the position vector relative to a chosen origin "sweeps out" angle. The diagram shows the position vector r from the origin to a particle P, with its polar coordinates (r, φ). (All variables are functions of time t.) The particle has linear velocity splitting as v = v‖ + v⊥, with the radial component v‖ parallel to the radius, and the cross-radial (or tangential) component v⊥ perpendicular to the radius. When there is no radial component, the particle moves around the origin in a circle; but when there is no cross-radial component, it moves in a straight line from the origin.
Since radial motion leaves the angle unchanged, only the cross-radial component of linear velocity contributes to angular velocity. The angular velocity ω is the rate of change of angular position with respect to time, which can be computed from the cross-radial velocity as: ω = dφ/dt = v⊥/r. Here the cross-radial speed v⊥ is the signed magnitude of the cross-radial velocity, positive for counter-clockwise motion, negative for clockwise. Taking polar coordinates for the linear velocity gives magnitude v (linear speed) and angle θ relative to the radius vector; in these terms, v⊥ = v sin(θ), so that ω = v sin(θ)/r. These formulas may be derived from r = (r cos(φ), r sin(φ)), with r a function of the distance to the origin with respect to time, and φ a function of the angle between the vector and the x axis. Then: v = dr/dt = ((dr/dt) cos(φ) − r (dφ/dt) sin(φ), (dr/dt) sin(φ) + r (dφ/dt) cos(φ)), which is equal to: (dr/dt) r̂ + r (dφ/dt) φ̂ (see Unit vector in cylindrical coordinates). Knowing v = dr/dt, we conclude that the radial component of the velocity is given by dr/dt, because r̂ is a radial unit vector; and the perpendicular component is given by r dφ/dt, because φ̂ is a perpendicular unit vector. In two dimensions, angular velocity is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the radius vector turns counter-clockwise, and negative if clockwise. Angular velocity then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes. Particle in three dimensions In three-dimensional space, we again have the position vector r of a moving particle. Here, orbital angular velocity is a pseudovector whose magnitude is the rate at which r sweeps out angle (in radians per unit of time), and whose direction is perpendicular to the instantaneous plane in which r sweeps out angle (i.e. the plane spanned by r and v). However, as there are two directions perpendicular to any plane, an additional condition is necessary to uniquely specify the direction of the angular velocity; conventionally, the right-hand rule is used.
Let the pseudovector u be the unit vector perpendicular to the plane spanned by r and v, so that the right-hand rule is satisfied (i.e. the instantaneous direction of angular displacement is counter-clockwise looking from the top of u). Taking polar coordinates in this plane, as in the two-dimensional case above, one may define the orbital angular velocity vector as: ω = ω u = (v sin(θ)/r) u, where θ is the angle between r and v. In terms of the cross product, this is: ω = (r × v)/r². From the above equation, one can recover the tangential velocity as: v⊥ = ω × r. Spin angular velocity of a rigid body or reference frame Given a rotating frame of three unit coordinate vectors, all the three must have the same angular speed at each instant. In such a frame, each vector may be considered as a moving particle with constant scalar radius. The rotating frame appears in the context of rigid bodies, and special tools have been developed for it: the spin angular velocity may be described as a vector or equivalently as a tensor. Consistent with the general definition, the spin angular velocity of a frame is defined as the orbital angular velocity of any of the three vectors (same for all) with respect to its own center of rotation. The addition of angular velocity vectors for frames is also defined by the usual vector addition (composition of linear movements), and can be useful to decompose the rotation as in a gimbal. All components of the vector can be calculated as derivatives of the parameters defining the moving frames (Euler angles or rotation matrices). As in the general case, addition is commutative: ω₁ + ω₂ = ω₂ + ω₁. By Euler's rotation theorem, any rotating frame possesses an instantaneous axis of rotation, which is the direction of the angular velocity vector, and the magnitude of the angular velocity is consistent with the two-dimensional case.
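The orbital angular velocity relations just given, ω = (r × v)/r² and v⊥ = ω × r, can be checked with a small numeric sketch (helper names are my own):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

r = [3.0, 0.0, 0.0]
v = [1.0, 2.0, 0.0]   # radial component 1, cross-radial component 2

# Orbital angular velocity vector: w = (r x v) / |r|^2
r2 = dot(r, r)
w = [c / r2 for c in cross(r, v)]
print(w)    # ≈ [0, 0, 2/3]: rotation about z at rate v_perp / r

# The tangential (cross-radial) part of v is recovered as w x r.
v_t = cross(w, r)
print(v_t)  # ≈ [0, 2, 0]: the radial part of v does not contribute
```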
If we choose a reference point r₀ fixed in the rigid body, the velocity of any point in the body is given by v = v₀ + ω × (r − r₀). Components from the basis vectors of a body-fixed frame Consider a rigid body rotating about a fixed point O. Construct a reference frame in the body consisting of an orthonormal set of vectors e₁, e₂, e₃ fixed to the body and with their common origin at O. The spin angular velocity vector of both frame and body about O is then ω = ((de₁/dt)·e₂) e₃ + ((de₂/dt)·e₃) e₁ + ((de₃/dt)·e₁) e₂, where deᵢ/dt is the time rate of change of the frame vector eᵢ due to the rotation. This formula is incompatible with the expression for orbital angular velocity ω = (r × v)/r², as that formula defines angular velocity for a single point about O, while the formula in this section applies to a frame or rigid body. In the case of a rigid body a single ω has to account for the motion of all particles in the body. Components from Euler angles The components of the spin angular velocity pseudovector were first calculated by Leonhard Euler using his Euler angles and the use of an intermediate frame: One axis of the reference frame (the precession axis) The line of nodes of the moving frame with respect to the reference frame (nutation axis) One axis of the moving frame (the intrinsic rotation axis) Euler proved that the projections of the angular velocity pseudovector on each of these three axes is the derivative of its associated angle (which is equivalent to decomposing the instantaneous rotation into three instantaneous Euler rotations). Therefore: ω = (dα/dt)u₁ + (dβ/dt)u₂ + (dγ/dt)u₃, where u₁, u₂, u₃ are unit vectors along the precession axis, the line of nodes and the intrinsic rotation axis, and α, β, γ are the corresponding Euler angles. This basis is not orthonormal and it is difficult to use, but now the velocity vector can be changed to the fixed frame or to the moving frame with just a change of bases. For example, changing to the mobile frame expresses ω in terms of the unit vectors fixed in the moving body. This example has been made using the Z-X-Z convention for Euler angles. Tensor
https://en.wikipedia.org/wiki/Angular%20acceleration
Angular acceleration
In physics, angular acceleration (symbol α, alpha) is the time rate of change of angular velocity. Following the two types of angular velocity, spin angular velocity and orbital angular velocity, the respective types of angular acceleration are: spin angular acceleration, involving a rigid body about an axis of rotation intersecting the body's centroid; and orbital angular acceleration, involving a point particle and an external axis. Angular acceleration has physical dimensions of angle per time squared, measured in SI units of radians per second squared (rad/s²). In two dimensions, angular acceleration is a pseudoscalar whose sign is taken to be positive if the angular speed increases counterclockwise or decreases clockwise, and is taken to be negative if the angular speed increases clockwise or decreases counterclockwise. In three dimensions, angular acceleration is a pseudovector. Orbital angular acceleration of a point particle Particle in two dimensions In two dimensions, the orbital angular acceleration is the rate at which the two-dimensional orbital angular velocity of the particle about the origin changes. The instantaneous angular velocity ω at any point in time is given by ω = v⊥/r, where r is the distance from the origin and v⊥ is the cross-radial component of the instantaneous velocity (i.e. the component perpendicular to the position vector), which by convention is positive for counter-clockwise motion and negative for clockwise motion. Therefore, the instantaneous angular acceleration α of the particle is given by α = d(v⊥/r)/dt. Expanding the right-hand-side using the product rule from differential calculus, this becomes α = (1/r)(dv⊥/dt) − (v⊥/r²)(dr/dt). In the special case where the particle undergoes circular motion about the origin, dv⊥/dt becomes just the tangential acceleration a⊥, and dr/dt vanishes (since the distance from the origin stays constant), so the above equation simplifies to α = a⊥/r. In two dimensions, angular acceleration is a number with plus or minus sign indicating orientation, but not pointing in a direction.
The sign is conventionally taken to be positive if the angular speed increases in the counter-clockwise direction or decreases in the clockwise direction, and the sign is taken negative if the angular speed increases in the clockwise direction or decreases in the counter-clockwise direction. Angular acceleration then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes. Particle in three dimensions In three dimensions, the orbital angular acceleration is the rate at which the three-dimensional orbital angular velocity vector changes with time. The instantaneous angular velocity vector at any point in time is given by ω = (r × v)/r², where r is the particle's position vector, r = |r| its distance from the origin, and v its velocity vector. Therefore, the orbital angular acceleration is the vector α defined by α = d((r × v)/r²)/dt. Expanding this derivative using the product rule for cross-products and the ordinary quotient rule, one gets: α = (r × a)/r² − (2/r³)(dr/dt)(r × v), where the term (v × v)/r² vanishes identically. Since r × v is just r²ω, the second term may be rewritten as −(2/r)(dr/dt)ω. In the case where the distance r of the particle from the origin does not change with time (which includes circular motion as a subcase), the second term vanishes and the above formula simplifies to α = (r × a)/r². From the above equation, one can recover the cross-radial acceleration in this special case as: a⊥ = α × r. Unlike in two dimensions, the angular acceleration in three dimensions need not be associated with a change in the angular speed ω: If the particle's position vector "twists" in space, changing its instantaneous plane of angular displacement, the change in the direction of the angular velocity will still produce a nonzero angular acceleration. This cannot happen if the position vector is restricted to a fixed plane, in which case ω has a fixed direction perpendicular to the plane.
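The constant-distance special case α = (r × a)/r² can be checked against circular motion with constant angular acceleration, using finite differences (a sketch; the names are my own):

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

R, alpha0 = 2.0, 0.3          # fixed radius, constant angular acceleration
theta = lambda t: 0.5 * alpha0 * t**2
pos = lambda t: [R * math.cos(theta(t)), R * math.sin(theta(t)), 0.0]

t, h = 1.7, 1e-5
# central differences for the acceleration of the trajectory
a = [(p2 - 2*p0 + p1) / h**2
     for p1, p0, p2 in zip(pos(t - h), pos(t), pos(t + h))]

alpha_vec = [c / R**2 for c in cross(pos(t), a)]
print(alpha_vec)  # ~ [0, 0, 0.3], i.e. alpha0 about the z-axis
```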
The angular acceleration vector is more properly called a pseudovector: It has three components which transform under rotations in the same way as the Cartesian coordinates of a point do, but which do not transform like Cartesian coordinates under reflections. Relation to torque The net torque on a point particle is defined to be the pseudovector τ = r × F, where F is the net force on the particle. Torque is the rotational analogue of force: it induces change in the rotational state of a system, just as force induces change in the translational state of a system. As force on a particle is connected to acceleration by the equation F = ma, one may write a similar equation connecting torque on a particle to angular acceleration, though this relation is necessarily more complicated. First, substituting F = ma into the above equation for torque, one gets τ = r × (ma) = m(r × a). From the previous section: r × a = r²α + 2r(dr/dt)ω, where α is orbital angular acceleration and ω is orbital angular velocity. Therefore: τ = mr²α + 2mr(dr/dt)ω. In the special case of constant distance r of the particle from the origin (dr/dt = 0), the second term in the above equation vanishes and the above equation simplifies to τ = mr²α, which can be interpreted as a "rotational analogue" to F = ma, where the quantity I = mr² (known as the moment of inertia of the particle) plays the role of the mass m. However, unlike F = ma, this equation does not apply to an arbitrary trajectory, only to a trajectory contained within a spherical shell about the origin.
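The special case τ = Iα with I = mr² can be illustrated for a particle held on a circle and pushed tangentially (a sketch; the variable names are my own):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

m, r_len, alpha = 0.5, 2.0, 3.0   # mass, fixed radius, angular acceleration about z
r = [r_len, 0.0, 0.0]             # particle position
# Tangential force producing the angular acceleration: F_t = m * r * alpha
F = [0.0, m * r_len * alpha, 0.0]

torque = cross(r, F)              # tau = r x F
I = m * r_len**2                  # moment of inertia of the point particle
print(torque[2], I * alpha)       # both equal m r^2 alpha = 6.0
```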
https://en.wikipedia.org/wiki/Denim
Denim
Denim is a sturdy cotton warp-faced textile in which the weft passes under two or more warp threads. This twill weave produces a diagonal ribbing that distinguishes it from cotton duck. Denim, as it is recognized today, was first produced in Nîmes, France. Denim is available in a range of colors, but the most common denim is indigo denim in which the warp thread is dyed while the weft thread is left white. As a result of the warp-faced twill weaving, one side of the textile is dominated by the blue warp threads, and the other side is dominated by the white weft threads. Jeans fabricated from this cloth are thus predominantly white on the inside. Denim is used to create a wide variety of garments, accessories, and furniture. Etymology Denim originated as a contraction of the French phrase ('serge from Nîmes'). History Denim has been used in the United States since the mid-19th century. Denim initially gained popularity in 1873 when Jacob W. Davis, a tailor from Nevada, manufactured the first pair of rivet-reinforced denim pants. The popularity of denim jeans outstripped the capacity of Davis's small shop, so he moved his production to the facilities of dry goods wholesaler Levi Strauss & Co., which had been supplying Davis with bolts of denim fabric. Throughout the 20th century, denim was used for durable uniforms like those issued to staff of the French national railways. In the post-war years, the Royal Air Force issued olive-drab denim coveralls (colloquially known as "denims") for dirty work. By the 1970s, denim jeans were such an integral part of youth culture that automobile manufacturers, beginning with American Motors Corporation, began offering denim-like interior finishes. (Because denim cannot pass fire resistance safety standards, indigo-colored spun nylon or vinyl was used, with contrast-stitching and copper rivets helping to sell the effect.) A Levi's-branded trim package debuted with AMC's 1973 model year.
Similar packages were available from Volkswagen from 1973 to 1975 (the "Jeans Beetle") and from Jeep from 1975 through 1977. Creating denim All denim is created through generally the same process: Cotton fiber is spun into yarn The warp yarn is dyed, while the weft is left white (usually) The yarns are woven on a shuttle loom or projectile loom The woven product is sanforized Yarn production Traditional denim yarn is composed entirely of cotton. Once cotton fibers are cleaned and combed into long, cohesive lengths of similar-length fiber, they are spun into yarn using an industrial machine. Throughout the creation of denim, washes, dyes, or treatments are used to change the appearance of denim products. Some yarns may substitute an elastic component such as spandex for up to 3% of the cotton, the woven form of which (typically called 'stretch denim') may have an elasticity of up to 15%. Dyeing Denim was originally dyed with indigo dye extracted from plants, often from the genus Indigofera. In South Asia, indigo dye was extracted from the dried and fermented leaves of Indigofera tinctoria; this is the plant that is now known as "true indigo" or "natural indigo". In Europe, the use of Isatis tinctoria, or woad, can be traced back to the 8th century BC, although it was eventually replaced by Indigofera tinctoria as the superior dye product. However, most denim today is dyed with synthetic indigo dye. In all cases, the yarn undergoes a repeated sequence of dipping and oxidation—the more dips, the stronger the color of the indigo. Before 1915, cotton yarns were dyed using a skein dyeing process, in which individual skeins of yarn were dipped into dye baths. Rope dyeing machines were developed in 1915, and slasher or sheet dyeing machines were developed in the 1970s. These methods involve a series of rollers that feed continuous yarns in and out of dye vats.
In rope dyeing, continuous yarns are gathered together into long ropes or groups of yarns – after these bundles are dyed, they must be re-beamed for weaving. In sheet dyeing, parallel yarns are laid out as a sheet in the same order in which they will be woven; because of this, uneven dye circulation in the bath can lead to side-to-side color variations in the woven cloth. Rope dyeing eliminates this possibility because color variations can be evenly distributed across the warp during beaming. Denim fabric dyeing is divided into two categories: indigo dyeing (indigo dye is a distinctive shade of blue) and sulfur dyeing (sulfur dyes are synthetic organic dyes formed by the sulfurization of organic intermediates containing nitro or amino groups). Indigo dyeing produces the traditional blue color or shades similar to it. Sulfur dyeing produces specialty black and other colors, such as red, pink, purple, grey, rust, mustard, and green. Weaving Most denim made today is made on a shuttleless loom that produces bolts of fabric or wider, but some denim is still woven on the traditional shuttle loom, which typically produces a bolt wide. Shuttle-loom-woven denim is usually recognizable by its selvedge (or selvage), the edge of a fabric created as a continuous cross-yarn (the weft) reverses direction at the edge side of the shuttle loom. The selvedge is traditionally accentuated with warp threads of one or more contrasting colors, which can serve as an identifying mark. Although quality denim can be made on either loom, selvedge denim has come to be associated with premium products, since production that showcases the selvedge requires greater care in assembly. The thickness of denim can vary greatly, with a yard of fabric weighing anywhere from , with being typical. 
Uses Denim is frequently used for a wide array of consumer products, including: Clothing Aprons Boots and athletic shoes Capri pants Cloth face mask Dresses Hats Jackets Jeans Jeggings Overalls Shirts Shorts Skirts Sneakers Suits Accessories Belts Handbags (purses) Tote bags Wallets Furniture Bean bag chairs Lampshades Upholstery Art Denim has been a medium for many artists. At least one artist, Ian Berry, uses old or recycled denim exclusively in crafting his portraits and other scenes. Worldwide market In 2020, the worldwide denim market equaled US$57.3 billion, with demand growing by 5.8% and supply growing by 8% annually. Over 50% of denim is produced in Asia, most of it in China, India, Turkey, Pakistan, and Bangladesh. Globally, the denim industry is expected to grow at a CAGR of over 4.8% from 2022 to 2026, with the market value expected to increase from $57.3 billion to $76.1 billion.
Technology
Fabrics and fibers
null
65980
https://en.wikipedia.org/wiki/Jaundice
Jaundice
Jaundice, also known as icterus, is a yellowish or greenish pigmentation of the skin and sclera due to high bilirubin levels. Jaundice in adults is typically a sign indicating the presence of underlying diseases involving abnormal heme metabolism, liver dysfunction, or biliary-tract obstruction. The prevalence of jaundice in adults is rare, while jaundice in babies is common, with an estimated 80% affected during their first week of life. The most commonly associated symptoms of jaundice are itchiness, pale feces, and dark urine. Normal levels of bilirubin in blood are below 1.0 mg/dl (17 μmol/L), while levels over 2–3 mg/dl (34–51 μmol/L) typically result in jaundice. High blood bilirubin is divided into two types: unconjugated and conjugated bilirubin. Causes of jaundice vary from relatively benign to potentially fatal. High unconjugated bilirubin may be due to excess red blood cell breakdown, large bruises, genetic conditions such as Gilbert's syndrome, not eating for a prolonged period of time, newborn jaundice, or thyroid problems. High conjugated bilirubin may be due to liver diseases such as cirrhosis or hepatitis, infections, medications, or blockage of the bile duct, due to factors including gallstones, cancer, or pancreatitis. Other conditions can also cause yellowish skin but are not jaundice, including carotenemia, which can develop from eating large amounts of foods containing carotene, or from medications such as rifampin. Treatment of jaundice is typically determined by the underlying cause. If a bile duct blockage is present, surgery is typically required; otherwise, management is medical. Medical management may involve treating infectious causes and stopping medication that could be contributing to the jaundice. Jaundice in newborns may be treated with phototherapy or exchange transfusion depending on age and prematurity when the bilirubin is greater than 4–21 mg/dl (68–365 μmol/L). 
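The unit conversion and thresholds quoted above can be sketched in a few lines of Python. This is purely an illustration of the arithmetic in the text, not clinical software; the factor 17.1 is the commonly used conversion constant between mg/dl and μmol/L of bilirubin.

```python
# Illustrative sketch only: serum bilirubin unit conversion and the
# visible-jaundice threshold described in the text.

MGDL_TO_UMOL = 17.1  # 1 mg/dl of bilirubin is approximately 17.1 umol/L

def mgdl_to_umol(mgdl: float) -> float:
    """Convert a bilirubin concentration from mg/dl to umol/L."""
    return mgdl * MGDL_TO_UMOL

def likely_jaundiced(total_bilirubin_mgdl: float) -> bool:
    # Levels over roughly 2-3 mg/dl (34-51 umol/L) typically produce
    # visible jaundice; 3 mg/dl is used here as a simple cutoff.
    return total_bilirubin_mgdl > 3.0

print(round(mgdl_to_umol(1.0)))   # normal upper limit, about 17 umol/L
print(likely_jaundiced(0.8))      # normal level -> False
print(likely_jaundiced(4.5))      # elevated level -> True
```

The single hard cutoff at 3.0 mg/dl is a simplification of the 2–3 mg/dl range given in the text.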
The itchiness may be helped by draining the gallbladder, ursodeoxycholic acid, or opioid antagonists such as naltrexone. The word jaundice is from the French jaunisse, meaning 'yellow disease'. Signs and symptoms The most common signs of jaundice in adults are a yellowish discoloration of the white area of the eye (sclera) and skin, with the presence of scleral icterus indicating a serum bilirubin of at least 3 mg/dl. Other common signs include dark urine (bilirubinuria) and pale (acholia) fatty stool (steatorrhea). Because bilirubin is a skin irritant, jaundice is commonly associated with severe itchiness. The conjunctiva of the eye has a particularly high affinity for bilirubin deposition due to its high elastin content. Slight increases in serum bilirubin can, therefore, be detected early on by observing the yellowing of the sclerae. Traditionally referred to as scleral icterus, this term is actually a misnomer, because bilirubin deposition technically occurs in the conjunctival membranes overlying the avascular sclera. Thus, the proper term for the yellowing of the "whites of the eyes" is conjunctival icterus. A rare sign of jaundice in childhood is the appearance of yellowish or greenish teeth. In developing children, hyperbilirubinemia can lead to yellow or green tooth discoloration as bilirubin deposits during tooth calcification. While this may occur in children with hyperbilirubinemia, tooth discoloration due to hyperbilirubinemia is not observed in individuals with adult-onset liver disease. Disorders associated with a rise in serum levels of conjugated bilirubin during early development can also cause dental hypoplasia. Causes Jaundice is a sign indicating the presence of an underlying disease involving abnormal bilirubin metabolism, liver dysfunction, or biliary-tract obstruction. In general, jaundice is present when blood levels of bilirubin exceed 3 mg/dl. Jaundice is classified into three categories, depending on which part of the physiological mechanism the pathology affects. 
The three categories are: Prehepatic causes Prehepatic jaundice is most commonly caused by a pathologically increased rate of red blood cell (erythrocyte) hemolysis. The increased breakdown of erythrocytes → increased unconjugated serum bilirubin → increased deposition of unconjugated bilirubin into mucosal tissue. These diseases may cause jaundice due to increased erythrocyte hemolysis: Sickle-cell anemia Spherocytosis Thalassemia Pyruvate kinase deficiency Glucose-6-phosphate dehydrogenase deficiency Microangiopathic hemolytic anemia Hemolytic–uremic syndrome Severe malaria (in endemic countries) Hepatic causes Hepatic jaundice is caused by abnormal liver metabolism of bilirubin. The major causes of hepatic jaundice are significant damage to hepatocytes of infectious, drug/medication-induced, or autoimmune etiology, or, less commonly, inheritable genetic diseases. The following is a partial list of hepatic causes of jaundice: Acute hepatitis Chronic hepatitis Hepatotoxicity Cirrhosis Drug-induced hepatitis Alcoholic liver disease Gilbert's syndrome (found in about 5% of the population, results in mild jaundice) Crigler–Najjar syndrome, type I Crigler–Najjar syndrome, type II Leptospirosis Posthepatic causes (Obstructive jaundice) Posthepatic jaundice (obstructive jaundice) is caused by a blockage of the bile ducts that transport bile containing conjugated bilirubin out of the liver for excretion. This is a list of conditions that can cause posthepatic jaundice: Choledocholithiasis (common bile duct gallstones). It is the most common cause of obstructive jaundice. 
Pancreatic cancer of the pancreatic head Biliary tract strictures Biliary atresia Primary biliary cholangitis Cholestasis of pregnancy Acute Pancreatitis Chronic Pancreatitis Pancreatic pseudocysts Mirizzi's syndrome Parasites ("liver flukes" of the Opisthorchiidae and Fasciolidae) Pathophysiology Jaundice is typically caused by an underlying pathological process that occurs at some point along the normal physiological pathway of heme metabolism. A deeper understanding of the anatomical flow of normal heme metabolism is essential to appreciate the importance of prehepatic, hepatic, and posthepatic categories. Thus, an anatomical approach to heme metabolism precedes a discussion of the pathophysiology of jaundice. Normal heme metabolism Prehepatic metabolism When red blood cells complete their lifespan of about 120 days, or if they are damaged, they rupture as they pass through the reticuloendothelial system, and cell contents including hemoglobin are released into circulation. Macrophages phagocytose free hemoglobin and split it into heme and globin. Two reactions then take place with the heme molecule. The first oxidation reaction is catalyzed by the microsomal enzyme heme oxygenase and results in biliverdin (green color pigment), iron, and carbon monoxide. The next step is the reduction of biliverdin to a yellow color tetrapyrrole pigment called bilirubin by cytosolic enzyme biliverdin reductase. This bilirubin is "unconjugated", "free", or "indirect" bilirubin. Around 4 mg of bilirubin per kg of blood are produced each day. The majority of this bilirubin comes from the breakdown of heme from expired red blood cells in the process just described. Roughly 20% comes from other heme sources, however, including ineffective erythropoiesis, and the breakdown of other heme-containing proteins, such as muscle myoglobin and cytochromes. The unconjugated bilirubin then travels to the liver through the bloodstream. 
Because this bilirubin is not soluble, it is transported through the blood bound to serum albumin. Hepatic metabolism Once unconjugated bilirubin arrives in the liver, liver enzyme UDP-glucuronyl transferase conjugates bilirubin + glucuronic acid → bilirubin diglucuronide (conjugated bilirubin). Bilirubin that has been conjugated by the liver is water-soluble and excreted into the gallbladder. Posthepatic metabolism Bilirubin enters the intestinal tract via bile. In the intestinal tract, bilirubin is converted into urobilinogen by symbiotic intestinal bacteria. Most urobilinogen is converted into stercobilinogen and further oxidized into stercobilin. Stercobilin is excreted via feces, giving stool its characteristic brown coloration. A small portion of urobilinogen is reabsorbed back into the gastrointestinal cells. Most reabsorbed urobilinogen undergoes hepatobiliary recirculation. A smaller portion of reabsorbed urobilinogen is filtered into the kidneys. In the urine, urobilinogen is converted to urobilin, which gives urine its characteristic yellow color. Abnormalities in heme metabolism and excretion One way to understand jaundice pathophysiology is to organize it into disorders that cause increased bilirubin production (abnormal heme metabolism) or decreased bilirubin excretion (abnormal heme excretion). Prehepatic pathophysiology Prehepatic jaundice results from a pathological increase in bilirubin production: an increased rate of erythrocyte hemolysis causes increased bilirubin production, leading to increased deposition of bilirubin in mucosal tissues and the appearance of a yellow hue. Hepatic pathophysiology Hepatic jaundice (hepatocellular jaundice) is due to significant disruption of liver function, leading to hepatic cell death and necrosis and impaired bilirubin transport across hepatocytes. 
Bilirubin transport across hepatocytes may be impaired at any point between hepatocellular uptake of unconjugated bilirubin and hepatocellular transport of conjugated bilirubin into the gallbladder. In addition, subsequent cellular edema due to inflammation causes mechanical obstruction of the intrahepatic biliary tract. Interferences in all three major steps of bilirubin metabolism (uptake, conjugation, and excretion) usually occur in hepatocellular jaundice. Thus, an abnormal rise in both unconjugated and conjugated bilirubin (formerly called cholemia) will be present. Because excretion (the rate-limiting step) is usually impaired to the greatest extent, conjugated hyperbilirubinemia predominates. The unconjugated bilirubin still enters the liver cells and becomes conjugated in the usual way. This conjugated bilirubin is then returned to the blood, probably by rupture of the congested bile canaliculi and direct emptying of the bile into the lymph exiting the liver. Thus, most of the bilirubin in the plasma becomes the conjugated type rather than the unconjugated type, and this conjugated bilirubin, which did not go to the intestine to become urobilinogen, gives the urine a dark color. Posthepatic pathophysiology Posthepatic jaundice, also called obstructive jaundice, is due to the blockage of bile excretion from the biliary tract, which leads to increased conjugated bilirubin and bile salts in the biliary tract. In complete obstruction of the bile duct, conjugated bilirubin cannot access the intestinal tract, disrupting further bilirubin conversion to urobilinogen; therefore, no stercobilin or urobilin is produced. In obstructive jaundice, excess conjugated bilirubin is filtered into the urine without urobilinogen. Conjugated bilirubin in urine (bilirubinuria) gives urine an abnormally dark brown color. 
Thus, the presence of pale stool (stercobilin absent from feces) and dark urine (conjugated bilirubin present in urine) suggests an obstructive cause of jaundice. Because these associated signs are also positive in many hepatic jaundice conditions, they are not a reliable clinical feature for distinguishing obstructive from hepatocellular causes of jaundice. Diagnosis Most people presenting with jaundice have various predictable patterns of liver panel abnormalities, though significant variation does exist. The typical liver panel includes blood levels of enzymes found primarily from the liver, such as the aminotransferases (ALT, AST) and alkaline phosphatase (ALP); bilirubin (which causes the jaundice); and protein levels, specifically total protein and albumin. Other primary lab tests for liver function include gamma-glutamyl transpeptidase (GGT) and prothrombin time (PT). No single test can differentiate between the various classifications of jaundice. A combination of liver function tests and other physical examination findings is essential to arrive at a diagnosis. Laboratory tests Some bone and heart disorders can lead to an increase in ALP and the aminotransferases, so the first step in differentiating these from liver problems is to compare the levels of GGT, which are only elevated in liver-specific conditions. The second step is distinguishing between biliary (cholestatic) and hepatic causes of the jaundice and altered laboratory results. ALP and GGT levels typically rise with one pattern, while aspartate aminotransferase (AST) and alanine aminotransferase (ALT) rise in a separate pattern. If the ALP (10–45 IU/L) and GGT (18–85 IU/L) levels rise proportionately as high as the AST (12–38 IU/L) and ALT (10–45 IU/L) levels, this indicates a cholestatic problem. If the AST and ALT rise is significantly higher than the ALP and GGT rise, though, this indicates a liver problem. 
Finally, to distinguish among hepatic causes of jaundice, comparing the levels of AST and ALT can prove useful. AST levels typically are higher than ALT. This remains the case in most liver disorders except for hepatitis (viral or hepatotoxic). Alcoholic liver damage may show fairly normal ALT levels, with AST 10 times higher than ALT. If ALT is higher than AST, however, this is indicative of hepatitis. Levels of ALT and AST are not well correlated to the extent of liver damage, although rapid drops in these levels from very high levels can indicate severe necrosis. Low levels of albumin tend to indicate a chronic condition, while the level is normal in hepatitis and cholestasis. Laboratory results for liver panels are frequently compared by the magnitude of their differences, not the pure number, as well as by their ratios. The AST:ALT ratio can be a good indicator of whether the disorder is alcoholic liver damage (above 10), some other form of liver damage (above 1), or hepatitis (less than 1). Bilirubin levels greater than 10 times normal could indicate neoplastic or intrahepatic cholestasis. Levels lower than this tend to indicate hepatocellular causes. AST levels greater than 15 times normal tend to indicate acute hepatocellular damage. Levels less than this tend to indicate obstructive causes. ALP levels greater than 5 times normal tend to indicate obstruction, while levels greater than 10 times normal can indicate drug (toxin) induced cholestatic hepatitis or cytomegalovirus infection. Both of these conditions can also have ALT and AST greater than 20 times normal. GGT levels greater than 10 times normal typically indicate cholestasis. Levels 5–10 times normal tend to indicate viral hepatitis. Levels less than 5 times normal tend to indicate drug toxicity. Acute hepatitis typically has ALT and AST levels rising 20–30 times normal (above 1000) and may remain significantly elevated for several weeks. 
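The AST:ALT ratio heuristics above can be written out as a small decision rule. The following Python sketch is a toy encoding of exactly the thresholds stated in the text (above 10, above 1, below 1); it is an illustration of the reasoning, not a diagnostic tool.

```python
# Toy encoding of the AST:ALT ratio heuristics described in the text.
# Not for clinical use.

def ast_alt_pattern(ast: float, alt: float) -> str:
    """Map an AST:ALT ratio to the interpretation given in the text."""
    ratio = ast / alt
    if ratio > 10:
        return "suggests alcoholic liver damage"
    if ratio > 1:
        return "suggests some other form of liver damage"
    return "suggests hepatitis"

print(ast_alt_pattern(600, 50))   # ratio 12 -> alcoholic liver damage
print(ast_alt_pattern(80, 40))    # ratio 2 -> other liver damage
print(ast_alt_pattern(400, 900))  # ratio < 1 -> hepatitis
```

A real interpretation would combine this ratio with the ALP, GGT, bilirubin, and albumin patterns described in the surrounding paragraphs.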
Acetaminophen toxicity can result in ALT and AST levels greater than 50 times normal. Laboratory findings depend on the cause of jaundice: Urine: conjugated bilirubin present, urobilinogen > 2 units but variable (except in children) Plasma proteins show characteristic changes. Plasma albumin level is low, but plasma globulins are raised due to an increased formation of antibodies. Unconjugated bilirubin is hydrophobic, so it cannot be excreted in urine. Thus, the finding of increased urobilinogen in the urine without the presence of bilirubin in the urine (due to its unconjugated state) suggests hemolytic jaundice as the underlying disease process. Urobilinogen will be greater than 2 units, as hemolytic anemia causes increased heme metabolism; one exception being the case of infants, where the gut flora has not developed. Conversely, conjugated bilirubin is hydrophilic and thus can be detected as present in the urine—bilirubinuria—in contrast to unconjugated bilirubin, which is absent in the urine. Imaging Medical imaging such as ultrasound, CT scan, and HIDA scan are useful for detecting bile-duct blockage. Differential diagnosis Yellow discoloration of the skin, especially on the palms and the soles, but not of the sclera or inside the mouth, is often due to carotenemia—a harmless condition. Yellow discoloration of the skin can also rarely occur with hypercupremia, whether from Wilson's disease or from another metabolic derangement. Similarly, a golden-brown ring at the edges of the irises can occur (a Kayser–Fleischer ring). Treatment Treatment of jaundice varies depending on the underlying cause. If a bile duct blockage is present, surgery is typically required; otherwise, management is pharmacological. 
Complications Hyperbilirubinemia, more precisely hyperbilirubinemia due to the unconjugated fraction, may cause bilirubin to accumulate in the grey matter of the central nervous system, potentially causing irreversible neurological damage, leading to a condition known as kernicterus. Depending on the level of exposure, the effects range from unnoticeable to severe brain damage and even death. Newborns are especially vulnerable to hyperbilirubinemia-induced neurological damage, so they must be carefully monitored for alterations in their serum bilirubin levels. Individuals with parenchymal liver disease who have impaired hemostasis may develop bleeding problems. Epidemiology Jaundice in adults is rare. Under the five-year DISCOVERY programme in the UK, the annual incidence of jaundice was 0.74 per 1000 individuals over age 45, although this rate may be slightly inflated because the main goal of the programme was collecting and analyzing cancer data in the population. Jaundice is commonly associated with disease severity: up to 40% of patients requiring intensive care experience jaundice. Jaundice in the intensive care setting may be either the primary reason for the ICU stay or a morbidity secondary to an underlying disease (e.g., sepsis). In the developed world, the most common causes of jaundice are blockage of the bile duct or medication-induced. In the developing world, the most common cause of jaundice is infection, such as viral hepatitis, leptospirosis, schistosomiasis, or malaria. Risk factors Risk factors associated with high serum bilirubin levels include male gender, white ethnicity, and active smoking. Mean serum total bilirubin levels in adults were found to be higher in men (0.72 ± 0.004 mg/dl) than women (0.52 ± 0.003 mg/dl). 
Higher bilirubin levels in adults are also found in the non-Hispanic white population (0.63 ± 0.004 mg/dl) and the Mexican American population (0.61 ± 0.005 mg/dl), while lower levels are found in the non-Hispanic black population (0.55 ± 0.005 mg/dl). Bilirubin levels are higher in active smokers. Special populations Neonatal jaundice Symptoms Jaundice in infants presents with yellowed skin and icteric sclerae. Neonatal jaundice spreads in a cephalocaudal pattern, affecting the face and neck before spreading down to the trunk and lower extremities in more severe cases. Other symptoms may include drowsiness, poor feeding, and in severe cases, unconjugated bilirubin can cross the blood-brain barrier and cause permanent neurological damage (kernicterus). Causes The most common cause of jaundice in infants is normal physiologic jaundice. Pathologic causes of neonatal jaundice include: Formula jaundice Hereditary spherocytosis Glucose-6-phosphate dehydrogenase deficiency Pyruvate kinase deficiency ABO/Rh blood type autoantibodies Alpha 1-antitrypsin deficiency Alagille syndrome (genetic defect resulting in hypoplastic intrahepatic bile ducts) Progressive familial intrahepatic cholestasis Pyknocytosis (due to vitamin deficiency) Cretinism (congenital hypothyroidism) Sepsis or other infectious causes Pathophysiology Transient neonatal jaundice is one of the most common conditions occurring in newborns (children under 28 days of age), with more than 80 per cent experiencing jaundice during their first week of life. Jaundice in infants, as in adults, is characterized by increased bilirubin levels (infants: total serum bilirubin greater than 5 mg/dL). Normal physiological neonatal jaundice is due to immaturity of liver enzymes involved in bilirubin metabolism, immature gut microbiota, and increased breakdown of fetal hemoglobin (HbF). 
Breast milk jaundice is caused by an increased concentration of β-glucuronidase in breast milk, which increases bilirubin deconjugation and reabsorption of bilirubin, leading to persistence of physiologic jaundice with unconjugated hyperbilirubinemia. Onset of breast milk jaundice is within 2 weeks after birth and lasts for 4–13 weeks. While most cases of newborn jaundice are not harmful, when bilirubin levels are very high, brain damage—kernicterus—may occur, leading to significant disability. Kernicterus is associated with increased unconjugated bilirubin (bilirubin which is not carried by albumin). Newborns are especially vulnerable to this damage, due to increased permeability of the blood–brain barrier occurring with increased unconjugated bilirubin, simultaneous to the breakdown of fetal hemoglobin and the immaturity of gut flora. This condition has been rising in recent years, as babies spend less time in sunlight. Treatment Jaundice in newborns is usually transient and dissipates without medical intervention. In cases when serum bilirubin levels are greater than 4–21 mg/dl (68–360 μmol/L), the infant may be treated with phototherapy or exchange transfusion depending on the infant's age and prematurity status. A bili light is often the tool used for early treatment, which consists of exposing the baby to intensive phototherapy, which may be intermittent or continuous. A 2014 systematic review found no evidence indicating whether outcomes were different for hospital-based versus home-based treatment. A 2021 Cochrane systematic review found that sunlight can be used to supplement phototherapy, as long as care is taken to prevent overheating and skin damage. There was not sufficient evidence to conclude that sunlight by itself is an effective treatment. Bilirubin count is also lowered through excretion—bowel movements and urination—so frequent and effective feedings are vital measures to decrease jaundice in infants. 
Etymology Jaundice comes from the French jaune, meaning 'yellow', via jaunisse, meaning 'yellow disease'. The medical term is icterus, from the Greek word ἴκτερος (íkteros). The term icterus is sometimes incorrectly used to refer to jaundice specifically of the sclera. It is also referenced in the scientific name of the yellow-breasted chat (Icteria virens), the sight of which was believed to cure jaundice.
Biology and health sciences
Symptoms and signs
Health
66013
https://en.wikipedia.org/wiki/Field%20gun
Field gun
A field gun is a field artillery piece. Originally the term referred to smaller guns that could accompany a field army on the march, that when in combat could be moved about the battlefield in response to changing circumstances (field artillery), as opposed to guns installed in a fort (garrison artillery or coastal artillery), or to siege cannons and mortars which are too large to be moved quickly, and would be used only in a prolonged siege. Perhaps the most famous use of the field gun in terms of advanced tactics was Napoleon Bonaparte's use of very large wheels on the guns that allowed them to be moved quickly even during a battle. By moving the guns from point to point during a battle, enemy formations could be broken up to be handled by the infantry or cavalry wherever they were massing, dramatically increasing the overall effectiveness of the attack. World War I As the evolution of artillery continued, almost all guns of any size became capable of being moved at some speed. With few exceptions, even the largest siege weapons had become mobile by road or rail by the start of World War I, and evolution after that point tended to be towards smaller weapons with increased mobility. Even the German super-heavy guns in World War II were rail or caterpillar-track mobile. In British use, field guns or light guns were anything up to in calibre, larger calibres were medium guns, and the largest calibres were heavy guns. World War II Since about the start of World War II, the term has been applied to long-range artillery pieces that fire at a relatively low angle, as opposed to howitzers which can fire at higher angles. Field guns also lack a specialized purpose, such as anti-tank or coastal artillery. By the later stages of World War II the majority of artillery in use was either in the form of howitzers of to , or in form of hybrid anti-tank/field guns that had high enough muzzle velocity to be used in both roles. 
The most common field guns of the era were the British , the American 155 mm Long Tom (a development of a French World War I weapon) and the Soviet BS-3 – an artillery piece adapted from a naval gun and designed to double up as an anti-tank weapon. One of the most produced field guns during the war was the Soviet ZiS-3 with over 103,000 produced. The ZiS-3 could be used in direct fire against armored vehicles, direct fire in infantry support, and indirect fire against distant targets. 1960s and 1970s The U.S. Army tried the long-range gun again from the early 1960s to the late 1970s with the M107 175 mm gun. The M107 was used extensively in the Vietnam War and proved effective in artillery duels with the North Vietnamese forces. It was considered a high-maintenance item and was removed from service with U.S. forces after a rash of cracked barrels. Production of the M107 continued until 1980 and the gun is still in service with the Israeli military. Reserve stocks are held by other former users such as the People's Army of Vietnam. Modern times Since the 1980s and 1990s, the field gun has seen limited combat use. The class of small and highly mobile artillery has been filled with increasing capacity by the man-portable mortar in or / calibre and has replaced every artillery piece smaller than . Gun-howitzers fill the middle ground, with the world rapidly standardizing on either the 155 mm (6.1 in) NATO or Warsaw Pact (former USSR) standards. The need for a long-range weapon is filled by rockets, missiles, and aircraft. Modern gun-artillery such as the L118 105 mm light gun or the M119 105 mm howitzer are used to provide fire support for infantry and armour at ranges where mortars are impractical. Man-packed mortars lack the range or hitting power of gun-artillery. 
In between is the rifled towed mortar; this weapon (usually in calibre) is light enough to be towed by a truck or SUV, has a range of over and fires a projectile comparable in destructive power to a / artillery shell.
Technology
Artillery
null
66014
https://en.wikipedia.org/wiki/Pascal%20%28unit%29
Pascal (unit)
The pascal (symbol: Pa) is the unit of pressure in the International System of Units (SI). It is also used to quantify internal pressure, stress, Young's modulus, and ultimate tensile strength. The unit, named after Blaise Pascal, is an SI coherent derived unit defined as one newton per square metre (N/m2). It is also equivalent to 10 barye (10 Ba) in the CGS system. Common multiple units of the pascal are the hectopascal (1 hPa = 100 Pa), which is equal to one millibar, and the kilopascal (1 kPa = 1000 Pa), which is equal to one centibar. The unit of measurement called standard atmosphere (atm) is defined as 101,325 Pa. Meteorological observations typically report atmospheric pressure in hectopascals per the recommendation of the World Meteorological Organization, thus a standard atmosphere (atm) or typical sea-level air pressure is about 1013 hPa. Reports in the United States typically use inches of mercury or millibars (hectopascals). In Canada, these reports are given in kilopascals. Etymology The unit is named after Blaise Pascal, noted for his contributions to hydrodynamics and hydrostatics, and experiments with a barometer. The name pascal was adopted for the SI unit newton per square metre (N/m2) by the 14th General Conference on Weights and Measures in 1971. Definition The pascal can be expressed using SI derived units, or alternatively solely SI base units, as: 1 Pa = 1 N/m2 = 1 kg/(m·s2) = 1 J/m3, where N is the newton, m is the metre, kg is the kilogram, s is the second, and J is the joule. One pascal is the pressure exerted by a force of one newton perpendicularly upon an area of one square metre. Standard units The unit of measurement called an atmosphere or a standard atmosphere (atm) is 101,325 Pa. This value is often used as a reference pressure and specified as such in some national and international standards, such as the International Organization for Standardization's ISO 2787 (pneumatic tools and compressors), ISO 2533 (aerospace) and ISO 5024 (petroleum). 
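The relationships between the pascal and its common multiples can be captured as a few constants. A minimal Python sketch, using the standard-atmosphere value of 101,325 Pa:

```python
# Pressure-unit relationships for the pascal and its common multiples.
PA_PER_HPA = 100        # 1 hPa = 100 Pa (equal to one millibar)
PA_PER_KPA = 1000       # 1 kPa = 1000 Pa (equal to one centibar)
PA_PER_BAR = 100_000    # 1 bar = 100,000 Pa
PA_PER_ATM = 101_325    # one standard atmosphere in pascals

def atm_to_hpa(atm: float) -> float:
    """Convert a pressure in standard atmospheres to hectopascals."""
    return atm * PA_PER_ATM / PA_PER_HPA

print(round(atm_to_hpa(1)))  # about 1013 hPa, typical sea-level pressure
```

This reproduces the figure in the text: one standard atmosphere is 1013.25 hPa, reported in meteorology as about 1013 hPa.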
In contrast, International Union of Pure and Applied Chemistry (IUPAC) recommends the use of 100 kPa as a standard pressure when reporting the properties of substances. Unicode has dedicated code points U+33A9 (㎩) and U+33AA (㎪) in the CJK Compatibility block, but these exist only for backward compatibility with some older ideographic character sets and are therefore deprecated. Uses The pascal (Pa) or kilopascal (kPa) as a unit of pressure measurement is widely used throughout the world and has largely replaced the pounds per square inch (psi) unit, except in some countries that still use the imperial measurement system or the US customary system, including the United States. Geophysicists use the gigapascal (GPa) in measuring or calculating tectonic stresses and pressures within the Earth. Medical elastography measures tissue stiffness non-invasively with ultrasound or magnetic resonance imaging, and often displays the Young's modulus or shear modulus of tissue in kilopascals. In materials science and engineering, the pascal measures the stiffness, tensile strength and compressive strength of materials. In engineering the megapascal (MPa) is the preferred unit for these uses, because the pascal represents a very small quantity. The pascal is also equivalent to the SI unit of energy density, the joule per cubic metre. This applies not only to the thermodynamics of pressurised gases, but also to the energy density of electric, magnetic, and gravitational fields. The pascal is used to measure sound pressure. Loudness is the subjective experience of sound pressure and is measured as a sound pressure level (SPL) on a logarithmic scale of the sound pressure relative to some reference pressure. For sound in air, a pressure of 20 μPa is considered to be at the threshold of hearing for humans and is a common reference pressure, so that its SPL is zero. The airtightness of buildings is measured at 50 Pa. 
In medicine, blood pressure is measured in millimeters of mercury (mmHg, very close to one Torr). The normal adult blood pressure is less than 120 mmHg systolic BP (SBP) and less than 80 mmHg diastolic BP (DBP). Convert mmHg to SI units as follows: 1 mmHg ≈ 133.322 Pa. Hence the normal blood pressure in SI units is less than 16.0 kPa SBP and less than 10.7 kPa DBP. These values are similar to the pressure of a water column of average human height, which is why blood pressure must be measured on the arm at roughly the level of the heart. Hectopascal and millibar units The units of atmospheric pressure commonly used in meteorology were formerly the bar (100,000 Pa), which is close to the average air pressure on Earth, and the millibar. Since the introduction of SI units, meteorologists generally measure atmospheric pressure in hectopascals (hPa), equal to 100 pascals or 1 millibar. Exceptions include Canada, which uses kilopascals (kPa). In many other fields of science, prefixes that are a power of 1000 are preferred, which theoretically excludes the hectopascal from use. Many countries still use millibars to measure atmospheric pressure. In practically all other fields, the kilopascal is used instead. Multiples and submultiples Decimal multiples and submultiples are formed using standard SI prefixes.
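The mmHg-to-kilopascal conversion above is a one-liner; this sketch reproduces the quoted blood-pressure limits (the function name is ours):

```python
PA_PER_MMHG = 133.322  # 1 mmHg ≈ 133.322 Pa

def mmhg_to_kpa(mmhg):
    """Convert a pressure in mmHg to kilopascals."""
    return mmhg * PA_PER_MMHG / 1000.0

# The normal adult blood-pressure limits from the text:
print(round(mmhg_to_kpa(120), 1))  # 16.0 kPa systolic
print(round(mmhg_to_kpa(80), 1))   # 10.7 kPa diastolic
```
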
https://en.wikipedia.org/wiki/Eclipse%20cycle
Eclipse cycle
Eclipses may occur repeatedly, separated by certain intervals of time: these intervals are called eclipse cycles. The series of eclipses separated by a repeat of one of these intervals is called an eclipse series. Eclipse conditions Eclipses may occur when Earth and the Moon are aligned with the Sun, and the shadow of one body projected by the Sun falls on the other. So at new moon, when the Moon is in conjunction with the Sun, the Moon may pass in front of the Sun as viewed from a narrow region on the surface of Earth and cause a solar eclipse. At full moon, when the Moon is in opposition to the Sun, the Moon may pass through the shadow of Earth, and a lunar eclipse is visible from the night half of Earth. The conjunction and opposition of the Moon together have a special name: syzygy (Greek for "junction"), because of the importance of these lunar phases. An eclipse does not occur at every new or full moon, because the plane of the Moon's orbit around Earth is tilted with respect to the plane of Earth's orbit around the Sun (the ecliptic): so as viewed from Earth, when the Moon appears nearest the Sun (at new moon) or furthest from it (at full moon), the three bodies are usually not exactly on the same line. This inclination is on average about 5° 9′, much larger than the apparent mean diameter of the Sun (32′ 2″), the Moon as viewed from Earth's surface directly below the Moon (31′ 37″), and Earth's shadow at the mean lunar distance (1° 23′). Therefore, at most new moons, Earth passes too far north or south of the lunar shadow, and at most full moons, the Moon misses Earth's shadow. Also, at most solar eclipses, the apparent angular diameter of the Moon is insufficient to fully occlude the solar disc, unless the Moon is around its perigee, i.e. nearer Earth and apparently larger than average. In any case, the alignment must be almost perfect to cause an eclipse. An eclipse can occur only when the Moon is on or near the plane of Earth's orbit, i.e. 
when its ecliptic latitude is low. This happens when the Moon is around either of the two orbital nodes on the ecliptic at the time of the syzygy. Of course, to produce an eclipse, the Sun must also be around a node at that time – the same node for a solar eclipse or the opposite node for a lunar eclipse. Recurrences Up to three eclipses may occur during an eclipse season, a one- or two-month period that happens twice a year, around the time when the Sun is near the nodes of the Moon's orbit. An eclipse does not occur every month, because one month after an eclipse the relative geometry of the Sun, Moon, and Earth has changed. As seen from the Earth, the time it takes for the Moon to return to a node, the draconic month, is less than the time it takes for the Moon to return to the same ecliptic longitude as the Sun: the synodic month. The main reason is that during the time that the Moon has completed an orbit around the Earth, the Earth (and Moon) have completed about 1/13 of their orbit around the Sun: the Moon has to make up for this in order to come again into conjunction or opposition with the Sun. Secondly, the orbital nodes of the Moon precess westward in ecliptic longitude, completing a full circle in about 18.60 years, so a draconic month is shorter than a sidereal month. In all, the difference in period between synodic and draconic month is nearly 2 1/3 days. Likewise, as seen from the Earth, the Sun passes both nodes as it moves along its ecliptic path. The period for the Sun to return to a node is called the eclipse or draconic year: about 346.6201 days, which is about 1/20 year shorter than a sidereal year because of the precession of the nodes. If a solar eclipse occurs at one new moon, which must be close to a node, then at the next full moon the Moon is already more than a day past its opposite node, and may or may not miss the Earth's shadow.
By the next new moon it is even further ahead of the node, so it is less likely that there will be a solar eclipse somewhere on Earth. By the next month, there will certainly be no event. However, about 5 or 6 lunations later the new moon will fall close to the opposite node. In that time (half an eclipse year) the Sun will have moved to the opposite node too, so the circumstances will again be suitable for one or more eclipses. Periodicity The periodicity of solar eclipses is the interval between any two solar eclipses in succession, which will be either 1, 5, or 6 synodic months. It is calculated that the Earth will experience a total number of 11,898 solar eclipses between 2000 BCE and 3000 CE. A particular solar eclipse will repeat approximately every 18 years, 11 days and 8 hours (6,585.32 days), but not in the same geographical region. A particular geographical region will experience a particular solar eclipse only about once every 54 years and 34 days. Total solar eclipses are rare events, although they occur somewhere on Earth every 18 months on average. Repetition of solar eclipses For two solar eclipses to be almost identical, the geometric alignment of the Earth, Moon and Sun, as well as some parameters of the lunar orbit should be the same. The following parameters and criteria must be repeated for the repetition of a solar eclipse: The Moon must be in new phase. The longitude of perigee or apogee of the Moon must be the same. The longitude of the ascending node or descending node must be the same. The Earth will be nearly the same distance from the Sun, and tilted to it in nearly the same orientation. These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month, anomalistic month and draconic month, and to the anomalistic year.
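The "5 or 6 lunations" spacing follows from the half eclipse year being about 5.87 synodic months; a quick numerical check in Python (variable names are ours):

```python
SM = 29.530588853   # synodic month, days
EY = 346.620076     # eclipse year, days

half_eclipse_year = EY / 2
print(half_eclipse_year)        # ≈ 173.31 days between eclipse seasons
print(half_eclipse_year / SM)   # ≈ 5.87 lunations -- hence the 5-or-6
                                # month gap between successive eclipses
```
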
In other words, a particular eclipse will be repeated only if the Moon completes roughly an integer number of synodic, draconic, and anomalistic periods and the Earth-Sun-Moon geometry is nearly identical. The Moon will be at the same node and the same distance from the Earth. This happens after the period called the saros. Gamma (how far the Moon is north or south of the ecliptic during an eclipse) changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). When the Earth is near its average distance (March to April or September to October), the change in gamma is average. Repetition of lunar eclipses For the repetition of a lunar eclipse, the geometric alignment of the Moon, Earth and Sun, as well as some parameters of the lunar orbit should be repeated. The following parameters and criteria must be repeated for the repetition of a lunar eclipse: The Moon must be in full phase. The longitude of perigee or apogee of the Moon must be the same. The longitude of the ascending node or descending node must be the same. The Earth will be nearly the same distance from the Sun, and tilted to it in nearly the same orientation. These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month, anomalistic month and draconic month. In other words, a particular eclipse will be repeated only if the Moon completes roughly an integer number of synodic, draconic, and anomalistic periods (223, 242, and 239) and the Earth-Sun-Moon geometry is nearly identical to that eclipse. The Moon will be at the same node and the same distance from the Earth. Gamma changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January).
When the Earth is near its average distance (March to April or September to October), the change in gamma is average. Effect of eccentricity Another thing to consider is that the motion of the Moon is not a perfect circle. Its orbit is distinctly elliptic, so the lunar distance from Earth varies throughout the lunar cycle. This varying distance changes the apparent diameter of the Moon, and therefore influences the chances, duration, and type (partial, annular, total, mixed) of an eclipse. The period of this variation in distance is called the anomalistic month, and together with the synodic month it causes the so-called "full moon cycle" of about 14 lunations in the timings and appearances of full (and new) Moons. The Moon moves faster when it is closer to the Earth (near perigee) and slower when it is near apogee (furthest distance), thus periodically changing the timing of syzygies by up to 14 hours either side (relative to their mean timing), and causing the apparent lunar angular diameter to increase or decrease by about 6%. An eclipse cycle must comprise close to an integer number of anomalistic months in order to perform well in predicting eclipses. If the Earth had a perfectly circular orbit centered around the Sun, and the Moon's orbit were also perfectly circular and centered around the Earth, and both orbits were coplanar (on the same plane) with each other, then two eclipses would happen every lunar month (29.53 days). A lunar eclipse would occur at every full moon, a solar eclipse every new moon, and all solar eclipses would be the same type. In fact the distances between the Earth and Moon and that of the Earth and the Sun vary because both the Earth and the Moon have elliptic orbits. Also, both the orbits are not on the same plane. The Moon's orbit is inclined about 5.14° to Earth's orbit around the Sun. So the Moon's orbit crosses the ecliptic at two points or nodes.
If a new moon takes place within about 17° of a node, then a solar eclipse will be visible from some location on Earth. At an average angular velocity of 0.99° per day, the Sun takes 34.5 days to cross the 34° wide eclipse zone centered on each node. Because the Moon's orbital period with respect to the Sun (the synodic month) has a mean duration of 29.53 days, there will always be one and possibly two solar eclipses during each 34.5-day interval when the Sun passes through the nodal eclipse zones. These time periods are called eclipse seasons. Either two or three eclipses happen each eclipse season. During an eclipse season, the Moon's ecliptic latitude at new or full moon is low, hence the Sun, Moon, and Earth become aligned straight enough (in syzygy) for an eclipse to occur. Numerical values These are the lengths of the various types of months as discussed above (according to the lunar ephemeris ELP2000-85, valid for the epoch J2000.0; taken from (e.g.) Meeus (1991)):
SM = 29.530588853 days (synodic month)
DM = 27.212220817 days (draconic month)
AM = 27.55454988 days (anomalistic month)
EY = 346.620076 days (eclipse year)
Note that there are three main moving points: the Sun, the Moon, and the (ascending) node; and that there are three main periods, when each of the three possible pairs of moving points meet one another: the synodic month when the Moon returns to the Sun, the draconic month when the Moon returns to the node, and the eclipse year when the Sun returns to the node. These three 2-way relations are not independent (i.e. both the synodic month and eclipse year are dependent on the apparent motion of the Sun, both the draconic month and eclipse year are dependent on the motion of the nodes), and indeed the eclipse year can be described as the beat period of the synodic and draconic months (i.e. the period of the difference between the synodic and draconic months); in formula: 1/EY = 1/DM − 1/SM, as can be checked by filling in the numerical values listed above.
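The beat-period relation can be checked directly with the month lengths listed above; a short Python sketch (variable names are ours):

```python
# Month lengths from the list above (days).
SM = 29.530588853   # synodic month
DM = 27.212220817   # draconic month
AM = 27.55454988    # anomalistic month

# The eclipse year as the beat period of the synodic and draconic months:
EY = 1 / (1/DM - 1/SM)
print(EY)           # ≈ 346.62 days, matching the quoted eclipse year

# The analogous beat of the synodic and anomalistic months gives the
# "full moon cycle" of about 14 lunations discussed under eccentricity:
FC = 1 / (1/AM - 1/SM)
print(FC / SM)      # ≈ 13.9 synodic months
```
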
Eclipse cycles have a period in which a certain number of synodic months closely equals an integer or half-integer number of draconic months: one such period after an eclipse, a syzygy (new moon or full moon) takes place again near a node of the Moon's orbit on the ecliptic, and an eclipse can occur again. However, the synodic and draconic months are incommensurate: their ratio is not an integer number. We need to approximate this ratio by common fractions: the numerators and denominators then give the multiples of the two periods – draconic and synodic months – that (approximately) span the same amount of time, representing an eclipse cycle. These fractions can be found by the method of continued fractions: this arithmetical technique provides a series of progressively better rational approximations of any real numeric value. Since there may be an eclipse every half draconic month, we need to find approximations for the number of half draconic months per synodic month: so the target ratio to approximate is: SM / (DM/2) = 29.530588853 / (27.212220817/2) = 2.170391682 The continued fractions expansion for this ratio is: 2.170391682 = [2;5,1,6,1,1,1,1,1,11,1,...]:
Quotient   Convergent (half DM/SM)   Decimal        Named cycle (if any)
2;         2/1                       2              synodic month
5          11/5                      2.2            pentalunex
1          13/6                      2.166666667    semester
6          89/41                     2.170731707    hepton
1          102/47                    2.170212766    octon
1          191/88                    2.170454545    tzolkinex
1          293/135                   2.170370370    tritos
1          484/223                   2.170403587    saros
1          777/358                   2.170391061    inex
11         9031/4161                 2.170391732    selebit
1          9808/4519                 2.170391679    square year
...
The ratio of synodic months per half eclipse year yields the same series: 5.868831091 = [5;1,6,1,1,1,1,1,11,1,...]
Quotient   Convergent (SM/half EY)   Decimal        SM/full EY   Named cycle
5;         5/1                       5                           pentalunex
1          6/1                       6              12/1         semester
6          41/7                      5.857142857                 hepton
1          47/8                      5.875          47/4         octon
1          88/15                     5.866666667                 tzolkinex
1          135/23                    5.869565217                 tritos
1          223/38                    5.868421053    223/19       saros
1          358/61                    5.868852459    716/61       inex
11         4161/709                  5.868829337                 selebit
1          4519/770                  5.868831169    4519/385     square year
...
Each of these is an eclipse cycle. Less accurate cycles may be constructed by combinations of these. Eclipse cycles This table summarizes the characteristics of various eclipse cycles, and can be computed from the numerical results of the preceding paragraphs; cf. Meeus (1997) Ch.9. More details are given in the comments below, and several notable cycles have their own pages. Many other cycles have been noted, some of which have been named. The number of days given is the average. The actual number of days and fractions of days between two eclipses varies because of the variation in the speed of the Moon and of the Sun in the sky. The variation is less if the number of anomalistic months is near a whole number, and if the number of anomalistic years is near a whole number. (See graphs lower down of semester and Hipparchic cycle.) Any eclipse cycle, and indeed the interval between any two eclipses, can be expressed as a combination of saros (s) and inex (i) intervals. These are listed in the column "formula".
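The convergent tables can be regenerated mechanically with the standard continued-fraction recurrence; a Python sketch (helper names are ours):

```python
from fractions import Fraction

SM = 29.530588853   # synodic month, days
DM = 27.212220817   # draconic month, days

# Target ratio from the text: synodic months per half draconic month.
target = SM / (DM / 2)   # ≈ 2.170391682

def continued_fraction(x, n):
    """First n quotients of the continued-fraction expansion of x."""
    quotients = []
    for _ in range(n):
        a = int(x)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

def convergents(quotients):
    """Successive rational approximations h/k built from the quotients."""
    h_prev, h = 1, quotients[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in quotients[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

qs = continued_fraction(target, 10)
for c in convergents(qs):
    print(c, float(c))
# The convergents 13/6, 293/135, 484/223, 777/358 correspond to the
# semester, tritos, saros, and inex cycles in the table above.
```
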
https://en.wikipedia.org/wiki/Saros%20%28astronomy%29
Saros (astronomy)
The saros is a period of exactly 223 synodic months, approximately 6585.321 days (18.03 years), or 18 years plus 10, 11, or 12 days (depending on the number of leap years), and 8 hours, that can be used to predict eclipses of the Sun and Moon. One saros period after an eclipse, the Sun, Earth, and Moon return to approximately the same relative geometry, a near straight line, and a nearly identical eclipse will occur, in what is referred to as an eclipse cycle. A sar is one half of a saros. A series of eclipses that are separated by one saros is called a saros series. It corresponds to:
6,585.321347 solar days
18.029 years
223 synodic months
241.999 draconic months
18.999 eclipse years (38 eclipse seasons of 173.31 days)
238.992 anomalistic months
241.029 sidereal months
The 19 eclipse years means that if there is a solar eclipse (or lunar eclipse), then after one saros a new moon will take place at the same node of the orbit of the Moon, and under these circumstances another solar eclipse can occur. History The earliest discovered historical record of what is known as the saros is by Chaldean (neo-Babylonian) astronomers in the last several centuries BCE. It was later known to Hipparchus, Pliny and Ptolemy. The name "saros" was applied to the eclipse cycle by Edmond Halley in 1686, who took it from the Suda, a Byzantine lexicon of the 11th century. The Suda says, "[The saros is] a measure and a number among Chaldeans. For 120 saroi make 2220 years (years of 12 lunar months) according to the Chaldeans' reckoning, if indeed the saros makes 222 lunar months, which are 18 years and 6 months (i.e. years of 12 lunar months)." The information in the Suda in turn was derived directly or otherwise from the Chronicle of Eusebius of Caesarea, which quoted Berossus. (Guillaume Le Gentil claimed that Halley's usage was incorrect in 1756, but the name continues to be used.)
The Greek word apparently either comes from the Babylonian word sāru meaning the number 3600 or the Greek verb saro (σαρῶ) that means "sweep (the sky with the series of eclipses)". The saros period of 223 lunar months (in Greek numerals, ΣΚΓ′) appears in the user manual inscribed on the Antikythera Mechanism, an instrument made around 150 to 100 BCE in Greece. This number is one of the few inscriptions on the mechanism that are visible with the unaided eye. Above it, the periods of the Metonic cycle and the Callippic cycle are also visible. Description The saros, a period of 6585.3211 days (15 common years + 3 leap years + 12.321 days, 14 common years + 4 leap years + 11.321 days, or 13 common years + 5 leap years + 10.321 days), is useful for predicting the times at which nearly identical eclipses will occur. Three periodicities related to lunar orbit, the synodic month, the draconic month, and the anomalistic month coincide almost perfectly each saros cycle. For an eclipse to occur, either the Moon must be located between the Earth and Sun (for a solar eclipse) or the Earth must be located between the Sun and Moon (for a lunar eclipse). This can happen only when the Moon is new or full, respectively, and repeat occurrences of these lunar phases result from solar and lunar orbits producing the Moon's synodic period of 29.53059 days. During most full and new moons, however, the shadow of the Earth or Moon falls to the north or south of the other body. Eclipses occur when the three bodies form a nearly straight line. Because the plane of the lunar orbit is inclined to that of the Earth, this condition occurs only when a full or new Moon is near or in the ecliptic plane, that is when the Moon is at one of the two nodes (the ascending or descending node). The period of time for two successive lunar passes through the ecliptic plane (returning to the same node) is termed the draconic month, a 27.21222 day period.
The three-dimensional geometry of an eclipse, when the new or full moon is near one of the nodes, occurs every five or six months when the Sun is in conjunction or opposition to the Moon and coincidentally also near a node of the Moon's orbit at that time, or twice per eclipse year. Two eclipses separated by one saros have very similar appearance and duration due to the distance between the Earth and Moon being nearly the same for each event: this is because the saros is also an integer multiple of the anomalistic month of 27.5545 days, the period of the Moon with respect to the line of apsides of its orbit. After one saros, the Moon will have completed roughly an integer number of synodic, draconic, and anomalistic periods (223, 242, and 239) and the Earth-Sun-Moon geometry will be nearly identical: the Moon will have the same phase and be at the same node and the same distance from the Earth. In addition, because the saros is close to 18 years in length (about 11 days longer), the Earth will be nearly the same distance from the Sun, and tilted to it in nearly the same orientation (same season). Given the date of an eclipse, one saros later a nearly identical eclipse can be predicted. During this 18-year period, about 40 other solar and lunar eclipses take place, but with a somewhat different geometry. One saros equaling 18.03 years is not equal to a perfect integer number of lunar orbits (Earth revolutions with respect to the fixed stars of 27.32166 days sidereal month), therefore, even though the relative geometry of the Earth–Sun–Moon system will be nearly identical after a saros, the Moon will be in a slightly different position with respect to the stars for each eclipse in a saros series. The axis of rotation of the Earth–Moon system exhibits a precession period of 18.59992 years. The saros is not an integer number of days, but contains a fraction of about one third of a day. Thus each successive eclipse in a saros series occurs about eight hours later in the day.
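The near-integer coincidences behind the saros (223 synodic ≈ 242 draconic ≈ 239 anomalistic months) are easy to verify numerically, using the month lengths given in the eclipse-cycle discussion (ELP2000-85):

```python
SM = 29.530588853   # synodic month (days)
DM = 27.212220817   # draconic month
AM = 27.55454988    # anomalistic month

saros = 223 * SM
print(saros)              # ≈ 6585.32 days
print(saros / DM)         # ≈ 241.999 draconic months
print(saros / AM)         # ≈ 238.992 anomalistic months
print((saros % 1) * 24)   # ≈ 7.7 hours: the roughly eight-hour shift
```
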
In the case of an eclipse of the Sun, this means that the region of visibility will shift westward about 120°, or about one third of the way around the globe, and the two eclipses will thus not be visible from the same place on Earth. In the case of an eclipse of the Moon, the next eclipse might still be visible from the same location as long as the Moon is above the horizon. Given three saros eclipse intervals, the local time of day of an eclipse will be nearly the same. This three saros interval (19,755.96 days) is known as a triple saros or exeligmos (Greek: "turn of the wheel") cycle. Saros series Each saros series starts with a partial eclipse (Sun first enters the end of the node), and each successive saros the path of the Moon is shifted either northward (when near the descending node) or southward (when near the ascending node) due to the fact that the saros is not an exact integer of draconic months (about one hour short). At some point, eclipses are no longer possible and the series terminates (Sun leaves the beginning of the node). An arbitrary solar saros series was designated as solar saros series 1 by compilers of eclipse statistics. This series has finished, but the eclipse of November 16, 1990 BC (Julian calendar) for example is in solar saros series 1. There are different saros series for solar and lunar eclipses. For lunar saros series, the lunar eclipse occurring 58.5 synodic months earlier (February 23, 1994 BC) was assigned the number 1. If there is an eclipse one inex (29 years minus about 20 days) after an eclipse of a particular saros series then it is a member of the next series. For example, the eclipse of October 26, 1961 BC is in solar saros series 2. Saros series, of course, went on before these dates, and it is necessary to extend the saros series numbers backwards to negative numbers even just to accommodate eclipses occurring in the years following 2000 BC (up till the last eclipse with a negative saros number in 1367 BC). 
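Both the westward shift and the exeligmos follow from the saros's fractional day; a short check using the values from the text (variable names are ours):

```python
saros_days = 6585.3211

# Each saros leaves a fractional day, shifting the eclipse track westward:
frac_day = saros_days % 1
print(frac_day * 360)     # ≈ 116 degrees of longitude, about a third of the globe

# Three saroses (the exeligmos) add up to nearly a whole number of days,
# so the eclipse returns at roughly the same local time and longitude:
exeligmos = 3 * saros_days
print(exeligmos)          # ≈ 19755.96 days, the triple saros quoted above
```
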
For solar eclipses the statistics for the complete saros series within the era between 2000 BC and AD 3000 are given in this article's references. It takes between 1226 and 1550 years for the members of a saros series to traverse the Earth's surface from north to south (or vice versa). These extremes allow from 69 to 87 eclipses in each series (most series have 71 or 72 eclipses). From 39 to 59 (mostly about 43) eclipses in a given series will be central (that is, total, annular, or hybrid annular-total). At any given time, approximately 40 different saros series will be in progress. Saros series, as mentioned, are numbered according to the type of eclipse (lunar or solar). In odd numbered series (for solar eclipses) the Sun is near the ascending node, whereas in even numbered series it is near the descending node (this is reversed for lunar eclipse saros series). Generally, the ordering of these series determines the time at which each series peaks, which corresponds to when an eclipse is closest to one of the lunar nodes. For solar eclipses, the 40 series numbered between 117 and 156 are active (series 117 will end in 2054), whereas for lunar eclipses, there are now 41 active saros series (these numbers can be derived by counting the number of eclipses listed over an 18-year (saros) period from the eclipse catalog sites). Example As an example of a single saros series, this table gives the dates of some of the 72 lunar eclipses for saros series 131. This eclipse series began in AD 1427 with a partial eclipse at the southern edge of the Earth's shadow when the Moon was close to its descending node. In each successive saros, the Moon's orbital path is shifted northward with respect to the Earth's shadow, with the first total eclipse occurring in 1950. For the following 252 years, total eclipses occur, with the central eclipse in 2078. The first partial eclipse after this will occur in the year 2220, and the final partial eclipse of the series will occur in 2707. 
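The quoted series lifetimes are consistent with simple saros arithmetic: a series with n eclipses spans n − 1 saros intervals. A sketch (the helper is ours, and Julian years are assumed):

```python
SAROS_YEARS = 6585.3211 / 365.25   # one saros ≈ 18.03 Julian years

def series_lifetime(n_eclipses):
    """Span of a saros series with n_eclipses members (n - 1 saros gaps)."""
    return (n_eclipses - 1) * SAROS_YEARS

print(round(series_lifetime(69)))  # ≈ 1226 years, the shortest series
print(round(series_lifetime(87)))  # ≈ 1551 years, close to the quoted 1550
print(round(series_lifetime(72)))  # ≈ 1280 years, e.g. lunar saros series 131
```
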
The total lifetime of lunar saros series 131 is 1280 years. Solar saros 138 interleaves with this lunar saros with an event occurring every 9 years 5 days alternating between each saros series. Because of the fraction of days in a saros, the visibility of each eclipse will differ for an observer at a given locale. For the lunar saros series 131, the first total eclipse of 1950 had its best visibility for viewers in Eastern Europe and the Middle East because mid-eclipse was at 20:44 UT. The following eclipse in the series occurred about 8 hours later in the day with mid-eclipse at 4:47 UT, and was best seen from North America and South America. The third total eclipse occurred about 8 hours later in the day than the second eclipse with mid-eclipse at 12:43 UT, and had its best visibility for viewers in the Western Pacific, East Asia, Australia and New Zealand. This cycle of visibility repeats from the start to the end of the series, with minor variations. For a similar example for solar saros see solar saros 136. Relationship between lunar and solar saros (sar) After a given lunar or solar eclipse, after 9 years and 5.5 days (a half saros, or sar) an eclipse will occur that is lunar instead of solar, or vice versa, with similar properties. For example, if the Moon's penumbra partially covers the southern limb of the Earth during a solar eclipse, 9 years and 5.5 days later a lunar eclipse will occur in which the Moon is partially covered by the southern limb of the Earth's penumbra. Likewise, 9 years and 5.5 days after a total solar eclipse or an annular solar eclipse occurs, a total lunar eclipse will also occur. This 9-year period is referred to as a sar. It includes 111.5 synodic months, or 111 synodic months plus one fortnight. The fortnight accounts for the alternation between solar and lunar eclipse.
https://en.wikipedia.org/wiki/Sea%20urchin
Sea urchin
Sea urchins or urchins are typically spiny, globular animals, echinoderms in the class Echinoidea. About 950 species live on the seabed, inhabiting all oceans and depth zones from the intertidal zone to the deep sea. Their tests (hard shells) are round and spiny, typically a few centimetres across. Sea urchins move slowly, crawling with their tube feet, and sometimes pushing themselves with their spines. They feed primarily on algae but also eat slow-moving or sessile animals. Their predators include sharks, sea otters, starfish, wolf eels, and triggerfish. Like all echinoderms, adult sea urchins have fivefold symmetry, while their pluteus larvae feature bilateral (mirror) symmetry; the latter indicates that they belong to the Bilateria, along with chordates, arthropods, annelids and molluscs. Sea urchins are found in every ocean and in every climate, from the tropics to the polar regions, and inhabit marine benthic (sea bed) habitats, from rocky shores to hadal zone depths. The fossil record of the echinoids dates from the Ordovician period, some 450 million years ago. The closest echinoderm relatives of the sea urchin are the sea cucumbers (Holothuroidea), which like them are deuterostomes, a clade that includes the chordates. (Sand dollars are a separate order in the sea urchin class Echinoidea.) The animals have been studied since the 19th century as model organisms in developmental biology, as their embryos were easy to observe. That has continued with studies of their genomes because of their unusual fivefold symmetry and relationship to chordates. Species such as the slate pencil urchin are popular in aquaria, where they are useful for controlling algae. Fossil urchins have been used as protective amulets. Diversity Sea urchins are members of the phylum Echinodermata, which also includes starfish, sea cucumbers, sand dollars, brittle stars, and crinoids.
Like other echinoderms, they have five-fold symmetry (called pentamerism) and move by means of hundreds of tiny, transparent, adhesive "tube feet". The symmetry is not obvious in the living animal, but is easily visible in the dried test. Specifically, the term "sea urchin" refers to the "regular echinoids", which are symmetrical and globular, and includes several different taxonomic groups, with two subclasses: Euechinoidea ("modern" sea urchins, including irregular ones) and Cidaroidea, or "slate-pencil urchins", which have very thick, blunt spines, with algae and sponges growing on them. The "irregular" sea urchins are an infra-class inside the Euechinoidea, called Irregularia, and include Atelostomata and Neognathostomata. Irregular echinoids include flattened sand dollars, sea biscuits, and heart urchins. Together with sea cucumbers (Holothuroidea), they make up the subphylum Echinozoa, which is characterized by a globoid shape without arms or projecting rays. Sea cucumbers and the irregular echinoids have secondarily evolved diverse shapes. Although many sea cucumbers have branched tentacles surrounding their oral openings, these have originated from modified tube feet and are not homologous to the arms of the crinoids, sea stars, and brittle stars. Description Urchins typically range in size from 3 to 10 cm (1 to 4 in) across, but the largest species can reach more than 30 cm. They have a rigid, usually spherical body bearing moveable spines, which give the class the name Echinoidea (from the Greek word for 'spine'). The name urchin is an old word for hedgehog, which sea urchins resemble; they have archaically been called sea hedgehogs. The name is derived from Old French, ultimately from a Latin word meaning 'hedgehog'. Like other echinoderms, sea urchin early larvae have bilateral symmetry, but they develop five-fold symmetry as they mature. This is most apparent in the "regular" sea urchins, which have roughly spherical bodies with five equally sized parts radiating out from their central axes.
The mouth is at the base of the animal and the anus at the top; the lower surface is described as "oral" and the upper surface as "aboral". Several sea urchins, however, including the sand dollars, are oval in shape, with distinct front and rear ends, giving them a degree of bilateral symmetry. In these urchins, the upper surface of the body is slightly domed, but the underside is flat, while the sides are devoid of tube feet. This "irregular" body form has evolved to allow the animals to burrow through sand or other soft materials. Systems Musculoskeletal The internal organs are enclosed in a hard shell or test composed of fused plates of calcium carbonate covered by a thin dermis and epidermis. The test is referred to as an endoskeleton rather than exoskeleton even though it encloses almost all of the urchin. This is because it is covered with a thin layer of muscle and skin; sea urchins also do not need to molt the way invertebrates with true exoskeletons do; instead, the plates forming the test grow as the animal does. The test is rigid, and divides into five ambulacral grooves separated by five wider interambulacral areas. Each of these ten longitudinal columns consists of two sets of plates (thus comprising 20 columns in total). The ambulacral plates have pairs of tiny holes through which the tube feet extend. All of the plates are covered in rounded tubercles to which the spines are attached. The spines are used for defence and for locomotion and come in a variety of forms. The inner surface of the test is lined by peritoneum. Sea urchins convert aqueous carbon dioxide into the calcium carbonate portion of the test using a catalytic process involving nickel. Most species have two series of spines, primary (long) and secondary (short), distributed over the surface of the body, with the shortest at the poles and the longest at the equator. The spines are usually hollow and cylindrical.
Contraction of the muscular sheath that covers the test causes the spines to lean in one direction or another, while an inner sheath of collagen fibres can reversibly change from soft to rigid which can lock the spine in one position. Located among the spines are several types of pedicellaria, moveable stalked structures with jaws. Sea urchins move by walking, using their many flexible tube feet in a way similar to that of starfish; regular sea urchins do not have any favourite walking direction. The tube feet protrude through pairs of pores in the test, and are operated by a water vascular system; this works through hydraulic pressure, allowing the sea urchin to pump water into and out of the tube feet. During locomotion, the tube feet are assisted by the spines which can be used for pushing the body along or to lift the test off the substrate. Movement is generally related to feeding, with the red sea urchin (Mesocentrotus franciscanus) managing about a day when there is ample food, and up to a day where there is not. An inverted sea urchin can right itself by progressively attaching and detaching its tube feet and manipulating its spines to roll its body upright. Some species bury themselves in soft sediment using their spines, and Paracentrotus lividus uses its jaws to burrow into soft rocks. Feeding and digestion The mouth lies in the centre of the oral surface in regular urchins, or towards one end in irregular urchins. It is surrounded by lips of softer tissue, with numerous small, embedded bony pieces. This area, called the peristome, also includes five pairs of modified tube feet and, in many species, five pairs of gills. The jaw apparatus consists of five strong arrow-shaped plates known as pyramids, the ventral surface of each of which has a toothband with a hard tooth pointing towards the centre of the mouth. Specialised muscles control the protrusion of the apparatus and the action of the teeth, and the animal can grasp, scrape, pull and tear. 
The structure of the mouth and teeth has been found to be so efficient at grasping and grinding that similar structures have been tested for use in real-world applications. On the upper surface of the test at the aboral pole is a membrane, the periproct, which surrounds the anus. The periproct contains a variable number of hard plates, five of which, the genital plates, contain the gonopores, and one is modified to contain the madreporite, which is used to balance the water vascular system. The mouth of most sea urchins is made up of five calcium carbonate teeth or plates, with a fleshy, tongue-like structure within. The entire chewing organ is known as Aristotle's lantern from Aristotle's description in his History of Animals (translated by D'Arcy Thompson). However, this has recently been proven to be a mistranslation. Aristotle's lantern actually refers to the whole shape of sea urchins, which look like the ancient lamps of Aristotle's time. Heart urchins are unusual in not having a lantern. Instead, the mouth is surrounded by cilia that pull strings of mucus containing food particles towards a series of grooves around the mouth. The lantern, where present, surrounds both the mouth cavity and the pharynx. At the top of the lantern, the pharynx opens into the esophagus, which runs back down the outside of the lantern, to join the small intestine and a single caecum. The small intestine runs in a full circle around the inside of the test, before joining the large intestine, which completes another circuit in the opposite direction. From the large intestine, a rectum ascends towards the anus. Despite the names, the small and large intestines of sea urchins are in no way homologous to the similarly named structures in vertebrates. Digestion occurs in the intestine, with the caecum producing further digestive enzymes. An additional tube, called the siphon, runs beside much of the intestine, opening into it at both ends. 
It may be involved in resorption of water from food. Circulation and respiration The water vascular system leads downwards from the madreporite through the slender stone canal to the ring canal, which encircles the oesophagus. Radial canals lead from here through each ambulacral area to terminate in a small tentacle that passes through the ambulacral plate near the aboral pole. Lateral canals lead from these radial canals, ending in ampullae. From here, two tubes pass through a pair of pores on the plate to terminate in the tube feet. Sea urchins possess a hemal system with a complex network of vessels in the mesenteries around the gut, but little is known of the functioning of this system. However, the main circulatory fluid fills the general body cavity, or coelom. This coelomic fluid contains phagocytic coelomocytes, which move through the vascular and hemal systems and are involved in internal transport and gas exchange. The coelomocytes are an essential part of blood clotting, but also collect waste products and actively remove them from the body through the gills and tube feet. Most sea urchins possess five pairs of external gills attached to the peristomial membrane around their mouths. These thin-walled projections of the body cavity are the main organs of respiration in those urchins that possess them. Fluid can be pumped through the gills' interiors by muscles associated with the lantern, but this does not provide a continuous flow, and occurs only when the animal is low in oxygen. Tube feet can also act as respiratory organs, and are the primary sites of gas exchange in heart urchins and sand dollars, both of which lack gills. The inside of each tube foot is divided by a septum which reduces diffusion between the incoming and outgoing streams of fluid. Nervous system and senses The nervous system of sea urchins has a relatively simple layout. There is no true brain; the neural center is a large nerve ring encircling the mouth just inside the lantern. 
From the nerve ring, five nerves radiate underneath the radial canals of the water vascular system, and branch into numerous finer nerves to innervate the tube feet, spines, and pedicellariae. Sea urchins are sensitive to touch, light, and chemicals. There are numerous sensitive cells in the epithelium, especially in the spines, pedicellariae and tube feet, and around the mouth. Although they do not have eyes or eye spots (except for diadematids, which can follow a threat with their spines), the entire body of most regular sea urchins might function as a compound eye. In general, sea urchins are negatively phototactic (they avoid light), and seek to hide themselves in crevices or under objects. Most species, apart from pencil urchins, have statocysts in globular organs called spheridia. These are stalked structures and are located within the ambulacral areas; their function is to help in gravitational orientation. Life history Reproduction Sea urchins are dioecious, having separate male and female sexes, although no distinguishing features are visible externally. In addition to their role in reproduction, the gonads are also nutrient-storing organs, and are made up of two main types of cells: germ cells, and somatic cells called nutritive phagocytes. Regular sea urchins have five gonads, lying underneath the interambulacral regions of the test, while the irregular forms mostly have four, with the hindmost gonad being absent; heart urchins have three or two. Each gonad has a single duct rising from the upper pole to open at a gonopore lying in one of the genital plates surrounding the anus. Some burrowing sand dollars have an elongated papilla that enables the liberation of gametes above the surface of the sediment. The gonads are lined with muscles underneath the peritoneum, and these allow the animal to squeeze its gametes through the duct and into the surrounding sea water, where fertilization takes place. 
Development During early development, the sea urchin embryo undergoes 10 cycles of cell division, resulting in a single epithelial layer enveloping the blastocoel. The embryo then begins gastrulation, a multipart process which dramatically rearranges its structure by invagination to produce the three germ layers, involving an epithelial-mesenchymal transition; primary mesenchyme cells move into the blastocoel and become mesoderm. It has been suggested that epithelial polarity together with planar cell polarity might be sufficient to drive gastrulation in sea urchins. An unusual feature of sea urchin development is the replacement of the larva's bilateral symmetry by the adult's broadly fivefold symmetry. During cleavage, mesoderm and small micromeres are specified. At the end of gastrulation, cells of these two types form coelomic pouches. In the larval stages, the adult rudiment grows from the left coelomic pouch; after metamorphosis, that rudiment grows to become the adult. The animal-vegetal axis is established before the egg is fertilized. The oral-aboral axis is specified early in cleavage, and the left-right axis appears at the late gastrula stage. Life cycle and development In most cases, the female's eggs float freely in the sea, but some species hold onto them with their spines, affording them a greater degree of protection. The unfertilized egg meets with the free-floating sperm released by males, and develops into a free-swimming blastula embryo in as few as 12 hours. Initially a simple ball of cells, the blastula soon transforms into a cone-shaped echinopluteus larva. In most species, this larva has 12 elongated arms lined with bands of cilia that capture food particles and transport them to the mouth. In a few species, the blastula contains supplies of nutrient yolk and lacks arms, since it has no need to feed. 
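The ten cleavage cycles mentioned above can be checked with simple doubling arithmetic. This is an idealization for illustration only: real sea urchin cleavage includes unequal divisions that produce the micromeres, so the cells are not all equivalent, but the order of magnitude holds.

```python
# Idealized cleavage arithmetic: each cycle doubles the cell count,
# so ten cycles from a single zygote give on the order of a thousand
# cells in the blastula's epithelial layer.
def cells_after_cleavage(cycles: int, start: int = 1) -> int:
    """Cell count after a given number of synchronous doubling divisions."""
    return start * 2 ** cycles

print(cells_after_cleavage(10))  # 1024
```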
Several months are needed for the larva to complete its development, the change into the adult form beginning with the formation of test plates in a juvenile rudiment which develops on the left side of the larva, its axis being perpendicular to that of the larva. Soon, the larva sinks to the bottom and metamorphoses into a juvenile urchin in as little as one hour. In some species, adults reach their maximum size in about five years. The purple urchin becomes sexually mature in two years and may live for twenty. Longevity Red sea urchins were originally thought to live 7 to 10 years but recent studies have shown that they can live for more than 100 years. Canadian red urchins have been found to be around 200 years old. Ecology Trophic level Sea urchins feed mainly on algae, so they are primarily herbivores, but can feed on sea cucumbers and a wide range of invertebrates, such as mussels, polychaetes, sponges, brittle stars, and crinoids, making them omnivores, consumers at a range of trophic levels. Predators, parasites, and diseases Mass mortality of sea urchins was first reported in the 1970s, but diseases in sea urchins had been little studied before the advent of aquaculture. In 1981, bacterial "spotting disease" caused almost complete mortality in juvenile Pseudocentrotus depressus and Hemicentrotus pulcherrimus, both cultivated in Japan; the disease recurred in succeeding years. It was divided into a cool-water "spring" disease and a hot-water "summer" form. Another condition, bald sea urchin disease, causes loss of spines and skin lesions and is believed to be bacterial in origin. Adult sea urchins are usually well protected against most predators by their strong and sharp spines, which can be venomous in some species. The small urchin clingfish lives among the spines of urchins such as Diadema; juveniles feed on the pedicellariae and sphaeridia, adult males choose the tube feet and adult females move away to feed on shrimp eggs and molluscs. 
Sea urchins are one of the favourite foods of many lobsters, crabs, triggerfish, California sheephead, sea otters and wolf eels (which specialise in sea urchins). All these animals carry particular adaptations (teeth, pincers, claws) and the strength that allows them to overcome the excellent protective features of sea urchins. Left unchecked by predators, urchins devastate their environments, creating what biologists call an urchin barren, devoid of macroalgae and associated fauna. Sea urchins graze on the lower stems of kelp, causing the kelp to drift away and die. Loss of the habitat and nutrients provided by kelp forests leads to profound cascade effects on the marine ecosystem. Sea otters have re-entered British Columbia, dramatically improving coastal ecosystem health. Anti-predator defences The spines, long and sharp in some species, protect the urchin from predators. Some tropical sea urchins like Diadematidae, Echinothuriidae and Toxopneustidae have venomous spines. Other creatures also make use of these defences; crabs, shrimps and other organisms shelter among the spines, and often adopt the colouring of their host. Some crabs in the Dorippidae family carry sea urchins, starfish, sharp shells or other protective objects in their claws. Pedicellariae are a good means of defence against ectoparasites, but not a panacea, as some ectoparasites actually feed on them. The hemal system defends against endoparasites. Range and habitat Sea urchins are established in most seabed habitats from the intertidal downwards, at an extremely wide range of depths. Some species, such as Cidaris abyssicola, can live at depths of several kilometres. Many genera are found only in the abyssal zone, including many cidaroids, most of the genera in the Echinothuriidae family, and the "cactus urchins" Dermechinus. 
One of the deepest-living families is the Pourtalesiidae, strange bottle-shaped irregular sea urchins that live only in the hadal zone and have been collected as deep as 6850 metres beneath the surface in the Sunda Trench. Nevertheless, this makes sea urchins the shallowest-living class of echinoderms: brittle stars, starfish and crinoids remain abundant at greater depths, and sea cucumbers have been recorded deeper still. Population densities vary by habitat, with denser populations in barren areas as compared to kelp stands. Even in these barren areas, greatest densities are found in shallow water. Populations are generally found in deeper water if wave action is present. Densities decrease in winter when storms cause them to seek protection in cracks and around larger underwater structures. The shingle urchin (Colobocentrotus atratus), which lives on exposed shorelines, is particularly resistant to wave action. It is one of the few sea urchins that can survive many hours out of water. Sea urchins can be found in all climates, from warm seas to polar oceans. The larvae of the polar sea urchin Sterechinus neumayeri have been found to use energy in metabolic processes twenty-five times more efficiently than do most other organisms. Despite their presence in nearly all the marine ecosystems, most species are found on temperate and tropical coasts, between the surface and some tens of meters deep, close to photosynthetic food sources. Evolution Fossil history The earliest echinoid fossils date to the Middle Ordovician period (circa 465 Mya). There is a rich fossil record, their hard tests made of calcite plates surviving in rocks from every period since then. Spines are present in some well-preserved specimens, but usually only the test remains. Isolated spines are common as fossils. Some Jurassic and Cretaceous Cidaroida had very heavy, club-shaped spines. 
Most fossil echinoids from the Paleozoic era are incomplete, consisting of isolated spines and small clusters of scattered plates from crushed individuals, mostly in Devonian and Carboniferous rocks. The shallow-water limestones from the Ordovician and Silurian periods of Estonia are famous for echinoids. Paleozoic echinoids probably inhabited relatively quiet waters. Because of their thin tests, they would certainly not have survived in the wave-battered coastal waters inhabited by many modern echinoids. Echinoids declined to near extinction at the end of the Paleozoic era, with just six species known from the Permian period. Only two lineages survived this period's massive extinction and into the Triassic: the genus Miocidaris, which gave rise to modern cidaroida (pencil urchins), and the ancestor that gave rise to the euechinoids. By the upper Triassic, their numbers increased again. Cidaroids have changed very little since the Late Triassic, and are the only Paleozoic echinoid group to have survived. The euechinoids diversified into new lineages in the Jurassic and Cretaceous periods, and from them emerged the first irregular echinoids (the Atelostomata) during the early Jurassic. Some echinoids, such as Micraster in the chalk of the Cretaceous period, serve as zone or index fossils. Because they are abundant and evolved rapidly, they enable geologists to date the surrounding rocks. In the Paleogene and Neogene periods (circa 66 to 2.6 Mya), sand dollars (Clypeasteroida) arose. Their distinctive, flattened tests and tiny spines were adapted to life on or under loose sand in shallow water, and they are abundant as fossils in southern European limestones and sandstones. Phylogeny External Echinoids are deuterostome animals, like the chordates. A 2014 analysis of 219 genes from all classes of echinoderms gives the following phylogenetic tree. Approximate dates of branching of major clades are shown in millions of years ago (mya). 
Internal The phylogeny of the sea urchins is as follows: The phylogenetic study from 2022 presents a different topology of the Euechinoidea phylogenetic tree. In it, Irregularia are the sister group of Echinacea (including Salenioida), together forming a common clade Carinacea, while the basal groups Aspidodiadematoida, Diadematoida, Echinothurioida, Micropygoida, and Pedinoida make up a common basal clade, Aulodonta. Relation to humans Injuries Sea urchin injuries are puncture wounds inflicted by the animal's brittle, fragile spines. These are a common source of injury to ocean swimmers, especially along coastal surfaces where coral with stationary sea urchins are present. Their stings vary in severity depending on the species. Their spines can be venomous or cause infection. Granuloma and staining of the skin from the natural dye inside the sea urchin can also occur. Breathing problems may indicate a serious reaction to toxins in the sea urchin. They inflict a painful wound when they penetrate human skin, but are not themselves dangerous if fully removed promptly; if left in the skin, further problems may occur. Science Sea urchins are traditional model organisms in developmental biology. This use originated in the 1800s, when their embryonic development became easily viewed by microscopy. The transparency of the urchin's eggs enabled them to be used to observe that sperm cells actually fertilize ova. They continue to be used for embryonic studies. Sea urchins are being used in longevity studies for comparison between the young and old of the species, particularly for their ability to regenerate tissue as needed. Scientists at the University of St Andrews have discovered a genetic sequence, the '2A' region, in sea urchins previously thought to have belonged only to viruses like foot-and-mouth disease virus. More recently, Eric H. 
Davidson and Roy John Britten argued for the use of urchins as a model organism due to their easy availability, high fecundity, and long lifespan. Beyond embryology, urchins provide an opportunity to research cis-regulatory elements. Oceanography has taken an interest in monitoring the health of urchins and their populations as a way to assess overall ocean acidification, temperatures, and ecological impacts. The organism's evolutionary placement and unique embryology with five-fold symmetry were the major arguments in the proposal to seek the sequencing of its genome. Importantly, urchins are among the closest living relatives of the chordates and thus are of interest for the light they can shed on the evolution of vertebrates. The genome of Strongylocentrotus purpuratus was completed in 2006 and established homology between sea urchin and vertebrate immune system-related genes. Sea urchins code for at least 222 Toll-like receptor genes and over 200 genes related to the Nod-like-receptor family found in vertebrates. This makes the sea urchin a valuable model organism for studying the evolution of innate immunity. The sequencing also revealed that some genetic innovations previously thought to be restricted to chordates, such as the immune transcription factors PU.1 and SPIB, are present in sea urchins. As food The gonads of both male and female sea urchins, sometimes euphemized as sea urchin "roe" or "corals", are culinary delicacies in many parts of the world, especially Japan. In Japan, sea urchin is known as , and its gonads (the only meaty, edible parts of the animal) can retail for as much as ¥40,000 ($360) per kilogram; they are served raw as sashimi or in sushi, with soy sauce and wasabi. Japan imports large quantities from the United States, South Korea, and other producers. Japan consumes 50,000 tons annually, amounting to over 80% of global production. 
Japanese demand for sea urchins has raised concerns about overfishing. Sea urchins are commonly eaten stuffed with rice in the traditional oko-oko dish among the Sama-Bajau people of the Philippines. They were once foraged by coastal Malay communities of Singapore who call them . In New Zealand, Evechinus chloroticus, known as in Māori, is a delicacy, traditionally eaten raw. Though New Zealand fishermen would like to export them to Japan, their quality is too variable. In Mediterranean cuisines, Paracentrotus lividus is often eaten raw, or with lemon, and known as on Italian menus where it is sometimes used in pasta sauces. It can also flavour omelettes, scrambled eggs, fish soup, mayonnaise, béchamel sauce for tartlets, the for a soufflé, or Hollandaise sauce to make a fish sauce. On the Pacific Coast of North America, Strongylocentrotus franciscanus was praised by Euell Gibbons; Strongylocentrotus purpuratus is also eaten. Native Americans in California are also known to eat sea urchins. The coast of Southern California is known as a source of high quality , with divers picking sea urchin from kelp beds in depths as deep as 24 m/80 ft. As of 2013, the state was limiting the practice to 300 sea urchin diver licenses. Though the edible Strongylocentrotus droebachiensis is found in the North Atlantic, it is not widely eaten. However, sea urchins (called in Alutiiq) are commonly eaten by the Alaska Native population around Kodiak Island. It is commonly exported, mostly to Japan. In the West Indies, slate pencil urchins are eaten. In Chilean cuisine, it is served raw with lemon, onions, and olive oil. Aquaria Some species of sea urchins, such as the slate pencil urchin (Eucidaris tribuloides), are commonly sold in aquarium stores. Some species are effective at controlling filamentous algae, and they make good additions to an invertebrate tank. 
Folklore A folk tradition in Denmark and southern England imagined sea urchin fossils to be thunderbolts, able to ward off harm by lightning or by witchcraft, as an apotropaic symbol. Another version supposed they were petrified eggs of snakes, able to protect against heart and liver disease, poisons, and injury in battle, and accordingly they were carried as amulets. These were, according to the legend, created by magic from foam made by the snakes at midsummer.
Alfalfa
Alfalfa () (Medicago sativa), also called lucerne, is a perennial flowering plant in the legume family Fabaceae. It is cultivated as an important forage crop in many countries around the world. It is used for grazing, hay, and silage, as well as a green manure and cover crop. The name alfalfa is used in North America. The name lucerne is more commonly used in the United Kingdom, South Africa, Australia, and New Zealand. The plant superficially resembles clover (a cousin in the same family), especially while young, when trifoliate leaves comprising round leaflets predominate. Later in maturity, leaflets are elongated. It has clusters of small purple flowers followed by fruits spiralled in two to three turns containing 10–20 seeds. Alfalfa is native to warmer temperate climates. It has been cultivated as livestock fodder since at least the era of the ancient Greeks and Romans. Description Alfalfa is a perennial forage legume which normally lives four to eight years, but can live more than 20 years, depending on variety and climate. The plant grows to a height of up to , and has a deep root system, sometimes growing to a depth of more than to reach groundwater. Typically the root system grows to a depth of depending on subsoil constraints. Alfalfa is a small-seeded crop and has a slowly growing seedling, but after several months of establishment, it forms a tough "crown" at the top of the root system. This crown contains shoot buds that enable alfalfa to regrow many times after being grazed or harvested. Alfalfa has a tetraploid genome. Etymology The word alfalfa is a Spanish modification of the Arabic word al-faṣfaṣa. Ecology Alfalfa is considered an insectary, a place where insects are reared, and has been proposed as helpful to other crops, such as cotton, if the two are interplanted, because the alfalfa harbours predatory and parasitic insects that would protect the other crop. 
Harvesting the alfalfa by mowing the entire crop area destroys the insect population, but this can be avoided by mowing in strips so that part of the growth remains. Owing to its deep root system, it helps to improve soil nitrogen fertility and protect from soil erosion. This depth of root system, and perenniality of crowns that store carbohydrates as an energy reserve, make it very resilient, especially to droughts. This plant exhibits autotoxicity, which means it is difficult for alfalfa seed to grow in existing stands of alfalfa. Therefore, alfalfa fields are recommended to be rotated with other species (for example, corn or wheat) before reseeding. The exact mechanism of autotoxicity is unclear, with medicarpins and phenols both seeming to play a role. Levels of autotoxicity in soil depend on soil type (clay soils maintain autotoxicity for longer), cultivar and age of the previous crop. A soil assay can be used to measure autotoxicity. Resistance to autotoxicity also varies by cultivar, a tolerant one being 'WL 656HQ'. Pests and diseases Like most plants, alfalfa can be attacked by various pests and pathogens. Diseases often have subtle symptoms which are easily misdiagnosed and can affect leaves, roots, stems and blossoms. Some pests, such as the alfalfa weevil, aphids, and potato leafhopper, can reduce alfalfa yields dramatically, particularly with the second cutting when weather is warmest. The spotted alfalfa aphid, widespread in Australia, not only sucks sap but also injects salivary toxins into the leaves. Registered insecticides or chemical controls are sometimes used to prevent this, and labels will specify the withholding period before the forage crop can be grazed or cut for hay or silage. Alfalfa is also susceptible to root rots, including Phytophthora, Rhizoctonia, and Texas root rot, as well as to downy mildew caused by the oomycete species Peronospora aestivalis. 
Cultivation Alfalfa is widely grown throughout the world as forage for cattle, and is most often harvested as hay, but can also be made into silage, grazed, or fed as greenchop. Alfalfa usually has the highest feeding value of all common hay crops. It is used less frequently as pasture. When grown on soils where it is well-adapted, alfalfa is often the highest-yielding forage plant, but its primary benefit is the combination of high yield per hectare and high nutritional quality. Its primary use is as feed for high-producing dairy cows, because of its high protein content and highly digestible fiber, and secondarily for beef cattle, horses, sheep, and goats. Alfalfa hay is a widely used protein and fiber source for meat rabbits. In poultry diets, dehydrated alfalfa and alfalfa leaf concentrates are used for pigmenting eggs and meat, because of their high carotenoid content, which is effective for colouring egg yolk and body lipids. Humans also eat alfalfa sprouts in salads and sandwiches. Dehydrated alfalfa leaf is commercially available as a dietary supplement in several forms, such as tablets, powders and tea. Fresh alfalfa can cause bloating, so care must be taken when livestock graze on it. Like other legumes, its root nodules contain bacteria, Sinorhizobium meliloti, with the ability to fix nitrogen, producing a high-protein feed regardless of available nitrogen in the soil. Its nitrogen-fixing ability (which increases soil nitrogen) and its use as an animal feed greatly improve agricultural efficiency. Alfalfa can be sown in spring or fall, and does best on well-drained soils with a neutral pH of 6.8–7.5. Alfalfa requires sustained levels of potassium and phosphorus to grow well. It is moderately sensitive to salt levels in both the soil and irrigation water, although it continues to be grown in the arid southwestern United States, where salinity is an emerging issue. 
Soils low in fertility should be fertilized with manure or a chemical fertilizer, but correction of pH is particularly important. Usually a seeding rate of is recommended, with differences based upon region, soil type, and seeding method. A nurse crop is sometimes used, particularly for spring plantings, to reduce weed problems and soil erosion, but can lead to competition for light, water, and nutrients. In most climates, alfalfa is cut three to four times a year, but it can be harvested up to 12 times per year in Arizona and southern California. Total yields are typically around in temperate environments, but yields have been recorded up to . Yields vary with region, weather, and the crop's stage of maturity when cut. Later cuttings improve yield, but with reduced nutritional content. History Alfalfa seems to have originated in south-central Asia, and was first cultivated in Central Asia. According to Pliny (died 79 AD), it was introduced to Greece in about 490 BC when the Persians invaded Greek territory. Alfalfa cultivation is discussed in the fourth-century AD book Opus Agriculturae by Palladius, stating: "One sow-down lasts ten years. The crop may be cut four or six times a year ... A jugerum of it is abundantly sufficient for three horses all the year ... It may be given to cattle, but new provender is at first to be administered very sparingly, because it bloats up the cattle." The medieval Arabic agricultural writer Ibn al-'Awwam, who lived in Spain in the later 12th century, discussed how to cultivate alfalfa, which he called (). A 13th-century general-purpose Arabic dictionary, Lisān al-'Arab, says that alfalfa is cultivated as an animal feed and consumed in both fresh and dried forms. It is from the Arabic that the Spanish name alfalfa was derived. In the 16th century, Spanish colonizers introduced alfalfa to the Americas as fodder for their horses. 
In the North American colonies of the eastern US in the 18th century, it was called "lucerne", and many trials at growing it were made, but generally without sufficiently successful results. Relatively little is grown in the southeastern US today. Lucerne (or luzerne) is the name for alfalfa in Britain, Australia, France, Germany, and a number of other countries. Alfalfa seeds were imported to California from Chile in the 1850s. That was the beginning of a rapid and extensive introduction of the crop over the western US and introduced the word "alfalfa" to the English language. Since North and South America now produce a large part of the world's output, the word "alfalfa" has been slowly entering other languages. Harvesting When alfalfa is to be used as hay, it is usually cut and baled. Loose haystacks are still used in some areas, but bales are easier for use in transportation, storage, and feed. Ideally, the first cutting should be taken at the bud stage, and the subsequent cuttings just as the field is beginning to flower, or one-tenth bloom because carbohydrates are at their highest. When using farm equipment rather than hand-harvesting, a swather cuts the alfalfa and arranges it in windrows. In areas where the alfalfa does not immediately dry out on its own, a machine known as a mower-conditioner is used to cut the hay. The mower-conditioner has a set of rollers or flails that crimp and break the stems as they pass through the mower, making the alfalfa dry faster. After the alfalfa has dried, a tractor pulling a baler collects the hay into bales. Several types of bales are commonly used for alfalfa. For small animals and individual horses, the alfalfa is baled into small, two-string bales, commonly named by the strands of string used to wrap it. Other bale sizes are three-string, and so on up to half-ton (six-string) "square" bales – actually rectangular, and typically about . 
Small square bales weigh from depending on moisture, and can be easily hand separated into "flakes". Cattle ranches use large round bales, typically in diameter and weighing from . These bales can be placed in stable stacks or in large feeders for herds of horses, or unrolled on the ground for large herds of cattle. The bales can be loaded and stacked with a tractor using a spike, known as a bale spear, that pierces the center of the bale, or they can be handled with a grapple (claw) on the tractor's front-end loader. When used as feed for dairy cattle, alfalfa is often made into haylage by a process known as ensiling. Rather than being dried to make dry hay, the alfalfa is chopped finely and fermented in silos, trenches, or bags, where the oxygen supply can be limited to promote fermentation. The anaerobic fermentation of alfalfa allows it to retain high nutrient levels similar to those of fresh forage, and is also more palatable to dairy cattle than dry hay. In many cases, alfalfa silage is inoculated with different strains of microorganisms to improve the fermentation quality and aerobic stability of the silage. Production During the early 2000s, alfalfa was the most cultivated forage legume in the world. Worldwide production was around 436 million tons in 2006. In 2009, alfalfa was grown on approximately worldwide; of this North America produced 41% (), Europe produced 25% (), South America produced 23% (), Asia produced 8% (), and Africa and Oceania produced the remainder. The US was the largest alfalfa producer in the world by area in 2009, with , but considerable production area is found in Argentina (), Canada (), Russia (), Italy (), and China (). United States In the United States in 2012, the leading alfalfa-growing states were California, Idaho, and Montana. 
Alfalfa is predominantly grown in the northern and western US; it can be grown in the southeastern US, but leaf and root diseases, poor soils, and a lack of well-adapted varieties are often limitations. In California, varieties resistant to the spotted alfalfa aphid (Therioaphis maculata) are necessary, but even that is not always enough due to constant resistance evolution. Australia Lucerne grown in Australia prior to the 1970s was from seed brought from Great Britain in the early years of colonization, with production most successful in the Hunter and Peel river valleys. Hunter River cv. was the first lucerne variety developed for the Australian environment and was bred from selections of pre-existing lucerne stands in the Upper Hunter River (New South Wales) region. Pest burdens from the spotted alfalfa aphid in the 1970s caused significant destruction of NSW lucerne paddocks, with surviving populations being used as parents for Hunterfield cv. (released 1983). This variety showed significantly improved resistance to the spotted alfalfa aphid. Grazing is the most commonly used form of pasture management in Australia, with many lucerne varieties bred specifically for low rainfall and high grazing pressure. New South Wales produces 40% of Australia's lucerne. Due to the introduction of the spotted alfalfa aphid (Therioaphis maculata) in 1977, all varieties grown there must be resistant to it. South Australia is home to 83% of all lucerne seed production in Australia. Much of this seed industry is centred around the town of Keith, South Australia, also encompassing the neighbouring localities of Tintinara, Bordertown, Willalooka, Padthaway and Naracoorte. Alfalfa and bees Alfalfa seed production requires the presence of pollinators when the fields of alfalfa are in bloom. 
Alfalfa pollination is somewhat problematic, however, because western honey bees, the most commonly used pollinator, are less than ideal for this purpose; the pollen-carrying keel of the alfalfa flower trips and strikes pollinating bees on the head, which helps transfer the pollen to the foraging bee. Western honey bees, however, do not like being struck in the head repeatedly and learn to defeat this action by drawing nectar from the side of the flower. The bees thus collect the nectar, but carry no pollen, so do not pollinate the next flower they visit. Because older, experienced bees do not pollinate alfalfa well, most pollination is accomplished by young bees that have not yet learned the trick of robbing the flower without tripping the head-knocking keel. When western honey bees are used to pollinate alfalfa, the beekeeper stocks the field at a very high rate to maximize the number of young bees. However, Western honey bee colonies may suffer protein stress when working alfalfa only, because alfalfa pollen protein is deficient in isoleucine, one of the amino acids essential in the diet of honeybee larvae. Today, the alfalfa leafcutter bee (Megachile rotundata) is increasingly used to circumvent these problems. As a solitary but gregarious bee species, it does not build colonies or store honey, but is a very efficient pollinator of alfalfa flowers. Nesting is in individual tunnels in wooden or plastic material, supplied by the alfalfa seed growers. The leafcutter bees are used in the Pacific Northwest, while western honeybees dominate in California alfalfa seed production. M. rotundata was unintentionally introduced into the US during the 1940s, and its management as a pollinator of alfalfa has led to a three-fold increase in seed production in the U.S. 
The synchronous emergence of the adult bees of this species during the alfalfa blooming period, combined with behaviors such as gregarious nesting and the use of leaf and nesting materials mass-produced by humans, makes these bees well suited to pollinating alfalfa. A smaller amount of alfalfa produced for seed is pollinated by the alkali bee, mostly in the northwestern US. It is cultured in special beds near the fields. These bees also have their own problems. They are not portable like honey bees, and when fields are planted in new areas, the bees take several seasons to build up. Honey bees are still trucked to many of the fields at bloom time. The rusty patched bumble bee, Bombus affinis, is important to the agricultural industry as well as for the pollination of alfalfa. Members of this species are known to pollinate up to 65 different species of plants, and it is the primary pollinator of key dietary crops, such as cranberries, plums, apples, onions, and alfalfa. Varieties Considerable research and development has been done with this important plant. Older cultivars such as 'Vernal' have been the standard for years, but many public and private varieties better adapted to particular climates are available. Private companies release many new varieties each year in the US. Most varieties go dormant in the fall, with reduced growth in response to low temperatures and shorter days. 'Nondormant' varieties that grow through the winter are planted in long-season environments such as Mexico, Arizona, and Southern California, whereas 'dormant' varieties are planted in the Upper Midwest, Canada, and the Northeast. 'Nondormant' varieties can be higher-yielding, but they are susceptible to winter-kill in cold climates and have poorer persistence. Most alfalfa cultivars contain genetic material from sickle medick (M. falcata), a crop wild relative of alfalfa that naturally hybridizes with M. 
sativa to produce sand lucerne (M. sativa ssp. varia). This species may bear either the purple flowers of alfalfa or the yellow of sickle medick, and is so called for its ready growth in sandy soil.<ref>Joseph Elwyn Wing, Alfalfa Farming in the U.S. 79 (Sanders Publishing Co. 1912).</ref> Traits for insect resistance have also been introduced from M. glomerata and M. prostrata, members of alfalfa's secondary gene pool. Most of the improvements in alfalfa over the last decades have consisted of better disease resistance on poorly drained soils in wet years, better ability to overwinter in cold climates, and the production of more leaves. Multileaf alfalfa varieties have more than three leaflets per leaf. Alfalfa (or lucerne) growers can choose from a suite of varieties or cultivars in the seed marketplace, basing their selection on factors including dormancy or activity rating, crown height, fit for purpose (i.e., hay production or grazing), disease resistance, insect pest resistance, forage yield, leaf fineness, and other favourable attributes. Plant breeding efforts use scientific methodology and technology to strive for new improved varieties. The L. Teweles Seed Company claimed it created the world's first hybrid alfalfa. Wisconsin, California, and many other states publish alfalfa variety trial data. A complete listing of state variety testing data is provided by the North American Alfalfa Improvement Conference (NAAIC) State Listing, as well as additional detailed alfalfa genetic and variety data published by NAAIC. Genetic modification Roundup Ready alfalfa (RRA), a genetically modified variety, was released by Forage Genetics International in 2005. This was developed through the insertion of a gene owned by Monsanto Company that confers resistance to glyphosate, a broad-spectrum herbicide, also known as Roundup. 
Although most grassy and broadleaf plants, including ordinary alfalfa, are killed by Roundup, growers can spray fields of Roundup Ready alfalfa with the glyphosate herbicide and kill the weeds without harming the alfalfa crop. Legal issues in the US In 2005, after completing a 28-page environmental assessment, the United States Department of Agriculture (USDA) granted RRA nonregulated status under Code of Federal Regulations Title 7 Part 340, which regulates, among other things, the introduction (importation, interstate movement, or release into the environment) of organisms and products altered or produced through genetic engineering that are plant pests or that there is reason to believe are plant pests. Monsanto had to seek deregulation to conduct field trials of RRA, because RRA contains a promoter sequence derived from the plant pathogen figwort mosaic virus. The USDA granted the application for deregulation, stating that the RRA with its modifications: "(1) Exhibit no plant pathogenic properties; (2) are no more likely to become weedy than the nontransgenic parental line or other cultivated alfalfa; (3) are unlikely to increase the weediness potential of any other cultivated or wild species with which it can interbreed; (4) will not cause damage to raw or processed agricultural commodities; (5) will not harm threatened or endangered species or organisms that are beneficial to agriculture; and (6) should not reduce the ability to control pests and weeds in alfalfa or other crops." Monsanto started selling RRA and within two years, more than 300,000 acres were devoted to the plant in the US. The granting of deregulation was opposed by many groups, including growers of non-GM alfalfa who were concerned about gene flow into their crops. In 2006, the Center for Food Safety, a US non-governmental organization that is a critic of biotech crops, and others, challenged this deregulation in the United States District Court for the Northern District of California. 
Organic growers were concerned that the GM alfalfa could cross-pollinate with their organic alfalfa, making their crops unsalable in countries that ban the growing of GM crops. The District Court ruled that the USDA's environmental assessment did not address two issues concerning RRA's effect on the environment, and in 2007, required the USDA to complete a much more extensive environmental impact statement (EIS). Until the EIS was completed, the court banned further planting of RRA but allowed land already planted to continue.Memorandum and Order Re: Permanent Injunction United States District Court for Northern California, Case No C 06-01075 CR, 3 May 2007. Retrieved 13 November 2011 The USDA proposed a partial deregulation of RRA, but this was also rejected by the District Court. Planting of RRA was halted. In June 2009, a divided three-judge panel on the 9th U.S. Circuit Court of Appeals upheld the District Court's decision. Monsanto and others appealed to the US Supreme Court. On 21 June 2010, in Monsanto Co. v. Geertson Seed Farms, the Supreme Court overturned the District Court decision to ban planting RRA nationwide, as there was no evidence of irreparable injury. They ruled that the USDA could partially deregulate RRA before an EIS was completed. The Supreme Court did not consider the District Court's ruling disallowing RRA's deregulation, and consequently RRA was still a regulated crop awaiting the USDA's completion of an EIS. This decision was welcomed by the American Farm Bureau Federation, Biotechnology Industry Organization, American Seed Trade Association, American Soybean Association, National Alfalfa and Forage Alliance, National Association of Wheat Growers, National Cotton Council, and National Potato Council. 
In July 2010, 75 members of Congress from both political parties sent a letter to Agriculture Secretary Tom Vilsack asking him to immediately allow limited planting of genetically engineered alfalfa.Letter by 75 Members of Congress to Vilsack Retrieved 1 November 2012 However, the USDA did not issue interim deregulatory measures, instead focusing on completing the EIS. Their 2,300-page EIS, published in December 2010, concluded that RRA would not affect the environment. Three of the biggest natural food brands in the US lobbied for a partial deregulation of RRA, but in January 2011, despite protests from organic groups, Secretary Vilsack announced that the USDA had approved the unrestricted planting of genetically modified alfalfa and planting resumed.Gilla, Carey and Doering, Christopher UPDATE 3-U.S. farmers get approval to plant GMO alfalfa Reuters US Edition, 27 January 2011. Retrieved 28 April 2011 Secretary Vilsack commented, "After conducting a thorough and transparent examination of alfalfa ... APHIS [Animal and Plant Health Inspection Service] has determined that [RRA] is as safe as traditionally bred alfalfa." About of alfalfa were grown in the US, the fourth-biggest crop by acreage, of which about 1% were organic. Some biotechnology officials forecast that half of the US alfalfa acreage could eventually be planted with GM alfalfa. The National Corn Growers Association, the American Farm Bureau Federation, and the Council for Biotech Information warmly applauded this decision. Christine Bushway, CEO of the Organic Trade Association, said, "A lot of people are shell-shocked. While we feel Secretary Vilsack worked on this issue, which is progress, this decision puts our organic farmers at risk." The Organic Trade Association issued a press release in 2011 saying that the USDA recognized the impact that cross-contamination could have on organic alfalfa and urged them to place restrictions to minimize any such contamination. 
However, organic farming groups, organic food outlets, and activists responded by publishing an open letter saying that planting the "alfalfa without any restrictions flies in the face of the interests of conventional and organic farmers, preservation of the environment, and consumer choice". In addition to House Agriculture Committee Chairman Frank Lucas, Senator Debbie Stabenow (Chairwoman of the Senate Agriculture Committee) and Senator Richard Lugar strongly supported the decision, respectively stating that it would give growers "the green light to begin planting an abundant, affordable and safe crop" and give farmers and consumers the "choice ... in planting or purchasing food grown with GM technology, conventionally, or organically". In a joint statement, US Senator Patrick Leahy and Representative Peter DeFazio said the USDA had the "opportunity to address the concerns of all farmers", but instead "surrender[ed] to business as usual for the biotech industry". In March 2011, the non-profit Center for Food Safety appealed the deregulation decision, which the District Court for Northern California rejected in 2012. Safety concerns Alfalfa sprouts may contain microbiological pathogens, mainly Salmonella or E. coli, which have caused numerous food product recalls and illness outbreaks, putting sprouts into a "high risk" category for food safety. People with weakened immune systems, such as the elderly, pregnant women, or those taking prescription drugs affecting the immune system, should not eat sprouts. With long-term human consumption of alfalfa seeds, several safety concerns and medication interactions may result, including possible reactions similar to lupus erythematosus, an autoimmune disease. Other concerns are for women during pregnancy or breast-feeding, hormone-sensitive conditions (such as breast, uterine, and ovarian cancers), and for people with diabetes. Alfalfa may interact with warfarin (e.g. 
Coumadin), birth control pills (contraceptive drugs), and estrogens. Toxicity of canavanine Raw alfalfa seeds and sprouts are a source of the amino acid canavanine. Much of the canavanine is converted into other amino acids during germination, so sprouts contain much less canavanine than unsprouted seeds. Canavanine competes with arginine, resulting in the synthesis of dysfunctional proteins. Raw unsprouted alfalfa has toxic effects in primates, including humans, which can result in lupus-like symptoms and other immunological diseases in susceptible individuals. Stopping consumption of alfalfa seeds can reverse the effects. Phytoestrogens and effect on livestock fertility Alfalfa, like other leguminous crops, is a source of phytoestrogens, including spinasterol, coumestrol, and coumestan. Because of this, grazing on alfalfa during breeding can cause reduced fertility in sheep and in dairy cattle if not effectively managed. Coumestrol levels in alfalfa have been shown to be elevated by fungal infection, but not significantly under drought stress or aphid infestation. Grazing management can be utilised to mitigate the effects of coumestrol on ewe reproductive performance, with full recovery after removal from alfalfa. Coumestrol levels in unirrigated crops can be predicted practically using weather variables. Nutrition Raw alfalfa seed sprouts are 93% water, 2% carbohydrates, 4% protein, and contain negligible fat. In a reference amount, raw alfalfa sprouts supply of food energy and 29% of the Daily Value of vitamin K. They are a moderate source of vitamin C, some B vitamins, phosphorus, and zinc. Sprouts Sprouting alfalfa seeds is the process of germinating seeds at the immature stage for use as a garnish on various food preparations, such as salads. Although sprouts may be grown in soil, they are more commonly germinated in a soilless medium using drums, trays or racks.
Biology and health sciences
Pulses
Plants
66173
https://en.wikipedia.org/wiki/Common%20swift
Common swift
The common swift (Apus apus) is a medium-sized bird, superficially similar to the barn swallow or house martin but somewhat larger, though it is not related to those passerine species, belonging instead to the order Apodiformes. The resemblances between the groups are due to convergent evolution, reflecting similar lifestyles. The swifts' nearest relatives are the New World hummingbirds and the Southeast Asian treeswifts. Its scientific name Apus is Latin for a swift, thought by the ancients to be a type of swallow with no feet (from Ancient Greek α, a, "without", and πούς, pous, "foot"). Swifts have very short legs which they use primarily for clinging to vertical surfaces (hence the German name Mauersegler, literally meaning "wall-glider"). They never settle voluntarily on the ground, where they would be vulnerable to accidents and predation, and non-breeding individuals may spend up to ten months in continuous flight. Taxonomy The common swift was one of the many species described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. He introduced the binomial name Hirundo apus. The current genus Apus was erected by the Italian naturalist Giovanni Antonio Scopoli in 1777 based on tautonymy. The word apus is the Latin word for a swift. It is derived from the Ancient Greek α, a, "without", and πούς, pous, "foot", based on the belief that these birds were a form of swallow that lacked feet. A Central European subspecies which lived during the last ice age has been described as Apus apus palapus. Description Common swifts are long with a wingspan of and entirely blackish-brown except for a small white or pale grey patch on their chins which is not visible from a distance. They have a short forked tail and very long swept-back wings that resemble a crescent or a boomerang. Their call is a loud scream in two different tone pitches, the higher of which issues from the female. 
They often form "screaming parties" during summer evenings, when 10–20 swifts will gather in flight around their nesting area, calling out and being answered by nesting swifts. Larger "screaming parties" are formed at higher altitudes, especially late in the breeding season. The purpose of these parties is uncertain, but may include ascending to sleep on the wing, while still-breeding adults tend to spend the night in the nest. Radar tracking of swifts at their breeding colonies has revealed that they often move together in flocks during their evening ascent and their dawn descent, but fly separately during the subsequent evening descent and the prior dawn ascent, suggesting that this flocking benefits the swifts via cue acquisition and information exchange between individuals or through extending social behaviour. Behaviour Swifts may nest in former woodpecker tree burrows found in ancient forests, such as some 600 reported nesting in the Białowieża Forest of North Eastern Poland, or the small colony found in a combination of woodpecker holes and tree nestboxes on the RSPB's reserve at the Caledonian Forest in Abernethy, Scotland. While tree holes and cliffs may have comprised their historical nesting resource, the almost complete removal of ancient forest from their nesting range has resulted in adaptation to man-made sites. Swifts build their nests of air-borne material caught in flight, bonded with their saliva, in suitable building hollows, such as under tiles, in gaps beneath window sills, and most typically under eaves and within gables. Swifts form pairs that may couple for years, and often return to the same nesting site and partner year after year, repairing degradation suffered in their 40-week migratory absence. Insects such as clothes moths, carpet and larder beetles may consume all but the most indigestible nest elements, typically feather shafts. 
Young nesting swifts are able to survive for a few days without food by dropping their body temperature and metabolic rate, entering a torpid state. Except when nesting, swifts spend their lives in the air, living on the insects caught in flight; they drink, feed, and often mate and sleep on the wing. Some individuals go 10 months without landing. No other bird spends as much of its life in flight. Contrary to common belief, swifts can take flight from level ground. Their maximum horizontal flying speed is . Over a lifetime they can cover millions of kilometers. Feeding parties can be very large in insect-rich areas, such as wetlands. Reports of as many as 2,000 swifts feeding over flooded gravel pits, lakes and marshy river deltas are not uncommon, and may represent an ingress of swifts from within as much as a radius; swifts nesting in Western Scotland are thought to venture to Lough Neagh in Northern Ireland to feed on the abundant and nutritious "Lough Neagh Fly". Breeding Common swifts nest in a wider variety of sites than any other species of Apus. Swifts usually nest in buildings but they can also be found nesting in holes in trees, cliffs and crevices, and even in nestboxes. Swifts usually enter their nesting holes with direct flight, and take-off is characterized by an initial free-fall. Migration Common swifts are migratory. Their summer breeding range runs from Portugal and Ireland in the West across to China and Siberia in the East. They breed as far south as Northern Africa (in Morocco and Algeria), with a presence in the Middle East in Israel, Lebanon and Syria, the Near East across Turkey, and the whole of Europe as far north as Norway, Finland, and most of sub-Arctic Russia. Swifts migrate to Africa by a variety of routes, ending up in Equatorial and Sub-Equatorial Africa, excluding the Cape. Common swifts do not breed on the Indian Subcontinent. 
Subjects of a geolocator tracking study demonstrated that swifts breeding in Sweden winter in the Congo region of Africa. Swifts spend three to three-and-a-half months in Africa and a similar time breeding – the rest is spent on the wing, flying home or away. Unsuccessful breeders, fledglings, and sexually immature year-old birds are the first to leave their breeding area. Breeding males follow next, and finally the breeding females. The breeding females stay longer in the nest to rebuild their fat reserves. The time of departure is often determined by the light cycle, and begins at the first day with fewer than 17 hours of light. For this reason, birds further north, for instance in Finland, leave later in the second half of August. These latecomers are rushed through the quickly shortening days in Central Europe and are barely seen by bird watchers. The prevailing direction of travel through Central Europe is south-by-southwest, and so the Alps do not present a barrier. In bad weather, the swifts follow rivers, because they can find a better food supply there. The population of Western and Central Europe traverses the Iberian peninsula and northwestern Africa. Swifts from Russia and southeastern Europe make a long journey over the eastern part of the Mediterranean. It is unclear where the two groups meet. The western group of swifts mostly follow the Atlantic coastline of Africa – otherwise they would have to cross the Sahara. Once they arrive at the humid savanna, they turn southeast to arrive at their winter feeding grounds. During the summer in Africa, there is a great bounty of insects for the swifts, since the region lies in the Intertropical Convergence Zone. The swifts have a nearly unbroken presence in the sky. A few swifts, usually some of the sexually immature one year olds, remain in Africa. The majority fly northwards through Africa, then turn east towards their destinations. 
The birds use low pressure fronts during their spring migrations to exploit the southwestern flow of warm air, and on the return trip, ride northeastern winds on the back of the low pressure fronts. In Central Europe, the swifts return in the second half of April and the first third of May, and like to stay in lowlands and near water rather than in high places. In more northerly regions, the swifts arrive later. The weather along the journey has an enormous influence on the arrival date, so in one region the swifts may come back at varying times year to year. Differences between swifts and swallows The barn swallow and house martin hunt for airborne insects in a manner similar to that of the slightly larger swift, and occasionally mixed groups of the species form. The most noticeable differences between the three types are: The shrill screaming call of the swift is easily distinguished from the more inconspicuous babbling of the swallow. The narrow sickle-shaped wings of the swift are longer than its body, and its silhouette in the air resembles an anchor. The swift's wingbeats are deep and quick, and the swift glides for longer. The swallow's flight is more fluttering, and it presses its wings further to the rear during beats. Although sometimes difficult to discern against a bright sky, the underside of a swift, with the exception of the white spot under its chin, is entirely dark brown. Swallows show a beige-white underside. They can also be recognized by the long forks in their tails. Parasites Swift nests commonly support populations of the chewing louse Dennyus hirundinis and the lousefly Crataerina pallida. In culture In medieval Italy, swifts (rondone) were encouraged to nest in towers and buildings using rondonare, holes left in the wall and special constructions under the eaves of buildings. Young birds were harvested for eating but there were rules about leaving at least one young in the nest. 
The heraldic bird known as the "martlet", which is represented without feet, may have been based on the swift, but is generally assumed to refer to the house martin; it was used for the arms of younger sons, perhaps because it symbolized their landless wandering.
Biology and health sciences
Apodiformes
Animals
66174
https://en.wikipedia.org/wiki/Barn%20swallow
Barn swallow
The barn swallow (Hirundo rustica) is the most widespread species of swallow in the world, occurring on all continents, with vagrants reported even in Antarctica. It is a distinctive passerine bird with blue upperparts and a long, deeply forked tail. In Anglophone Europe, it is just called the swallow; in northern Europe, it is the only member of family Hirundinidae called a "swallow" rather than a "martin". There are six subspecies of barn swallow, which breed across the Northern Hemisphere. Two subspecies (H. r. savignii and H. r. transitiva) have fairly restricted ranges in the Nile valley and eastern Mediterranean, respectively. The other four are more widespread, with winter ranges covering much of the Southern Hemisphere. The barn swallow is a bird of open country that normally nests in man-made structures and consequently has spread with human expansion. It builds a cup nest from mud pellets in barns or similar structures and feeds on insects caught in flight. This species lives in close association with humans, and its insect-eating habits mean that it is tolerated by humans; this acceptance was reinforced in the past by superstitions regarding the bird and its nest. There are frequent cultural references to the barn swallow in literary and religious works due to both its living in close proximity to humans and its annual migration. The barn swallow is the national bird of Austria and Estonia. Description The adult male barn swallow of the nominate subspecies H. r. rustica is long including of elongated outer tail feathers. It has a wingspan of and weighs . It has steel blue upperparts and a rufous forehead, chin and throat, which are separated from the off-white underparts by a broad dark blue breast band. The outer tail feathers are elongated, giving the distinctive deeply forked "swallow tail". There is a line of white spots across the outer end of the upper tail. 
The female is similar in appearance to the male, but the tail streamers are shorter, the blue of the upperparts and breast band is less glossy, and the underparts are paler. The juvenile is browner and has a paler rufous face and whiter underparts. It also lacks the long tail streamers of the adult. Although both sexes sing, female song was only recently described. (See below for details about song.) Calls include witt or witt-witt and a loud splee-plink when excited or trying to chase intruders away from the nest. The alarm calls include a sharp siflitt for predators like cats and a flitt-flitt for birds of prey like the hobby. This species is fairly quiet on the wintering grounds. The distinctive combination of a red face and blue breast band renders the adult barn swallow easy to distinguish from the African Hirundo species and from the welcome swallow (Hirundo neoxena) with which its range overlaps in Australasia. In Africa the short tail streamers of the juvenile barn swallow invite confusion with the juvenile red-chested swallow (Hirundo lucida), but the latter has a narrower breast band and more white in the tail. Taxonomy The barn swallow was described by Carl Linnaeus in his 1758 10th edition of Systema Naturae as Hirundo rustica, characterised as "H. rectricibus, exceptis duabus intermediis, macula alba notatîs". Hirundo is the Latin word for "swallow"; rusticus means "of the country". This species is the only one of that genus to have a range extending into the Americas, with the majority of Hirundo species being native to Africa. This genus of blue-backed swallows is sometimes called the "barn swallows". 
The Oxford English Dictionary dates the English common name "barn swallow" to 1851, though an earlier instance of the collocation in an English-language context is in Gilbert White's popular book The Natural History of Selborne, originally published in 1789: The swallow, though called the chimney-swallow, by no means builds altogether in chimnies, but often within barns and out-houses against the rafters ... In Sweden she builds in barns, and is called ladusvala, the barn-swallow. This suggests that the English name may be a calque on the Swedish term. There are few taxonomic problems within the genus, but the red-chested swallow—a resident of West Africa, the Congo Basin, and Ethiopia—was formerly treated as a subspecies of barn swallow. The red-chested swallow is slightly smaller than its migratory relative, has a narrower blue breast-band, and (in the adult) has shorter tail streamers. In flight, it looks paler underneath than barn swallow. Subspecies Six subspecies of barn swallow are generally recognised. In eastern Asia, a number of additional or alternative forms have been proposed, including saturata by Robert Ridgway in 1883, kamtschatica by Benedykt Dybowski in 1883, ambigua by Erwin Stresemann and mandschurica by Wilhelm Meise in 1934. Given the uncertainties over the validity of these forms, this article follows the treatment of Turner and Rose. H. r. rustica, the nominate European subspecies, breeds in Europe and Asia, as far north as the Arctic Circle, south to North Africa, the Middle East and Sikkim, and east to the Yenisei River. It migrates on a broad front to winter in Africa, Arabia, and the Indian subcontinent. The barn swallows wintering in southern Africa are from across Eurasia to at least 91°E, and have been recorded as covering up to on their annual migration. The nominate European subspecies was the first to have its genome sequenced and published. H. r. transitiva was described by Ernst Hartert in 1910.
It breeds in the Middle East from southern Turkey to Israel and is partially resident, though some birds winter in East Africa. It has orange-red underparts and a broken breast band. The holotype of Chelidon rustica transitiva Hartert (Vog. pal. Fauna, Heft 6, 1910. p. 802), an adult female, is held in the vertebrate zoology collection of National Museums Liverpool at World Museum, with accession number NML-VZ T2057. The specimen was collected in the Plains of Esdraelon, Palestine on 16 December 1863 by Henry Baker Tristram. The specimen came to the Liverpool national collection through the purchase of Canon Henry Baker Tristram's collection by the museum in 1896. H. r. savignii, the resident Egyptian subspecies, was described by James Stephens in 1817 and named for French zoologist Marie Jules César Savigny. It resembles transitiva, which also has orange-red underparts, but savignii has a complete broad breast band and deeper red hue to the underparts. H. r. gutturalis, described by Giovanni Antonio Scopoli in 1786, has whitish underparts and a broken breast band. The breast is chestnut and the lower underparts more pink-buff. The populations that breed in the central and eastern Himalayas have been included in this subspecies, although the primary breeding range is Japan and Korea. The east Asian breeders winter across tropical Asia from India and Sri Lanka east to Indonesia and New Guinea. Increasing numbers are wintering in Australia. It hybridises with H. r. tytleri in the Amur River area. It is thought that the two eastern Asia forms were once geographically separate, but the nest sites provided by expanding human habitation allowed the ranges to overlap. H. r. gutturalis is a vagrant to Alaska and Washington, but is easily distinguished from the North American breeding subspecies, H. r. erythrogaster, by the latter's reddish underparts. H. r.
tytleri, first described by Thomas Jerdon in 1864, and named for British soldier, naturalist and photographer Robert Christopher Tytler, has deep orange-red underparts and an incomplete breast band. The tail is also longer. It breeds in central Siberia south to northern Mongolia and winters from eastern Bengal east to Thailand and Malaysia. H. r. erythrogaster, the North American subspecies described by Pieter Boddaert in 1783, differs from the European subspecies in having redder underparts and a narrower, often incomplete, blue breast band. It breeds throughout North America, from Alaska to southern Mexico, and migrates to the Lesser Antilles, Costa Rica, Panama and South America to winter. A few may winter in the southernmost parts of the breeding range. This subspecies funnels through Central America on a narrow front and is therefore abundant on passage in the lowlands of both coasts. Since the 1980s, small numbers of this subspecies have been found nesting in Argentina. The short wings, red belly and incomplete breast band of H. r. tytleri are also found in H. r. erythrogaster, and DNA analyses show that barn swallows from North America colonised the Baikal region of Siberia, a dispersal direction opposite to that for most changes in distribution between North America and Eurasia. Behaviour Habitat and range The preferred habitat of the barn swallow is open country with low vegetation, such as pasture, meadows and farmland, preferably with nearby water. This swallow avoids heavily wooded or precipitous areas and densely built-up locations. The presence of accessible open structures such as barns, stables, or culverts to provide nesting sites, and exposed locations such as wires, roof ridges or bare branches for perching, are also important in the bird's selection of its breeding range. Barn swallows are semi-colonial, settling in groups from a single pair to a few dozen pairs, particularly in larger wooden structures housing animals. 
The same individuals often breed at the same site year after year, although settlement choices have been experimentally shown to be predicted by nest availability rather than any characteristics of available mates. Because it takes around 2 weeks for a pair to build a nest from mud, hair, and other materials, old nests are highly prized. This species breeds across the Northern Hemisphere from sea level to , but to in the Caucasus and North America, and it is absent only from deserts and the cold northernmost parts of the continents. Over much of its range, it avoids towns, and in Europe is replaced in urban areas by the house martin. However, in Honshū, Japan, the barn swallow is a more urban bird, with the red-rumped swallow (Cecropis daurica) replacing it as the rural species. In winter, the barn swallow is cosmopolitan in its choice of habitat, avoiding only dense forests and deserts. It is most common in open, low vegetation habitats, such as savanna and ranch land, and in Venezuela, South Africa and Trinidad and Tobago it is described as being particularly attracted to burnt or harvested sugarcane fields and the waste from the cane. In the absence of suitable roost sites, they may sometimes roost on wires where they are more exposed to predators. Individual birds tend to return to the same wintering locality each year and congregate from a large area to roost in reed beds. These roosts can be extremely large; one in Nigeria had an estimated 1.5 million birds. These roosts are thought to be a protection from predators, and the arrival of roosting birds is synchronised in order to overwhelm predators like African hobbies. The barn swallow has been recorded as breeding in the more temperate parts of its winter range, such as the mountains of Thailand and in central Argentina. 
Migration of barn swallows between Britain and South Africa was first established on 23 December 1912 when a bird that had been ringed by James Masefield at a nest in Staffordshire was found in Natal. As would be expected for a long-distance migrant, this bird has occurred as a vagrant to such distant areas as Hawaii, Bermuda, Greenland, Tristan da Cunha, the Falkland Islands, and even Antarctica. Feeding The barn swallow is similar in its habits to other aerial insectivores, including other swallow species and the unrelated swifts. It is not a particularly fast flier, with a speed estimated at , up to and a wing beat rate of approximately 5, up to 7–9 times each second. The barn swallow typically feeds in open areas above shallow water or the ground, often following animals, humans or farm machinery to catch disturbed insects, but it will occasionally pick prey items from the water surface, walls and plants. Swallows have been observed feeding on insects that fly around active white stork nests as well. In the breeding areas, large flies make up around 70% of the diet, with aphids also a significant component. However, in Europe, the barn swallow consumes fewer aphids than the house or sand martins. On the wintering grounds, Hymenoptera, especially flying ants, are important food items. Grasshoppers, crickets, dragonflies, beetles and moths are also preyed upon. When egg-laying, barn swallows hunt in pairs, but otherwise will often form large flocks. The amount of food a clutch will get depends on the size of the clutch, with larger clutches getting more food on average. The timing of a clutch also determines the food given; later broods get food that is smaller in size compared to earlier broods. This is because larger insects are too far away from the nest to be profitable in terms of energy expenditure.
Isotope studies have shown that wintering populations may utilise different feeding habitats, with British breeders feeding mostly over grassland, whereas Swiss birds utilised woodland more. Another study showed that a single population breeding in Denmark actually wintered in two separate areas. The barn swallow drinks by skimming low over lakes or rivers and scooping up water with its open mouth. This bird bathes in a similar fashion, dipping into the water for an instant while in flight. Swallows gather in communal roosts after breeding, sometimes thousands strong. Reed beds are regularly favoured, with the birds swirling en masse before swooping low over the reeds. Reed beds are an important source of food prior to and whilst on migration; although the barn swallow is a diurnal migrant that can feed on the wing whilst it travels low over ground or water, the reed beds enable fat deposits to be established or replenished. Song Males sing to defend small territories (when living in colonies, less so in solitary pairs) and to attract mates. Males sing throughout the breeding season, from late April into August in many parts of the range. Their song is made up of a "twitter warble," followed by a rising "P-syllable" in European H. r. rustica and the North American H. r. erythrogaster. In all subspecies, this is followed by a short "Q-syllable" and a trilled series of pulses, termed the "rattle." The rattle is sometimes followed by a terminal "Ω-Note" in some subspecies' populations, and always at the end of H. r. tytleri song. Female songs are much shorter than male songs, and are only produced during the early part of the breeding season. Females sing spontaneously, though infrequently, and will also countersing in response to each other. Breeding The male barn swallow returns to the breeding grounds before the females and selects a nest site, which is then advertised to females with a circling flight and song. 
Plumage may be used to advertise: in some populations, as in the subspecies H. r. gutturalis, darker ventral plumage in males is associated with higher breeding success. In other populations, the breeding success of the male is related to the length of the tail streamers, with longer streamers being more attractive to the female. Males with longer tail feathers are generally longer-lived and more disease resistant; females thus gain an indirect fitness benefit from this form of selection, since longer tail feathers indicate a genetically stronger individual whose offspring will have enhanced vitality. Males in northern Europe have longer tails than those further south, whereas in Spain the male's tail streamers are only 5% longer than the female's; in Finland, the difference is 20%. In Denmark, the average male tail length increased by 9% between 1984 and 2004, but it is possible that climatic changes may lead in the future to shorter tails if summers become hot and dry. Males with long streamers also have larger white tail spots, and since feather-eating bird lice prefer white feathers, large white tail spots without parasite damage again demonstrate breeding quality; a positive association exists between spot size and the number of offspring produced each season. The breeding season of the barn swallow is variable: in the southern part of the range, it usually runs from February or March to early or mid-September, although some late second and third broods finish in October. In the northern part of the range, it usually starts late May to early June and ends at the same time as the breeding season of the southernmost birds. Both sexes defend the nest, but the male is particularly aggressive and territorial. Once established, pairs stay together to breed for life, but extra-pair copulation is common, making this species genetically polygamous, despite being socially monogamous. Males guard females actively to avoid being cuckolded.
Males may use deceptive alarm calls to disrupt extrapair copulation attempts toward their mates. As its name implies, the barn swallow typically nests inside accessible buildings such as barns and stables, or under bridges and wharves. Before man-made sites became common, it nested on cliff faces or in caves, but this is now rare. The neat cup-shaped nest is placed on a beam or against a suitable vertical projection. It is constructed by both sexes, although more often by the female, with mud pellets collected in their beaks and lined with grasses, feathers, algae or other soft materials. The nest building ability of the male is also sexually selected; females lay more eggs, and at an earlier date, with males that are better at nest construction, and the opposite is true for males that are not. Barn swallows may nest colonially where sufficient high-quality nest sites are available, and within a colony, each pair defends a territory around the nest which, for the European subspecies, is in size. Colony size tends to be larger in North America. In North America at least, barn swallows frequently engage in a mutualist relationship with ospreys. Barn swallows will build their nest below an osprey nest, receiving protection from other birds of prey that are repelled by the exclusively fish-eating ospreys. The ospreys are alerted to the presence of these predators by the alarm calls of the swallows. Barn swallows will normally raise two broods, with the original nest being reused for the second brood and being repaired and reused in subsequent years. The female lays two to seven, but typically four or five, reddish-spotted white eggs. The clutch size is influenced by latitude, with clutch sizes of northern populations being higher on average than southern populations. The eggs are in size, and weigh , of which 5% is shell.
In Europe, the female does almost all the incubation, but in North America the male may incubate up to 25% of the time. The incubation period is normally 14–19 days, with another 18–23 days before the altricial chicks fledge. The fledged young stay with, and are fed by, the parents for about a week after leaving the nest. Occasionally, first-year birds from the first brood will assist in feeding the second brood. Compared to those from early broods, juvenile barn swallows from late broods have been found to migrate at a younger age, fuel less efficiently during migration and have lower return rates the following year. The barn swallow will mob intruders such as cats or accipiters that venture too close to their nest, often flying very close to the threat. Adult barn swallows have few predators, but some are taken by accipiters, falcons, and owls. Brood parasitism by cowbirds in North America or cuckoos in Eurasia is rare. Hatching success is 90% and the fledging survival rate is 70–90%. Average mortality is 70–80% in the first year and 40–70% for the adult. Although the record age is more than 11 years, most survive less than four years. Barn swallow nestlings have prominent red gapes, a feature shown to induce feeding by parent birds. An experiment in manipulating brood size and immune system showed the vividness of the gape was positively correlated with T-cell–mediated immunocompetence, and that larger brood size and injection with an antigen led to a less vivid gape. The barn swallow has been recorded as hybridising with the cliff swallow (Petrochelidon pyrrhonota) and the cave swallow (P. fulva) in North America, and the house martin (Delichon urbicum) in Eurasia, the cross with the latter being one of the most common passerine hybrids. Parasites and predators Barn swallows (and other small passerines) often have characteristic feather holes on their wing and tail feathers. 
These holes were suggested as being caused by avian lice such as Machaerilaemus malleus and Myrsidea rustica, although other studies suggest that they are mainly caused by species of Brueelia. Several other species of lice have been described from barn swallow hosts, including Brueelia domestica and Philopterus microsomaticus. The avian lice prefer to feed on white tail spots, and they are generally found more numerously on short-tailed males, indicating the function of unbroken white tail spots as a measure of quality. In Texas, the swallow bug (Oeciacus vicarius), which is common on species such as the cliff swallow, is also known to infest barn swallows. Predatory bats such as the greater false vampire bat are known to prey on barn swallows. Swallows at their communal roosts attract predators and several falcon species make use of these opportunities. Falcon species confirmed as predators include the peregrine falcon and the African hobby. In Africa, tigerfish Hydrocynus vittatus have been recorded to routinely leap out of the water to capture low-flying swallows. Status The barn swallow has an enormous range, with an estimated global extent of about and a population of 190 million individuals. The species is evaluated as least concern on the 2019 IUCN Red List, and has no special status under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which regulates international trade in specimens of wild animals and plants. This is a species that has greatly benefited historically from forest clearance, which has created the open habitats it prefers, and from human habitation, which has given it an abundance of safe man-made nest sites.
There have been local declines due to the use of DDT in Israel in the 1950s, competition for nest sites with house sparrows in the US in the 19th century, and an ongoing gradual decline in numbers in parts of Europe and Asia due to agricultural intensification, reducing the availability of insect food. However, there has been an increase in the population in North America during the 20th century with the greater availability of nesting sites and subsequent range expansion, including the colonisation of northern Alberta. A specific threat to wintering birds from the European populations is the transformation by the South African government of a light aircraft runway near Durban into an international airport for the 2010 FIFA World Cup. The roughly square Mount Moreland reed bed is a night roost for more than three million barn swallows, which represent 1% of the global population and 8% of the European breeding population. The reed bed lies on the flight path of aircraft using the proposed La Mercy airport, and there were fears that it would be cleared because the birds could threaten aircraft safety. However, following detailed evaluation, advanced radar technology will be installed to enable planes using the airport to be warned of bird movements and, if necessary, take appropriate measures to avoid the flocks. Climate change may affect the barn swallow; drought causes weight loss and slow feather regrowth, and the expansion of the Sahara will make it a more formidable obstacle for migrating European birds. Hot dry summers will reduce the availability of insect food for chicks. Conversely, warmer springs may lengthen the breeding season and result in more chicks, and the opportunity to use nest sites outside buildings in the north of the range might also lead to more offspring. Relationship with humans The barn swallow is an attractive bird that feeds on flying insects and has therefore been tolerated by humans when it shares their buildings for nesting. 
As one of the earlier migrants, this conspicuous species is also seen as an early sign of summer's approach. In the Old World, the barn swallow appears to have used man-made structures and bridges since time immemorial. An early reference is in Virgil's Georgics (29 BC), "Ante garrula quam tignis nidum suspendat hirundo" (Before the twittering swallow hangs its nest from the rafters). Many cattle farmers believed that swallows spread Salmonella infections; however, a study in Sweden showed no evidence of the birds being reservoirs of the bacteria. In literature Many literary references are based on the barn swallow's northward migration as a symbol of spring or summer. The proverb about the necessity for more than one piece of evidence goes back at least to Aristotle's Nicomachean Ethics: "For as one swallow or one day does not make a spring, so one day or a short time does not make a fortunate or happy man." The barn swallow symbolises the coming of spring and thus love in the Pervigilium Veneris, a late Latin poem. In his poem "The Waste Land", T. S. Eliot quoted the line "Quando fiam uti chelidon [ut tacere desinam]?" ("When will I be like the swallow, so that I can stop being silent?") This refers to the myth of Philomela in which she turns into a nightingale, and her sister Procne into a swallow. In culture Gilbert White studied the barn swallow in detail in his pioneering work The Natural History of Selborne, but even this careful observer was uncertain whether it migrated or hibernated in winter. Elsewhere, its long journeys were well observed, and a swallow tattoo is traditional among sailors as a symbol of a safe return; the tradition was that a mariner had a tattoo of this fellow wanderer after sailing . A second swallow would be added after at sea. In the past, the tolerance for this beneficial insectivore was reinforced by superstitions regarding damage to the barn swallow's nest. 
Such an act might lead to cows giving bloody milk, or no milk at all, or to hens ceasing to lay. This may be a factor in the longevity of swallows' nests. Survival, with suitable annual refurbishment, for 10–15 years is regular, and one nest was reported to have been occupied for 48 years. It is depicted as the martlet, merlette or merlot in heraldry, where it represents younger sons who have no lands. It is also represented as lacking feet as this was a common belief at the time. As a result of a campaign by ornithologists, the barn swallow has been the national bird of Estonia since 23 June 1960, and is also the national bird of Austria.
https://en.wikipedia.org/wiki/Chloride
Chloride
The term chloride refers to a compound or molecule that contains either a chlorine anion (), which is a negatively charged chlorine atom, or a non-charged chlorine atom covalently bonded to the rest of the molecule by a single bond (). Many inorganic chlorides are salts. Many organic compounds are chlorides. The pronunciation of the word "chloride" is . Chloride salts such as sodium chloride are often soluble in water. Chloride is an essential electrolyte located in all body fluids, responsible for maintaining acid–base balance, transmitting nerve impulses and regulating fluid flow in and out of cells. Other examples of ionic chlorides are sodium chloride NaCl, calcium chloride and ammonium chloride . The term chloride also refers to a neutral chlorine atom covalently bonded by a single bond to the rest of a molecule. For example, methyl chloride is an organic compound with a covalent C−Cl bond in which the chlorine is not an anion. Other examples of covalent chlorides are carbon tetrachloride , sulfuryl chloride and monochloramine . Electronic properties A chloride ion (diameter 167 pm) is much larger than a chlorine atom (diameter 99 pm). The anion's hold on its valence shell is weaker than the neutral atom's because it carries one more electron, increasing electron–electron repulsion. The ion is colorless and diamagnetic. Most chloride salts are highly soluble in water; however, some, such as silver chloride, lead(II) chloride, and mercury(I) chloride, are only slightly soluble. In aqueous solution, chloride is hydrogen-bonded to the protic end of the water molecules. Reactions of chloride Chloride can be oxidized but not reduced. The first oxidation, as employed in the chlor-alkali process, is conversion to chlorine gas. Chlorine can be further oxidized to other oxides and oxyanions including hypochlorite (ClO−, the active ingredient in chlorine bleach), chlorine dioxide (ClO2), chlorate (), and perchlorate ().
In terms of its acid–base properties, chloride is a very weak base, as indicated by the negative value of the pKa of hydrochloric acid. Chloride can be protonated by strong acids, such as sulfuric acid: NaCl + H2SO4 → NaHSO4 + HCl Ionic chloride salts react with other salts to exchange anions. The presence of halide ions like chloride can be detected using silver nitrate. A solution containing chloride ions will produce a white silver chloride precipitate: Cl− + Ag+ → AgCl The concentration of chloride in an assay can be determined using a chloridometer, which detects silver ions once all chloride in the assay has precipitated via this reaction. Chlorided silver electrodes are commonly used in electrophysiology. Other oxyanions Chlorine can assume oxidation states of −1 (chloride, Cl−), +1 (hypochlorite, ClO−), +3 (chlorite), +5 (chlorate), or +7 (perchlorate). Several neutral chlorine oxides are also known. Occurrence in nature In nature, chloride is found primarily in seawater, which has a chloride ion concentration of 19400 mg/liter. Smaller quantities, though at higher concentrations, occur in certain inland seas and in subterranean brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel. Most chloride salts are soluble in water; thus, chloride-containing minerals are usually only found in abundance in dry climates or deep underground. Some chloride-containing minerals include halite (sodium chloride NaCl), sylvite (potassium chloride KCl), bischofite (MgCl2∙6H2O), carnallite (KCl∙MgCl2∙6H2O), and kainite (KCl∙MgSO4∙3H2O). It is also found in evaporite minerals such as chlorapatite and sodalite. Role in biology Chloride has a major physiological significance, which includes regulation of osmotic pressure, electrolyte balance and acid–base homeostasis.
Chloride is present in all body fluids, and is the most abundant extracellular anion, accounting for around one third of the extracellular fluid's tonicity. Chloride is an essential electrolyte, playing a key role in maintaining cell homeostasis and transmitting action potentials in neurons. It can flow through chloride channels (including the GABAA receptor) and is transported by KCC2 and NKCC2 transporters. Chloride is usually (though not always) at a higher extracellular concentration, causing it to have a negative reversal potential (around −61 mV at 37 °C in a mammalian cell). Characteristic concentrations of chloride in model organisms are 10–200 mM in both E. coli and budding yeast (dependent on medium), 5–100 mM in mammalian cells, and 100 mM in blood plasma. Chloride is also needed for the production of hydrochloric acid in the stomach. The concentration of chloride in the blood is called serum chloride, and this concentration is regulated by the kidneys. A chloride ion is a structural component of some proteins; for example, it is present in the amylase enzyme. For these roles, chloride is one of the essential dietary minerals (listed by its element name chlorine). Serum chloride levels are mainly regulated by the kidneys through a variety of transporters that are present along the nephron. Most of the chloride filtered by the glomerulus is reabsorbed by both the proximal and distal tubules (mainly by the proximal tubule) through both active and passive transport. Corrosion The presence of chlorides, such as in seawater, significantly worsens the conditions for pitting corrosion of most metals (including stainless steels, aluminum and high-alloyed materials). Chloride-induced corrosion of steel in concrete leads to a local breakdown of the protective oxide film in alkaline concrete, so that a subsequent localized corrosion attack takes place.
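The chloride reversal potential quoted earlier (around −61 mV at 37 °C in a mammalian cell) follows from the Nernst equation. A minimal sketch, assuming illustrative concentrations of 110 mM extracellular and 11 mM intracellular chloride (a tenfold gradient; the function name and figures are ours, not from the article):

```python
import math

def nernst_potential_mv(z, conc_out_mm, conc_in_mm, temp_c=37.0):
    """Nernst equilibrium potential in millivolts for an ion of charge z."""
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    T = temp_c + 273.15
    # E = (RT / zF) * ln([out]/[in]), converted from volts to millivolts
    return (R * T) / (z * F) * math.log(conc_out_mm / conc_in_mm) * 1000.0

# Chloride carries charge z = -1; a tenfold outward gradient at 37 degrees C
e_cl = nernst_potential_mv(z=-1, conc_out_mm=110.0, conc_in_mm=11.0)
print(f"{e_cl:.1f} mV")  # prints "-61.5 mV"
```

Because chloride's charge is negative, a higher extracellular concentration yields a negative reversal potential, matching the figure in the text.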
Environmental threats Increased concentrations of chloride can cause a number of ecological effects in both aquatic and terrestrial environments. It may contribute to the acidification of streams, mobilize radioactive soil metals by ion exchange, affect the mortality and reproduction of aquatic plants and animals, promote the invasion of saltwater organisms into previously freshwater environments, and interfere with the natural mixing of lakes. Sodium chloride has also been shown to change the composition of microbial species at relatively low concentrations. It can also hinder the denitrification process, a microbial process essential to nitrate removal and the conservation of water quality, and inhibit the nitrification and respiration of organic matter. Production The chlor-alkali industry is a major consumer of the world's energy budget. This process converts concentrated sodium chloride solutions into chlorine and sodium hydroxide, which are used to make many other materials and chemicals. The process involves two parallel reactions: 2 Cl− → + 2 e− 2  + 2 e− → H2 + 2 OH− Examples and uses An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissociates into Na+ and Cl− ions. Salts such as calcium chloride, magnesium chloride, and potassium chloride have varied uses ranging from medical treatments to cement formation. Calcium chloride (CaCl2) is a salt that is marketed in pellet form for removing dampness from rooms. Calcium chloride is also used for maintaining unpaved roads and for fortifying roadbases for new construction. In addition, calcium chloride is widely used as a de-icer, since it is effective in lowering the melting point when applied to ice. Examples of covalently-bonded chlorides are phosphorus trichloride, phosphorus pentachloride, and thionyl chloride, all three of which are reactive chlorinating reagents that have been used in the laboratory.
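The anode half-reaction above consumes two electrons per molecule of chlorine, so Faraday's law fixes how much chlorine a given electrolysis current produces. A rough sketch of that arithmetic (the function name, current, duration, and efficiency figure are illustrative assumptions, not from the article):

```python
def chlorine_mass_g(current_a, hours, efficiency=1.0):
    """Mass of Cl2 (grams) produced by electrolysis, via Faraday's law."""
    F = 96485.0            # Faraday constant, C/mol
    M_CL2 = 70.90          # molar mass of Cl2, g/mol
    ELECTRONS_PER_CL2 = 2  # 2 Cl- -> Cl2 + 2 e-
    charge_c = current_a * hours * 3600.0  # total charge in coulombs
    moles = efficiency * charge_c / (ELECTRONS_PER_CL2 * F)
    return moles * M_CL2

# e.g. a cell running at 1000 A for one hour at 95% current efficiency
print(f"{chlorine_mass_g(1000, 1, 0.95):.0f} g")
```

At 100% efficiency, 1000 A for one hour yields about 1.3 kg of chlorine, which is why the chlor-alkali process is such a heavy consumer of electrical energy at industrial scale.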
Water quality and processing A major application involving chloride is desalination, which involves the energy-intensive removal of chloride salts to give potable water. In the petroleum industry, the chlorides are a closely monitored constituent of the mud system. An increase of chlorides in the mud system may be an indication of drilling into a high-pressure saltwater formation. Its increase can also indicate the poor quality of a target sand. Chloride is also a useful and reliable chemical indicator of river and groundwater fecal contamination, as chloride is a non-reactive solute and ubiquitous to sewage and potable water. Many water regulating companies around the world utilize chloride to check the contamination levels of rivers and potable water sources. Food Chloride salts such as sodium chloride are used to preserve food and as nutrients or condiments.
Physical sciences
Salts
null
66275
https://en.wikipedia.org/wiki/Coati
Coati
Coatis (from Tupí), also known as coatimundis, are members of the family Procyonidae in the genera Nasua and Nasuella (comprising the subtribe Nasuina). They are diurnal mammals native to South America, Central America, Mexico, and the Southwestern United States. The name "coatimundi" comes from the Tupian languages of Brazil, where it means "lone coati". Locally in Belize, the coati is known as "quash". Physical characteristics Adult coatis measure from head to the base of the tail, which can be as long as their bodies. Coatis are about tall at the shoulder and weigh between , about the size of a large house cat. Males can become almost twice as large as females and have large, sharp canine teeth. The measurements above relate to the white-nosed and South American coatis. The two species of mountain coati are smaller. All coatis share a slender head with an elongated, flexible, slightly upturned nose, small ears, dark feet, and a long non-prehensile tail used for balance and signaling. Ring-tailed coatis have either a light brown or black coat, with a lighter underpart and a white-ringed tail in most cases. Coatis have a long brown tail with rings which are anywhere from starkly defined like a raccoon's to very faint. As in raccoons but not ring-tailed cats and cacomistles, the rings go completely around the tail. Coatis often hold the tail erect; it is used to keep troops of coatis together in tall vegetation. The tip of the tail can be moved slightly on its own, as is the case with cats, but it is not prehensile as is that of the kinkajou, another procyonid. Coatis have bear- and raccoon-like paws and walk plantigrade like raccoons and bears (on the soles of the feet, as do humans). Coatis have nonretractable claws. Coatis also are able to rotate their ankles beyond 180°, in common with raccoons and other procyonids (and others in the order Carnivora and rare cases among other mammals); they are therefore able to descend trees head first.
(Other animals living in forests have acquired some or all of these properties through convergent evolution, including members of the mongoose, civet, weasel, cat, and bear families.) The coati snout is long and somewhat pig-like – part of the reason for its nickname, the "hog-nosed raccoon". It is also extremely flexible and can rotate up to 60° in any direction. They use their noses to push objects and rub parts of their body. The facial markings include white markings around the eyes and on the ears and snout. Coatis have strong limbs to climb and dig and have a reputation for intelligence, like their fellow procyonid, the raccoon. Unlike the nocturnal raccoons, however, most coatis are diurnal, although some may exhibit cathemeral behavior. They prefer to sleep or rest in elevated places and niches, like the rainforest canopy, in crudely built sleeping nests. Habitat and range Overall, coatis are widespread, occupying habitats ranging from hot and arid areas to humid Amazonian rainforests or even cold Andean mountain slopes, including grasslands and bushy areas. Their geographical range extends from the southwestern U.S. (southern Arizona, New Mexico, and Texas) through northern Uruguay. Around 10 coatis are thought to have formed a breeding population in Cumbria, UK. Taxonomy The following species are recognised: Genus Nasua Nasua narica (Linnaeus, 1766) – white-nosed coati (Southwestern United States, Mexico, Central America, and Colombia) Nasua nasua (Linnaeus, 1766) – South American coati (South America) Genus Nasuella Nasuella meridensis (Thomas, 1901) – eastern mountain coati (Venezuela) Nasuella olivacea (Gray, 1865) – western mountain coati (Colombia and Ecuador) The Cozumel Island coati was formerly recognised as a species, but the vast majority of recent authorities treat it as a subspecies, N. narica nelsoni, of the white-nosed coati. 
Genetic evidence (cytochrome b sequences) has suggested that the genus Nasuella should be merged into Nasua, as the latter is otherwise paraphyletic. Other genetic studies have shown that the closest relatives of the coatis are the olingos (genus Bassaricyon); the two lineages are thought to have diverged about 10.2 million years ago. Lifespan Coatis can live up to seven years in the wild. In captivity, their average lifespan is about 14 years, and some coatis can live into their late teens. Feeding habits Coatis are omnivores; their diet consists mainly of ground litter, invertebrates such as tarantulas, and fruit (Alves-Costa et al., 2004, 2007; Hirsch 2007). They also eat small vertebrate prey, such as lizards, rodents, small birds, birds' eggs, and crocodile eggs. The snout, with an acute sense of smell, assists the paws in a hog-like manner to unearth invertebrates. Behaviour Little is known about the behaviour of the mountain coatis, and the following is almost entirely about the coatis of the genus Nasua. Unlike most members of the raccoon family (Procyonidae), coatis are primarily diurnal. Nasua coati females and young males up to two years of age are gregarious and travel through their territories in noisy, loosely organised bands made up of four to 25 individuals, foraging with their offspring on the ground or in the forest canopy. Males over two years become solitary due to behavioural disposition and collective aggression from the females and will join the female groups only during the breeding season. When provoked, or for defence, coatis can be fierce fighters; their strong jaws, sharp canine teeth, and fast scratching paws, along with a tough hide sturdily attached to the underlying muscles, make it very difficult for potential predators (e.g., dogs or jaguars) to seize the smaller mammal. Coatis communicate their intentions or moods with chirping, snorting, or grunting sounds.
Different chirping sounds are used to express joy during social grooming, appeasement after fights, or to convey irritation or anger. Snorting while digging, along with an erect tail, signals territorial or food claims during foraging. Coatis additionally use special postures or moves to convey simple messages; for example, hiding the nose between the front paws is a sign of submission, while lowering the head, baring teeth, and jumping at an enemy signal an aggressive disposition. Individuals recognise other coatis by their looks, voices, and smells; an individual's smell is intensified by special musk glands on their necks and bellies. Coatis from Panama are known to rub their own fur and that of other troop members with resin from Trattinnickia aspera (Burseraceae) trees, but its purpose is unclear. Some proposed possibilities are that it serves as an insect repellent, a fungicide, or a form of scent-marking. Coatis rub preputial gland secretions on objects in their home ranges, but do not have anal glands. Reproduction Coati breeding season mainly corresponds with the start of the rainy season to coincide with maximum availability of food, especially fruits: between January and March in some areas, and between October and February in others. Female and young coatis commonly live in bands of 5 to 40 and travel together. The males are solitary and join the bands only during the short mating season. For this period, an adult male is accepted into the band of females and juveniles near the beginning of the breeding season, leading to a polygynous mating system. The pregnant females separate from the group, build a nest on a tree or in a rocky niche and, after a gestation period of about 11 weeks, give birth to litters of three to seven kits. About six weeks after birth, the females and their young will rejoin the band. Females become sexually mature at two years of age, while males will acquire sexual maturity at three years of age.
Natural predators Coati predators include jaguarundis, anacondas, pumas, maned wolves, boa constrictors, foxes, dogs, tayras, ocelots, and jaguars. Large raptors, such as ornate hawk-eagles, black-and-chestnut eagles, and harpy eagles, also are known to hunt them. White-headed capuchin monkeys hunt their pups. Status In Central and South America, coatis are threatened by environmental destruction and unregulated hunting. A lack of scientifically sound population studies could be leading to an underestimation of the coati population and other ecological problems affecting the species. In captivity Coatis are one of five groups of procyonids commonly kept as pets in various parts of North, Central and South America, the others being the raccoons (common and crab-eating), the kinkajou, the ring-tailed cat and cacomistle. However, while both the white-nosed and South American coatis are common in captivity, mountain coatis are extremely rare in captivity. Coatis are small creatures that can be wild, somewhat difficult to control or train in some cases, and generally behave in a manner radically different from that of a pet dog. Optimally, they should have a spacious outdoor enclosure and a coati-proofed room in the house and/or other climate-controlled place, as well. They can be given the run of the house but need careful watching, more careful in some cases than others. It is possible to litter or toilet train coatis; if one cannot be trained as such, it is still possible to lessen problems in that they tend to designate a latrine area, which can have a litter pan placed in it as is done with many ferrets, pet skunks, rabbits, and rodents. Coatis generally need both dog and cat vaccines for distemper and many other diseases and an inactivated rabies vaccine. They can be spayed or neutered for the same reason as cats and dogs and other pets.
Biology and health sciences
Procyonidae
Animals
66284
https://en.wikipedia.org/wiki/Formic%20acid
Formic acid
Formic acid, systematically named methanoic acid, is the simplest carboxylic acid and has the chemical formula HCOOH. It is an important intermediate in chemical synthesis and occurs naturally, most notably in some ants. Esters, salts and the anion derived from formic acid are called formates. Industrially, formic acid is produced from methanol. Natural occurrence Formic acid is found naturally in insects, weeds, fruits and vegetables, and forest emissions. It appears in most ants and in stingless bees of the genus Oxytrigona. Wood ants from the genus Formica can spray formic acid on their prey or to defend the nest. The puss moth caterpillar (Cerura vinula) will spray it as well when threatened by predators. It is also found in the trichomes of stinging nettle (Urtica dioica). Apart from that, this acid is incorporated in many fruits such as pineapple (0.21 mg per 100 g), apple (2 mg per 100 g) and kiwi (1 mg per 100 g), as well as in many vegetables, namely onion (45 mg per 100 g), eggplant (1.34 mg per 100 g) and, in extremely low concentrations, cucumber (0.11 mg per 100 g). Formic acid is a naturally occurring component of the atmosphere primarily due to forest emissions. History As early as the 15th century, some alchemists and naturalists were aware that ant hills give off an acidic vapor. The first person to describe the isolation of this substance (by the distillation of large numbers of ants) was the English naturalist John Ray, in 1671. Ants secrete the formic acid for attack and defense purposes. Formic acid was first synthesized from hydrocyanic acid by the French chemist Joseph Gay-Lussac. In 1855, another French chemist, Marcellin Berthelot, developed a synthesis from carbon monoxide similar to the process used today. Formic acid was long considered a chemical compound of only minor interest in the chemical industry. In the late 1960s, significant quantities became available as a byproduct of acetic acid production.
It now finds increasing use as a preservative and antibacterial in livestock feed. Properties Formic acid is a colorless liquid having a pungent, penetrating odor at room temperature, comparable to the related acetic acid. Formic acid is about ten times stronger than acetic acid. It is miscible with water and most polar organic solvents, and is somewhat soluble in hydrocarbons. In hydrocarbons and in the vapor phase, it consists of hydrogen-bonded dimers rather than individual molecules. Owing to its tendency to hydrogen-bond, gaseous formic acid does not obey the ideal gas law. Solid formic acid, which can exist in either of two polymorphs, consists of an effectively endless network of hydrogen-bonded formic acid molecules. Formic acid forms a high-boiling azeotrope with water (107.3 °C; 77.5% formic acid). Liquid formic acid tends to supercool. Chemical reactions Decomposition Formic acid readily decomposes by dehydration in the presence of concentrated sulfuric acid to form carbon monoxide and water: HCO2H → H2O + CO Treatment of formic acid with sulfuric acid is a convenient laboratory source of CO. In the presence of platinum, it decomposes with a release of hydrogen and carbon dioxide. HCO2H → H2 + CO2 Soluble ruthenium catalysts are also effective for producing carbon monoxide-free hydrogen. Reactant Formic acid shares most of the chemical properties of other carboxylic acids. Because of its high acidity, solutions in alcohols form esters spontaneously; in Fischer esterifications of formic acid, it self-catalyzes the reaction and no additional acid catalyst is needed. Formic acid shares some of the reducing properties of aldehydes, reducing solutions of metal oxides to their respective metal. Formic acid is a source for a formyl group for example in the formylation of N-methylaniline to N-methylformanilide in toluene. 
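The claim that formic acid is about ten times stronger than acetic acid can be checked from their acid dissociation constants; a small sketch, using literature pKa values (assumed here: about 3.75 for formic acid and 4.76 for acetic acid):

```python
# Compare acid strengths via Ka = 10**(-pKa).
# The pKa values below are literature figures assumed for illustration.
PKA_FORMIC = 3.75
PKA_ACETIC = 4.76

ka_formic = 10 ** -PKA_FORMIC
ka_acetic = 10 ** -PKA_ACETIC

# Ratio of dissociation constants: ~10, i.e. formic acid is ~10x stronger.
ratio = ka_formic / ka_acetic
print(f"Ka(formic)/Ka(acetic) = {ratio:.1f}")
```

A difference of about one pKa unit corresponds to a tenfold difference in Ka, which is exactly the "ten times stronger" statement in the text.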
In synthetic organic chemistry, formic acid is often used as a source of hydride ion, as in the Eschweiler–Clarke reaction: It is used as a source of hydrogen in transfer hydrogenation, as in the Leuckart reaction to make amines, and (in aqueous solution or in its azeotrope with triethylamine) for hydrogenation of ketones. Addition to alkenes Formic acid is unique among the carboxylic acids in its ability to participate in addition reactions with alkenes. Formic acid and alkenes readily react to form formate esters. In the presence of certain acids, including sulfuric and hydrofluoric acids, however, a variant of the Koch reaction occurs instead, and formic acid adds to the alkene to produce a larger carboxylic acid. Formic acid anhydride An unstable formic anhydride, H(C=O)−O−(C=O)H, can be obtained by dehydration of formic acid with N,N′-dicyclohexylcarbodiimide in ether at low temperature. Production In 2009, the worldwide capacity for producing formic acid was per year, roughly equally divided between Europe (, mainly in Germany) and Asia (, mainly in China) while production was below per year in all other continents. It is commercially available in solutions of various concentrations between 85 and 99 w/w %. The largest producers are BASF, Eastman Chemical Company, LC Industrial, and Feicheng Acid Chemicals, with the largest production facilities in Ludwigshafen ( per year, BASF, Germany), Oulu (, Eastman, Finland), Nakhon Pathom (n/a, LC Industrial), and Feicheng (, Feicheng, China). 2010 prices ranged from around €650/tonne (equivalent to around $800/tonne) in Western Europe to $1250/tonne in the United States. From methyl formate and formamide When methanol and carbon monoxide are combined in the presence of a strong base, the result is methyl formate, according to the chemical equation: CH3OH + CO → HCO2CH3 In industry, this reaction is performed in the liquid phase at elevated pressure. Typical reaction conditions are 80 °C and 40 atm.
The most widely used base is sodium methoxide. Hydrolysis of the methyl formate produces formic acid: HCO2CH3 + H2O → HCOOH + CH3OH Efficient hydrolysis of methyl formate requires a large excess of water. Some routes proceed indirectly by first treating the methyl formate with ammonia to give formamide, which is then hydrolyzed with sulfuric acid: HCO2CH3 + NH3 → HC(O)NH2 + CH3OH 2 HC(O)NH2 + 2H2O + H2SO4 → 2HCO2H + (NH4)2SO4 A disadvantage of this approach is the need to dispose of the ammonium sulfate byproduct. This problem has led some manufacturers to develop energy-efficient methods of separating formic acid from the excess water used in direct hydrolysis. In one of these processes, used by BASF, the formic acid is removed from the water by liquid-liquid extraction with an organic base. Niche and obsolete chemical routes By-product of acetic acid production A significant amount of formic acid is produced as a byproduct in the manufacture of other chemicals. At one time, acetic acid was produced on a large scale by oxidation of alkanes, by a process that cogenerates significant formic acid. This oxidative route to acetic acid has declined in importance so that the aforementioned dedicated routes to formic acid have become more important. Hydrogenation of carbon dioxide The catalytic hydrogenation of CO2 to formic acid has long been studied. This reaction can be conducted homogeneously. Oxidation of biomass Formic acid can also be obtained by aqueous catalytic partial oxidation of wet biomass by the OxFA process. A Keggin-type polyoxometalate (H5PV2Mo10O40) is used as the homogeneous catalyst to convert sugars, wood, waste paper, or cyanobacteria to formic acid and CO2 as the sole byproduct. Yields of up to 53% formic acid can be achieved. Laboratory methods In the laboratory, formic acid can be obtained by heating oxalic acid in glycerol followed by steam distillation. 
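Because the hydrolysis step of the methyl formate route returns the methanol consumed in the carbonylation step, the two equations above can be summed; the net industrial transformation is effectively a formal hydration of carbon monoxide:

```latex
\begin{align*}
\mathrm{CH_3OH} + \mathrm{CO} &\rightarrow \mathrm{HCO_2CH_3} \\
\mathrm{HCO_2CH_3} + \mathrm{H_2O} &\rightarrow \mathrm{HCOOH} + \mathrm{CH_3OH} \\[2pt]
\text{net:}\quad \mathrm{CO} + \mathrm{H_2O} &\rightarrow \mathrm{HCOOH}
\end{align*}
```

In this accounting methanol acts only as a recycled carrier, which is why the route is economical despite its two steps.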
Glycerol acts as a catalyst, as the reaction proceeds through a glyceryl oxalate intermediate. If the reaction mixture is heated to higher temperatures, allyl alcohol results. The net reaction is thus: C2O4H2 → HCO2H + CO2 Another illustrative method involves the reaction between lead formate and hydrogen sulfide, driven by the formation of lead sulfide. Pb(HCOO)2 + H2S → 2 HCOOH + PbS Electrochemical production Formate is formed by the electrochemical reduction of CO2 (in the form of bicarbonate) at a lead cathode at pH 8.6: HCO3− + H2O + 2 e− → HCO2− + 2 OH− or CO2 + H2O + 2 e− → HCO2− + OH− If the feed is CO2 and oxygen is evolved at the anode, the total reaction is: CO2 + H2O → HCO2H + 1/2 O2 Biosynthesis Formic acid is named after ants, which have high concentrations of the compound in their venom, derived from serine through a 5,10-methenyltetrahydrofolate intermediate. The conjugate base of formic acid, formate, also occurs widely in nature. An assay for formic acid in body fluids, designed for determination of formate after methanol poisoning, is based on the reaction of formate with bacterial formate dehydrogenase. Uses Agriculture A major use of formic acid is as a preservative and antibacterial agent in livestock feed. It arrests certain decay processes and causes the feed to retain its nutritive value longer. In Europe, it is applied on silage, including fresh hay, to promote the fermentation of lactic acid and to suppress the formation of butyric acid; it also allows fermentation to occur quickly, and at a lower temperature, reducing the loss of nutritional value. It is widely used to preserve winter feed for cattle, and is sometimes added to poultry feed to kill E. coli bacteria. Use as a preservative for silage and other animal feed constituted 30% of the global consumption in 2009. Beekeepers use formic acid as a miticide against the tracheal mite (Acarapis woodi) and the Varroa destructor and Varroa jacobsoni mites.
Energy Formic acid can be used directly in formic acid fuel cells or indirectly in hydrogen fuel cells. Electrolytic conversion of electrical energy to chemical fuel has been proposed as a large-scale source of formate by various groups. The formate could be used as feed to modified E. coli bacteria for producing biomass. Natural methylotroph microbes can feed on formic acid or formate. Formic acid has been considered as a means of hydrogen storage. The co-product of this decomposition, carbon dioxide, can be rehydrogenated back to formic acid in a second step. Formic acid contains 53 g/L hydrogen at room temperature and atmospheric pressure, which is three and a half times as much as compressed hydrogen gas can attain at 350 bar pressure (14.7 g/L). Pure formic acid is a liquid with a flash point of 69 °C, much higher than that of gasoline (−40 °C) or ethanol (13 °C). It is possible to use formic acid as an intermediary to produce isobutanol from CO2 using microbes. Soldering Formic acid has a potential application in soldering. Due to its capacity to reduce oxide layers, formic acid gas can be blasted at an oxide surface to increase solder wettability. Chromatography Formic acid is used as a volatile pH modifier in HPLC and capillary electrophoresis. Formic acid is often used as a component of the mobile phase in reversed-phase high-performance liquid chromatography (RP-HPLC) analysis and separation techniques for the separation of hydrophobic macromolecules, such as peptides, proteins and more complex structures including intact viruses. Especially when paired with mass spectrometry detection, formic acid offers several advantages over the more traditionally used phosphoric acid. Other uses Formic acid is also significantly used in the production of leather, including tanning (23% of the global consumption in 2009), and in dyeing and finishing textiles (9% of the global consumption in 2009) because of its acidic nature.
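The hydrogen-storage comparison above is simple arithmetic on the two volumetric densities quoted in the text; a quick check:

```python
# Volumetric hydrogen density figures quoted in the text.
H2_IN_FORMIC_ACID = 53.0     # g of H2 per litre of formic acid (ambient conditions)
H2_COMPRESSED_350BAR = 14.7  # g of H2 per litre of compressed gas at 350 bar

# 53 / 14.7 comes out to about 3.6, i.e. "three and a half times".
factor = H2_IN_FORMIC_ACID / H2_COMPRESSED_350BAR
print(f"Formic acid stores {factor:.1f}x more hydrogen per litre")
```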
Use as a coagulant in the production of rubber consumed 6% of the global production in 2009. Formic acid is also used in place of mineral acids for various cleaning products, such as limescale remover and toilet bowl cleaner. Some formate esters are artificial flavorings and perfumes. Formic acid application has been reported to be an effective treatment for warts. Safety Formic acid has low toxicity (hence its use as a food additive), with an LD50 of 1.8 g/kg (tested orally on mice). The concentrated acid is corrosive to the skin. Formic acid is readily metabolized and eliminated by the body. Nonetheless, it has specific toxic effects; the formic acid and formaldehyde produced as metabolites of methanol are responsible for the optic nerve damage, causing blindness, seen in methanol poisoning. Some chronic effects of formic acid exposure have been documented. Some experiments on bacterial species have demonstrated it to be a mutagen. Chronic exposure in humans may cause kidney damage. Another possible effect of chronic exposure is development of a skin allergy that manifests upon re-exposure to the chemical. Concentrated formic acid slowly decomposes to carbon monoxide and water, leading to pressure buildup in the containing vessel. For this reason, 98% formic acid is shipped in plastic bottles with self-venting caps. The hazards of solutions of formic acid depend on the concentration. The following table lists the Globally Harmonized System of Classification and Labelling of Chemicals for formic acid solutions: Formic acid in 85% concentration is flammable, and diluted formic acid is on the U.S. Food and Drug Administration list of food additives. The principal danger from formic acid is from skin or eye contact with the concentrated liquid or vapors. The U.S. OSHA Permissible Exposure Level (PEL) of formic acid vapor in the work environment is 5 parts per million (ppm) of air.
Physical sciences
Specific acids
Chemistry
66286
https://en.wikipedia.org/wiki/Organic%20acid
Organic acid
An organic acid is an organic compound with acidic properties. The most common organic acids are the carboxylic acids, whose acidity is associated with their carboxyl group –COOH. Sulfonic acids, containing the group –SO2OH, are relatively stronger acids. Alcohols, with –OH, can act as acids but they are usually very weak. The relative stability of the conjugate base of the acid determines its acidity. Other groups can also confer acidity, usually weakly: the thiol group –SH, the enol group, and the phenol group. In biological systems, organic compounds containing these groups are generally referred to as organic acids. A few common examples include: lactic acid, acetic acid, formic acid, citric acid, oxalic acid, uric acid, malic acid, tartaric acid, butyric acid, and folic acid. Characteristics In general, organic acids are weak acids and do not dissociate completely in water, whereas the strong mineral acids do. Lower molecular mass organic acids such as formic and lactic acids are miscible in water, but higher molecular mass organic acids, such as benzoic acid, are insoluble in molecular (neutral) form. On the other hand, most organic acids are very soluble in organic solvents. p-Toluenesulfonic acid is a comparatively strong acid that is often used in organic chemistry because it is able to dissolve in the organic reaction solvent. Exceptions to these solubility characteristics exist in the presence of other substituents that affect the polarity of the compound. Applications Simple organic acids like formic or acetic acids are used for oil and gas well stimulation treatments. These organic acids are much less reactive with metals than are strong mineral acids like hydrochloric acid (HCl) or mixtures of HCl and hydrofluoric acid (HF). For this reason, organic acids are used at high temperatures or when long contact times between acid and pipe are needed. The conjugate bases of organic acids such as citrate and lactate are often used in biologically compatible buffer solutions.
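How far a weak organic acid dissociates at a given pH follows from the Henderson–Hasselbalch relation; a minimal sketch (the pKa of 4.76 for acetic acid is a literature figure assumed for illustration):

```python
def fraction_dissociated(pka: float, ph: float) -> float:
    """Fraction of a monoprotic weak acid present as its conjugate base
    at a given pH, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Acetic acid (pKa ~4.76): exactly half-dissociated when pH equals pKa,
# and almost fully dissociated at physiological pH 7.4.
print(round(fraction_dissociated(4.76, 4.76), 2))  # 0.5
print(round(fraction_dissociated(4.76, 7.4), 3))
```

The same relation underlies the food-preservation behaviour discussed below: at low pH most of the acid is in its undissociated, membrane-permeable form.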
Citric and oxalic acids are used for rust removal. As acids, they can dissolve the iron oxides, but without damaging the base metal as do stronger mineral acids. In the dissociated form, they may be able to chelate the metal ions, helping to speed removal. Biological systems create many more complex organic acids such as L-lactic, citric, and D-glucuronic acids that contain hydroxyl or carboxyl groups. Human blood and urine contain these plus organic acid degradation products of amino acids, neurotransmitters, and intestinal bacterial action on food components. Examples of these categories are alpha-ketoisocaproic, vanilmandelic, and D-lactic acids, derived from catabolism of L-leucine and epinephrine (adrenaline) by human tissues and catabolism of dietary carbohydrate by intestinal bacteria, respectively. Organic acids (C1–C7) are widely distributed in nature as normal constituents of plants or animal tissues. They are also formed through microbial fermentation of carbohydrates mainly in the large intestine. They are sometimes found in their sodium, potassium, or calcium salts, or even stronger double salts. In food Organic acids are used in food preservation because of their effects on bacteria. The key basic principle on the mode of action of organic acids on bacteria is that non-dissociated (non-ionized) organic acids can penetrate the bacteria cell wall and disrupt the normal physiology of certain types of bacteria that we call pH-sensitive, meaning that they cannot tolerate a wide internal and external pH gradient. Among those bacteria are Escherichia coli, Salmonella spp., C. perfringens, Listeria monocytogenes, and Campylobacter species. Upon passive diffusion of organic acids into the bacteria, where the pH is near or above neutrality, the acids will dissociate and lower the internal pH of the bacteria, leading to situations that will impair or stop the growth of the bacteria.
On the other hand, the anionic part of the organic acids, which cannot escape the bacteria in its dissociated form, will accumulate within the bacteria and disrupt several metabolic functions, leading to an increase in osmotic pressure that is incompatible with the survival of the bacteria. It has been well demonstrated that the state of the organic acids (undissociated or dissociated) is important to their capacity to inhibit the growth of bacteria, the undissociated acids being the more effective form. Lactic acid and its salts sodium lactate and potassium lactate are widely used as antimicrobials in food products, in particular in meat and poultry products such as ham and sausages. In nutrition and animal feeds Organic acids have been used successfully in pig production for more than 25 years. Although less research has been done in poultry, organic acids have also been found to be effective in poultry production. Organic acids added to feeds should be protected to avoid their dissociation in the crop and in the intestine (high pH segments) so that they reach far into the gastrointestinal tract, where the bulk of the bacteria population is located. From the use of organic acids in poultry and pigs, one can expect an improvement in performance similar to or better than that of antibiotic growth promoters, without the public health concern, as well as a preventive effect on intestinal problems like necrotic enteritis in chickens and Escherichia coli infection in young pigs. One can also expect a reduction of the carrier state for Salmonella species and Campylobacter species. Ongoing research In addition to the end uses previously seen, organic acids have been tested for the following applications: Barbero-López and colleagues at the University of Eastern Finland tested the potential use of three organic acids, acetic, formic and propionic acids, in wood preservation.
They showed a high antifungal potential against the decaying fungi tested (the brown rotting fungi Coniophora puteana, Rhodonia placenta and Gloeophyllum trabeum, and the white rotting fungus Trametes versicolor) in Petri dishes. However, when they treated wood with organic acids, the acids leached out from the wood and did not prevent degradation. Additionally, the organic acids' acidity may have caused chemical degradation of the wood. In a more recent study, the ecotoxicity of several natural wood preservatives was compared, and the results indicated a very low toxicity of propionic acid.
Physical sciences
Specific acids
Chemistry
66306
https://en.wikipedia.org/wiki/Osmium%20tetroxide
Osmium tetroxide
Osmium tetroxide (also osmium(VIII) oxide) is the chemical compound with the formula OsO4. The compound is noteworthy for its many uses, despite its toxicity and the rarity of osmium. It also has a number of unusual properties, one being that the solid is volatile. The compound is colourless, but most samples appear yellow. This is most likely due to the presence of the impurity OsO2, which is yellow-brown in colour. In biology, its property of binding to lipids has made it a widely-used stain in electron microscopy. Physical properties Osmium(VIII) oxide forms monoclinic crystals. It has a characteristic acrid chlorine-like odor. The element name osmium is derived from osme, Greek for odor. OsO4 is volatile: it sublimes at room temperature. It is soluble in a wide range of organic solvents. It is moderately soluble in water, with which it reacts reversibly to form osmic acid (see below). Pure osmium(VIII) oxide is probably colourless; it has been suggested that its yellow hue is attributable to osmium dioxide (OsO2) impurities. The osmium tetroxide molecule is tetrahedral and therefore nonpolar. This nonpolarity helps OsO4 penetrate charged cell membranes. Structure and electron configuration The osmium of OsO4 has an oxidation number of VIII; however, the metal does not possess a corresponding 8+ charge as the bonding in the compound is largely covalent in character (the ionization energy required to produce a formal 8+ charge also far exceeds the energies available in normal chemical reactions). The osmium atom exhibits double bonds to the four oxide ligands, resulting in a 16-electron complex. OsO4 is isoelectronic with the permanganate and chromate ions. Synthesis OsO4 is formed slowly when osmium powder reacts with O2 at ambient temperature. Reaction of the bulk solid requires heating to 400 °C: Os + 2 O2 → OsO4 Reactions Oxidation of alkenes Alkenes add to OsO4 to give diolate species that hydrolyze to cis-diols.
The net process is called dihydroxylation. This proceeds via a [3 + 2] cycloaddition reaction between the OsO4 and alkene to form an intermediate osmate ester that rapidly hydrolyses to yield the vicinal diol. As the oxygen atoms are added in a concerted step, the resulting stereochemistry is cis. OsO4 is expensive and highly toxic, making it an unappealing reagent to use in stoichiometric amounts. However, its reactions are made catalytic by adding reoxidants to reoxidise the Os(VI) by-product back to Os(VIII). Typical reagents include H2O2 (Milas hydroxylation), N-methylmorpholine N-oxide (Upjohn dihydroxylation) and K3Fe(CN)6/water. These reoxidants do not react with the alkenes on their own. Other osmium compounds can be used as catalysts, including osmate(VI) salts ([OsO2(OH)4]2−) and osmium trichloride hydrate (OsCl3·xH2O). These species oxidise to osmium(VIII) in the presence of such oxidants. Lewis bases such as tertiary amines and pyridines increase the rate of dihydroxylation. This "ligand-acceleration" arises via the formation of the adduct OsO4L, which adds more rapidly to the alkene. If the amine is chiral, then the dihydroxylation can proceed with enantioselectivity (see Sharpless asymmetric dihydroxylation). OsO4 does not react with most carbohydrates. The process can be extended to give two aldehydes in the Lemieux–Johnson oxidation, which uses periodate to achieve diol cleavage and to regenerate the catalytic loading of OsO4. This process is equivalent to that of ozonolysis. Coordination chemistry OsO4 is a Lewis acid and a mild oxidant. It reacts with alkaline aqueous solution to give the perosmate anion. This species is easily reduced to the osmate anion. When the Lewis base is an amine, adducts are also formed. Thus OsO4 can be stored in the form of osmeth, in which OsO4 is complexed with hexamine. Osmeth can be dissolved in tetrahydrofuran (THF) and diluted in an aqueous buffer solution to make a dilute (0.25%) working solution of OsO4.
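With N-methylmorpholine N-oxide (NMO) as the stoichiometric reoxidant in the catalytic Upjohn variant described above, the overall stoichiometry can be sketched as follows (NMM = N-methylmorpholine, the reduced co-product):

```latex
\mathrm{R_2C{=}CR_2} \;+\; \mathrm{NMO} \;+\; \mathrm{H_2O}
\;\xrightarrow{\text{cat. } \mathrm{OsO_4}}\;
\mathrm{R_2C(OH){-}C(OH)R_2} \;+\; \mathrm{NMM}
```

Only catalytic OsO4 is consumed per turnover: the amine oxide re-oxidises the Os(VI) ester hydrolysis product back to Os(VIII), so the expensive, toxic osmium reagent is needed in small amounts.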
With tert-BuNH2, the imido derivative is produced: OsO4 + Me3CNH2 → OsO3(NCMe3) + H2O Similarly, with NH3 one obtains the nitrido complex: OsO4 + NH3 + KOH → K[Os(N)O3] + 2 H2O The [Os(N)O3]− anion is isoelectronic and isostructural with OsO4. OsO4 is very soluble in tert-butyl alcohol. In solution, it is readily reduced by hydrogen to osmium metal. The suspended osmium metal can be used to catalytically hydrogenate a wide variety of organic chemicals containing double or triple bonds. OsO4 + 4 H2 → Os + 4 H2O OsO4 undergoes "reductive carbonylation" with carbon monoxide in methanol at 400 K and 200 bar to produce the triangular cluster Os3(CO)12: 3 OsO4 + 24 CO → Os3(CO)12 + 12 CO2 Oxofluorides Osmium forms several oxofluorides, all of which are very sensitive to moisture. Purple cis-OsO2F4 forms at 77 K in an anhydrous HF solution: OsO4 + 2 KrF2 → cis-OsO2F4 + 2 Kr + O2 OsO4 also reacts with F2 to form yellow OsO3F2: 2 OsO4 + 2 F2 → 2 OsO3F2 + O2 OsO4 reacts with one equivalent of [Me4N]F at 298 K and 2 equivalents at 253 K: OsO4 + [Me4N]F → [Me4N][OsO4F] OsO4 + 2 [Me4N]F → [Me4N]2[cis-OsO4F2] Uses Organic synthesis In organic synthesis OsO4 is widely used to oxidize alkenes to the vicinal diols, adding two hydroxyl groups at the same side (syn addition). See reaction and mechanism above. This reaction has been made both catalytic (Upjohn dihydroxylation) and asymmetric (Sharpless asymmetric dihydroxylation). Osmium(VIII) oxide is also used in catalytic amounts in the Sharpless oxyamination to give vicinal amino-alcohols. In combination with sodium periodate, OsO4 is used for the oxidative cleavage of alkenes (Lemieux–Johnson oxidation), where the periodate serves both to cleave the diol formed by dihydroxylation and to reoxidize the OsO3 back to OsO4. The net transformation is identical to that produced by ozonolysis. Below is an example from the total synthesis of isosteviol.
Biological staining OsO4 is a staining agent widely used in transmission electron microscopy (TEM) to provide contrast to the image. This staining method may also be known in the literature as the OTO (osmium-thiocarbohydrazide-osmium) method, the osmium impregnation technique, or simply osmium staining. As a lipid stain, it is also useful in scanning electron microscopy (SEM) as an alternative to sputter coating. It embeds a heavy metal directly into cell membranes, creating a high electron scattering rate without the need for coating the membrane with a layer of metal, which can obscure details of the cell membrane. In the staining of the plasma membrane, osmium(VIII) oxide binds phospholipid head regions, thus creating contrast with the neighbouring protoplasm (cytoplasm). Osmium(VIII) oxide is also used for fixing biological samples in conjunction with HgCl2. Its rapid killing abilities are used to quickly kill live specimens such as protozoa. OsO4 stabilizes many proteins by transforming them into gels without destroying structural features. Tissue proteins that are stabilized by OsO4 are not coagulated by alcohols during dehydration. Osmium(VIII) oxide is also used as a stain for lipids in optical microscopy. OsO4 also stains the human cornea (see safety considerations). Polymer staining It is also used to stain copolymers preferentially, the best-known example being block copolymers, where one phase can be stained to show the microstructure of the material. For example, styrene-butadiene block copolymers have a central polybutadiene chain with polystyrene end caps. When treated with OsO4, the butadiene matrix reacts preferentially and so absorbs the oxide. The presence of a heavy metal is sufficient to block the electron beam, so the polystyrene domains are seen clearly in thin films in TEM. Osmium ore refining OsO4 is an intermediate in the extraction of osmium from its ores.
Osmium-containing residues are treated with sodium peroxide (Na2O2), forming Na2[OsO4(OH)2], which is soluble. When exposed to chlorine, this salt gives OsO4. In the final stages of refining, crude OsO4 is dissolved in alcoholic NaOH, forming Na2[OsO2(OH)4], which, when treated with NH4Cl, gives (NH4)2[OsO2Cl4]. This salt is reduced under hydrogen to give osmium. Buckminsterfullerene adduct OsO4 allowed for the confirmation of the soccer ball model of buckminsterfullerene, a 60-atom carbon allotrope. The adduct, formed from a derivative of OsO4, was C60(OsO4)(4-tert-butylpyridine)2. The adduct broke the fullerene's symmetry, allowing for crystallization and confirmation of the structure of C60 by X-ray crystallography. Medicine The only known clinical use of osmium tetroxide is for the treatment of arthritis. The lack of reports of long-term side effects from the local administration of osmium tetroxide (OsO4) suggests that osmium itself can be biocompatible, though this depends on the osmium compound administered. Safety considerations OsO4 will irreversibly stain the human cornea, which can lead to blindness. The permissible exposure limit for osmium(VIII) oxide (8-hour time-weighted average) is 2 μg/m3. Osmium(VIII) oxide can penetrate plastics and food packaging, and therefore must be stored in glass under refrigeration.
https://en.wikipedia.org/wiki/Redox
Redox
Redox (reduction–oxidation or oxidation–reduction) is a type of chemical reaction in which the oxidation states of the reactants change. Oxidation is the loss of electrons or an increase in the oxidation state, while reduction is the gain of electrons or a decrease in the oxidation state. The oxidation and reduction processes occur simultaneously in the chemical reaction. There are two classes of redox reactions: Electron transfer – Only one electron (usually) flows from the atom, ion, or molecule being oxidized to the atom, ion, or molecule that is reduced. This type of redox reaction is often discussed in terms of redox couples and electrode potentials. Atom transfer – An atom transfers from one substrate to another. For example, in the rusting of iron, the oxidation state of iron atoms increases as the iron converts to an oxide, and simultaneously, the oxidation state of oxygen decreases as it accepts electrons released by the iron. Although oxidation reactions are commonly associated with forming oxides, other chemical species can serve the same function. In hydrogenation, bonds such as C=C are reduced by transfer of hydrogen atoms. Terminology "Redox" is a portmanteau of the words "REDuction" and "OXidation." The term "redox" was first used in 1928. Oxidation is a process in which a substance loses electrons. Reduction is a process in which a substance gains electrons. The processes of oxidation and reduction occur simultaneously and cannot occur independently. In redox processes, the reductant transfers electrons to the oxidant. Thus, in the reaction, the reductant or reducing agent loses electrons and is oxidized, and the oxidant or oxidizing agent gains electrons and is reduced. The pair of an oxidizing and reducing agent that is involved in a particular reaction is called a redox pair.
A redox couple is a reducing species and its corresponding oxidizing form, e.g., Fe2+/Fe3+. The oxidation alone and the reduction alone are each called a half-reaction because two half-reactions always occur together to form a whole reaction. In electrochemical reactions the oxidation and reduction processes do occur simultaneously but are separated in space. Oxidants Oxidation originally implied a reaction with oxygen to form an oxide. Later, the term was expanded to encompass substances that accomplished chemical reactions similar to those of oxygen. Ultimately, the meaning was generalized to include all processes involving the loss of electrons or the increase in the oxidation state of a chemical species. Substances that have the ability to oxidize other substances (cause them to lose electrons) are said to be oxidative or oxidizing, and are known as oxidizing agents, oxidants, or oxidizers. The oxidant removes electrons from another substance, and is thus itself reduced. Because it "accepts" electrons, the oxidizing agent is also called an electron acceptor. Oxidants are usually chemical substances with elements in high oxidation states, or else highly electronegative elements (e.g. O2, F2, Cl2, Br2, I2) that can gain extra electrons by oxidizing another substance. Oxidizers are oxidants, but the term is mainly reserved for sources of oxygen, particularly in the context of explosions. Nitric acid is a strong oxidizer. Reductants Substances that have the ability to reduce other substances (cause them to gain electrons) are said to be reductive or reducing and are known as reducing agents, reductants, or reducers. The reductant transfers electrons to another substance and is thus itself oxidized. Because it donates electrons, the reducing agent is also called an electron donor. Electron donors can also form charge transfer complexes with electron acceptors.
The word reduction originally referred to the loss in weight upon heating a metallic ore such as a metal oxide to extract the metal. In other words, ore was "reduced" to metal. Antoine Lavoisier demonstrated that this loss of weight was due to the loss of oxygen as a gas. Later, scientists realized that the metal atom gains electrons in this process. The meaning of reduction then became generalized to include all processes involving a gain of electrons. Reducing equivalent refers to chemical species which transfer the equivalent of one electron in redox reactions. The term is common in biochemistry. A reducing equivalent can be an electron or a hydrogen atom as a hydride ion. Reductants in chemistry are very diverse. Electropositive elemental metals, such as lithium, sodium, magnesium, iron, zinc, and aluminium, are good reducing agents. These metals donate electrons relatively readily. Hydride transfer reagents, such as NaBH4 and LiAlH4, reduce by atom transfer: they transfer the equivalent of hydride or H−. These reagents are widely used in the reduction of carbonyl compounds to alcohols. A related method of reduction involves the use of hydrogen gas (H2) as a source of H atoms. Electronation and deelectronation The electrochemist John Bockris proposed the words electronation and de-electronation to describe reduction and oxidation processes, respectively, when they occur at electrodes. These words are analogous to protonation and deprotonation. They have not been widely adopted by chemists worldwide, although IUPAC has recognized the terms electronation and de-electronation. Rates, mechanisms, and energies Redox reactions can occur slowly, as in the formation of rust, or rapidly, as in the case of burning fuel. Electron transfer reactions are generally fast, occurring within the time of mixing. The mechanisms of atom-transfer reactions are highly variable because many kinds of atoms can be transferred.
Such reactions can also be quite complex, involving many steps. The mechanisms of electron-transfer reactions occur by two distinct pathways, inner sphere electron transfer and outer sphere electron transfer. Analysis of bond energies and ionization energies in water allows calculation of the thermodynamic aspects of redox reactions. Standard electrode potentials (reduction potentials) Each half-reaction has a standard electrode potential (E°), which is equal to the potential difference or voltage at equilibrium under standard conditions of an electrochemical cell in which the cathode reaction is the half-reaction considered, and the anode is a standard hydrogen electrode where hydrogen is oxidized: H2 → 2 H+ + 2 e− The electrode potential of each half-reaction is also known as its reduction potential (E°red), or potential when the half-reaction takes place at a cathode. The reduction potential is a measure of the tendency of the oxidizing agent to be reduced. Its value is zero for 2 H+ + 2 e− → H2 by definition, positive for oxidizing agents stronger than H+ (e.g., +2.866 V for F2) and negative for oxidizing agents that are weaker than H+ (e.g., −0.763 V for Zn2+). For a redox reaction that takes place in a cell, the potential difference is: E°(cell) = E°(cathode) − E°(anode) However, the potential of the reaction at the anode is sometimes expressed as an oxidation potential: E°(oxidation) = −E°(reduction) The oxidation potential is a measure of the tendency of the reducing agent to be oxidized but does not represent the physical potential at an electrode. With this notation, the cell voltage equation is written with a plus sign: E°(cell) = E°(cathode, reduction) + E°(anode, oxidation) Examples of redox reactions In the reaction between hydrogen and fluorine, hydrogen is being oxidized and fluorine is being reduced: H2 + F2 → 2 HF This spontaneous reaction releases 542 kJ per 2 g of hydrogen because the H–F bond is much stronger than the F–F bond. This reaction can be analyzed as two half-reactions.
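The potential bookkeeping above can be illustrated numerically with the two standard reduction potentials quoted in the text (+2.866 V for F2, −0.763 V for Zn2+). Pairing these two particular couples into one cell is purely a hypothetical illustration of the formulas, and the dictionary labels are invented here:

```python
# Sketch of the cell-voltage bookkeeping described above, using the two
# standard reduction potentials quoted in the text (vs. the standard
# hydrogen electrode). The F2/Zn pairing is illustrative, not a real cell.

E_RED = {"F2/F-": 2.866, "Zn2+/Zn": -0.763, "H+/H2": 0.0}  # volts

def cell_voltage(cathode: str, anode: str) -> float:
    """E(cell) = E(cathode) - E(anode), both taken as reduction potentials."""
    return E_RED[cathode] - E_RED[anode]

def cell_voltage_ox(cathode: str, anode: str) -> float:
    """Equivalent form: write the anode as an oxidation potential."""
    e_oxidation = -E_RED[anode]          # E(ox) = -E(red)
    return E_RED[cathode] + e_oxidation  # E(cell) = E(red) + E(ox)

print(f"{cell_voltage('F2/F-', 'Zn2+/Zn'):.3f} V")     # 3.629 V
print(f"{cell_voltage_ox('F2/F-', 'Zn2+/Zn'):.3f} V")  # same value
```

Both formulations give the same answer by construction; the second merely flips the sign convention at the anode, as the text notes.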
The oxidation reaction converts hydrogen to protons: H2 → 2 H+ + 2 e− The reduction reaction converts fluorine to the fluoride anion: F2 + 2 e− → 2 F− The half-reactions are combined so that the electrons cancel:
H2 → 2 H+ + 2 e−
F2 + 2 e− → 2 F−
H2 + F2 → 2 H+ + 2 F−
The protons and fluoride combine to form hydrogen fluoride in a non-redox reaction: 2 H+ + 2 F− → 2 HF The overall reaction is: H2 + F2 → 2 HF Metal displacement In this type of reaction, a metal atom in a compound or solution is replaced by an atom of another metal. For example, copper is deposited when zinc metal is placed in a copper(II) sulfate solution: Zn(s) + CuSO4(aq) → ZnSO4(aq) + Cu(s) In the above reaction, zinc metal displaces the copper(II) ion from the copper sulfate solution, thus liberating free copper metal. The reaction is spontaneous and releases 213 kJ per 65 g of zinc. The ionic equation for this reaction is: Zn + Cu2+ → Zn2+ + Cu As two half-reactions, it is seen that the zinc is oxidized: Zn → Zn2+ + 2 e− And the copper is reduced: Cu2+ + 2 e− → Cu Other examples The reduction of nitrate to nitrogen in the presence of an acid (denitrification): 2 NO3− + 12 H+ + 10 e− → N2 + 6 H2O The combustion of hydrocarbons, such as in an internal combustion engine, produces water, carbon dioxide, some partially oxidized forms such as carbon monoxide, and heat energy. Complete oxidation of materials containing carbon produces carbon dioxide. The stepwise oxidation of a hydrocarbon by oxygen, in organic chemistry, produces water and, successively: an alcohol, an aldehyde or a ketone, a carboxylic acid, and then a peroxide. Corrosion and rusting The term corrosion refers to the electrochemical oxidation of metals in reaction with an oxidant such as oxygen. Rusting, the formation of iron oxides, is a well-known example of electrochemical corrosion: it forms as a result of the oxidation of iron metal.
Common rust often refers to iron(III) oxide, formed in the following chemical reaction: 4 Fe + 3 O2 → 2 Fe2O3 The oxidation of iron(II) to iron(III) by hydrogen peroxide in the presence of an acid: Fe2+ → Fe3+ + e− H2O2 + 2 e− → 2 OH− Here the overall equation involves adding the reduction equation to twice the oxidation equation, so that the electrons cancel: 2 Fe2+ + H2O2 + 2 H+ → 2 Fe3+ + 2 H2O Disproportionation A disproportionation reaction is one in which a single substance is both oxidized and reduced. For example, thiosulfate ion with sulfur in oxidation state +2 can react in the presence of acid to form elemental sulfur (oxidation state 0) and sulfur dioxide (oxidation state +4). Thus one sulfur atom is reduced from +2 to 0, while the other is oxidized from +2 to +4. Redox reactions in industry Cathodic protection is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. A simple method of protection connects the protected metal to a more easily corroded "sacrificial anode" to act as the anode. The sacrificial metal, instead of the protected metal, then corrodes. A common application of cathodic protection is in galvanized steel, in which a sacrificial zinc coating on steel parts protects them from rust. Oxidation is used in a wide variety of industries, such as in the production of cleaning products and oxidizing ammonia to produce nitric acid. Redox reactions are the foundation of electrochemical cells, which can generate electrical energy or support electrosynthesis. Metal ores often contain metals in oxidized states, such as oxides or sulfides, from which the pure metals are extracted by smelting at high temperatures in the presence of a reducing agent. The process of electroplating uses redox reactions to coat objects with a thin layer of a material, as in chrome-plated automotive parts, silver plating cutlery, galvanization and gold-plated jewelry.
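The oxidation-state bookkeeping used in the disproportionation example above (sulfur averaging +2 in thiosulfate, 0 in elemental sulfur, +4 in SO2) can be reproduced by solving a simple charge balance. The sketch below assumes the usual convention that oxygen counts as −2; the helper function is invented here for illustration:

```python
# Average-oxidation-state bookkeeping for the thiosulfate example above.
# Convention assumed: each oxygen counts as -2. For an ion E_n O_m with
# overall charge q, solve n*x + m*(-2) = q for x, the average state of E.

def avg_oxidation_state(n_element: int, n_oxygen: int, charge: int) -> float:
    """Average oxidation state of the non-oxygen element."""
    return (charge + 2 * n_oxygen) / n_element

# Thiosulfate, S2O3 with charge 2-: average sulfur state is +2
print(avg_oxidation_state(2, 3, -2))  # 2.0
# Sulfur dioxide, SO2 (neutral): sulfur is +4
print(avg_oxidation_state(1, 2, 0))   # 4.0
```

Note that this gives the average over the sulfur atoms; as the text explains, in the disproportionation one sulfur ends at 0 and the other at +4.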
Redox reactions in biology Top: ascorbic acid (reduced form of Vitamin C). Bottom: dehydroascorbic acid (oxidized form of Vitamin C). Many essential biological processes involve redox reactions. Before some of these processes can begin, iron must be assimilated from the environment. Cellular respiration, for instance, is the oxidation of glucose (C6H12O6) to CO2 and the reduction of oxygen to water. The summary equation for cellular respiration is: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O The process of cellular respiration also depends heavily on the reduction of NAD+ to NADH and the reverse reaction (the oxidation of NADH to NAD+). Photosynthesis and cellular respiration are complementary, but photosynthesis is not the reverse of the redox reaction in cellular respiration: 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2 Biological energy is frequently stored and released using redox reactions. Photosynthesis involves the reduction of carbon dioxide into sugars and the oxidation of water into molecular oxygen. The reverse reaction, respiration, oxidizes sugars to produce carbon dioxide and water. As intermediate steps, the reduced carbon compounds are used to reduce nicotinamide adenine dinucleotide (NAD+) to NADH, which then contributes to the creation of a proton gradient, which drives the synthesis of adenosine triphosphate (ATP) and is maintained by the reduction of oxygen. In animal cells, mitochondria perform similar functions. Free radical reactions are redox reactions that occur as part of homeostasis and killing microorganisms. In these reactions, an electron detaches from a molecule and then re-attaches almost instantly. Free radicals are part of redox molecules and can become harmful to the human body if they do not reattach to the redox molecule or an antioxidant. The term redox state is often used to describe the balance of GSH/GSSG, NAD+/NADH and NADP+/NADPH in a biological system such as a cell or organ.
The redox state is reflected in the balance of several sets of metabolites (e.g., lactate and pyruvate, beta-hydroxybutyrate and acetoacetate), whose interconversion is dependent on these ratios. Redox mechanisms also control some cellular processes. Redox proteins and their genes must be co-located for redox regulation according to the CoRR hypothesis for the function of DNA in mitochondria and chloroplasts. Redox cycling Wide varieties of aromatic compounds are enzymatically reduced to form free radicals that contain one more electron than their parent compounds. In general, the electron donor is any of a wide variety of flavoenzymes and their coenzymes. Once formed, these anion free radicals reduce molecular oxygen to superoxide and regenerate the unchanged parent compound. The net reaction is the oxidation of the flavoenzyme's coenzymes and the reduction of molecular oxygen to form superoxide. This catalytic behavior has been described as a futile cycle or redox cycling. Redox reactions in geology Minerals are generally oxidized derivatives of metals. Iron is mined as magnetite (Fe3O4). Titanium is mined as its dioxide, usually in the form of rutile (TiO2). These oxides must be reduced to obtain the corresponding metals, often achieved by heating these oxides with carbon or carbon monoxide as reducing agents. Blast furnaces are the reactors where iron oxides and coke (a form of carbon) are combined to produce molten iron. The main chemical reaction producing the molten iron is: Fe2O3 + 3 CO → 2 Fe + 3 CO2 Redox reactions in soils Electron transfer reactions are central to myriad processes and properties in soils, and redox potential, quantified as Eh (platinum electrode potential (voltage) relative to the standard hydrogen electrode) or pe (analogous to pH as −log electron activity), is a master variable, along with pH, that controls and is governed by chemical reactions and biological processes.
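The pe scale defined above relates to Eh through the standard thermodynamic factor Eh = (RT ln 10 / F) · pe, about 0.05916 V per pe unit at 25 °C. A small conversion sketch (the pe value chosen is just an illustrative example, not data from the text):

```python
# Conversion between pe (-log electron activity) and Eh (platinum electrode
# potential vs. the standard hydrogen electrode), as defined above:
#   Eh = (ln(10) * R * T / F) * pe
# At 25 degrees C the factor is about 0.05916 V per pe unit.

import math

R = 8.314462618   # J/(mol K), gas constant
F = 96485.33212   # C/mol, Faraday constant

def pe_to_eh(pe: float, temp_c: float = 25.0) -> float:
    """Eh in volts from pe at the given temperature in degrees Celsius."""
    t_kelvin = temp_c + 273.15
    return math.log(10) * R * t_kelvin / F * pe

# Example: a moderately oxidizing soil at pe = 8 sits near +0.47 V:
print(f"{pe_to_eh(8.0):.3f} V")
```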
Early theoretical research with applications to flooded soils and paddy rice production was seminal for subsequent work on thermodynamic aspects of redox and plant root growth in soils. Later work built on this foundation, and expanded it for understanding redox reactions related to heavy metal oxidation state changes, pedogenesis and morphology, organic compound degradation and formation, free radical chemistry, wetland delineation, soil remediation, and various methodological approaches for characterizing the redox status of soils. Mnemonics The key terms involved in redox can be confusing. For example, a reagent that is oxidized loses electrons; however, that reagent is referred to as the reducing agent. Likewise, a reagent that is reduced gains electrons and is referred to as the oxidizing agent. These mnemonics are commonly used by students to help memorise the terminology: "OIL RIG" — oxidation is loss of electrons, reduction is gain of electrons "LEO the lion says GER [grr]" — loss of electrons is oxidation, gain of electrons is reduction "LEORA says GEROA" — the loss of electrons is called oxidation (reducing agent); the gain of electrons is called reduction (oxidizing agent). "RED CAT" and "AN OX", or "AnOx RedCat" ("an ox-red cat") — reduction occurs at the cathode and the anode is for oxidation "RED CAT gains what AN OX loses" – reduction at the cathode gains (electrons) what anode oxidation loses (electrons) "PANIC" – Positive Anode and Negative Is Cathode. This applies to electrolytic cells, which absorb rather than release electrical energy, for example rechargeable batteries while they are being charged. PANIC does not apply to cells that release stored energy: these galvanic or voltaic cells, such as fuel cells, produce electricity from internal redox reactions. Here, the positive electrode is the cathode and the negative is the anode.
https://en.wikipedia.org/wiki/Solid-state%20chemistry
Solid-state chemistry
Solid-state chemistry, also sometimes referred to as materials chemistry, is the study of the synthesis, structure, and properties of solid-phase materials. It therefore has a strong overlap with solid-state physics, mineralogy, crystallography, ceramics, metallurgy, thermodynamics, materials science and electronics, with a focus on the synthesis of novel materials and their characterization. A diverse range of synthetic techniques, such as the ceramic method and chemical vapour deposition, is used to make solid-state materials. Solids can be classified as crystalline or amorphous on the basis of the nature of the order present in the arrangement of their constituent particles. Their elemental compositions, microstructures, and physical properties can be characterized through a variety of analytical methods. History Because of its direct relevance to products of commerce, solid-state inorganic chemistry has been strongly driven by technology. Progress in the field has often been fueled by the demands of industry, sometimes in collaboration with academia. Applications discovered in the 20th century include zeolite and platinum-based catalysts for petroleum processing in the 1950s, high-purity silicon as a core component of microelectronic devices in the 1960s, and "high temperature" superconductivity in the 1980s. The invention of X-ray crystallography in the early 1900s by William Lawrence Bragg was an enabling innovation. Our understanding of how reactions proceed at the atomic level in the solid state was advanced considerably by Carl Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry. Because of his contributions, he has sometimes been referred to as the father of solid-state chemistry. Synthetic methods Given the diversity of solid-state compounds, an equally diverse array of methods is used for their preparation. Synthesis can range from high-temperature methods, like the ceramic method, to gas methods, like chemical vapour deposition.
Often, the methods prevent defect formation or produce high-purity products. High-temperature methods Ceramic method The ceramic method is one of the most common synthesis techniques. The synthesis occurs entirely in the solid state. The reactants are ground together, formed into a pellet using a pellet press and hydraulic press, and heated at high temperatures. When the temperature of the reactants is sufficiently high, the ions at the grain boundaries react to form the desired phases. Generally, ceramic methods give polycrystalline powders, but not single crystals. Using a mortar and pestle, ResonantAcoustic mixer, or ball mill, the reactants are ground together, which decreases the particle size and increases the surface area of the reactants. If grinding does not mix the reactants sufficiently, techniques such as co-precipitation and sol-gel synthesis can be used. A chemist forms pellets from the ground reactants and places the pellets into containers for heating. The choice of container depends on the precursors, the reaction temperature and the expected product. For example, metal oxides are typically synthesized in silica or alumina containers. A tube furnace heats the pellet. Tube furnaces are available up to maximum temperatures of 2800 °C. Molten flux synthesis Molten flux synthesis can be an efficient method for obtaining single crystals. In this method, the starting reagents are combined with a flux, an inert material with a melting point lower than that of the starting materials. The flux serves as a solvent. After the reaction, the excess flux can be washed away using an appropriate solvent, or, if it is a volatile compound, it can be heated again to remove the flux by sublimation. The choice of crucible material plays a great role in molten flux synthesis. The crucible should not react with the flux or the starting reagents. If any of the materials are volatile, it is recommended to conduct the reaction in a sealed ampule.
If the target phase is sensitive to oxygen, a carbon-coated fused silica tube or a carbon crucible inside a fused silica tube is often used, which prevents direct contact between the tube wall and the reagents. Chemical vapour transport Chemical vapour transport results in very pure materials. The reaction typically occurs in a sealed ampoule. A transporting agent, added to the sealed ampoule, produces a volatile intermediate species from the solid reactant. For metal oxides, the transporting agent is usually Cl2 or HCl. The ampoule has a temperature gradient, and, as the gaseous reactant travels along the gradient, it eventually deposits as a crystal. An example of an industrially used chemical vapour transport reaction is the Mond process, in which impure nickel is heated in a stream of carbon monoxide to produce pure nickel. Low-temperature methods Intercalation method Intercalation synthesis is the insertion of molecules or ions between the layers of a solid. The layered solid has weak intermolecular bonds holding its layers together. The process occurs via diffusion. Intercalation is further driven by ion exchange, acid-base reactions or electrochemical reactions. The intercalation method was first used in China with the discovery of porcelain. Graphene is also produced by the intercalation method, and this method is the principle behind lithium-ion batteries. Solution methods It is possible to use solvents to prepare solids by precipitation or by evaporation. At times, the synthesis is hydrothermal: the solvent is kept under pressure at temperatures higher than its normal boiling point. A variation on this theme is the use of flux methods, in which a salt with a relatively low melting point serves as the solvent. Gas methods Many solids react vigorously with gas species like chlorine, iodine, and oxygen. Other solids form adducts, such as with CO or ethylene. Such reactions are conducted in open-ended tubes, through which the gases are passed.
Also, these reactions can take place inside a measuring device such as a TGA. In that case, stoichiometric information can be obtained during the reaction, which helps identify the products. Chemical vapour deposition Chemical vapour deposition is a method widely used for the preparation of coatings and semiconductors from molecular precursors. A carrier gas transports the gaseous precursors to the material for coating. Characterization This is the process in which a material’s chemical composition, structure, and physical properties are determined using a variety of analytical techniques. New phases Synthetic methodology and characterization often go hand in hand in the sense that not one but a series of reaction mixtures are prepared and subjected to heat treatment. Stoichiometry, a numerical relationship between the quantities of reactant and product, is typically varied systematically. It is important to find which stoichiometries will lead to new solid compounds or solid solutions between known ones. A prime method to characterize the reaction products is powder diffraction because many solid-state reactions will produce polycrystalline molds or powders. Powder diffraction aids in the identification of known phases in the mixture. If a pattern is found that is not known in the diffraction data libraries, an attempt can be made to index the pattern. The characterization of a material's properties is typically easier for a product with crystalline structures. Compositions and structures Once the unit cell of a new phase is known, the next step is to establish the stoichiometry of the phase. This can be done in several ways. Sometimes the composition of the original mixture will give a clue, under the circumstances that only a product with a single powder pattern is found or a phase of a certain composition is made by analogy to known material, but this is rare. 
Often, considerable effort in refining the synthetic procedures is required to obtain a pure sample of the new material. If it is possible to separate the product from the rest of the reaction mixture, elemental analysis methods such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can be used. The detection of scattered and transmitted electrons from the surface of the sample provides information about the surface topography and composition of the material. Energy dispersive X-ray spectroscopy (EDX) is a technique that uses electron beam excitation. Exciting the inner shell of an atom with incident electrons emits characteristic X-rays with an energy specific to each element. The peak energy can identify the chemical composition of a sample, including the distribution and concentration. Similar to EDX, X-ray diffraction analysis (XRD) involves the generation of characteristic X-rays upon interaction with the sample. The intensity of diffracted rays scattered at different angles is used to analyze the physical properties of a material, such as phase composition and crystallographic structure. These techniques can also be coupled to achieve a better effect. For example, SEM is a useful complement to EDX: its focused electron beam produces a high-magnification image that provides information on the surface topography. Once the area of interest has been identified, EDX can be used to determine the elements present in that specific spot. Selected area electron diffraction can be coupled with TEM or SEM to investigate the level of crystallinity and the lattice parameters of a sample. X-ray diffraction is also widely used because of its imaging capabilities and speed of data generation. The latter often requires revisiting and refining the preparative procedures, which is linked to the question of which phases are stable at what composition and what stoichiometry; in other words, what the phase diagram looks like.
An important tool in establishing the phase diagram is thermal analysis, using techniques such as differential scanning calorimetry (DSC) or differential thermal analysis (DTA) and, increasingly, thanks to the advent of synchrotrons, temperature-dependent powder diffraction. Increased knowledge of the phase relations often leads to further refinement of the synthetic procedures in an iterative way. New phases are thus characterized by their melting points and their stoichiometric domains. The latter is important for the many solids that are non-stoichiometric compounds. The cell parameters obtained from XRD are particularly helpful for characterizing the homogeneity ranges of such compounds. Local structure In contrast to the long-range order of crystals, the local structure describes the interaction of the nearest neighbouring atoms. Methods of nuclear spectroscopy use specific nuclei to probe the electric and magnetic fields around the nucleus. For example, electric field gradients are very sensitive to small changes caused by lattice expansion or compression (thermal or under pressure), phase changes, or local defects. Common methods are Mössbauer spectroscopy and perturbed angular correlation. Optical properties The optical properties of metallic materials arise from the collective excitation of conduction electrons. The coherent oscillations of electrons under electromagnetic radiation, along with the associated oscillations of the electromagnetic field, are called surface plasmon resonances. The excitation wavelength and frequency of the plasmon resonances provide information on the particle's size, shape, composition, and local optical environment. Non-metallic materials and semiconductors can instead be characterized by their band structure, which contains a band gap: the minimum energy difference between the top of the valence band and the bottom of the conduction band. The band gap can be determined using ultraviolet-visible spectroscopy and used to predict the photochemical properties of semiconductors. 
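Estimating a band gap from a UV-vis absorption edge is a direct application of E = hc/λ; with E in electronvolts and λ in nanometres, hc ≈ 1239.84 eV·nm. A minimal sketch (the anatase TiO2 absorption edge near 387 nm is used as an illustrative value):

```python
PLANCK_EV_NM = 1239.84  # h*c in eV·nm

def band_gap_eV(absorption_edge_nm: float) -> float:
    """Band gap estimated from the onset wavelength of absorption, E = h*c / lambda."""
    return PLANCK_EV_NM / absorption_edge_nm

# Anatase TiO2 absorbs below roughly 387 nm, corresponding to a gap near 3.2 eV.
print(round(band_gap_eV(387), 2))
```

In practice the edge is extracted from a Tauc plot rather than read off directly, but the wavelength-to-energy conversion is the same.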
Further characterization In many cases, new solid compounds are further characterized by a variety of techniques that straddle the fine line that separates solid-state chemistry from solid-state physics. See Characterisation in material science for additional information.
https://en.wikipedia.org/wiki/Group%206%20element
Group 6 element
Group 6, numbered by IUPAC style, is a group of elements in the periodic table. Its members are chromium (Cr), molybdenum (Mo), tungsten (W), and seaborgium (Sg). These are all transition metals, and chromium, molybdenum and tungsten are refractory metals. The electron configurations of these elements do not follow a unified trend, though the outermost shells do correlate with trends in chemical behavior. "Group 6" is the new IUPAC name for this group; the old-style name was "group VIB" in the old US system (CAS) or "group VIA" in the European system (old IUPAC). Group 6 must not be confused with the group bearing the crossed-over old-style names of either VIA (US system, CAS) or VIB (European system, old IUPAC); that group is now called group 16. History Discoveries Chromium was first reported on July 26, 1761, when Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains of Russia, which he named "Siberian red lead"; within ten years it was being used as a bright yellow pigment. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite, with the formula PbCrO4. Studying the mineral in 1797, Louis Nicolas Vauquelin produced chromium trioxide by mixing crocoite with hydrochloric acid, and obtained metallic chromium by heating the oxide in a charcoal oven a year later. He was also able to detect traces of chromium in precious gemstones, such as ruby or emerald. Molybdenite, the principal ore from which molybdenum is now extracted, was previously known as molybdena, which was confused with and often used as though it were graphite. Like graphite, molybdenite can be used to blacken a surface or as a solid lubricant. 
Even when molybdena was distinguishable from graphite, it was still confused with galena (a common lead ore); indeed, the name molybdena derives from the Ancient Greek word for lead. It was not until 1778 that Swedish chemist Carl Wilhelm Scheele realized that molybdena was neither graphite nor lead. He and other chemists then correctly assumed that it was the ore of a distinct new element, named molybdenum for the mineral in which it was discovered. Peter Jacob Hjelm successfully isolated molybdenum by using carbon and linseed oil in 1781. Regarding tungsten, in 1781 Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from scheelite (at the time named tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid. In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, in Spain, the brothers succeeded in isolating tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element. Seaborgium was first produced by a team of scientists led by Albert Ghiorso who worked at the Lawrence Berkeley Laboratory in Berkeley, California, in 1974. They created seaborgium by bombarding atoms of californium-249 with ions of oxygen-18 until seaborgium-263 was produced. Historical development and uses During the 1800s, chromium was primarily used as a component of paints and in tanning salts. At first, crocoite from Russia was the main source, but in 1827 a larger chromite deposit was discovered near Baltimore, United States. This made the United States the largest producer of chromium products until 1848, when large deposits of chromite were found near Bursa, Turkey. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924. 
For about a century after its isolation, molybdenum had no industrial use, owing to its relative scarcity, the difficulty of extracting the pure metal, and the immaturity of the metallurgical subfield. Early molybdenum steel alloys showed great promise in their increased hardness, but efforts were hampered by inconsistent results and a tendency toward brittleness and recrystallization. In 1906, William D. Coolidge filed a patent for rendering molybdenum ductile, leading to its use as a heating element for high-temperature furnaces and as a support for tungsten-filament light bulbs; oxide formation and degradation require that the metal be physically sealed or held in an inert gas. In 1913, Frank E. Elmore developed a flotation process to recover molybdenite from ores; flotation remains the primary isolation process. During the First World War, demand for molybdenum spiked; it was used both in armor plating and as a substitute for tungsten in high-speed steels. Some British tanks were protected by 75 mm (3 in) manganese steel plating, but this proved to be ineffective. The manganese steel plates were replaced with 25 mm (1 in) molybdenum-steel plating, allowing for higher speed, greater maneuverability, and better protection. After the war, demand plummeted until metallurgical advances allowed extensive development of peacetime applications. In World War II, molybdenum again saw strategic importance as a substitute for tungsten in steel alloys. Tungsten itself played a significant role in background political dealings during that war. Portugal, as the main European source of the element, was put under pressure from both sides because of its deposits of wolframite ore at Panasqueira. Tungsten's resistance to high temperatures and its strengthening of alloys made it an important raw material for the arms industry. 
Chemistry Unlike other groups, the members of this family do not show patterns in their electron configurations, as the two lighter members of the group are exceptions from the Aufbau principle. Most of the chemistry has been observed only for the first three members of the group; the chemistry of seaborgium is not well established, and therefore the rest of this section deals only with its upper neighbors in the periodic table. The elements in the group, like those of groups 7–11, have high melting points and form volatile compounds in higher oxidation states. All the elements of the group are relatively nonreactive metals with high melting points (1907 °C, 2477 °C, 3422 °C); that of tungsten is the highest of all metals. The metals form compounds in different oxidation states: chromium forms compounds in all states from −2 to +6 (disodium pentacarbonylchromate, disodium decacarbonyldichromate, bis(benzene)chromium, tripotassium pentanitrocyanochromate, chromium(II) chloride, chromium(III) oxide, chromium(IV) chloride, potassium tetraperoxochromate(V), and chromium(VI) dichloride dioxide); the same is also true for molybdenum and tungsten, but the stability of the +6 state grows down the group. Depending on the oxidation state, the compounds are basic, amphoteric, or acidic; the acidity grows with the oxidation state of the metal. Occurrence and production Chromium is a very common naturally occurring element. It is the 21st most abundant element in the Earth's crust, with an average concentration of 100 ppm. The most common oxidation states for chromium are the zero, trivalent, and hexavalent states; most naturally occurring chromium is in the trivalent state. About two-fifths of the world's chromium is produced in South Africa, with Kazakhstan, India, Russia, and Turkey following. Chromium is mined as chromite ore. Molybdenum is refined mainly from molybdenite. 
It is mainly mined in the United States, China, Chile, and Peru, with the total amount produced being 200,000 tonnes per year. Tungsten is not a common element on Earth, having an average concentration of 1.5 ppm in the Earth's crust. Tungsten is mainly found in the minerals wolframite and scheelite, and almost never occurs as a free element in nature. The largest producers of tungsten in the world are China, Russia, and Portugal. Seaborgium is a transuranium element made artificially by bombarding californium-249 with oxygen-18 nuclei; it does not occur in nature. Precautions Hexavalent chromium compounds are genotoxic carcinogens. Seaborgium is a radioactive synthetic element; its most stable known isotope has a half-life of approximately 14 minutes. Applications Applications of the group 6 metals include alloys; catalysts; high-temperature and refractory uses, such as welding electrodes and kiln components; metallurgy, including jet engines and gas turbines; dyes and pigments; tanning; and hard materials. Biological occurrences Group 6 is notable in that it contains some of the only elements in periods 5 and 6 with a known role in the biological chemistry of living organisms: molybdenum is common in enzymes of many organisms, and tungsten has been identified in an analogous role in enzymes from some archaea, such as Pyrococcus furiosus. In contrast, and unusually for a first-row d-block transition metal, chromium appears to have few biological roles, although it is thought to form part of the glucose metabolism enzyme in some mammals.
https://en.wikipedia.org/wiki/Holography
Holography
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, and has a wide range of other uses, including data storage, microscopy, and interferometry. In principle, it is possible to make a hologram for any type of wave. A hologram is a recording of an interference pattern that can reproduce a 3D light field using diffraction. In general usage, a hologram is a recording of any type of wavefront in the form of an interference pattern. It can be created by capturing light from a real scene, or it can be generated by a computer, in which case it is known as a computer-generated hologram, which can show virtual objects or scenes. Optical holography needs laser light to record the light field. The reproduced light field can generate an image that has the depth and parallax of the original scene. A hologram is usually unintelligible when viewed under diffuse ambient light. When suitably lit, the interference pattern diffracts the light into an accurate reproduction of the original light field, and the objects that were in it exhibit visual depth cues such as parallax and perspective that change realistically with the different angles of viewing. That is, the view of the image from different angles shows the subject viewed from similar angles. A hologram is traditionally generated by overlaying a second wavefront, known as the reference beam, onto a wavefront of interest. This generates an interference pattern, which is then captured on a physical medium. When the recorded interference pattern is later illuminated by the second wavefront, it is diffracted to recreate the original wavefront. The 3D image from a hologram can often be viewed with non-laser light. However, in common practice, major image quality compromises are made to remove the need for laser illumination to view the hologram. 
A computer-generated hologram is created by digitally modeling and combining two wavefronts to generate an interference pattern image. This image can then be printed onto a mask or film and illuminated with an appropriate light source to reconstruct the desired wavefront. Alternatively, the interference pattern image can be directly displayed on a dynamic holographic display. Holographic portraiture often resorts to a non-holographic intermediate imaging procedure, to avoid the dangerous high-powered pulsed lasers which would be needed to optically "freeze" moving subjects as perfectly as the extremely motion-intolerant holographic recording process requires. Early holography required high-power and expensive lasers. Currently, mass-produced low-cost laser diodes, such as those found on DVD recorders and used in other common applications, can be used to make holograms. They have made holography much more accessible to low-budget researchers, artists, and dedicated hobbyists. Most holograms produced are of static objects, but systems for displaying changing scenes on dynamic holographic displays are now being developed. The word holography comes from the Greek words (holos; "whole") and (graphē; "writing" or "drawing"). History The Hungarian-British physicist Dennis Gabor invented holography in 1948 while he was looking for a way to improve image resolution in electron microscopes. Gabor's work was built on pioneering work in the field of X-ray microscopy by other scientists including Mieczysław Wolfke in 1920 and William Lawrence Bragg in 1939. The formulation of holography was an unexpected result of Gabor's research into improving electron microscopes at the British Thomson-Houston Company (BTH) in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography. 
Gabor was awarded the Nobel Prize in Physics in 1971 "for his invention and development of the holographic method". Optical holography did not really advance until the development of the laser in 1960. The development of the laser enabled the first practical optical holograms that recorded 3D objects to be made in 1962 by Yuri Denisyuk in the Soviet Union and by Emmett Leith and Juris Upatnieks at the University of Michigan, US. Early optical holograms used silver halide photographic emulsions as the recording medium. They were not very efficient as the produced diffraction grating absorbed much of the incident light. Various methods of converting the variation in transmission to a variation in refractive index (known as "bleaching") were developed which enabled much more efficient holograms to be produced. A major advance in the field of holography was made by Stephen Benton, who invented a way to create holograms that can be viewed with natural light instead of lasers. These are called rainbow holograms. Basics of holography Holography is a technique for recording and reconstructing light fields. A light field is generally the result of a light source scattered off objects. Holography can be thought of as somewhat similar to sound recording, whereby a sound field created by vibrating matter like musical instruments or vocal cords, is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter. However, it is even more similar to Ambisonic sound recording in which any listening angle of a sound field can be reproduced in the reproduction. Laser In laser holography, the hologram is recorded using a source of laser light, which is very pure in its color and orderly in its composition. 
Various setups may be used, and several types of holograms can be made, but all involve the interaction of light coming from different directions and producing a microscopic interference pattern which a plate, film, or other medium photographically records. In one common arrangement, the laser beam is split into two, one known as the object beam and the other as the reference beam. The object beam is expanded by passing it through a lens and used to illuminate the subject. The recording medium is located where this light, after being reflected or scattered by the subject, will strike it. The edges of the medium will ultimately serve as a window through which the subject is seen, so its location is chosen with that in mind. The reference beam is expanded and made to shine directly on the medium, where it interacts with the light coming from the subject to create the desired interference pattern. Like conventional photography, holography requires an appropriate exposure time to correctly affect the recording medium. Unlike conventional photography, during the exposure the light source, the optical elements, the recording medium, and the subject must all remain motionless relative to each other, to within about a quarter of the wavelength of the light, or the interference pattern will be blurred and the hologram spoiled. With living subjects and some unstable materials, that is only possible if a very intense and extremely brief pulse of laser light is used, a hazardous procedure which is rarely done outside of scientific and industrial laboratory settings. Exposures lasting several seconds to several minutes, using a much lower-powered continuously operating laser, are typical. Apparatus A hologram can be made by shining part of the light beam directly into the recording medium, and the other part onto the object in such a way that some of the scattered light falls onto the recording medium. 
A more flexible arrangement for recording a hologram requires the laser beam to be aimed through a series of elements that change it in different ways. The first element is a beam splitter that divides the beam into two identical beams, each aimed in different directions: One beam (known as the 'illumination' or 'object beam') is spread using lenses and directed onto the scene using mirrors. Some of the light scattered (reflected) from the scene then falls onto the recording medium. The second beam (known as the 'reference beam') is also spread through the use of lenses, but is directed so that it does not come in contact with the scene, and instead travels directly onto the recording medium. Several different materials can be used as the recording medium. One of the most common is a film very similar to photographic film (silver halide photographic emulsion), but with much smaller light-reactive grains (preferably with diameters less than 20 nm), making it capable of the much higher resolution that holograms require. A layer of this recording medium (e.g., silver halide) is attached to a transparent substrate, which is commonly glass, but may also be plastic. Process When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source – but not the original light source itself. The interference pattern can be considered an encoded version of the scene, requiring a particular key – the original light source – in order to view its contents. This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. 
This produces a light field identical to the one originally produced by the scene and scattered onto the hologram. Comparison with photography Holography may be better understood via an examination of its differences from ordinary photography: A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present. A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram. A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium. A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium. A photograph can be viewed in a wide range of lighting conditions, whereas holograms can only be viewed with very specific forms of illumination. When a photograph is cut in half, each piece shows half of the scene. When a hologram is cut in half, the whole scene can still be seen in each piece. This is because, whereas each point in a photograph only represents light scattered from a single point in the scene, each point on a holographic recording includes information about light scattered from every point in the scene. It can be thought of as viewing a street outside a house through a large window, then through a smaller window. One can see all of the same things through the smaller window (by moving the head to change the viewing angle), but the viewer can see more at once through the large window. 
A photographic stereogram is a two-dimensional representation that can produce a three-dimensional effect but only from one point of view, whereas the reproduced viewing range of a hologram adds many more depth perception cues that were present in the original scene. These cues are recognized by the human brain and translated into the same perception of a three-dimensional image as when the original scene might have been viewed. A photograph clearly maps out the light field of the original scene. The developed hologram's surface consists of a very fine, seemingly random pattern, which appears to bear no relationship to the scene it recorded. Physics of holography For a better understanding of the process, it is necessary to understand interference and diffraction. Interference occurs when one or more wavefronts are superimposed. Diffraction occurs when a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to give an understanding of how the holographic process works. For those unfamiliar with these concepts, it is worthwhile to read those articles before reading further in this article. Plane wavefronts A diffraction grating is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave that is incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light. A simple hologram can be made by superimposing two plane waves from the same light source on a holographic recording medium. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves, and by the wavelength of the light. 
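The dependence of fringe spacing on the inter-beam angle and wavelength can be put in numbers. A small sketch, assuming two plane waves of the same wavelength meeting symmetrically at a full angle θ, for which the spacing is Λ = λ / (2 sin(θ/2)):

```python
import math

def fringe_spacing_nm(wavelength_nm: float, angle_deg: float) -> float:
    """Spacing of the interference fringes formed by two plane waves
    meeting at `angle_deg`, Lambda = lambda / (2 * sin(theta/2))."""
    return wavelength_nm / (2 * math.sin(math.radians(angle_deg) / 2))

# He-Ne light (633 nm) with the beams 30 degrees apart:
spacing = fringe_spacing_nm(633, 30)
print(round(spacing))           # fringe spacing in nm
print(round(1e6 / spacing))     # fringes (line pairs) per millimetre
```

At 30° and 633 nm this gives a spacing of about 1.2 µm, i.e. roughly 800 fringes per millimetre, which is why holographic recording media need far finer grain than ordinary photographic film.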
The recorded light pattern is a diffraction grating. When it is illuminated by only one of the waves used to create it, it can be shown that one of the diffracted waves emerges at the same angle at which the second wave was originally incident, so that the second wave has been 'reconstructed'. Thus, the recorded light pattern is a holographic recording as defined above. Point sources If the recording medium is illuminated with a point source and a normally incident plane wave, the resulting pattern is a sinusoidal zone plate, which acts as a negative Fresnel lens whose focal length is equal to the separation of the point source and the recording plane. When a plane wave-front illuminates a negative lens, it is expanded into a wave that appears to diverge from the focal point of the lens. Thus, when the recorded pattern is illuminated with the original plane wave, some of the light is diffracted into a diverging beam equivalent to the original spherical wave; a holographic recording of the point source has been created. When the plane wave is incident at a non-normal angle at the time of recording, the pattern formed is more complex, but still acts as a negative lens if it is illuminated at the original angle. Complex objects To record a hologram of a complex object, a laser beam is first split into two beams of light. One beam illuminates the object, which then scatters light onto the recording medium. According to diffraction theory, each point in the object acts as a point source of light so the recording medium can be considered to be illuminated by a set of point sources located at varying distances from the medium. The second (reference) beam illuminates the recording medium directly. Each point source wave interferes with the reference beam, giving rise to its own sinusoidal zone plate in the recording medium. The resulting pattern is the sum of all these 'zone plates', which combine to produce a random (speckle) pattern as in the photograph above. 
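The point-source recording described above can be sketched numerically: the fringes of the zone plate fall at radii r_n = √(n λ f), where f is the source-to-plate separation (which becomes the focal length on reconstruction). A minimal illustration, assuming a 633 nm beam and a 100 mm separation:

```python
import math

def zone_radius_um(n: int, wavelength_nm: float, focal_mm: float) -> float:
    """Radius of the n-th Fresnel zone, r_n = sqrt(n * lambda * f), in micrometres."""
    wavelength_m = wavelength_nm * 1e-9
    focal_m = focal_mm * 1e-3
    return math.sqrt(n * wavelength_m * focal_m) * 1e6

# Point source 100 mm from the recording plane, recorded at 633 nm:
for n in (1, 2, 3):
    print(n, round(zone_radius_um(n, 633, 100), 1))
```

The quadratic law means the fringes crowd together toward the edge of the plate, exactly the chirped structure that makes the recorded pattern focus (or defocus) light on reconstruction.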
When the hologram is illuminated by the original reference beam, each of the individual zone plates reconstructs the object wave that produced it, and these individual wavefronts are combined to reconstruct the whole of the object beam. The viewer perceives a wavefront that is identical with the wavefront scattered from the object onto the recording medium, so that it appears that the object is still in place even if it has been removed. Applications Art Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work. Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard themselves as both an artist and a scientist. Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition that was held at the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted national media attention. In Great Britain, Margaret Benyon began using holography as an artistic medium in the late 1960s and had a solo exhibition at the University of Nottingham art gallery in 1969. This was followed in 1970 by a solo show at the Lisson Gallery in London, which was billed as the "first London expo of holograms and stereoscopic paintings". During the 1970s, a number of art studios and schools were established, each with their particular approach to holography. Notably, there was the San Francisco School of Holography established by Lloyd Cross, The Museum of Holography in New York founded by Rosemary (Posy) H. Jackson, the Royal College of Art in London and the Lake Forest College Symposiums organised by Tung Jeong. 
None of these studios still exist; however, there is the Center for the Holographic Arts in New York and the HOLOcenter in Seoul, which offers artists a place to create and exhibit work. During the 1980s, many artists who worked with holography helped the diffusion of this so-called "new medium" in the art world, such as Harriet Casdin-Silver of the United States, Dieter Jung of Germany, and Moysés Baumstein of Brazil, each one searching for a proper "language" to use with the three-dimensional work, avoiding the simple holographic reproduction of a sculpture or object. For instance, in Brazil, many concrete poets (Augusto de Campos, Décio Pignatari, Julio Plaza and José Wagner Garcia, associated with Moysés Baumstein) found in holography a way to express themselves and to renew Concrete Poetry. A small but active group of artists still integrate holographic elements into their work. Some are associated with novel holographic techniques; for example, artist Matt Brand employed computational mirror design to eliminate image distortion from specular holography. The MIT Museum and Jonathan Ross both have extensive collections of holography and on-line catalogues of art holograms. Data storage Holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording media is used instead of just the surface. 
Currently available spatial light modulators (SLMs) can produce about 1000 different images a second at 1024×1024-bit resolution, which would result in a writing speed of about one gigabit per second. In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data to a potential 3.9 TB, a format called Holographic Versatile Disc. As of September 2014, no commercial product has been released. Another company, InPhase Technologies, was developing a competing format, but went bankrupt in 2011 and all its assets were sold to Akonia Holographics, LLC. While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical data storage solutions. While this approach to data storage cannot attain the high data rates of page-based storage, the tolerances, technological hurdles, and cost of producing a commercial product are significantly lower. Dynamic holography In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced. There also exist holographic materials that do not need the developing process and can record a hologram in a very short time. This allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition of time-varying images), and optical computing. The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image. This compensates for the fact that the recording time, which is of the order of a microsecond, is still very long compared to the processing time of an electronic computer. 
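The one-gigabit-per-second figure quoted for page-based storage follows directly from the SLM frame rate and the page size:

```python
# Back-of-envelope check of the page-based write rate quoted above:
pages_per_second = 1000
bits_per_page = 1024 * 1024          # one 1024x1024-bit page per SLM frame
rate_bits = pages_per_second * bits_per_page
print(rate_bits / 1e9, "Gbit/s")     # ~1.05 Gbit/s
```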
The optical processing performed by a dynamic hologram is also much less flexible than electronic processing. On the one hand, the operation must always be performed on the whole image; on the other, the operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, addition and Fourier transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a device that compares images in an optical way. The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials are photorefractive crystals, but holograms have also been generated in semiconductors, semiconductor heterostructures (such as quantum wells), atomic vapors and gases, plasmas, and even liquids. A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the phenomenon that gives rise to the twinkling of starlight). Hobbyist use Since the beginning of holography, many holographers have explored its uses and displayed them to the public. In 1971, Lloyd Cross opened the San Francisco School of Holography and taught amateurs how to make holograms using only a small (typically 5 mW) helium-neon laser and inexpensive home-made equipment. Holography had been supposed to require a very expensive metal optical table set-up to lock all the involved elements down in place and damp any vibrations that could blur the interference fringes and ruin the hologram. 
Cross's home-brew alternative was a sandbox made of a cinder block retaining wall on a plywood base, supported on stacks of old tires to isolate it from ground vibrations, and filled with sand that had been washed to remove dust. The laser was securely mounted atop the cinder block wall. The mirrors and simple lenses needed for directing, splitting and expanding the laser beam were affixed to short lengths of PVC pipe, which were stuck into the sand at the desired locations. The subject and the photographic plate holder were similarly supported within the sandbox. The holographer turned off the room light, blocked the laser beam near its source using a small relay-controlled shutter, loaded a plate into the holder in the dark, left the room, waited a few minutes to let everything settle, then made the exposure by remotely operating the laser shutter. In 1979, Jason Sapan opened the Holographic Studios in New York City. Since then, the studio has been involved in the production of many holograms for artists as well as companies. Sapan has been described as the "last professional holographer of New York". Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher, a co-founder of the San Francisco School of Holography and a well-known holographic artist, published the Holography Handbook, an easy-to-read guide to making holograms at home. This brought in a new wave of holographers and provided simple methods for using the then-available AGFA silver halide recording materials. In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to countless hobbyists. 
For many years, it had been assumed that certain characteristics of semiconductor laser diodes made them virtually useless for creating holograms, but when they were eventually put to the test of practical experiment, it was found that not only was this untrue, but that some actually provided a coherence length much greater than that of traditional helium-neon gas lasers. This was a very important development for amateurs, as the price of red laser diodes had dropped from hundreds of dollars in the early 1980s to about $5 after they entered the mass market as a component pulled from CD or, later, DVD players from the mid-1980s onwards. Now, there are thousands of amateur holographers worldwide. By late 2000, holography kits with inexpensive laser pointer diodes entered the mainstream consumer market. These kits enabled students, teachers, and hobbyists to make several kinds of holograms without specialized equipment, and became popular gift items by 2005. The introduction of holography kits with self-developing plates in 2003 made it possible for hobbyists to create holograms without the bother of wet chemical processing. In 2006, a large number of surplus holography-quality green lasers (Coherent C315) became available and put dichromated gelatin (DCG) holography within the reach of the amateur holographer. The holography community was surprised by the unexpected sensitivity of DCG to green light; it had been assumed that this sensitivity would be uselessly slight or non-existent. Jeff Blyth responded with the G307 formulation of DCG to increase the speed and sensitivity to these new lasers. Kodak and Agfa, the former major suppliers of holography-quality silver halide plates and films, are no longer in the market. While other manufacturers have helped fill the void, many amateurs are now making their own materials. The favorite formulations are dichromated gelatin, methylene-blue-sensitised dichromated gelatin, and diffusion-method silver halide preparations. 
Jeff Blyth has published very accurate methods for making these in a small lab or garage. A small group of amateurs are even constructing their own pulsed lasers to make holograms of living subjects and other unsteady or moving objects. Holographic interferometry Holographic interferometry (HI) is a technique that enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). It can also be used to detect optical-path-length variations in transparent media, which enables, for example, fluid flow to be visualized and analyzed. It can also be used to generate contours representing the form of the surface or the isodose regions in radiation dosimetry. It has been widely used to measure stress, strain, and vibration in engineering structures. Interferometric microscopy The hologram keeps the information on the amplitude and phase of the field. Several holograms may keep information about the same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a large numerical aperture, which, in turn, enables enhancement of the resolution of optical microscopy. The corresponding technique is called interferometric microscopy. Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution. Sensors or biosensors The hologram is made with a modified material that interacts with certain molecules, generating a change in the fringe periodicity or refractive index, and therefore in the color of the holographic reflection. Security Holograms are commonly used for security, as they are replicated from a master hologram that requires expensive, specialized and technologically advanced equipment, and are thus difficult to forge. 
They are used widely in many currencies, such as the Brazilian 20, 50, and 100-reais notes; British 5, 10, 20 and 50-pound notes; South Korean 5000, 10,000, and 50,000-won notes; Japanese 5000 and 10,000-yen notes; Indian 50, 100, 500, and 2000-rupee notes; and all the currently-circulating banknotes of the Canadian dollar, Croatian kuna, Danish krone, and euro. They can also be found in credit and bank cards as well as passports, ID cards, books, food packaging, DVDs, and sports equipment. Such holograms come in a variety of forms, from adhesive strips that are laminated on packaging for fast-moving consumer goods to holographic tags on electronic products. They often contain textual or pictorial elements to protect identities and separate genuine articles from counterfeits. Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given volumes, such as a truck or pallet for bulk shipment of goods. Holograms produced in elastomers can be used as stress-strain reporters due to their elasticity and compressibility; the applied pressure and force are correlated with the reflected wavelength, and therefore with the hologram's color. Holographic techniques can also be used effectively for radiation dosimetry. High security registration plates High-security holograms can be used on license plates for vehicles such as cars and motorcycles. As of April 2019, holographic license plates are required on vehicles in parts of India to aid in identification and security, especially in cases of car theft. Such number plates hold electronic data about the vehicle, and have a unique ID number and a sticker to indicate authenticity. Holography using other types of waves In principle, it is possible to make a hologram for any wave. 
Electron holography is the application of holography techniques to electron waves rather than light waves. Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope. Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample. The principle of electron holography can also be applied to interference lithography. Acoustic holography enables sound maps of an object to be generated. Measurements of the acoustic field are made at many points close to the object. These measurements are digitally processed to produce the "images" of the object. Atomic holography has evolved out of the development of the basic elements of atom optics. With the Fresnel diffraction lens and atomic mirrors, atomic holography follows as a natural step in the development of the physics (and applications) of atomic beams. Recent developments, including atomic mirrors and especially ridged mirrors, have provided the tools necessary for the creation of atomic holograms, although such holograms have not yet been commercialized. Neutron beam holography has been used to see the inside of solid objects. Holograms with x-rays are generated by using synchrotrons or x-ray free-electron lasers as radiation sources and pixelated detectors such as CCDs as the recording medium. The reconstruction is then retrieved via computation. Due to the shorter wavelength of x-rays compared to visible light, this approach allows imaging of objects with higher spatial resolution. As free-electron lasers can provide intense and coherent ultrashort x-ray pulses in the femtosecond range, x-ray holography has been used to capture ultrafast dynamic processes. 
False holograms There are many optical effects that are commonly mistaken for holography, such as the effects produced by lenticular printing, the Pepper's ghost illusion (or modern variants such as the Musion Eyeliner), tomography and volumetric displays. Such illusions have been called "fauxlography". The Pepper's ghost technique, being the easiest of these methods to implement, is most prevalent in 3D displays that claim to be (or are referred to as) "holographic". While the original illusion, used in theater, involved actual physical objects and persons located offstage, modern variants replace the source object with a digital screen, which displays imagery generated with 3D computer graphics to provide the necessary depth cues. The reflection, which seems to float in mid-air, is still flat, however, and thus less realistic than if an actual 3D object were being reflected. Examples of this digital version of the Pepper's ghost illusion include the Gorillaz performances at the 2005 MTV Europe Music Awards and the 48th Grammy Awards, and Tupac Shakur's virtual performance at the Coachella Valley Music and Arts Festival in 2012, rapping alongside Snoop Dogg during his set with Dr. Dre. Digital avatars of the Swedish supergroup ABBA were displayed on stage in May 2022. The ABBA performance used technology that was an updated version of Pepper's ghost created by Industrial Light & Magic. American rock group KISS unveiled similar digital avatars in December 2023 to tour in their place at the conclusion of the End of the Road World Tour, using the same Pepper's ghost technology as the ABBA avatars. An even simpler illusion can be created by rear-projecting realistic images onto semi-transparent screens. The rear projection is necessary because otherwise the semi-transparency of the screen would allow the background to be illuminated by the projection, which would break the illusion. 
Crypton Future Media, a music software company that produced Hatsune Miku, one of many Vocaloid singing synthesizer applications, has produced concerts that have Miku, along with other Crypton Vocaloids, performing on stage as "holographic" characters. These concerts use rear projection onto a semi-transparent DILAD screen to achieve their "holographic" effect. In 2011, in Beijing, apparel company Burberry produced the "Burberry Prorsum Autumn/Winter 2011 Hologram Runway Show", which included life-size 2-D projections of models. The company's own video shows several centered and off-center shots of the main 2-dimensional projection screen, the latter revealing the flatness of the virtual models. The claim that holography was used was reported as fact in the trade media. In Madrid, on 10 April 2015, a public visual presentation called "Hologramas por la Libertad" (Holograms for Liberty), featuring a ghostly virtual crowd of demonstrators, was used to protest a new Spanish law that prohibits citizens from demonstrating in public places. Although widely called a "hologram protest" in news reports, no actual holography was involved; it was yet another technologically updated variant of the Pepper's ghost illusion. Holography is distinct from specular holography, which is a technique for making three-dimensional images by controlling the motion of specularities on a two-dimensional surface. It works by reflectively or refractively manipulating bundles of light rays, not by using interference and diffraction. Tactile holograms In fiction Holography has been widely referred to in movies, novels, and TV, usually in science fiction, starting in the late 1970s. Science fiction writers absorbed the urban legends surrounding holography that had been spread by overly-enthusiastic scientists and entrepreneurs trying to market the idea. 
This had the effect of giving the public overly high expectations of the capability of holography, due to the unrealistic depictions of it in most fiction, where holograms are fully three-dimensional computer projections that are sometimes tactile through the use of force fields. Examples of this type of depiction include the hologram of Princess Leia in Star Wars, Arnold Rimmer from Red Dwarf, who was later converted to "hard light" to make him solid, and the Holodeck and Emergency Medical Hologram from Star Trek. Holography has served as an inspiration for many video games with science fiction elements. In many titles, fictional holographic technology has been used to reflect real-life misrepresentations of potential military use of holograms, such as the "mirage tanks" in Command & Conquer: Red Alert 2 that can disguise themselves as trees. Player characters are able to use holographic decoys in games such as Halo: Reach and Crysis 2 to confuse and distract the enemy. StarCraft ghost agent Nova has access to a "holo decoy" as one of her three primary abilities in Heroes of the Storm. Fictional depictions of holograms have, however, inspired technological advances in other fields, such as augmented reality, that promise to fulfill the fictional depictions of holograms by other means.
https://en.wikipedia.org/wiki/Moggy
Moggy
A moggy is any cat which has not been intentionally bred. Unlike pedigree cats, which are bred to a standard, moggies lack a standardized appearance. In contexts where cats need to be registered, such as in veterinary practices or shelters, they are called a 'domestic short-hair' or 'domestic long-hair' depending on coat length. Although not as common as the aforementioned designations, 'domestic medium-hair' is sometimes also used. The vast majority of cats worldwide lack any pedigree ancestry. History Cat fancying is relatively new, with over 85% of cat breeds coming into existence since the 1930s. Demography In the US, domestic short-haired cats make up 95% of the cat population. In the UK, 89–92% of cats are of non-pedigree lineage. Domestic shorthair In the cat fancy, and among veterinarians and animal control agencies, domestic short-haired cats may be classified with organisation-specific terminology (often capitalized), such as Domestic Shorthair (DSH), House Cat, Shorthair (HCS), or Shorthair Household Pet. Such a pseudo-breed is used for registry as well as shelter/rescue classification purposes. While not bred as show cats, some domestic short-haired cats are actually pedigreed and entered into cat shows that have non-purebred "Household Pet" divisions. Show rules vary; Fédération Internationale Féline (FIFe) permits "any eye colour, all coat colours and patterns, any coat length or texture, and any length of tail" (basically, any cat). Others may be more restrictive; an example from the World Cat Federation: "All classic colours are permitted. Any amount of white is permitted. The colours chocolate and cinnamon, as well as their dilution (lilac and fawn) are not recognized in any combinations (bicolour, tricolour, tabby). The pointed pattern is also not recognized." Domestic short-haired cats are characterised by a wide range of colouring, and typically "revert to type" after a few generations, which means they come to express tabby coat patterns. 
This can be any colour or combination of colours. They also exhibit a wide range of physical characteristics, and as a result, domestic short-haired cats in different countries tend to look different in body shape and size, as they are working from differing gene pools. DSH cats in Asia tend to have a build similar to a "classic" Siamese or Tonkinese, while European and American varieties have a thicker, heavier build. Domestic longhair A domestic long-haired cat is a cat of mixed ancestry – thus not belonging to any particular recognized cat breed – possessing a coat of semi-long to long fur. Domestic long-haired cats should not be confused with the British Longhair, American Longhair, or other breeds with "Longhair" names, which are standardized breeds defined by various registries. Other generic terms are moggie in British English and alley cat in American English. Domestic long-haired cats are the third most common type of cat in the United States. In the cat fancy, and among veterinarians and animal control agencies, domestic long-haired cats may be classified with organisation-specific terminology (often capitalized), such as Domestic Longhair (DLH), House Cat, Longhair (HCL), or Semi-Longhair Household Pet. Such a pseudo-breed is used for registry and shelter/rescue classification purposes, distinguishing these cats from standardized breeds such as the Persian cat. While not bred as show cats, some mixed-breed cats are actually pedigreed and entered into cat shows that have non-purebred "Household Pet" divisions. Show rules vary; Fédération Internationale Féline (FIFe) permits "any eye colour, all coat colours and patterns, any coat length or texture, and any length of tail" (basically any healthy cat). Others may be more restrictive; an example from the World Cat Federation: "The colours chocolate and cinnamon, as well as their dilution (lilac and fawn) are not recognized in any combinations...[and] the pointed pattern is also not recognized". 
Domestic long-haireds come in all genetically possible cat colors, including tabby, tortoiseshell, bicolor, and smoke. Domestic long-haireds can have fur that is up to six inches long. They can also have a mane similar to a Maine Coon's, as well as toe tufts and ear tufts. Some long-haired cats are not able to maintain their own coat, which must be frequently groomed by a human or may be prone to matting. Because of their wide gene pool, domestic long-haireds are not predisposed to any genetically inherited problems. History Having apparently originated in Western Asia, domestic long-haired cats have been kept as pets around the world for several centuries. During the 16th century, the first long-haired cats were imported into Europe. In the mid-17th century, when the Great Plague of London decimated much of London's human population, the number of cats started to recover after centuries of persecution, as they were encouraged as protectors from flea-carrying rats. How the variant developed is still a matter of speculation. The long coat may have been the result of a recessive mutant gene. When a long-haired cat is mated to one with a short coat, only short-haired kittens can result; however, their offspring, when mated, can produce a proportion of long-coated kittens. Successive litters of early European long-haired cats produced more and more long-coated offspring, which were more likely to survive in the cooler European climates. By the year 1521, around the time they were first documented in Italy, the variety had become fixed after only a few generations. In the late 18th century, Peter Simon Pallas advanced the hypothesis that the manul (also known as Pallas's cat) might be the ancestor of the long-haired domestic cat. He had anecdotal evidence that, even though the male offspring would be sterile hybrids, the female offspring could reproduce with domestic cats and pass on a small proportion of the manul's genes. 
In 1907, zoologist Reginald Innes Pocock refuted this claim, citing his work on the skull differences between the manul and the Angoras or Persians of his time. This early hypothesis overlooked the potential for crossbreeding within the family Felidae. For example, the Savannah cat is a crossbreed between a domestic short-haired cat and a wild serval—both of which have different skulls and evolutionary lineage. Furthermore, hybrid females in the related genus Panthera, such as ligers and tigons, have successfully mated, producing tiligers and litigons. The first modern, formal breeds of long-haired cats were the Persian and the Angora (named after Ankara, Turkey) and were said to have come from those two areas.
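The recessive inheritance pattern described above (only short-haired kittens from a longhair and a purebred shorthair, with long coats reappearing when those offspring are mated) can be sketched as a small Punnett-square calculation. The allele symbols are illustrative, not standard feline-genetics nomenclature:

```python
from itertools import product

def offspring(parent_a, parent_b):
    """All equally likely allele pairings from two parents' genotypes."""
    return [tuple(sorted(pair)) for pair in product(parent_a, parent_b)]

# "L" = dominant short-hair allele, "l" = recessive long-hair allele.
# A cat is long-haired only with genotype ll.

# Longhair (ll) x purebred shorthair (LL): every kitten is Ll, so short-haired.
first_cross = offspring("ll", "LL")
assert all(genotype == ("L", "l") for genotype in first_cross)

# Mating two Ll carriers: on average 1 in 4 kittens is ll, hence long-haired.
second_cross = offspring("Ll", "Ll")
longhaired = second_cross.count(("l", "l"))
print(f"{longhaired} of {len(second_cross)} combinations give a long coat")
```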
https://en.wikipedia.org/wiki/Bronchodilator
Bronchodilator
A bronchodilator or broncholytic (although the latter occasionally includes secretory inhibition as well) is a substance that dilates the bronchi and bronchioles, decreasing resistance in the respiratory airway and increasing airflow to the lungs. Bronchodilators may originate naturally within the body, or they may be medications administered for the treatment of breathing difficulties, usually in the form of inhalers. They are most useful in obstructive lung diseases, of which asthma and chronic obstructive pulmonary disease are the most common conditions. They may be useful in bronchiolitis and bronchiectasis, although this remains somewhat controversial. They are often prescribed but of unproven significance in restrictive lung diseases. Bronchodilators are either short-acting or long-acting. Short-acting medications provide quick or "rescue" relief from acute bronchoconstriction. Long-acting bronchodilators help to control and prevent symptoms. The three types of prescription bronchodilating drugs are beta-2 adrenergic agonists (short- and long-acting), anticholinergics (short- and long-acting), and theophylline (long-acting). Short-acting β2-adrenergic agonists These are quick-relief or "rescue" medications that provide quick, temporary relief from asthma symptoms or flare-ups. These medications usually take effect within 20 minutes or less, and can last from four to six hours. These inhaled medications are best for treating sudden and severe or new asthma symptoms. Taken 15 to 20 minutes ahead of time, these medications can also prevent asthma symptoms triggered by exercise or exposure to cold air. Some short-acting β-agonists, such as salbutamol, are specific to the lungs; they are called β2-adrenergic agonists and can relieve bronchospasms without the unwanted cardiac side effects of nonspecific β-agonists (for example, ephedrine or epinephrine). 
Patients who regularly or frequently need to take a short-acting β2-adrenergic agonist should consult their doctor, as such usage indicates uncontrolled asthma, and their routine medications may need adjustment. Long-acting β2-adrenergic agonists These are long-term medications taken routinely in order to control and prevent bronchoconstriction. They are not intended for fast relief. These medications may take longer to begin working, but relieve airway constriction for up to 12 hours. Commonly taken twice a day with an anti-inflammatory medication, they maintain open airways and prevent asthma symptoms, particularly at night. Salmeterol and formoterol are examples of these. Anticholinergics Some examples of anticholinergics are tiotropium (Spiriva) and ipratropium bromide. Tiotropium is a long-acting, 24-hour, anticholinergic bronchodilator used in the management of chronic obstructive pulmonary disease (COPD). Only available as an inhalant, ipratropium bromide is used in the treatment of asthma and COPD. As a short-acting anticholinergic, it improves lung function and reduces the risk of exacerbation in people with symptomatic asthma. However, it will not stop an asthma attack already in progress. Because it has no effect on asthma symptoms when used alone, it is most often paired with a short-acting β2-adrenergic agonist. While it is considered a relief or rescue medication, it can take a full hour to begin working. For this reason, it plays a secondary role in acute asthma treatment. Dry throat is the most common side effect. If the medication gets in contact with the eyes, it may cause blurred vision for a brief time. The use of anticholinergics in combination with short-acting β2-adrenergic agonists has been shown to reduce hospital admissions in children and adults with acute asthma exacerbations. Other Available in oral and injectable form, theophylline is a long-acting bronchodilator that prevents asthma episodes. 
It belongs to the chemical class of methylxanthines (along with caffeine). It is prescribed in severe cases of asthma or those that are difficult to control. It must be taken 1–4 times daily, and doses cannot be missed. Blood tests are required to monitor therapy and to indicate when dosage adjustment is necessary. Side effects can include nausea, vomiting, diarrhea, stomach ache or headache, rapid or irregular heart beat, muscle cramps, nervous or jittery feelings, and hyperactivity. These symptoms may signal the need for an adjustment in medication. It may promote acid reflux, also known as GERD, by relaxing the lower esophageal sphincter muscle. Some medications, such as seizure and ulcer medications and antibiotics containing erythromycin, can interfere with the way theophylline works. Coffee, tea, colas, cigarette-smoking, and viral illnesses can all affect the action of theophylline and change its effectiveness. A physician should monitor dosage levels to meet each patient's profile and needs. Additionally, some psychostimulant drugs that have an amphetamine-like mode of action, such as amphetamine, methamphetamine, and cocaine, have bronchodilating effects and were often used for asthma due to the lack of effective β2-adrenergic agonists for use as bronchodilators, but are now rarely, if ever, used medically for their bronchodilatory effects. Gaseous carbon dioxide also relaxes airway musculature: hypocapnia caused by deliberate hyperventilation increases respiratory resistance, while hypercapnia induced by carbon dioxide inhalation reduces it; however, this bronchodilating effect of carbon dioxide inhalation only lasts 4 to 5 minutes. Nonetheless, this observation has inspired the development of S-1226, carbon dioxide-enriched air formulated with nebulized perflubron. Common bronchodilators The bronchodilators are divided into short- and long-acting groups. 
Short-acting bronchodilators are used for relief of bronchoconstriction, while long-acting bronchodilators are predominantly used for prevention. Short-acting bronchodilators include salbutamol/albuterol (Proventil or Ventolin), levosalbutamol/levalbuterol (Xopenex), pirbuterol (Maxair), epinephrine (Primatene Mist), racemic epinephrine (Asthmanefrin, Primatene Mist Replacement), ephedrine (Bronkaid), and terbutaline. Long-acting bronchodilators include salmeterol (Serevent or Seretide), clenbuterol (Spiropent), formoterol, bambuterol, and indacaterol.
https://en.wikipedia.org/wiki/Stimulant
Stimulant
Stimulants (also known as central nervous system stimulants, or psychostimulants, or colloquially as uppers) are a class of drugs that increase alertness. They are used for various purposes, such as enhancing attention, motivation, cognition, mood, and physical performance. Some stimulants occur naturally, while others are exclusively synthetic. Common stimulants include caffeine, nicotine, amphetamines, cocaine, methylphenidate, and modafinil. Stimulants may be subject to varying forms of regulation, or outright prohibition, depending on jurisdiction. Stimulants increase activity in the sympathetic nervous system, either directly or indirectly. Prototypical stimulants increase synaptic concentrations of excitatory neurotransmitters, particularly norepinephrine and dopamine (e.g., methylphenidate). Other stimulants work by binding to the receptors of excitatory neurotransmitters (e.g., nicotine) or by blocking the activity of endogenous agents that promote sleep (e.g., caffeine). Stimulants can affect various functions, including arousal, attention, the reward system, learning, memory, and emotion. Effects range from mild stimulation to euphoria, depending on the specific drug, dose, route of administration, and inter-individual characteristics. Stimulants have a long history of use, both for medical and non-medical purposes. Archeological evidence from Peru shows that cocaine use dates back as far as 8000 B.C.E. Stimulants have been used to treat various conditions, such as narcolepsy, attention deficit hyperactivity disorder (ADHD), obesity, depression, and fatigue. They have also been used as recreational drugs, performance-enhancing substances, and cognitive enhancers, by various groups of people, such as students, athletes, artists, and workers. They have also been used to promote aggression of combatants in wartime, both historically and in the present day. 
Stimulants have potential risks and side effects, such as addiction, tolerance, withdrawal, psychosis, anxiety, insomnia, cardiovascular problems, and neurotoxicity. The misuse and abuse of stimulants can lead to serious health and social consequences, such as overdose, dependence, crime, and violence. Therefore, the use of stimulants is regulated by laws and policies in most countries, and requires medical supervision and prescription in some cases. Definition "Stimulant" is an overarching term that covers many drugs, including those that increase the activity of the central nervous system and the body, drugs that are pleasurable and invigorating, and drugs that have sympathomimetic effects. Sympathomimetic effects are those effects that mimic or copy the actions of the sympathetic nervous system. The sympathetic nervous system is a part of the nervous system that prepares the body for action, such as increasing the heart rate, blood pressure, and breathing rate. Stimulants can activate the same receptors as the natural chemicals released by the sympathetic nervous system (namely epinephrine and norepinephrine) and cause similar effects. Effects Acute Stimulants in therapeutic doses, such as those given to patients with attention deficit hyperactivity disorder (ADHD), increase the ability to focus, vigor, sociability, and libido, and may elevate mood. However, in higher doses, stimulants may actually decrease the ability to focus, a principle of the Yerkes-Dodson law. In higher doses, stimulants may also produce euphoria, vigor, and a decreased need for sleep. Many, but not all, stimulants have ergogenic effects; that is, they enhance physical performance. Drugs such as ephedrine, pseudoephedrine, amphetamine and methylphenidate have well-documented ergogenic effects, while cocaine has the opposite effect. 
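The Yerkes-Dodson relationship mentioned above can be illustrated with a toy model. The quadratic inverted-U below is a deliberately simplified assumption for illustration only; the law itself is qualitative and the numbers carry no pharmacological meaning:

```python
# Toy inverted-U model of the Yerkes-Dodson law: performance rises with
# arousal up to an optimum, then falls. Purely illustrative numbers.

def performance(arousal, optimum=5.0, peak=100.0):
    """Quadratic inverted-U: peak performance at the optimum arousal level."""
    return peak - (arousal - optimum) ** 2

low, moderate, high = performance(1.0), performance(5.0), performance(9.0)
assert moderate > low and moderate > high   # the inverted U
print(low, moderate, high)                  # prints: 84.0 100.0 84.0
```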
Neurocognitive enhancing effects of stimulants, specifically modafinil, amphetamine and methylphenidate, have been reported in healthy adolescents by some studies, and cognitive enhancement is a commonly cited reason for use among illicit drug users, particularly among college students in the context of studying. Still, the results of these studies are inconclusive: assessing the potential overall neurocognitive benefits of stimulants among healthy youth is challenging due to the diversity within the population, the variability in cognitive task characteristics, and the absence of replication of studies. Research on the cognitive enhancement effects of modafinil in healthy non-sleep-deprived individuals has yielded mixed results, with some studies suggesting modest improvements in attention and executive functions while others show no significant benefits or even a decline in cognitive functions. In some cases, psychiatric phenomena may emerge, such as stimulant psychosis, paranoia, and suicidal ideation. Acute toxicity has been reportedly associated with hyperhidrosis, panic attacks, severe anxiety, mydriasis, paranoia, aggressive behavior, excessive motor activity, psychosis, rhabdomyolysis, and punding. The violent and aggressive behavior associated with acute stimulant toxicity may partially be driven by paranoia. Most drugs classified as stimulants are sympathomimetic, meaning that they stimulate the sympathetic branch of the autonomic nervous system. This leads to effects such as mydriasis (dilation of the pupils) and increased heart rate, blood pressure, respiratory rate and body temperature. When these changes become pathological, they are called arrhythmia, hypertension, and hyperthermia, and may lead to rhabdomyolysis, stroke, cardiac arrest, or seizures. However, given the complexity of the mechanisms that underlie these potentially fatal outcomes of acute stimulant toxicity, it is impossible to determine what dose may be lethal.
Chronic

Assessment of the effects of stimulants is relevant given the large population currently taking stimulants. A systematic review of the cardiovascular effects of prescription stimulants found no association in children, but found a correlation between prescription stimulant use and ischemic heart attacks. A review over a four-year period found that there were few negative effects of stimulant treatment, but stressed the need for longer-term studies. A review of a year-long period of prescription stimulant use in those with ADHD found that cardiovascular side effects were limited to transient increases in blood pressure. A 2024 systematic review of the evidence found that stimulants overall improve ADHD symptoms and broadband behavioral measures in children and adolescents, though they carry risks of side effects like appetite suppression and other adverse events. Initiation of stimulant treatment in those with ADHD in early childhood appears to carry benefits into adulthood with regard to social and cognitive functioning, and appears to be relatively safe. Abuse of prescription stimulants (not following physician instruction) or of illicit stimulants carries many negative health risks. Abuse of cocaine, depending upon route of administration, increases the risk of cardiorespiratory disease, stroke, and sepsis. Some effects are dependent upon the route of administration, with intravenous use associated with the transmission of many diseases such as hepatitis C and HIV/AIDS and potential medical emergencies such as infection, thrombosis or pseudoaneurysm, while inhalation may be associated with increased lower respiratory tract infection, lung cancer, and pathological restriction of lung tissue. Cocaine may also increase the risk of autoimmune disease and damage nasal cartilage. Abuse of methamphetamine produces similar effects, as well as marked degeneration of dopaminergic neurons, resulting in an increased risk for Parkinson's disease.
Medical uses

Stimulants are widely used throughout the world as prescription medicines, as well as without a prescription (either legally or illicitly) as performance-enhancing or recreational drugs. Stimulants produce a noticeable crash or comedown at the end of their effects. In the US, the most frequently prescribed stimulants as of 2013 were lisdexamfetamine (Vyvanse), methylphenidate (Ritalin), and amphetamine (Adderall). It was estimated in 2015 that 0.4% of the world population had used cocaine during that year. For the category "amphetamines and prescription stimulants" (with "amphetamines" including amphetamine and methamphetamine) the value was 0.7%, and for MDMA 0.4%. Stimulants have been used in medicine for many conditions, including obesity, sleep disorders, mood disorders, impulse control disorders, asthma and nasal congestion, and, in the case of cocaine, as local anesthetics. Drugs used to treat obesity are called anorectics and generally include drugs that follow the general definition of a stimulant, but other drugs such as cannabinoid receptor antagonists also belong to this group. Eugeroics are used in the management of sleep disorders characterized by excessive daytime sleepiness, such as narcolepsy, and include stimulants such as modafinil and pitolisant. Stimulants are used in impulse control disorders such as ADHD and off-label in mood disorders such as major depressive disorder to increase energy and focus and to elevate mood. Stimulants such as epinephrine, theophylline and oral salbutamol have been used to treat asthma, but inhaled adrenergic drugs are now preferred due to fewer systemic side effects. Pseudoephedrine is used to relieve nasal or sinus congestion caused by the common cold, sinusitis, hay fever and other respiratory allergies; it is also used to relieve ear congestion caused by ear inflammation or infection.
Depression

Stimulants were one of the first classes of drugs to be used in the treatment of depression, beginning after the introduction of the amphetamines in the 1930s. However, they were largely abandoned for the treatment of depression following the introduction of conventional antidepressants in the 1950s. More recently, there has been a resurgence of interest in stimulants for depression. Stimulants produce a fast-acting and pronounced but short-lived mood lift. Relatedly, they are minimally effective in the treatment of depression when administered continuously. In addition, tolerance to the mood-lifting effects of amphetamine has led to dose escalation and dependence. Although the efficacy for depression with continuous administration is modest, it may still reach statistical significance over placebo and provide benefits similar in magnitude to those of conventional antidepressants. The reasons for the short-term mood-improving effects of stimulants are unclear, but may relate to rapid tolerance. Tolerance to the effects of stimulants has been studied and characterized both in animals and in humans. Stimulant withdrawal is remarkably similar in its symptoms to major depressive disorder.

Chemistry

Classifying stimulants is difficult because of the large number of classes the drugs occupy and the fact that they may belong to multiple classes; for example, ecstasy can be classified as a substituted methylenedioxyphenethylamine, a substituted amphetamine and, consequently, a substituted phenethylamine. Major stimulant classes include the phenethylamines and their daughter class, the substituted amphetamines.

Amphetamines (class)

Substituted amphetamines are a class of compounds based upon the amphetamine structure; it includes all derivative compounds which are formed by replacing, or substituting, one or more hydrogen atoms in the amphetamine core structure with substituents.
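The overlapping class membership described above (ecstasy being at once a substituted methylenedioxyphenethylamine, a substituted amphetamine, and a substituted phenethylamine) can be sketched as a walk up a small parent-class graph. The sketch below encodes only the relationships stated in the text; the data-structure choice is illustrative:

```python
# Each chemical class maps to its immediate parent class (None = top level).
# Only the relationships mentioned in the text are encoded here.
PARENT = {
    "substituted methylenedioxyphenethylamine": "substituted amphetamine",
    "substituted amphetamine": "substituted phenethylamine",
    "substituted phenethylamine": None,
}

def all_classes(cls):
    """Return the class and every ancestor class it implies."""
    classes = []
    while cls is not None:
        classes.append(cls)
        cls = PARENT[cls]
    return classes

# Ecstasy's most specific class implies membership in all three:
print(all_classes("substituted methylenedioxyphenethylamine"))
```

This makes the classification point concrete: naming the most specific class automatically places a drug in every broader class above it.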
Examples of substituted amphetamines are amphetamine (itself), methamphetamine, ephedrine, cathinone, phentermine, mephentermine, bupropion, methoxyphenamine, selegiline, amfepramone, pyrovalerone, MDMA (ecstasy), and DOM (STP). Many drugs in this class work primarily by activating trace amine-associated receptor 1 (TAAR1); in turn, this causes reuptake inhibition and efflux, or release, of dopamine, norepinephrine, and serotonin. An additional mechanism of some substituted amphetamines is the release of vesicular stores of monoamine neurotransmitters through VMAT2, thereby increasing the concentration of these neurotransmitters in the cytosol, or intracellular fluid, of the presynaptic neuron. Amphetamine-type stimulants are often used for their therapeutic effects. Physicians sometimes prescribe amphetamine to treat major depression where subjects do not respond well to traditional SSRI medications, but the evidence supporting this use is poor or mixed. Notably, two recent large phase III studies of lisdexamfetamine (a prodrug to amphetamine) as an adjunct to an SSRI or SNRI in the treatment of major depressive disorder showed no further benefit relative to placebo in effectiveness. Numerous studies have demonstrated the effectiveness of drugs such as Adderall (a mixture of salts of amphetamine and dextroamphetamine) in controlling symptoms associated with ADHD. Due to their availability and fast-acting effects, substituted amphetamines are prime candidates for abuse.

Cocaine analogs

Hundreds of cocaine analogs have been created, all of them usually maintaining a benzoyloxy group connected to the 3-carbon of a tropane. Various modifications include substitutions on the benzene ring, as well as additions or substitutions in place of the normal carboxylate on the tropane 2-carbon. Various compounds with similar structure–activity relationships to cocaine that are not technically analogs have been developed as well.
Mechanisms of action

Most stimulants exert their activating effects by enhancing catecholamine neurotransmission. Catecholamine neurotransmitters are employed in regulatory pathways implicated in attention, arousal, motivation, task salience and reward anticipation. Classical stimulants either block the reuptake or stimulate the efflux of these catecholamines, resulting in increased activity of their circuits. Some stimulants, specifically those with empathogenic and hallucinogenic effects, also affect serotonergic transmission. Some stimulants, such as some amphetamine derivatives and, notably, yohimbine, can decrease negative feedback by antagonizing regulatory autoreceptors. Adrenergic agonists, such as, in part, ephedrine, act by directly binding to and activating adrenergic receptors, producing sympathomimetic effects. There are also more indirect mechanisms of action by which a drug can elicit activating effects. Caffeine is an adenosine receptor antagonist, and only indirectly increases catecholamine transmission in the brain. Pitolisant is a histamine H3 receptor inverse agonist. As H3 receptors mainly act as autoreceptors, pitolisant decreases negative feedback to histaminergic neurons, enhancing histaminergic transmission. The precise mechanism of action of some stimulants, such as modafinil, for treating symptoms of narcolepsy and other sleep disorders remains unknown.

Notable stimulants

Amphetamine

Amphetamine is a potent central nervous system (CNS) stimulant of the phenethylamine class that is approved for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. Amphetamine is also used off-label as a performance and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. Although it is a prescription medication in many countries, unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with uncontrolled or heavy use.
As a consequence, amphetamine is illegally manufactured in clandestine labs to be trafficked and sold to users. Based upon drug and drug precursor seizures worldwide, illicit amphetamine production and trafficking are much less prevalent than those of methamphetamine. The first pharmaceutical amphetamine was Benzedrine, a brand of inhalers used to treat a variety of conditions. Because the dextrorotatory isomer has greater stimulant properties, Benzedrine was gradually discontinued in favor of formulations containing all or mostly dextroamphetamine. Presently, it is typically prescribed as mixed amphetamine salts, dextroamphetamine, or lisdexamfetamine. Amphetamine is a norepinephrine–dopamine releasing agent (NDRA). It enters neurons through dopamine and norepinephrine transporters and facilitates neurotransmitter efflux by activating TAAR1 and inhibiting VMAT2. At therapeutic doses, this causes emotional and cognitive effects such as euphoria, change in libido, increased arousal, and improved cognitive control. Likewise, it induces physical effects such as decreased reaction time, fatigue resistance, and increased muscle strength. In contrast, supratherapeutic doses of amphetamine are likely to impair cognitive function and induce rapid muscle breakdown. Very high doses can result in psychosis (e.g., delusions and paranoia), which very rarely occurs at therapeutic doses even during long-term use. As recreational doses are generally much larger than prescribed therapeutic doses, recreational use carries a far greater risk of serious side effects, such as dependence, which only rarely arises with therapeutic amphetamine use.

Caffeine

Caffeine is a stimulant compound belonging to the xanthine class of chemicals, naturally found in coffee, tea, and (to a lesser degree) cocoa and chocolate. It is included in many soft drinks, and in larger amounts in energy drinks. Caffeine is the world's most widely used psychoactive drug and by far the most common stimulant.
In North America, 90% of adults consume caffeine daily. A few jurisdictions restrict the sale and use of caffeine. In the United States, the FDA has banned the sale of pure and highly concentrated caffeine products for personal consumption, due to the risk of overdose and death. The Australian Government has announced a ban on the sale of pure and highly concentrated caffeine food products for personal consumption, following the death of a young man from acute caffeine toxicity. In Canada, Health Canada has proposed to limit the amount of caffeine in energy drinks to 180 mg per serving, and to require warning labels and other safety measures on these products. Caffeine is also included in some medications, usually for the purpose of enhancing the effect of the primary ingredient or reducing one of its side effects (especially drowsiness). Tablets containing standardized doses of caffeine are also widely available. Caffeine's mechanism of action differs from that of many stimulants, as it produces stimulant effects by inhibiting adenosine receptors. Adenosine receptors are thought to be a large driver of drowsiness and sleep, and their action increases with extended wakefulness. Caffeine has been found to increase striatal dopamine in animal models, as well as to inhibit the inhibitory effect of adenosine receptors on dopamine receptors; however, the implications for humans are unknown. Unlike most stimulants, caffeine has no addictive potential. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur, per a study on drug abuse liability published in an NIDA research monograph that described a group preferring placebo over caffeine. In large telephone surveys, only 11% reported dependence symptoms. However, when people were tested in labs, only half of those who claimed dependence actually experienced it, casting doubt on caffeine's ability to produce dependence and putting societal pressures in the spotlight.
Coffee consumption is associated with a lower overall risk of cancer. This is primarily due to a decrease in the risks of hepatocellular and endometrial cancer, but it may also have a modest effect on colorectal cancer. There does not appear to be a significant protective effect against other types of cancers, and heavy coffee consumption may increase the risk of bladder cancer. A protective effect of caffeine against Alzheimer's disease is possible, but the evidence is inconclusive. Moderate coffee consumption may decrease the risk of cardiovascular disease, and it may somewhat reduce the risk of type 2 diabetes. Drinking 1–3 cups of coffee per day does not affect the risk of hypertension compared to drinking little or no coffee; however, those who drink 2–4 cups per day may be at a slightly increased risk. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. It may protect people from liver cirrhosis. There is no evidence that coffee stunts a child's growth. Caffeine may increase the effectiveness of some medications, including ones used to treat headaches. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude.

Ephedrine

Ephedrine is a sympathomimetic amine similar in molecular structure to the well-known drugs phenylpropanolamine and methamphetamine, as well as to the important neurotransmitter epinephrine (adrenaline). Ephedrine is commonly used as a stimulant, appetite suppressant, concentration aid, and decongestant, and to treat hypotension associated with anesthesia. In chemical terms, it is an alkaloid with a phenethylamine skeleton found in various plants in the genus Ephedra (family Ephedraceae). It works mainly by increasing the activity of norepinephrine (noradrenaline) on adrenergic receptors. It is most usually marketed as the hydrochloride or sulfate salt.
The herb má huáng (Ephedra sinica), used in traditional Chinese medicine (TCM), contains ephedrine and pseudoephedrine as its principal active constituents. The same may be true of other herbal products containing extracts from other Ephedra species.

MDMA

3,4-Methylenedioxymethamphetamine (MDMA, ecstasy, or molly) is a euphoriant, empathogen, and stimulant of the amphetamine class. Briefly used by some psychotherapists as an adjunct to therapy, the drug became popular recreationally, and the DEA listed MDMA as a Schedule I controlled substance, prohibiting most medical studies and applications. MDMA is known for its entactogenic properties. The stimulant effects of MDMA include hypertension, anorexia (appetite loss), euphoria, social disinhibition, insomnia (enhanced wakefulness/inability to sleep), improved energy, increased arousal, and increased perspiration, among others. Compared with classical stimulants such as amphetamine, MDMA enhances serotonergic transmission significantly more relative to catecholaminergic transmission. MDMA does not appear to be significantly addictive or dependence forming. Due to the relative safety of MDMA, some researchers such as David Nutt have criticized its scheduling level, writing a satirical article that found MDMA to be 28 times less dangerous than horse riding, a hazard he termed "equasy" or "Equine Addiction Syndrome".

MDPV

Methylenedioxypyrovalerone (MDPV) is a psychoactive drug with stimulant properties that acts as a norepinephrine–dopamine reuptake inhibitor (NDRI). It was first developed in the 1960s by a team at Boehringer Ingelheim. MDPV remained an obscure stimulant until around 2004, when it was reported to be sold as a designer drug. Products labeled as bath salts containing MDPV were previously sold as recreational drugs in gas stations and convenience stores in the United States, similar to the marketing of Spice and K2 as incense. Incidents of psychological and physical harm have been attributed to MDPV use.
Mephedrone

Mephedrone is a synthetic stimulant drug of the amphetamine and cathinone classes. Slang names include drone and MCAT. It is reported to be manufactured in China and is chemically similar to the cathinone compounds found in the khat plant of eastern Africa. It comes in the form of tablets or a powder, which users can swallow, snort, or inject, producing effects similar to those of MDMA, amphetamines, and cocaine. Mephedrone was first synthesized in 1929, but did not become widely known until it was rediscovered in 2003. By 2007, mephedrone was reported to be available for sale on the Internet; by 2008, law enforcement agencies had become aware of the compound; and, by 2010, it had been reported in most of Europe, becoming particularly prevalent in the United Kingdom. Mephedrone was first made illegal in Israel in 2008, followed by Sweden later that year. In 2010, it was made illegal in many European countries, and, in December 2010, the EU ruled it illegal. In Australia, New Zealand, and the US, it is considered an analog of other illegal drugs and can be controlled by laws similar to the Federal Analog Act. In September 2011, the USA temporarily classified mephedrone as illegal, in effect from October 2011. Mephedrone has abuse potential and is neurotoxic, with its neurotoxicity predominantly exerted on 5-hydroxytryptamine (5-HT) terminals, mimicking that of MDMA, with which it shares similar subjective effects in users.

Methamphetamine

Methamphetamine (contracted from N-methylamphetamine) is a potent psychostimulant of the phenethylamine and amphetamine classes that is used to treat attention deficit hyperactivity disorder (ADHD) and obesity. Methamphetamine exists as two enantiomers, dextrorotatory and levorotatory. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine; however, both are addictive and produce the same toxicity symptoms at high doses.
Although rarely prescribed due to the potential risks, methamphetamine hydrochloride is approved by the United States Food and Drug Administration (USFDA) under the trade name Desoxyn. Recreationally, methamphetamine is used to increase sexual desire, lift the mood, and increase energy, allowing some users to engage in sexual activity continuously for several days. Methamphetamine may be sold illicitly, either as pure dextromethamphetamine or in an equal parts mixture of the right- and left-handed molecules (i.e., 50% levomethamphetamine and 50% dextromethamphetamine). Both dextromethamphetamine and racemic methamphetamine are Schedule II controlled substances in the United States. Also, the production, distribution, sale, and possession of methamphetamine are restricted or illegal in many other countries due to its placement in Schedule II of the United Nations Convention on Psychotropic Substances treaty. In contrast, levomethamphetamine is an over-the-counter drug in the United States. In low doses, methamphetamine can cause an elevated mood and increase alertness, concentration, and energy in fatigued individuals. At higher doses, it can induce psychosis, rhabdomyolysis, and cerebral hemorrhage. Methamphetamine is known to have a high potential for abuse and addiction. Recreational use of methamphetamine may result in psychosis or lead to post-withdrawal syndrome, a withdrawal syndrome that can persist for months beyond the typical withdrawal period. Unlike amphetamine and cocaine, methamphetamine is neurotoxic to humans, damaging both dopamine and serotonin neurons in the central nervous system (CNS).
Unlike the long-term use of amphetamine in prescription doses, which may improve certain brain regions in individuals with ADHD, there is evidence that methamphetamine causes brain damage from long-term use in humans; this damage includes adverse changes in brain structure and function, such as reductions in gray matter volume in several brain regions and adverse changes in markers of metabolic integrity. However, recreational amphetamine doses may also be neurotoxic.

Methylphenidate

Methylphenidate is a stimulant drug that is often used in the treatment of ADHD and narcolepsy, and occasionally to treat obesity in combination with dietary restrictions and exercise. Its effects at therapeutic doses include increased focus, increased alertness, decreased appetite, decreased need for sleep and decreased impulsivity. Methylphenidate is not usually used recreationally, but when it is, its effects are very similar to those of amphetamines. Methylphenidate acts as a norepinephrine–dopamine reuptake inhibitor (NDRI) by blocking the norepinephrine transporter (NET) and the dopamine transporter (DAT). Methylphenidate has a higher affinity for the dopamine transporter than for the norepinephrine transporter, so its effects are mainly due to elevated dopamine levels caused by the inhibited reuptake of dopamine; however, increased norepinephrine levels also contribute to many of the effects caused by the drug. Methylphenidate is sold under a number of brand names, including Ritalin. Other versions include the long-lasting tablet Concerta and the long-lasting transdermal patch Daytrana.

Cocaine

Cocaine is a serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI). Cocaine is made from the leaves of the coca shrub, which grows in the mountain regions of South American countries such as Bolivia, Colombia, and Peru, regions in which it was cultivated and used for centuries, mainly by the Aymara people. In Europe, North America, and some parts of Asia, the most common form of cocaine is a white crystalline powder.
Cocaine is a stimulant but is not normally prescribed therapeutically for its stimulant properties, although it sees clinical use as a local anesthetic, in particular in ophthalmology. Most cocaine use is recreational, and its abuse potential is high (higher than that of amphetamine), so its sale and possession are strictly controlled in most jurisdictions. Other tropane derivative drugs related to cocaine, such as troparil and lometopane, are also known but have not been widely sold or used recreationally.

Nicotine

Nicotine is the active chemical constituent in tobacco, which is available in many forms, including cigarettes, cigars, chewing tobacco, and smoking cessation aids such as nicotine patches, nicotine gum, and electronic cigarettes. Nicotine is used widely throughout the world for its stimulating and relaxing effects. Nicotine exerts its effects through the agonism of nicotinic acetylcholine receptors, resulting in multiple downstream effects such as increased activity of dopaminergic neurons in the midbrain reward system; in addition, acetaldehyde, a tobacco constituent, decreases the expression of monoamine oxidase in the brain. Nicotine is addictive and dependence forming. Tobacco, the most common source of nicotine, has an overall harm (to users and others) score 3 percent below that of cocaine and 13 percent above that of amphetamines, ranking it the 6th most harmful of the 20 drugs assessed, as determined by a multi-criteria decision analysis.

Phenylpropanolamine

Phenylpropanolamine (PPA; Accutrim; β-hydroxyamphetamine), also known by the names of its stereoisomers norephedrine and norpseudoephedrine, is a psychoactive drug of the phenethylamine and amphetamine chemical classes that is used as a stimulant, decongestant, and anorectic agent. It is commonly used in prescription and over-the-counter cough and cold preparations. In veterinary medicine, it is used to control urinary incontinence in dogs under the trade names Propalin and Proin.
In the United States, PPA is no longer sold without a prescription due to a possible increased risk of stroke in younger women. In a few countries in Europe, however, it is still available, either by prescription or sometimes over-the-counter. In Canada, it was withdrawn from the market on 31 May 2001. In India, human use of PPA and its formulations was banned on 10 February 2011.

Lisdexamfetamine

Lisdexamfetamine (Vyvanse, etc.) is an amphetamine-type medication sold for use in treating ADHD. Its effects typically last around 14 hours. Lisdexamfetamine is inactive on its own and is metabolized into dextroamphetamine in the body. Consequently, it has a lower abuse potential.

Pseudoephedrine

Pseudoephedrine is a sympathomimetic drug of the phenethylamine and amphetamine chemical classes. It may be used as a nasal/sinus decongestant, as a stimulant, or as a wakefulness-promoting agent. The salts pseudoephedrine hydrochloride and pseudoephedrine sulfate are found in many over-the-counter preparations, either as a single ingredient or (more commonly) in combination with antihistamines, guaifenesin, dextromethorphan, and/or paracetamol (acetaminophen) or another NSAID (such as aspirin or ibuprofen). It is also used as a precursor chemical in the illegal production of methamphetamine.

Catha edulis (khat)

Khat is a flowering plant native to the Horn of Africa and the Arabian Peninsula. Khat contains a monoamine alkaloid called cathinone, a "keto-amphetamine". This alkaloid causes excitement, loss of appetite, and euphoria. In 1980, the World Health Organization (WHO) classified it as a drug of abuse that can produce mild to moderate psychological dependence (less than tobacco or alcohol), although the WHO does not consider khat to be seriously addictive. It is banned in some countries, such as the United States, Canada, and Germany, while its production, sale, and consumption are legal in other countries, including Djibouti, Ethiopia, Somalia, Kenya and Yemen.
Modafinil

Modafinil is a eugeroic medication, which means that it promotes wakefulness and alertness. Modafinil is sold under the brand name Provigil, among others. Modafinil is used to treat excessive daytime sleepiness due to narcolepsy, shift work sleep disorder, or obstructive sleep apnea. While it has seen off-label use as a purported cognitive enhancer, the research on its effectiveness for this use is not conclusive. Despite being a CNS stimulant, the addiction and dependence liabilities of modafinil are considered very low. Although modafinil shares biochemical mechanisms with stimulant drugs, it is less likely to have mood-elevating properties. The similarities in effects with caffeine are not clearly established. Unlike other stimulants, modafinil does not induce a subjective feeling of pleasure or reward, which is commonly associated with euphoria, an intense feeling of well-being. Euphoria is a potential indicator of drug abuse, which is the compulsive and excessive use of a substance despite adverse consequences. In clinical trials, modafinil has shown no evidence of abuse potential, which is why it is considered to have a low risk of addiction and dependence; however, caution is advised.

Pitolisant

Pitolisant is an inverse agonist (antagonist) of the histamine 3 (H3) autoreceptor. As such, pitolisant is an antihistamine medication that also belongs to the class of CNS stimulants. Pitolisant is also considered a medication of the eugeroic class, which means that it promotes wakefulness and alertness. Pitolisant is the first wakefulness-promoting agent that acts by blocking the H3 autoreceptor. Pitolisant has been shown to be effective and well tolerated for the treatment of narcolepsy with or without cataplexy. Pitolisant is the only non-controlled anti-narcoleptic drug in the US. It has shown minimal abuse risk in studies. Blocking the histamine 3 (H3) autoreceptor increases the activity of histamine neurons in the brain.
The H3 autoreceptors regulate histaminergic activity in the central nervous system (and, to a lesser extent, the peripheral nervous system) by inhibiting histamine biosynthesis and release upon binding to endogenous histamine. By preventing the binding of endogenous histamine at the H3 receptor, as well as producing a response opposite to that of endogenous histamine at the receptor (inverse agonism), pitolisant enhances histaminergic activity in the brain.

Recreational use and issues of abuse

Stimulants enhance the activity of the central and peripheral nervous systems. Common effects may include increased alertness, awareness, wakefulness, endurance, productivity, motivation, arousal, locomotion, heart rate, and blood pressure, and a diminished desire for food and sleep. Use of stimulants may cause the body to significantly reduce its production of natural body chemicals that fulfill similar functions. Once the effect of the ingested stimulant has worn off, and until the body reestablishes its normal state, the user may feel depressed, lethargic, confused, and miserable. This is referred to as a "crash", and may provoke reuse of the stimulant. Abuse of central nervous system (CNS) stimulants is common. Addiction to some CNS stimulants can quickly lead to medical, psychiatric, and psychosocial deterioration. Drug tolerance, dependence, and sensitization, as well as a withdrawal syndrome, can occur. Stimulants may be screened for in animal discrimination and self-administration models, which have high sensitivity albeit low specificity. Research using a progressive-ratio self-administration protocol has found amphetamine, methylphenidate, modafinil, cocaine, and nicotine to all have a higher break point than placebo that scales with dose, indicating reinforcing effects. A progressive-ratio self-administration protocol is a way of testing how much an animal or a human wants a drug by making them do a certain action (like pressing a lever or poking a nose device) to get the drug.
The number of actions needed to get the drug increases every time, so it becomes harder and harder to obtain. The highest number of actions that the animal or human is willing to perform to get the drug is called the break point; the higher the break point, the more the subject wants the drug. In contrast to classical stimulants such as amphetamine, the effects of modafinil depend on what the animals or humans have to do after getting the drug. If they have to perform a task, like solving a puzzle or remembering something, modafinil makes them work harder for it than placebo, and subjects want to self-administer it. But if they have to do a relaxation task, like listening to music or watching a video, subjects do not want to self-administer modafinil. This suggests that modafinil is rewarding mainly when it helps the subject do something better or faster, which is consistent with modafinil not being commonly abused or depended on by people, unlike other stimulants.
Treatment for misuse
Psychosocial treatments, such as contingency management, have demonstrated improved effectiveness when added to treatment as usual consisting of counseling and/or case management. This is demonstrated by a decrease in dropout rates and a lengthening of periods of abstinence.
Testing
The presence of stimulants in the body may be tested by a variety of procedures. Serum and urine are the common sources of testing material, although saliva is sometimes used. Commonly used tests include chromatography, immunologic assay, and mass spectrometry.
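The progressive-ratio break-point idea described above lends itself to a small sketch. The linear schedule and the names `break_point` and `max_effort` are illustrative assumptions (real protocols often use other progressions), not details from any cited study.

```python
# Illustrative sketch (assumed parameters): a progressive-ratio schedule.
# The response requirement grows after each reward; the break point is the
# last requirement the subject completes before giving up.

def break_point(max_effort, start=1, step=2):
    """Return the last completed response requirement.

    max_effort: the most responses the subject will emit for one reward
    start, step: hypothetical schedule parameters (requirement grows linearly)
    """
    requirement = start
    last_completed = 0
    while requirement <= max_effort:
        last_completed = requirement  # subject completes this requirement
        requirement += step           # schedule gets harder
    return last_completed

# A drug with stronger reinforcing effects sustains more effort,
# so it yields a higher break point than placebo:
placebo_bp = break_point(max_effort=5)   # -> 5
drug_bp = break_point(max_effort=25)     # -> 25
```

Under this sketch, a dose-dependent increase in `max_effort` directly produces the dose-scaling break points the studies report.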
https://en.wikipedia.org/wiki/Cardiopulmonary%20resuscitation
Cardiopulmonary resuscitation
Cardiopulmonary resuscitation (CPR) is an emergency procedure consisting of chest compressions, often combined with artificial ventilation (mouth-to-mouth), in an effort to manually preserve intact brain function until further measures are taken to restore spontaneous blood circulation and breathing in a person who is in cardiac arrest. It is recommended for those who are unresponsive with no breathing or abnormal breathing, for example, agonal respirations. CPR involves chest compressions for adults between 5 and 6 cm (2.0 and 2.4 in) deep and at a rate of at least 100 to 120 per minute. The rescuer may also provide artificial ventilation by either exhaling air into the subject's mouth or nose (mouth-to-mouth resuscitation) or using a device that pushes air into the subject's lungs (mechanical ventilation). Current recommendations place emphasis on early and high-quality chest compressions over artificial ventilation; a simplified CPR method involving only chest compressions is recommended for untrained rescuers. With children, however, 2015 American Heart Association guidelines indicate that doing only compressions may actually result in worse outcomes, because such problems in children normally arise from respiratory issues rather than from cardiac ones, given their young age. The chest compression to breathing ratio is set at 30 to 2 in adults. CPR alone is unlikely to restart the heart. Its main purpose is to restore a partial flow of oxygenated blood to the brain and heart. The objective is to delay tissue death and to extend the brief window of opportunity for a successful resuscitation without permanent brain damage. Administration of an electric shock to the subject's heart, termed defibrillation, is usually needed to restore a viable, or "perfusing", heart rhythm.
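As a toy illustration of the figures just quoted (100–120 compressions per minute, roughly 5–6 cm deep for adults), the following sketch checks measured compression metrics against those ranges. The function and constant names are invented for this example; this is not a medical-device algorithm.

```python
# Minimal sketch (assumed helper, not a medical device): compare adult chest
# compression metrics against the guideline figures quoted in the text.

ADULT_RATE_RANGE = (100, 120)   # compressions per minute
ADULT_DEPTH_RANGE = (5.0, 6.0)  # centimetres

def compression_feedback(rate_per_min, depth_cm):
    """Return a list of human-readable corrections (empty if within guidelines)."""
    advice = []
    if rate_per_min < ADULT_RATE_RANGE[0]:
        advice.append("compress faster")
    elif rate_per_min > ADULT_RATE_RANGE[1]:
        advice.append("compress slower")
    if depth_cm < ADULT_DEPTH_RANGE[0]:
        advice.append("compress deeper")
    elif depth_cm > ADULT_DEPTH_RANGE[1]:
        advice.append("compress more gently")
    return advice

compression_feedback(110, 5.5)  # within range -> []
compression_feedback(90, 4.0)   # -> ['compress faster', 'compress deeper']
```

Feedback devices of the kind described later in this article ("Devices for assisting in manual CPR") perform essentially this comparison in real time.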
Defibrillation is effective only for certain heart rhythms, namely ventricular fibrillation or pulseless ventricular tachycardia, rather than asystole or pulseless electrical activity, which usually require the treatment of underlying conditions to restore cardiac function. Early shock, when appropriate, is recommended. CPR may succeed in inducing a heart rhythm that is shockable. In general, CPR is continued until the person has a return of spontaneous circulation (ROSC) or is declared dead.
Medical uses
CPR is indicated for any person who is unresponsive with no breathing or breathing only in occasional agonal gasps, as it is most likely that they are in cardiac arrest. If a person still has a pulse but is not breathing (respiratory arrest), artificial ventilations may be more appropriate; but, due to the difficulty people have in accurately assessing the presence or absence of a pulse, CPR guidelines recommend that lay persons should not be instructed to check the pulse, while giving healthcare professionals the option to check one. In those with cardiac arrest due to trauma, CPR is considered futile but still recommended. Correcting the underlying cause, such as a tension pneumothorax or pericardial tamponade, may help.
Pathophysiology
CPR is used on people in cardiac arrest to oxygenate the blood and maintain a cardiac output to keep vital organs alive. Blood circulation and oxygenation are required to transport oxygen to the tissues. The physiology of CPR involves generating a pressure gradient between the arterial and venous vascular beds; CPR achieves this via multiple mechanisms. The brain may sustain damage after blood flow has been stopped for about four minutes, and irreversible damage after about seven minutes. Typically, if blood flow ceases for one to two hours, body cells die. Therefore, in general, CPR is effective only if performed within seven minutes of the stoppage of blood flow.
The heart also rapidly loses the ability to maintain a normal rhythm. Low body temperatures, as sometimes seen in near-drownings, prolong the time the brain survives. Following cardiac arrest, effective CPR enables enough oxygen to reach the brain to delay brain stem death, and allows the heart to remain responsive to defibrillation attempts. If an incorrect compression rate is used during CPR, against the standing American Heart Association (AHA) guideline of 100–120 compressions per minute, the heart may not refill adequately between compressions, causing a net decrease in venous return. For example, if a compression rate above 120 compressions per minute is used consistently throughout the entire CPR process, this error could adversely affect survival rates and outcomes for the victim.
Order of CPR in a first aid sequence
The best position for CPR maneuvers in the sequence of first aid reactions to a cardiac arrest is a question that has long been studied. As a general reference, the recommended order (according to the guidelines of many related associations, such as the AHA and Red Cross) is:
Asking bystanders for help, in case any of them has received training in first aid or can perform additional tasks. Variation: when the rescuer is alone and no phone is nearby, the rescuer should first go for a phone to call for emergency medical services (but only if the rescuer can return within a very few minutes to apply CPR maneuvers, or emergency medical services will reach the patient in a few minutes).
Calling emergency medical services by phone. Also, going for an automated external defibrillator (AED), but only if the AED is available within a few minutes.
Attempting defibrillation with the automated external defibrillator (AED), because it is easy to use, if one has been found. If not, or until it has arrived, attempting CPR maneuvers as the last of the possible steps.
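The ordering above, together with the lone-rescuer exception described in the next section, can be condensed into a small decision helper. The function, its parameters, and the step wording are illustrative assumptions only; real conduct depends on training and local protocols.

```python
# Illustrative sketch (assumed names, not an official protocol): the first-aid
# ordering described in the text, including the exception for drowning victims
# and children found already unconscious.

def first_aid_sequence(rescuer_alone, phone_at_hand, special_victim):
    """Return the recommended order of initial actions.

    special_victim: a drowning victim, or a child found already unconscious,
    for whom ~2 minutes of CPR come before the phone call.
    """
    steps = []
    if special_victim:
        steps.append("perform ~2 minutes of CPR first")
    if not rescuer_alone:
        steps.append("send bystanders to call for help and fetch an AED")
    elif not phone_at_hand:
        steps.append("go for a phone, only if you can return in a few minutes")
    steps.append("call emergency medical services")
    steps.append("get an AED if one is available within a few minutes")
    steps.append("use the AED if found; otherwise continue CPR")
    return steps

first_aid_sequence(rescuer_alone=True, phone_at_hand=True, special_victim=False)
# first step -> "call emergency medical services"
```

The point of the sketch is the branch structure: whether CPR or the phone call comes first hinges entirely on the victim type and on what the rescuer has at hand.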
If there are multiple rescuers, these tasks can be distributed and performed simultaneously to save time.
Exception to the main sequence
If a rescuer is completely alone with a victim of drowning, or with a child who was already unconscious when the rescuer arrived, the rescuer should:
First perform two minutes of CPR maneuvers. Variation: when the lone rescuer does not have a phone, it is recommended to perform about two minutes of CPR maneuvers and then go for a phone to call for emergency medical services (but only if the rescuer can return within a very few minutes to continue the CPR maneuvers, or emergency medical services will reach the patient in a few minutes).
Call emergency medical services by phone. Also, go for an automated external defibrillator (AED), but only if the AED is available within a few minutes.
Attempt defibrillation with the automated external defibrillator (AED), because it is easy to use, if one has been found. If not, or until it has arrived, attempt CPR maneuvers as the last of the possible steps.
The reason is that the CPR ventilations (rescue breaths) are considered the most important action for these victims: cardiac arrest in drowning victims originates from a lack of oxygen, and a child would probably not suffer from cardiac disease.
Methods
In 2010, the AHA and the International Liaison Committee on Resuscitation updated their CPR guidelines. The importance of high-quality CPR (sufficient rate and depth without excessive ventilation) was emphasized. The order of interventions was changed for all age groups except newborns from airway, breathing, chest compressions (ABC) to chest compressions, airway, breathing (CAB). An exception to this recommendation is for those believed to be in respiratory arrest (airway obstruction, drug overdose, etc.).
The most important aspects of CPR are: few interruptions of chest compressions, a sufficient speed and depth of compressions, completely releasing pressure between compressions, and not ventilating too much. It is unclear whether a few minutes of CPR before defibrillation results in different outcomes than immediate defibrillation.
Compressions with rescue breaths
A normal CPR procedure uses chest compressions and ventilations (rescue breaths, usually mouth-to-mouth) for any victim of cardiac arrest, who will be unresponsive (usually unconscious or nearly so) and not breathing, or only gasping, because of the lack of heartbeats. The ventilations can be omitted by untrained rescuers aiding adults who suffer a cardiac arrest (unless it is an asphyxial cardiac arrest, as from drowning, which needs ventilations). The patient's head is commonly tilted back (a head-tilt and chin-lift position) to improve the air flow when ventilations are used. However, when a patient appears to have a possible serious injury of the spinal cord (in the backbone, either at the neck or the back), the head must not be moved unless completely necessary, and then always very carefully, to avoid further damage to the patient's mobility. In the case of babies, the head is left straight, facing forward, which is necessary for the ventilations because of the size of the baby's neck. In CPR, the chest compressions push on the lower half of the sternum (the bone that runs along the middle of the chest from the neck to the belly) and then let it rise back to its normal position. The rescue breaths are made by pinching the victim's nose and blowing air mouth-to-mouth. This fills the lungs, which makes the chest rise and increases the pressure within the thoracic cavity.
If the victim is a baby, the rescuer compresses the chest with only two fingers and gives the ventilations by covering the baby's mouth and nose with their own mouth. The recommended compression-to-ventilation ratio, for all victims of any age, is 30:2 (a cycle that continually alternates 30 rhythmic chest compressions with 2 rescue breaths). Victims of drowning receive an initial series of 2 rescue breaths before that cycle begins. As an exception to the normal compression-to-ventilation ratio of 30:2, if at least two trained rescuers are present and the victim is a child, the preferred ratio is 15:2. Similarly, in newborns, the ratio is 30:2 if one rescuer is present, and 15:2 if two rescuers are present (according to the AHA 2015 Guidelines). With an advanced airway in place, such as an endotracheal tube or laryngeal mask airway, the artificial ventilation should occur without pauses in compressions, at a rate of 1 breath every 6 to 8 seconds (8–10 ventilations per minute). In all victims, the compression rate is at least 100 compressions per minute. The recommended compression depth in adults and children is 5 cm (2 inches), and in infants it is 4 cm (1.6 inches). In adults, rescuers should use two hands for the chest compressions (one on top of the other), while in children one hand can be enough (or two, adapting the compressions to the child's constitution), and with babies the rescuer must use only two fingers. There exist plastic shields and respirators that can be placed between the mouths of the rescuer and the victim during rescue breaths, in order to obtain a better seal and avoid infections. In some cases, the problem is a failure of the heart's rhythm (ventricular fibrillation or ventricular tachycardia) that can be corrected with the electric shock of a defibrillator.
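The compression-to-ventilation ratios quoted above reduce to a tiny lookup. The function name and argument encoding are assumptions for illustration only, following the AHA 2015 figures as this text summarises them.

```python
# Sketch of the compression-to-ventilation ratios summarised in the text
# (AHA 2015 figures as reported here); names are illustrative.

def compression_ventilation_ratio(victim, trained_rescuers):
    """victim: 'adult', 'child', or 'newborn'; trained_rescuers: count present."""
    if victim in ("child", "newborn") and trained_rescuers >= 2:
        return (15, 2)   # two-rescuer ratio for children and newborns
    return (30, 2)       # default ratio for all victims of any age

compression_ventilation_ratio("adult", 1)   # -> (30, 2)
compression_ventilation_ratio("child", 2)   # -> (15, 2)
```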
Thus, if a victim is suffering a cardiac arrest, it is important that someone fetches a nearby defibrillator, to attempt defibrillation once the victim is unconscious. The common model of defibrillator (the AED) is an automatic portable machine that guides the user through the process with recorded voice instructions, analyzes the victim, and applies the correct shocks if they are needed. How long cardiopulmonary resuscitation can remain effective is not clear, and it depends on many factors. Many official guides recommend continuing cardiopulmonary resuscitation until emergency medical services arrive (to try to keep the patient alive, at least). The same guides also indicate fetching any nearby emergency defibrillator (AED), to attempt an automatic defibrillation as soon as possible before considering that the patient has died. A normal cardiopulmonary resuscitation follows a recommended order named 'CAB': first 'Chest' (chest compressions), followed by 'Airway' (attempt to open the airway by performing a head tilt and a chin lift), and 'Breathing' (rescue breaths). As of 2010, the Resuscitation Council (UK) was still recommending an 'ABC' order, with the 'C' standing for 'Circulation' (check for a pulse), if the victim is a child. It can be difficult to determine the presence or absence of a pulse, so the pulse check has been removed for lay providers and should not be performed for more than 10 seconds by healthcare providers.
Compression only
For untrained rescuers helping adult victims of cardiac arrest, it is recommended to perform compression-only CPR (hands-only chest compressions, or cardiocerebral resuscitation, without artificial ventilation), as it is easier to perform and instructions are easier to give over a phone. In adults with out-of-hospital cardiac arrest, compression-only CPR by the average person has an equal or higher success rate than standard CPR.
The CPR 'compressions only' procedure consists solely of chest compressions that push on the lower half of the bone in the middle of the chest (the sternum). Compression-only CPR is not as good for children, who are more likely to have cardiac arrest from respiratory causes; two reviews have found that, in such cases, compression-only CPR had no more success than no CPR whatsoever. Rescue breaths for children, and especially for babies, should be relatively gentle. Either a ratio of compressions to breaths of 30:2 or 15:2 was found to have better results for children. Both children and adults should receive 100 chest compressions per minute. Other exceptions besides children include cases of drowning and drug overdose; in both of these cases, compressions and rescue breaths are recommended if the bystander is trained and is willing to do so. As per the AHA, the beat of the Bee Gees song "Stayin' Alive" provides an ideal rhythm in terms of beats per minute for hands-only CPR, at 104 beats per minute. One can also hum Queen's "Another One Bites the Dust", which is 110 beats per minute and contains a repeating drum pattern. For those in cardiac arrest due to non-heart-related causes, and in people less than 20 years of age, standard CPR is superior to compression-only CPR.
Prone CPR
Standard CPR is performed with the victim in the supine position. Prone CPR, or reverse CPR, is performed on a victim in the prone position, lying on the chest. This is achieved by turning the head to the side and compressing the back. Due to the head being turned, the risk of vomiting and complications caused by aspiration pneumonia may be reduced. The American Heart Association's current guidelines recommend performing CPR in the supine position, and limit prone CPR to situations where the patient cannot be turned.
Pregnancy
During pregnancy, when a woman is lying on her back, the uterus may compress the inferior vena cava and thus decrease venous return.
It is therefore recommended that the uterus be pushed to the woman's left. This can be done by placing a pillow or towel under her right hip so that she is at an angle of 15–30 degrees, while making sure her shoulders are flat on the ground. If this is not effective, healthcare professionals should consider emergency resuscitative hysterotomy.
Family presence
Evidence generally supports family being present during CPR. This includes CPR for children.
Other
Interposed abdominal compressions may be beneficial in the hospital environment. There is no evidence of benefit pre-hospital or in children. Cooling during CPR is being studied, as results are currently unclear as to whether or not it improves outcomes. Internal cardiac massage is manual squeezing of the exposed heart itself, carried out through a surgical incision into the chest cavity, usually when the chest is already open for cardiac surgery. Active compression-decompression methods using mechanical decompression of the chest have not been shown to improve outcome in cardiac arrest.
Use of devices
Defibrillators
A defibrillator is a machine that produces defibrillation: electric shocks that can restore the normal heart function of the victim. The common model of defibrillator outside a hospital is the automated external defibrillator (AED), a portable device that is especially easy to use because it gives recorded voice instructions. Defibrillation is only indicated for some arrhythmias (abnormal heartbeats), specifically ventricular fibrillation (VF) and pulseless ventricular tachycardia (VT). Defibrillation is not indicated if the patient has a normal pulse or is still conscious. Nor is it indicated in asystole or pulseless electrical activity (PEA); in those cases, normal CPR is used to oxygenate the brain until the heart function can be restored. Improperly given electrical shocks can cause dangerous arrhythmias, such as ventricular fibrillation (VF).
When a patient has no heartbeat (or presents a type of arrhythmia that will stop the heart imminently), it is recommended that someone fetch a defibrillator (they are quite common nowadays), to attempt defibrillation on the already unconscious victim, in case it is successful.
Order of defibrillation in a first aid sequence
It is recommended to call for emergency medical services before a defibrillation. Afterwards, a nearby AED should be used on the patient as soon as possible. As a general reference, defibrillation is preferred to performing CPR, but only if the AED can be retrieved in a short period of time. All of these tasks (calling by phone, getting an AED, and the chest compressions and rescue breaths of CPR) can be distributed between several rescuers who perform them simultaneously. The defibrillator itself will indicate whether more CPR maneuvers are required. As a slight variation on that sequence, if the rescuer is completely alone with a victim of drowning, or with a child who was already unconscious when the rescuer arrived, the rescuer should perform the CPR maneuvers for 2 minutes (approximately 5 cycles of ventilations and compressions); after that, the rescuer should call emergency medical services, and then could search for a nearby defibrillator (the CPR maneuvers are considered the priority for drowning victims and most already-collapsed children). As another possible variation, if a rescuer is completely alone and without a phone nearby, and is aiding any other victim (not a victim of drowning, nor an already unconscious child), the rescuer should make the phone call first. After the call, the rescuer should get a nearby defibrillator and use it, or continue the CPR (the phone call and the defibrillator are considered urgent when the problem has a cardiac origin).
Defibrillation
The standard defibrillation device, designed for fast use outside medical centres, is the automated external defibrillator (AED), a portable machine of small size (similar to a briefcase) that can be used by any user with no previous training. The machine produces recorded voice instructions that guide the user through the defibrillation process, and it checks the victim's condition in order to automatically apply electric shocks at the correct level, if they are needed. Other models are semi-automatic and require the user to push a button before each electric shock. A defibrillator may prompt for CPR maneuvers, so the patient should be placed lying face up. Additionally, the patient's head should be tilted back, except in the case of babies. Water and metals transmit the electric current. The risk depends on the amount of water, but it is advisable to avoid starting the defibrillation on a floor with puddles, and to first dry the wet areas of the patient (quickly, even with any available cloth). It is not necessary to remove the patient's jewelry or piercings, but placing the defibrillator pads directly on top of them should be avoided. The electrode pads are placed in the standard positions, usually illustrated on the pads themselves: typically one below the right collarbone and the other on the left side of the chest, below the armpit. For very small bodies (children between 1 and 8 years and, in general, similar bodies up to approximately 25 kg), the use of child-sized pads with reduced electric doses is recommended. If that is not possible, adult sizes and doses are used and, if the pads are too big, one is placed on the chest and the other on the back (it does not matter which goes where). There are several devices for improving CPR, but only defibrillators (as of 2010) have been found to be better than standard CPR for an out-of-hospital cardiac arrest. When a defibrillator has been used, it should remain attached to the patient until emergency services arrive.
Devices for timing CPR
Timing devices can feature a metronome (an item carried by many ambulance crews) to assist the rescuer in achieving the correct rate. Some units can also give timing reminders for performing compressions, ventilating, and changing operators.
Devices for assisting in manual CPR
Mechanical chest compression devices have not been found to be better than standard manual compressions. Their use is reasonable in situations where manual compressions are not safe to perform, such as in a moving vehicle. Audible and visual prompting may improve the quality of CPR and prevent the decrease in compression rate and depth that naturally occurs with fatigue; to realize this potential improvement, a number of devices have been developed to help improve CPR technique. These items can be devices placed on top of the chest, with the rescuer's hands going over the device and a display or audio feedback giving information on depth, force, or rate, or they can come in a wearable format such as a glove. Several published evaluations show that these devices can improve the performance of chest compressions. As well as being used during actual CPR on a cardiac arrest victim (which relies on the rescuer carrying the device with them), these devices can also be used as part of training programs to improve basic skills in performing correct chest compressions.
Devices for providing automatic CPR
Mechanical CPR has not seen as much use as mechanical ventilation; however, use in the prehospital setting is increasing. Devices on the market include the LUCAS device, developed at the University Hospital of Lund, and the AutoPulse. Both use straps around the chest to secure the patient. The first generation of the LUCAS uses a gas-driven piston and motor-driven constricting band, while later versions are battery operated.
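The metronome feature described under "Devices for timing CPR" amounts to a rate-to-interval conversion. This minimal sketch (with an invented helper name) shows the tick spacing such a device would use for the guideline rates.

```python
# Minimal timing sketch (hypothetical helper): the interval a CPR metronome
# ticks at for a given target compression rate.

def metronome_interval_seconds(rate_per_min):
    return 60.0 / rate_per_min

metronome_interval_seconds(100)  # -> 0.6 s between compressions
metronome_interval_seconds(120)  # -> 0.5 s
```

So the 100–120 per-minute guideline corresponds to one compression every 0.5–0.6 seconds, which is also the cadence of the songs suggested earlier for hands-only CPR.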
There are several advantages to automated devices: they allow rescuers to focus on performing other interventions; they do not fatigue and begin to perform less effective compressions, as humans do; they are able to perform effective compressions in limited-space environments such as air ambulances, where manual compressions are difficult; and they allow ambulance workers to be strapped in safely rather than standing over a patient in a speeding vehicle. However, the disadvantages are the cost to purchase, the time to train emergency personnel to use them, the interruption to CPR to implement them, the potential for incorrect application, and the need for multiple device sizes. Several studies have shown little or no improvement in survival rates but acknowledge the need for more study.
Mobile apps for providing CPR instructions
To support training and incident management, mobile apps have been published on the largest app markets. An evaluation of 61 available apps revealed that a large number do not follow international guidelines for basic life support and that many apps are not designed in a user-friendly way. As a result, the Red Cross updated and endorsed its emergency preparedness application, which uses pictures, text, and videos to assist the user. The UK Resuscitation Council has an app, called Lifesaver, which shows how to perform CPR.
Effectiveness
CPR oxygenates the body and brain, which favours a later defibrillation and advanced life support. Even in the case of a "non-shockable" rhythm, such as pulseless electrical activity (PEA), where defibrillation is not indicated, effective CPR is no less important. Used alone, CPR will result in few complete recoveries, though the outcome without CPR is almost uniformly fatal. Studies have shown that immediate CPR followed by defibrillation within 3–5 minutes of sudden VF cardiac arrest dramatically improves survival.
In cities such as Seattle, where CPR training is widespread and defibrillation by EMS personnel follows quickly, the survival rate is about 20 percent for all causes and as high as 57 percent for a witnessed "shockable" arrest. In cities such as New York, without those advantages, the survival rate is only 5 percent for witnessed shockable arrest. Similarly, in-hospital CPR is more successful when arrests are witnessed, occur in the ICU, or occur in patients wearing heart monitors. (AED data here exclude health facilities and nursing homes, where patients are sicker than average.) In adults, compression-only CPR by bystanders appears to be better than chest compressions with rescue breathing. Compression-only CPR may be less effective in children than in adults, as cardiac arrest in children is more likely to have a non-cardiac cause. In a 2010 prospective study of cardiac arrest in children (age 1–17), for arrests with a non-cardiac cause, provision by bystanders of conventional CPR with rescue breathing yielded a favorable neurological outcome at one month more often than did compression-only CPR (OR 5.54). For arrests with a cardiac cause in this cohort, there was no difference between the two techniques (OR 1.20). This is consistent with American Heart Association guidelines for parents. When done by trained responders, 30 compressions interrupted by two breaths appears to have a slightly better result than continuous chest compressions with breaths delivered while compressions are ongoing. Measurement of end-tidal carbon dioxide during CPR reflects cardiac output and can predict chances of ROSC. In a study of in-hospital CPR from 2000 to 2008, 59% of CPR survivors lived over a year after hospital discharge and 44% lived over 3 years.
Consequences
Survival rates: In US hospitals in 2017, 26% of patients who received CPR survived to hospital discharge.
In 2017 in the US, outside hospitals, 16% of people whose cardiac arrest was witnessed survived to hospital discharge. Since 2003, widespread cooling of patients after CPR and other improvements have raised survival and reduced mental disabilities.
Organ donation
Organ donation is usually made possible by CPR, even if CPR does not save the patient. If there is a return of spontaneous circulation (ROSC), all organs can be considered for donation. If the patient does not achieve ROSC, and CPR continues until an operating room is available, the kidneys and liver can still be considered for donation. 1,000 organs per year in the US are transplanted from patients who had CPR. Donations can be taken from 40% of patients who have ROSC and later become brain dead. Up to 8 organs can be taken from each donor, and an average of 3 organs are taken from each patient who donates organs.
Mental abilities
Mental abilities are about the same for survivors before and after CPR for 89% of patients, based on before-and-after counts of 12,500 US patients' Cerebral Performance Category (CPC) codes in a 2000–2009 study of CPR in hospitals. 1% more survivors were in comas than before CPR. 5% more needed help with daily activities. 5% more had moderate mental problems and could still be independent. For CPR outside hospitals, a Copenhagen study of 2,504 patients in 2007–2011 found that 21% of survivors developed moderate mental problems but could still be independent, and 11% of survivors developed severe mental problems, so they needed daily help. Two patients out of 2,504 went into comas (0.1% of patients, or 2 out of 419 survivors, 0.5%), and the study did not track how long the comas lasted. Most people in comas start to recover in 2–3 weeks. 2018 guidelines on disorders of consciousness say it is no longer appropriate to use the term "permanent vegetative state". Mental abilities can continue to improve in the six months after discharge, and in subsequent years.
For long-term problems, brains form new paths to replace damaged areas.
Injuries
Injuries from CPR vary. 87% of patients are not injured by CPR. Overall, injuries are caused in 13% of patients (2009–12 data), including broken sternum or ribs (9%), lung injuries (3%), and internal bleeding (3%). The internal injuries counted here can include heart contusion, hemopericardium, upper airway complications, damage to the abdominal viscera (lacerations of the liver and spleen), fat emboli, and pulmonary complications (pneumothorax, hemothorax, lung contusions). Most injuries did not affect care; only 1% of those given CPR received life-threatening injuries from it. Broken ribs are present in 3% of those who survive to hospital discharge, and in 15% of those who die in the hospital, for an average rate of 9% (2009–12 data) to 8% (1997–99). In the 2009–12 study, 20% of survivors were older than 75. A study in the 1990s found that 55% of CPR patients who died before discharge had broken ribs, and a study in the 1960s found that 97% did; training and experience levels have since improved. Lung injuries were caused in 3% of patients and other internal bleeding in 3% (2009–12). Bones heal in 1–2 months. The costal cartilage also breaks in an unknown number of additional cases, which can sound like breaking bones. The type and frequency of injury can be affected by factors such as sex and age. A 1999 Austrian study of CPR on cadavers, using a machine which alternately compressed the chest and then pulled it outward, found a higher rate of sternal fractures in female cadavers (9 of 17) than in male (2 of 20), and found that the risk of rib fractures rose with age, though it did not say by how much. Children and infants have a low risk of rib fractures during CPR, with an incidence less than 2%, although, when they do occur, they are usually anterior and multiple.
Where CPR is performed in error by a bystander, on a person not in cardiac arrest, around 2% are injured as a result (although 12% experience discomfort). A 2004 overview said, "Chest injury is a price worth paying to achieve optimal efficacy of chest compressions. Cautious or faint-hearted chest compression may save bones in the individual case but not the patient's life."

Other side effects

The most common side effect is vomiting, which necessitates clearing the mouth so patients do not breathe it in. It occurred in 16 of 35 CPR efforts in a 1989 study in King County, Washington.

Survival differences, based on prior illness, age or location

The American Heart Association guidelines say that survival rates below 1% constitute "futility," but all groups have better survival than that. Even among very sick patients, at least 10% survive: a study of CPR in a sample of US hospitals from 2001 to 2010, where overall survival was 19%, found 10% survival among cancer patients, 12% among dialysis patients, 14% over age 80, 15% among blacks, 17% for patients who lived in nursing homes, 19% for patients with heart failure, and 25% for patients with heart monitoring outside the ICU. Another study, of advanced cancer patients, found the same 10% survival mentioned above. A study of Swedish patients in 2007–2015 with ECG monitors found 40% survived at least 30 days after CPR at ages 70–79, 29% at ages 80–89, and 27% above age 90. An earlier study of Medicare patients in hospitals 1992–2005, where overall survival was 18%, found 13% survival in the poorest neighborhoods, 12% survival over age 90, 15% survival among ages 85–89, and 17% survival among ages 80–84. Swedish patients 90 years or older had 15% survival to hospital discharge, 80–89 had 20%, and 70–79 had 28%.
A study of King County, Washington patients who had CPR outside hospitals in 1999–2003, where 34% survived to hospital discharge overall, found that among patients with 4 or more major medical conditions, 18% survived; with 3 major conditions, 24% survived; and with 2 major medical conditions, 33% survived. Nursing home residents' survival has been studied by several authors, and is measured annually by the Cardiac Arrest Registry to Enhance Survival (CARES). CARES reports CPR results from a catchment area of 115 million people, including 23 state-wide registries, and individual communities in 18 other states as of 2019. CARES data show that in health care facilities and nursing homes where AEDs are available and used, survival rates are double the average survival found in nursing homes overall. Geographically, there is wide variation state-to-state in survival after CPR in US hospitals, from 40% in Wyoming to 20% in New York, so there is room for good practices to spread, raising the averages. For CPR outside hospitals, survival varies even more across the US, from 3% in Omaha to 45% in Seattle in 2001. This study only counted heart rhythms which can respond to defibrillator shocks (ventricular fibrillation and pulseless ventricular tachycardia). A major reason for the variation has been delay in some areas between the call to emergency services and the departure of medics, and then their arrival and treatment. Delays were caused by lack of monitoring, and by a staffing mismatch: personnel are recruited as firefighters even though most of the emergency calls they are assigned to are medical, so some staff resisted and delayed responding to medical calls. Building codes have cut the number of fires, but staff still think of themselves as firefighters.

Dysthanasia

In some instances CPR can be considered a form of dysthanasia.

Prevalence

Chance of receiving CPR

Various studies show that in out-of-home cardiac arrest, bystanders in the US attempt CPR in between 14% and 45% of cases, with a median of 32%.
Globally, rates of bystander CPR are reported to be as low as 1% and as high as 44%. However, the effectiveness of this CPR is variable, and studies suggest only around half of bystander CPR is performed correctly. One study found that members of the public who had received CPR training in the past lacked the skills and confidence needed to save lives. The report's authors suggested that better training is needed to improve the willingness to respond to cardiac arrest. Factors that influence bystander CPR in out-of-hospital cardiac arrest include:

- Affordable training
- Targeting CPR training to family members of potential cardiac arrest victims
- Simplifying and shortening CPR classes
- Offering reassurance and education about CPR
- Providing clearer information about legal implications for specific regions
- Focusing on reducing the stigma and fears around providing bystander CPR

There is a relation between age and the chance of CPR being commenced. Younger people are far more likely to have CPR attempted on them before the arrival of emergency medical services. Bystanders more commonly administer CPR when in public than when at the person's home, although health care professionals are responsible for more than half of out-of-hospital resuscitation attempts. People with no connection to the person are more likely to perform CPR than are members of their family. There is also a clear relation between the cause of arrest and the likelihood of a bystander initiating CPR. Laypersons are most likely to give CPR to younger people in cardiac arrest in a public place when it has a medical cause; those in arrest from trauma, exsanguination or intoxication are less likely to receive CPR. It is believed that there is a higher chance that CPR will be performed if the bystander is told to perform only the chest compression element of the resuscitation.
The first formal study into gender bias in receiving CPR from the public versus professionals was conducted by the American Heart Association and the National Institutes of Health (NIH), and examined nearly 20,000 cases across the U.S. The study found that women are six percent less likely than men to receive bystander CPR when in cardiac arrest in a public place, citing the disparity as "likely due to the fear of being falsely accused of sexual assault."

Chance of receiving CPR in time

CPR is likely to be effective only if commenced within 6 minutes after the blood flow stops. Brain cells become dormant in as little as 4–6 minutes in an oxygen-deprived environment; after that time, they cannot survive the reintroduction of oxygen in a traditional resuscitation, and permanent brain cell damage occurs when fresh blood infuses the cells. Research using cardioplegic blood infusion resulted in a 79.4% survival rate with cardiac arrest intervals of 72±43 minutes; by comparison, traditional methods achieve a 15% survival rate in this scenario. Further research is needed to determine what role CPR, defibrillation, and new advanced gradual resuscitation techniques will have with this new knowledge. A notable exception is cardiac arrest that occurs in conjunction with exposure to very cold temperatures. Hypothermia seems to protect by slowing down metabolic and physiologic processes, greatly decreasing the tissues' need for oxygen. There are cases where CPR, defibrillation, and advanced warming techniques have revived victims after substantial periods of hypothermia.

Society and culture

Portrayed effectiveness

CPR is often severely misrepresented in movies and television as being highly effective in resuscitating a person who is not breathing and has no circulation.
A 1996 study published in the New England Journal of Medicine showed that CPR success rates in television shows were 75% for immediate circulation, and 67% survival to discharge. This gives the general public an unrealistic expectation of a successful outcome. When educated on the actual survival rates, the proportion of patients over 60 years of age desiring CPR should they have a cardiac arrest drops from 41% to 22%.

Training and stage CPR

It is dangerous to perform CPR on a person who is breathing normally. Chest compressions create significant local blunt trauma, risking bruising or fracture of the sternum or ribs. If a patient is not breathing, these risks still exist but are dwarfed by the immediate threat to life. For this reason, training is always done with a mannequin, such as the well-known Resusci Anne model. The portrayal of CPR technique on television and film is often purposely incorrect. Actors simulating the performance of CPR may bend their elbows while appearing to compress, to prevent force from reaching the chest of the actor portraying the patient.

Self-CPR hoax

A form of "self-CPR" termed "cough CPR" was the subject of a hoax chain e-mail entitled "How to Survive a Heart Attack When Alone," which wrongly cited "Via Health Rochester General Hospital" as the source of the technique. Rochester General Hospital has denied any connection with the technique. "Cough CPR" in the sense of resuscitating oneself is impossible because a prominent symptom of cardiac arrest is unconsciousness, which makes coughing impossible. The American Heart Association (AHA) and other resuscitation bodies do not endorse "cough CPR", which they term a misnomer as it is not a form of resuscitation. The AHA does recognize a limited legitimate use of the coughing technique: "This coughing technique to maintain blood flow during brief arrhythmias has been useful in the hospital, particularly during cardiac catheterization. 
In such cases the patient's ECG is monitored continuously, and a physician is present." When coughing is used on trained and monitored patients in hospitals, it has been shown to be effective only for 90 seconds.

Learning from film

In at least one case, it has been alleged that CPR learned from a film was used to save a person's life. In April 2011, it was claimed that nine-year-old Tristin Saghin saved his sister's life by administering CPR on her after she fell into a swimming pool, using only the knowledge of CPR that he had gleaned from a motion picture, Black Hawk Down.

Hands-only CPR portrayal

Less than one third of those people who experience a cardiac arrest at home, at work or in a public location have CPR performed on them. Most bystanders are worried that they might do something wrong. On October 28, 2009, the American Heart Association and the Ad Council launched a hands-only CPR public service announcement and website as a means to address this issue. In July 2011, new content was added to the website, including a digital app that helps a user learn how to perform hands-only CPR.

History

In the 19th century, Doctor H. R. Silvester described a method (the Silvester method) of artificial ventilation in which the patient is laid on their back, and their arms are raised above their head to aid inhalation and then pressed against their chest to aid exhalation. The Holger Nielsen technique of artificial respiration, described in Denmark in 1932, was a back-pressure arm-lift method: the patient was laid prone (face down), and the performer, kneeling at the patient's head, alternately pressed on the patient's back to force exhalation and lifted the patient's arms to aid inhalation. It was not until the middle of the 20th century that the wider medical community started to recognize and promote artificial ventilation in the form of mouth-to-mouth resuscitation combined with chest compressions as a key part of resuscitation following cardiac arrest. The combination was first seen in a 1962 training video called "The Pulse of Life" created by James Jude, Guy Knickerbocker, and Peter Safar. Jude and Knickerbocker, along with William Kouwenhoven and Joseph S. Redding, had recently discovered the method of external chest compressions, whereas Safar had worked with Redding and James Elam to prove the effectiveness of mouth-to-mouth resuscitation. The first effort at testing the technique was performed on a dog by Redding, Safar and JW Pearson. Soon afterward, the technique was used to save the life of a child. Their combined findings were presented at the annual Maryland Medical Society meeting on September 16, 1960, in Ocean City, and gained widespread acceptance over the following decade, helped by the video and a speaking tour the researchers undertook. Peter Safar wrote the book ABC of Resuscitation in 1957. In the U.S., it was first promoted as a technique for the public to learn in the 1970s. Mouth-to-mouth resuscitation was combined with chest compressions based on the assumption that active ventilation is necessary to keep circulating blood oxygenated, and the combination was accepted without comparing its effectiveness with chest compressions alone. However, research in the 2000s demonstrated that assumption to be in error, resulting in the American Heart Association's acknowledgment of the effectiveness of chest compressions alone (see Compression only in this article). 
CPR methods continued to advance, with developments in the 2010s including an emphasis on constant, rapid heart stimulation and a de-emphasis on the respiration aspect. Studies have shown that people who had rapid, constant heart-only chest compressions are 22% more likely to survive than those receiving conventional CPR that included breathing. Because people tend to be reluctant to do mouth-to-mouth resuscitation, chest-only CPR nearly doubles the chances of survival overall, by increasing the odds of receiving CPR in the first place.

On animals

It is feasible to perform CPR on animals, including cats and dogs. The principles and practices are similar to CPR for humans, except that resuscitation is usually done through the animal's nose, not the mouth. CPR should only be performed on unconscious animals to avoid the risk of being bitten; a conscious animal would not require chest compressions. Depending on the species, animals may have lower bone density than humans, so CPR can leave bones weakened after it is performed.

Research

Cerebral performance categories (CPC scores) are used as a research tool to describe "good" and "poor" outcomes. Level 1 is conscious and alert with normal function. Level 2 is only slight disability. Level 3 is moderate disability. Level 4 is severe disability. Level 5 is comatose or persistent vegetative state. Level 6 is brain dead or death from other causes.
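The CPC scale described above is simple enough to encode directly. A minimal sketch; note that grouping levels 1–2 as "good" outcomes is a common convention in resuscitation research, assumed here rather than stated in the text:

```python
# Cerebral Performance Category (CPC) scale, as described in the text.
CPC = {
    1: "conscious and alert with normal function",
    2: "only slight disability",
    3: "moderate disability",
    4: "severe disability",
    5: "comatose or persistent vegetative state",
    6: "brain dead or death from other causes",
}

def outcome_group(cpc_level: int) -> str:
    """Map a CPC level to the 'good'/'poor' dichotomy used in research.
    Treating CPC 1-2 as 'good' is a common convention (an assumption
    here, not taken from this article)."""
    if cpc_level not in CPC:
        raise ValueError(f"unknown CPC level: {cpc_level}")
    return "good" if cpc_level <= 2 else "poor"
```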
Island of stability
In nuclear physics, the island of stability is a predicted set of isotopes of superheavy elements that may have considerably longer half-lives than known isotopes of these elements. It is predicted to appear as an "island" in the chart of nuclides, separated from known stable and long-lived primordial radionuclides. Its theoretical existence is attributed to stabilizing effects of predicted "magic numbers" of protons and neutrons in the superheavy mass region. Several predictions have been made regarding the exact location of the island of stability, though it is generally thought to center near copernicium and flerovium isotopes in the vicinity of the predicted closed neutron shell at N = 184. These models strongly suggest that the closed shell will confer further stability towards fission and alpha decay. While these effects are expected to be greatest near atomic number Z = 114 (flerovium) and N = 184, the region of increased stability is expected to encompass several neighboring elements, and there may also be additional islands of stability around heavier nuclei that are doubly magic (having magic numbers of both protons and neutrons). Estimated half-lives of the nuclides within the island are usually on the order of minutes or days; some optimists propose half-lives on the order of millions of years. Although the nuclear shell model predicting magic numbers has existed since the 1940s, the existence of long-lived superheavy nuclides has not been definitively demonstrated. Like the rest of the superheavy elements, the nuclides within the island of stability have never been found in nature; thus, they must be created artificially in a nuclear reaction to be studied. Scientists have not found a way to carry out such a reaction, for it is likely that new types of reactions will be needed to populate nuclei near the center of the island. 
Nevertheless, the successful synthesis of superheavy elements up to Z = 118 (oganesson) with up to 177 neutrons demonstrates a slight stabilizing effect around elements 110 to 114 that may continue in heavier isotopes, consistent with the existence of the island of stability.

Introduction

Nuclide stability

The composition of a nuclide (atomic nucleus) is defined by the number of protons Z and the number of neutrons N, which sum to mass number A. Proton number Z, also named the atomic number, determines the position of an element in the periodic table. The approximately 3300 known nuclides are commonly represented in a chart with Z and N for its axes and the half-life for radioactive decay indicated for each unstable nuclide. Of these, 251 nuclides are observed to be stable (having never been observed to decay); generally, as the number of protons increases, stable nuclei have a higher neutron–proton ratio (more neutrons per proton). The last element in the periodic table that has a stable isotope is lead (Z = 82), with stability (i.e., half-lives of the longest-lived isotopes) generally decreasing in heavier elements, especially beyond curium (Z = 96). The half-lives of nuclei also decrease when there is a lopsided neutron–proton ratio, such that the resulting nuclei have too few or too many neutrons to be stable. The stability of a nucleus is determined by its binding energy, higher binding energy conferring greater stability. The binding energy per nucleon increases with mass number to a broad plateau around A = 60, then declines. If a nucleus can be split into two parts that have a lower total energy (a consequence of the mass defect resulting from greater binding energy), it is unstable. The nucleus can hold together for a finite time because there is a potential barrier opposing the split, but this barrier can be crossed by quantum tunneling. The lower the barrier and the masses of the fragments, the greater the probability per unit time of a split. 
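The trend described above, with binding energy per nucleon rising to a broad plateau near A = 60 and then declining, can be illustrated with the semi-empirical mass formula of the liquid drop model (the smooth part of the macroscopic–microscopic method mentioned later). A sketch using one common textbook set of coefficients; exact values vary between fits, and this formula omits the shell effects that the island of stability depends on:

```python
def binding_energy_per_nucleon(Z: int, A: int) -> float:
    """Semi-empirical mass formula (liquid drop model), in MeV per nucleon.
    Coefficients are one common textbook fit (an assumption; fits differ)."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (a_v * A                                # volume term
         - a_s * A ** (2 / 3)                   # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)     # Coulomb repulsion
         - a_a * (A - 2 * Z) ** 2 / A)          # asymmetry term
    # Pairing term: even-even nuclei are slightly more bound.
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / A ** 0.5
    return B / A

# Binding energy per nucleon peaks near A = 60 (e.g. 62Ni)
# and declines for heavy nuclei such as 238U.
print(binding_energy_per_nucleon(28, 62))   # ~8.8 MeV
print(binding_energy_per_nucleon(92, 238))  # ~7.6 MeV
```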
Protons in a nucleus are bound together by the strong force, which counterbalances the Coulomb repulsion between positively charged protons. In heavier nuclei, larger numbers of uncharged neutrons are needed to reduce repulsion and confer additional stability. Even so, as physicists started to synthesize elements that are not found in nature, they found the stability decreased as the nuclei became heavier. Thus, they speculated that the periodic table might come to an end. The discoverers of plutonium (element 94) considered naming it "ultimium", thinking it was the last. Following the discoveries of heavier elements, of which some decayed in microseconds, it then seemed that instability with respect to spontaneous fission would limit the existence of heavier elements. In 1939, an upper limit of potential element synthesis was estimated around element 104, and following the first discoveries of transactinide elements in the early 1960s, this upper limit prediction was extended to element 108.

Magic numbers

As early as 1914, the possible existence of superheavy elements with atomic numbers well beyond that of uranium—then the heaviest known element—was suggested, when German physicist Richard Swinne proposed that superheavy elements around Z = 108 were a source of radiation in cosmic rays. Although he did not make any definitive observations, he hypothesized in 1931 that transuranium elements around Z = 100 or Z = 108 may be relatively long-lived and possibly exist in nature. In 1955, American physicist John Archibald Wheeler also proposed the existence of these elements; he is credited with the first usage of the term "superheavy element" in a 1958 paper published with Frederick Werner. This idea did not attract wide interest until a decade later, after improvements in the nuclear shell model. In this model, the atomic nucleus is built up in "shells", analogous to electron shells in atoms. 
Independently of each other, neutrons and protons have energy levels that are normally close together, but after a given shell is filled, it takes substantially more energy to start filling the next. Thus, the binding energy per nucleon reaches a local maximum and nuclei with filled shells are more stable than those without. This theory of a nuclear shell model originates in the 1930s, but it was not until 1949 that German physicists Maria Goeppert Mayer and Johannes Hans Daniel Jensen et al. independently devised the correct formulation. The numbers of nucleons for which shells are filled are called magic numbers. Magic numbers of 2, 8, 20, 28, 50, 82 and 126 have been observed for neutrons, and the next number is predicted to be 184. Protons share the first six of these magic numbers, and 126 has been predicted as a magic proton number since the 1940s. Nuclides with a magic number of each—such as 16O (Z = 8, N = 8), 132Sn (Z = 50, N = 82), and 208Pb (Z = 82, N = 126)—are referred to as "doubly magic" and are more stable than nearby nuclides as a result of greater binding energies. In the late 1960s, more sophisticated shell models were formulated by American physicist William Myers and Polish physicist Władysław Świątecki, and independently by German physicist Heiner Meldner (1939–2019). With these models, taking into account Coulomb repulsion, Meldner predicted that the next proton magic number may be 114 instead of 126. Myers and Świątecki appear to have coined the term "island of stability", and American chemist Glenn Seaborg, later a discoverer of many of the superheavy elements, quickly adopted the term and promoted it. Myers and Świątecki also proposed that some superheavy nuclei would be longer-lived as a consequence of higher fission barriers. 
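The magic-number bookkeeping above is easy to encode. A minimal sketch checking the "doubly magic" nuclides mentioned in the text; note that N = 184, Z = 114, and Z = 126 are included here as predicted magic numbers, not observed ones:

```python
# Observed magic numbers, plus predicted ones (184 for neutrons;
# 114 and 126 for protons, per the text).
NEUTRON_MAGIC = {2, 8, 20, 28, 50, 82, 126, 184}
PROTON_MAGIC = {2, 8, 20, 28, 50, 82, 114, 126}

def is_doubly_magic(Z: int, A: int) -> bool:
    """True if both the proton number and the neutron number (A - Z)
    are (observed or predicted) magic numbers."""
    return Z in PROTON_MAGIC and (A - Z) in NEUTRON_MAGIC

assert is_doubly_magic(8, 16)         # 16O
assert is_doubly_magic(50, 132)       # 132Sn
assert is_doubly_magic(82, 208)       # 208Pb
assert is_doubly_magic(114, 298)      # 298Fl (predicted)
assert not is_doubly_magic(114, 289)  # 289Fl: N = 175 is not magic
```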
Further improvements in the nuclear shell model by Soviet physicist Vilen Strutinsky led to the emergence of the macroscopic–microscopic method, a nuclear mass model that takes into consideration both smooth trends characteristic of the liquid drop model and local fluctuations such as shell effects. This approach enabled Swedish physicist Sven Nilsson et al., as well as other groups, to make the first detailed calculations of the stability of nuclei within the island. With the emergence of this model, Strutinsky, Nilsson, and other groups argued for the existence of the doubly magic nuclide 298Fl (Z = 114, N = 184), rather than 310Ubh (Z = 126, N = 184), which was predicted to be doubly magic as early as 1957. Subsequently, estimates of the proton magic number have ranged from 114 to 126, and there is still no consensus.

Discoveries

Interest in a possible island of stability grew throughout the 1960s, as some calculations suggested that it might contain nuclides with half-lives of billions of years. They were also predicted to be especially stable against spontaneous fission in spite of their high atomic mass. It was thought that if such elements exist and are sufficiently long-lived, there may be several novel applications as a consequence of their nuclear and chemical properties. These include use in particle accelerators as neutron sources, in nuclear weapons as a consequence of their predicted low critical masses and high number of neutrons emitted per fission, and as nuclear fuel to power space missions. These speculations led many researchers to conduct searches for superheavy elements in the 1960s and 1970s, both in nature and through nucleosynthesis in particle accelerators. During the 1970s, many searches for long-lived superheavy nuclei were conducted. Experiments aimed at synthesizing elements ranging in atomic number from 110 to 127 were conducted at laboratories around the world. 
These elements were sought in fusion-evaporation reactions, in which a heavy target made of one nuclide is irradiated by accelerated ions of another in a cyclotron, and new nuclides are produced after these nuclei fuse and the resulting excited system releases energy by evaporating several particles (usually protons, neutrons, or alpha particles). These reactions are divided into "cold" and "hot" fusion, which respectively create systems with lower and higher excitation energies; this affects the yield of the reaction. For example, the reaction between 248Cm and 40Ar was expected to yield isotopes of element 114, and that between 232Th and 84Kr was expected to yield isotopes of element 126. None of these attempts were successful, indicating that such experiments may have been insufficiently sensitive if reaction cross sections were low—resulting in lower yields—or that any nuclei reachable via such fusion-evaporation reactions might be too short-lived for detection. Subsequent successful experiments revealed that half-lives and cross sections indeed decrease with increasing atomic number, resulting in the synthesis of only a few short-lived atoms of the heaviest elements in each experiment; the highest reported cross section for a superheavy nuclide near the island of stability is for 288Mc in the reaction between 243Am and 48Ca. Similar searches in nature were also unsuccessful, suggesting that if superheavy elements do exist in nature, their abundance is less than 10⁻¹⁴ moles of superheavy elements per mole of ore. Despite these unsuccessful attempts to observe long-lived superheavy nuclei, new superheavy elements were synthesized every few years in laboratories through light-ion bombardment and cold fusion reactions; rutherfordium, the first transactinide, was discovered in 1969, and copernicium, eight protons closer to the island of stability predicted at Z = 114, was reached by 1996. 
Even though the half-lives of these nuclei are very short (on the order of seconds), the very existence of elements heavier than rutherfordium is indicative of stabilizing effects thought to be caused by closed shells; a model not considering such effects would forbid the existence of these elements due to rapid spontaneous fission. Flerovium, with the expected magic 114 protons, was first synthesized in 1998 at the Joint Institute for Nuclear Research in Dubna, Russia, by a group of physicists led by Yuri Oganessian. A single atom of element 114 was detected, with a lifetime of 30.4 seconds, and its decay products had half-lives measurable in minutes. Because the produced nuclei underwent alpha decay rather than fission, and the half-lives were several orders of magnitude longer than those previously predicted or observed for superheavy elements, this event was seen as a "textbook example" of a decay chain characteristic of the island of stability, providing strong evidence for the existence of the island of stability in this region. Even though the original 1998 chain was not observed again, and its assignment remains uncertain, further successful experiments in the next two decades led to the discovery of all elements up to oganesson, whose half-lives were found to exceed initially predicted values; these decay properties further support the presence of the island of stability. However, a 2021 study on the decay chains of flerovium isotopes suggests that there is no strong stabilizing effect from Z = 114 in the region of known nuclei (N = 174), and that extra stability would be predominantly a consequence of the neutron shell closure. Although known nuclei still fall several neutrons short of N = 184 where maximum stability is expected (the most neutron-rich confirmed nuclei, 293Lv and 294Ts, only reach N = 177), and the exact location of the center of the island remains unknown, the trend of increasing stability closer to N = 184 has been demonstrated. 
For example, the isotope 285Cn, with eight more neutrons than 277Cn, has a half-life almost five orders of magnitude longer. This trend is expected to continue into unknown heavier isotopes in the vicinity of the shell closure.

Deformed nuclei

Though nuclei within the island of stability around N = 184 are predicted to be spherical, studies from the early 1990s—beginning with Polish physicists Zygmunt Patyk and Adam Sobiczewski in 1991—suggest that some superheavy elements do not have perfectly spherical nuclei. A change in the shape of the nucleus changes the position of neutrons and protons in the shell. Research indicates that large nuclei farther from spherical magic numbers are deformed, causing magic numbers to shift or new magic numbers to appear. Current theoretical investigation indicates that in the region Z = 106–108 and N ≈ 160–164, nuclei may be more resistant to fission as a consequence of shell effects for deformed nuclei; thus, such superheavy nuclei would only undergo alpha decay. Hassium-270 is now believed to be a doubly magic deformed nucleus, with deformed magic numbers Z = 108 and N = 162. It has a half-life of 9 seconds. This is consistent with models that take into account the deformed nature of nuclei intermediate between the actinides and the island of stability near N = 184, in which a stability "peninsula" emerges at deformed magic numbers Z = 108 and N = 162. Determination of the decay properties of neighboring hassium and seaborgium isotopes near N = 162 provides further strong evidence for this region of relative stability in deformed nuclei. This also strongly suggests that the island of stability (for spherical nuclei) is not completely isolated from the region of stable nuclei, but rather that both regions are linked through an isthmus of relatively stable deformed nuclei. 
Predicted decay properties

The half-lives of nuclei in the island of stability itself are unknown since none of the nuclides that would be "on the island" have been observed. Many physicists believe that the half-lives of these nuclei are relatively short, on the order of minutes or days. Some theoretical calculations indicate that their half-lives may be long, on the order of 100 years, or possibly as long as 10⁹ years. The shell closure at N = 184 is predicted to result in longer partial half-lives for alpha decay and spontaneous fission. It is believed that the shell closure will result in higher fission barriers for nuclei around 298Fl, strongly hindering fission and perhaps resulting in fission half-lives 30 orders of magnitude greater than those of nuclei unaffected by the shell closure. For example, the neutron-deficient isotope 284Fl (with N = 170) undergoes fission with a half-life of 2.5 milliseconds, and is thought to be one of the most neutron-deficient nuclides with increased stability in the vicinity of the N = 184 shell closure. Beyond this point, some undiscovered isotopes are predicted to undergo fission with still shorter half-lives, limiting the existence and possible observation of superheavy nuclei far from the island of stability (namely for N < 170 as well as for Z > 120 and N > 184). These nuclei may undergo alpha decay or spontaneous fission in microseconds or less, with some fission half-lives estimated on the order of 10⁻²⁰ seconds in the absence of fission barriers. In contrast, 298Fl (predicted to lie within the region of maximum shell effects) may have a much longer spontaneous fission half-life, possibly on the order of 10¹⁹ years. In the center of the island, there may be competition between alpha decay and spontaneous fission, though the exact ratio is model-dependent. 
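The "30 orders of magnitude" figure can be sanity-checked against the numbers quoted above: scaling the 2.5-millisecond fission half-life of 284Fl by a factor of 10³⁰ lands near the ~10¹⁹ years suggested for 298Fl. A quick check (the comparison is illustrative only; the two nuclides are not literally related by this single factor):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

t_unhindered = 2.5e-3            # s: fission half-life of 284Fl (from the text)
t_hindered = t_unhindered * 1e30  # 30 orders of magnitude longer

years = t_hindered / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # ~8e19 years, consistent with the ~1e19-year
                             # order of magnitude quoted for 298Fl
```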
The alpha decay half-lives of 1700 nuclei with 100 ≤ Z ≤ 130 have been calculated in a quantum tunneling model with both experimental and theoretical alpha decay Q-values, and are in agreement with observed half-lives for some of the heaviest isotopes. The longest-lived nuclides are also predicted to lie on the beta-stability line, for beta decay is predicted to compete with the other decay modes near the predicted center of the island, especially for isotopes of elements 111–115. Unlike other decay modes predicted for these nuclides, beta decay does not change the mass number. Instead, a neutron is converted into a proton or vice versa, producing an adjacent isobar closer to the center of stability (the isobar with the lowest mass excess). For example, significant beta decay branches may exist in nuclides such as 291Fl and 291Nh; these nuclides have only a few more neutrons than known nuclides, and might decay via a "narrow pathway" towards the center of the island of stability. The possible role of beta decay is highly uncertain, as some isotopes of these elements (such as 290Fl and 293Mc) are predicted to have shorter partial half-lives for alpha decay; this would reduce competition and result in alpha decay remaining the dominant decay channel, unless additional stability towards alpha decay exists in superdeformed isomers of these nuclides. Considering all decay modes, various models indicate a shift of the center of the island (i.e., the longest-lived nuclide) from 298Fl to a lower atomic number, and competition between alpha decay and spontaneous fission in these nuclides; these include 100-year half-lives for 291Cn and 293Cn, a 1000-year half-life for 296Cn, a 300-year half-life for 294Ds, and a 3500-year half-life for 293Ds, with 294Ds and 296Cn exactly at the N = 184 shell closure. 
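Quantum-tunneling half-life calculations like those described above are often summarized by empirical Geiger–Nuttall-type relations linking the alpha decay Q-value to the half-life. A sketch using the Viola–Seaborg systematics with one classic parameter set; the coefficients and the test Q-value are taken from the general literature, not from this article, and real calculations for superheavy nuclei are considerably more involved:

```python
import math

def viola_seaborg_half_life(Z: int, Q_alpha_mev: float) -> float:
    """Estimate an alpha-decay half-life (seconds) from the Viola-Seaborg
    systematics: log10(T) = (a*Z + b)/sqrt(Q) + c*Z + d.
    Parameters are one classic fit for even-even nuclei (an assumption;
    modern fits differ slightly and add odd-nucleon hindrance terms)."""
    a, b, c, d = 1.66175, -8.5166, -0.20228, -33.9069
    log10_t = (a * Z + b) / math.sqrt(Q_alpha_mev) + (c * Z + d)
    return 10 ** log10_t

# 212Po (Z = 84, Q_alpha ~ 8.95 MeV) decays in well under a microsecond;
# the formula reproduces the right order of magnitude. Note the steep
# dependence on Q: a smaller Q-value gives a far longer half-life.
print(viola_seaborg_half_life(84, 8.95))
print(viola_seaborg_half_life(84, 7.0))
```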
It has also been posited that this region of enhanced stability for elements with 112 ≤ Z ≤ 118 may instead be a consequence of nuclear deformation, and that the true center of the island of stability for spherical superheavy nuclei lies around 306Ubb (Z = 122, N = 184). This model defines the island of stability as the region with the greatest resistance to fission rather than the longest total half-lives; the nuclide 306Ubb is still predicted to have a short half-life with respect to alpha decay. The island of stability for spherical nuclei may also be a "coral reef" (i.e., a broad region of increased stability without a clear "peak") around N = 184 and 114 ≤ Z ≤ 120, with half-lives rapidly decreasing at higher atomic number, due to combined effects from proton and neutron shell closures. Another potentially significant decay mode for the heaviest superheavy elements was proposed to be cluster decay by Romanian physicists Dorin N. Poenaru and Radu A. Gherghescu and German physicist Walter Greiner. Its branching ratio relative to alpha decay is expected to increase with atomic number such that it may compete with alpha decay around Z = 120, and perhaps become the dominant decay mode for heavier nuclides around Z = 124. As such, it is expected to play a larger role beyond the center of the island of stability (though still influenced by shell effects), unless the center of the island lies at a higher atomic number than predicted. Possible natural occurrence Even though half-lives of hundreds or thousands of years would be relatively long for superheavy elements, they are far too short for any such nuclides to exist primordially on Earth. Additionally, instability of nuclei intermediate between primordial actinides (232Th, 235U, and 238U) and the island of stability may inhibit production of nuclei within the island in r-process nucleosynthesis. 
Various models suggest that spontaneous fission will be the dominant decay mode of nuclei with A > 280, and that neutron-induced or beta-delayed fission—respectively neutron capture and beta decay immediately followed by fission—will become the primary reaction channels. As a result, beta decay towards the island of stability may only occur within a very narrow path or may be entirely blocked by fission, thus precluding the synthesis of nuclides within the island. The non-observation of superheavy nuclides such as 292Hs and 298Fl in nature is thought to be a consequence of a low yield in the r-process resulting from this mechanism, as well as half-lives too short to allow measurable quantities to persist in nature. Various studies utilizing accelerator mass spectrometry and crystal scintillators have reported upper limits of the natural abundance of such long-lived superheavy nuclei on the order of relative to their stable homologs. Despite these obstacles to their synthesis, a 2013 study published by a group of Russian physicists led by Valeriy Zagrebaev proposes that the longest-lived copernicium isotopes may occur at an abundance of 10⁻¹² relative to lead, whereby they may be detectable in cosmic rays. Similarly, in a 2013 experiment, a group of Russian physicists led by Aleksandr Bagulya reported the possible observation of three cosmogenic superheavy nuclei in olivine crystals in meteorites. The atomic number of these nuclei was estimated to be between 105 and 130, with one nucleus likely constrained between 113 and 129, and their lifetimes were estimated to be at least 3,000 years. Although this observation has yet to be confirmed in independent studies, it strongly suggests the existence of the island of stability, and is consistent with theoretical calculations of half-lives of these nuclides. 
The decay of heavy, long-lived elements in the island of stability is a proposed explanation for the unusual presence of the short-lived radioactive isotopes observed in Przybylski's Star. Synthesis and difficulties The manufacture of nuclei on the island of stability is very difficult because the nuclei available as starting materials do not provide the necessary total number of neutrons. Radioactive ion beams (such as 44S) in combination with actinide targets (such as 248Cm) may allow the production of more neutron-rich nuclei nearer to the center of the island of stability, though such beams are not currently available in the required intensities to conduct such experiments. Several heavier isotopes such as 250Cm and 254Es may still be usable as targets, allowing the production of isotopes with one or two more neutrons than known isotopes, though the production of several milligrams of these rare isotopes to create a target is difficult. It may also be possible to probe alternative reaction channels in the same 48Ca-induced fusion-evaporation reactions that populate the most neutron-rich known isotopes, namely those at a lower excitation energy (resulting in fewer neutrons being emitted during de-excitation), or those involving evaporation of charged particles (pxn, evaporating a proton and several neutrons, or αxn, evaporating an alpha particle and several neutrons). This may allow the synthesis of neutron-enriched isotopes of elements 111–117. Although the predicted cross sections are on the order of 1–900 fb, smaller than when only neutrons are evaporated (xn channels), it may still be possible to generate otherwise unreachable isotopes of superheavy elements in these reactions. 
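The exit channels named above (xn, pxn, αxn) amount to conservation bookkeeping on the compound nucleus: Z and A are conserved in fusion, then each evaporated particle subtracts its own Z and A. A small sketch (the channel-string format is an assumption made for illustration):

```python
def compound(z1, a1, z2, a2):
    """Z and A of the compound nucleus formed in complete fusion."""
    return z1 + z2, a1 + a2

def residue(z, a, channel):
    """Evaporation residue for channels written like '3n', 'p2n', 'a2n'
    ('a' denotes an evaporated alpha particle)."""
    if channel.startswith('p'):
        z, a, rest = z - 1, a - 1, channel[1:]
    elif channel.startswith('a'):
        z, a, rest = z - 2, a - 4, channel[1:]
    else:
        rest = channel
    n = int(rest.rstrip('n')) if rest.rstrip('n') else 1
    return z, a - n

# 242Pu (Z=94) + 50Ti (Z=22) fuse to a Z=116, A=292 compound nucleus;
# the p2n channel then yields 289Mc (Z=115, A=289), matching the
# decay chain reported in 2024.
cz, ca = compound(94, 242, 22, 50)
print(residue(cz, ca, 'p2n'))  # (115, 289)
```

Charged-particle channels reach isotopes that pure xn evaporation from the same target–projectile pair cannot, which is exactly their appeal despite the smaller cross sections.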
Some of these heavier isotopes (such as 291Mc, 291Fl, and 291Nh) may also undergo electron capture (converting a proton into a neutron) in addition to alpha decay with relatively long half-lives, decaying to nuclei such as 291Cn that are predicted to lie near the center of the island of stability. However, this remains largely hypothetical as no superheavy nuclei near the beta-stability line have yet been synthesized and predictions of their properties vary considerably across different models. In 2024, a team of researchers at the JINR observed one decay chain of the known isotope 289Mc as a product in the p2n channel of the reaction between 242Pu and 50Ti, an experiment targeting neutron-deficient livermorium isotopes. This was the first successful report of a charged-particle exit channel in a hot fusion reaction between an actinide target and a projectile with Z ≥ 20. The process of slow neutron capture used to produce nuclides as heavy as 257Fm is blocked by short-lived isotopes of fermium that undergo spontaneous fission (for example, 258Fm has a half-life of 370 μs); this is known as the "fermium gap" and prevents the synthesis of heavier elements in such a reaction. It might be possible to bypass this gap, as well as another predicted region of instability around A = 275 and Z = 104–108, in a series of controlled nuclear explosions with a higher neutron flux (about a thousand times greater than fluxes in existing reactors) that mimics the astrophysical r-process. First proposed in 1972 by Meldner, such a reaction might enable the production of macroscopic quantities of superheavy elements within the island of stability; the role of fission in intermediate superheavy nuclides is highly uncertain, and may strongly influence the yield of such a reaction. It may also be possible to generate isotopes in the island of stability such as 298Fl in multi-nucleon transfer reactions in low-energy collisions of actinide nuclei (such as 238U and 248Cm). 
This inverse quasifission (partial fusion followed by fission, with a shift away from mass equilibrium that results in more asymmetric products) mechanism may provide a path to the island of stability if shell effects around Z = 114 are sufficiently strong, though lighter elements such as nobelium and seaborgium (Z = 102–106) are predicted to have higher yields. Preliminary studies of the 238U + 238U and 238U + 248Cm transfer reactions have failed to produce elements heavier than mendelevium (Z = 101), though the increased yield in the latter reaction suggests that the use of even heavier targets such as 254Es (if available) may enable production of superheavy elements. This result is supported by a later calculation suggesting that the yield of superheavy nuclides (with Z ≤ 109) will likely be higher in transfer reactions using heavier targets. A 2018 study of the 238U + 232Th reaction at the Texas A&M Cyclotron Institute by Sara Wuenschel et al. found several unknown alpha decays that may possibly be attributed to new, neutron-rich isotopes of superheavy elements with 104 < Z < 116, though further research is required to unambiguously determine the atomic number of the products. This result strongly suggests that shell effects have a significant influence on cross sections, and that the island of stability could possibly be reached in future experiments with transfer reactions. Other islands of stability Further shell closures beyond the main island of stability in the vicinity of Z = 112–114 may give rise to additional islands of stability. Although predictions for the location of the next magic numbers vary considerably, two significant islands are thought to exist around heavier doubly magic nuclei; the first near 354126 (with 228 neutrons) and the second near 472164 or 482164 (with 308 or 318 neutrons). 
Nuclides within these two islands of stability might be especially resistant to spontaneous fission and have alpha decay half-lives measurable in years, thus having comparable stability to elements in the vicinity of flerovium. Other regions of relative stability may also appear with weaker proton shell closures in beta-stable nuclides; such possibilities include regions near 342126 and 462154. Substantially greater electromagnetic repulsion between protons in such heavy nuclei may greatly reduce their stability, and possibly restrict their existence to localized islands in the vicinity of shell effects. This may have the consequence of isolating these islands from the main chart of nuclides, as intermediate nuclides and perhaps elements in a "sea of instability" would rapidly undergo fission and essentially be nonexistent. It is also possible that beyond a region of relative stability around element 126, heavier nuclei would lie beyond a fission threshold given by the liquid drop model and thus undergo fission with very short lifetimes, rendering them essentially nonexistent even in the vicinity of greater magic numbers. It has also been posited that in the region beyond A > 300, an entire "continent of stability" consisting of a hypothetical phase of stable quark matter, comprising freely flowing up and down quarks rather than quarks bound into protons and neutrons, may exist. Such a form of matter is theorized to be a ground state of baryonic matter with a greater binding energy per baryon than nuclear matter, favoring the decay of nuclear matter beyond this mass threshold into quark matter. If this state of matter exists, it could possibly be synthesized in the same fusion reactions leading to normal superheavy nuclei, and would be stabilized against fission as a consequence of its stronger binding that is enough to overcome Coulomb repulsion.
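The liquid-drop fission threshold invoked above is commonly expressed through the fissility parameter x = (Z²/A)/(Z²/A)crit, with the barrier vanishing as x approaches 1. A rough sketch (the critical value of 48 is an assumed round figure; published parameterizations vary and often include an isospin correction):

```python
def fissility(z, a, crit=48.0):
    """Simple liquid-drop fissility x = (Z^2/A) / (Z^2/A)_crit.

    The liquid-drop fission barrier vanishes as x -> 1; crit is
    an assumed value, commonly quoted in the ~45-50 range.
    """
    return (z * z / a) / crit

# Uranium-238 sits well below the threshold, while a hypothetical
# Z=126, A=354 nucleus approaches it, so only shell effects could
# keep such a nucleus bound against fission:
print(fissility(92, 238))   # ~0.74
print(fissility(126, 354))  # ~0.93
```

On this crude measure, hypothetical nuclei near element 164 would lie at or beyond x ≈ 1, which is why their existence is argued to depend entirely on localized shell stabilization.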
Progesterone
Progesterone (P4) is an endogenous steroid and progestogen sex hormone involved in the menstrual cycle, pregnancy, and embryogenesis of humans and other species. It belongs to a group of steroid hormones called the progestogens and is the major progestogen in the body. Progesterone has a variety of important functions in the body. It is also a crucial metabolic intermediate in the production of other endogenous steroids, including the sex hormones and the corticosteroids, and plays an important role in brain function as a neurosteroid. In addition to its role as a natural hormone, progesterone is also used as a medication, such as in combination with estrogen for contraception, to reduce the risk of uterine or cervical cancer, in hormone replacement therapy, and in feminizing hormone therapy. It was first prescribed in 1934. Biological activity Progesterone is the most important progestogen in the body. As a potent agonist of the nuclear progesterone receptor (nPR) (with an affinity of KD = 1 nM), its effects on gene transcription play a major role in the regulation of female reproduction. In addition, progesterone is an agonist of the more recently discovered membrane progesterone receptors (mPRs), whose expression regulates reproductive functions (oocyte maturation, labor, and sperm motility) and cancer, although additional research is required to further define their roles. It also functions as a ligand of PGRMC1 (progesterone receptor membrane component 1), which impacts tumor progression, metabolic regulation, and viability control of nerve cells. Moreover, progesterone is also known to be an antagonist of the sigma σ1 receptor, a negative allosteric modulator of nicotinic acetylcholine receptors, and a potent antagonist of the mineralocorticoid receptor (MR). 
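The quoted nPR affinity (KD = 1 nM) translates directly into equilibrium receptor occupancy via the law of mass action. A brief sketch (the ligand concentrations used are illustrative, not physiological measurements):

```python
def fractional_occupancy(ligand_nm, kd_nm=1.0):
    """Equilibrium receptor occupancy from the law of mass action:
    occupancy = [L] / ([L] + KD). KD defaults to 1 nM, the affinity
    quoted for the nPR; ligand_nm is the free ligand in nM."""
    return ligand_nm / (ligand_nm + kd_nm)

# With KD = 1 nM, the receptor is half-occupied at 1 nM free ligand
# and ~91% occupied at 10 nM:
print(fractional_occupancy(1.0))   # 0.5
print(fractional_occupancy(10.0))  # ~0.909
```

A nanomolar KD means the receptor responds over roughly the 0.1–10 nM range, consistent with progesterone acting as a potent agonist at physiological concentrations.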
Progesterone prevents MR activation by binding to this receptor with an affinity exceeding even those of aldosterone and glucocorticoids such as cortisol and corticosterone, and produces antimineralocorticoid effects, such as natriuresis, at physiological concentrations. In addition, progesterone binds to and behaves as a partial agonist of the glucocorticoid receptor (GR), albeit with very low potency (an EC50 more than 100-fold higher than that of cortisol). Progesterone, through its neurosteroid active metabolites such as 5α-dihydroprogesterone and allopregnanolone, acts indirectly as a positive allosteric modulator of the GABAA receptor. Progesterone and some of its metabolites, such as 5β-dihydroprogesterone, are agonists of the pregnane X receptor (PXR), albeit weakly so (EC50 >10 μM). Accordingly, progesterone induces several hepatic cytochrome P450 enzymes, such as CYP3A4, especially during pregnancy when concentrations are much higher than usual. Perimenopausal women have been found to have greater CYP3A4 activity relative to men and postmenopausal women, and it has been inferred that this may be due to the higher progesterone levels present in perimenopausal women. Progesterone modulates the activity of CatSper (cation channels of sperm) voltage-gated Ca2+ channels. Since eggs release progesterone, sperm may use progesterone as a homing signal to swim toward eggs (chemotaxis). As a result, it has been suggested that substances that block the progesterone binding site on CatSper channels could potentially be used in male contraception. Biological function Hormonal interactions Progesterone has a number of physiological effects that are amplified in the presence of estrogens. Estrogens, through estrogen receptors (ERs), induce or upregulate the expression of the PR. One example of this is in breast tissue, where estrogens allow progesterone to mediate lobuloalveolar development. 
Elevated levels of progesterone potently reduce the sodium-retaining activity of aldosterone, resulting in natriuresis and a reduction in extracellular fluid volume. Progesterone withdrawal, on the other hand, is associated with a temporary increase in sodium retention (reduced natriuresis, with an increase in extracellular fluid volume) due to the compensatory increase in aldosterone production, which combats the blockade of the mineralocorticoid receptor by the previously elevated level of progesterone. Early sexual differentiation Progesterone plays a role in early human sexual differentiation. During early fetal development, the undifferentiated gonads can develop into either testes or ovaries; the presence of the Y chromosome leads to the development of testes. The testes then produce testosterone, which is converted to 5α-dihydrotestosterone (DHT) via the enzyme 5α-reductase. DHT is a potent androgen responsible for the masculinization of the external genitalia, including the penis and scrotum, and for the development of the prostate gland. Placental progesterone also serves as the feedstock for the DHT produced via the backdoor pathway, which operates in multiple non-gonadal tissues of the fetus. Deficiencies in this pathway, arising in the absence of adequate levels of the relevant steroidogenic enzymes during fetal development, lead to undervirilization of the male fetus; this can result in incomplete development of the male genitalia, ambiguous genitalia, or even female genitalia in some cases. 
Therefore, both DHT and progesterone play crucial roles in early fetal sexual differentiation, with progesterone acting as a precursor molecule for DHT synthesis and DHT promoting the development of male genitalia. Reproductive system Progesterone has key effects via non-genomic signalling on human sperm as they migrate through the female reproductive tract before fertilization occurs, though the receptor(s) as yet remain unidentified. Detailed characterisation of the events occurring in sperm in response to progesterone has identified intracellular calcium transients and sustained changes, as well as slow calcium oscillations, now thought to possibly regulate motility. Progesterone itself is produced by the ovaries. Progesterone has also been shown to demonstrate effects on octopus spermatozoa. Progesterone is sometimes called the "hormone of pregnancy", and it has many roles relating to the development of the fetus: Progesterone converts the endometrium to its secretory stage to prepare the uterus for implantation. At the same time progesterone affects the vaginal epithelium and cervical mucus, making it thick and impenetrable to sperm. Progesterone is anti-mitogenic in endometrial epithelial cells, and as such, mitigates the trophic effects of estrogen. If pregnancy does not occur, progesterone levels will decrease, leading to menstruation. Normal menstrual bleeding is progesterone-withdrawal bleeding. If ovulation does not occur and the corpus luteum does not develop, levels of progesterone may be low, leading to anovulatory dysfunctional uterine bleeding. During implantation and gestation, progesterone appears to decrease the maternal immune response to allow for the acceptance of the pregnancy. Progesterone decreases contractility of the uterine smooth muscle. This effect contributes to prevention of preterm labor. 
Studies have shown that in individuals who are pregnant with a single fetus, asymptomatic in the prenatal stage, and at a high risk of giving pre-term birth spontaneously, vaginal progesterone medication has been found to be effective in preventing spontaneous pre-term birth. Individuals who are at a high risk of giving pre-term birth spontaneously are those who have a short cervix of less than 25 mm or have previously given pre-term birth spontaneously. Although pre-term births are generally considered to be less than 37 weeks, these studies found that vaginal progesterone is associated with fewer pre-term births of less than 34 weeks. A drop in progesterone levels is possibly one step that facilitates the onset of labor. In addition, progesterone inhibits lactation during pregnancy. The fall in progesterone levels following delivery is one of the triggers for milk production. The fetus metabolizes placental progesterone in the production of adrenal steroids. Breasts Lobuloalveolar development Progesterone plays an important role in breast development. In conjunction with prolactin, it mediates lobuloalveolar maturation of the mammary glands during pregnancy to allow for milk production and thus lactation and breastfeeding of offspring following parturition (childbirth). Estrogen induces expression of the PR in breast tissue and hence progesterone is dependent on estrogen to mediate lobuloalveolar development. It has been found that RANKL is a critical downstream mediator of progesterone-induced lobuloalveolar maturation. RANKL knockout mice show an almost identical mammary phenotype to PR knockout mice, including normal mammary ductal development but complete failure of the development of lobuloalveolar structures. Ductal development Though to a far lesser extent than estrogen, which is the major mediator of mammary ductal development (via the ERα), progesterone may be involved in ductal development of the mammary glands to some extent as well. 
PR knockout mice or mice treated with the PR antagonist mifepristone show delayed although otherwise normal mammary ductal development at puberty. In addition, mice modified to have overexpression of PRA display ductal hyperplasia, and progesterone induces ductal growth in the mouse mammary gland. Progesterone mediates ductal development mainly via induction of the expression of amphiregulin, the same growth factor that estrogen primarily induces the expression of to mediate ductal development. These animal findings suggest that, while not essential for full mammary ductal development, progesterone seems to play a potentiating or accelerating role in estrogen-mediated mammary ductal development. Breast cancer risk Progesterone also appears to be involved in the pathophysiology of breast cancer, though its role, and whether it is a promoter or inhibitor of breast cancer risk, has not been fully elucidated. Most progestins, or synthetic progestogens, like medroxyprogesterone acetate, have been found to increase the risk of breast cancer in postmenopausal people in combination with estrogen as a component of menopausal hormone therapy. The combination of natural oral progesterone or the atypical progestin dydrogesterone with estrogen has been associated with less risk of breast cancer than progestins plus estrogen. However, this may simply be an artifact of the low progesterone levels produced with oral progesterone. More research is needed on the role of progesterone in breast cancer. Skin health The estrogen receptor, as well as the progesterone receptor, have been detected in the skin, including in keratinocytes and fibroblasts. At menopause and thereafter, decreased levels of female sex hormones result in atrophy, thinning, and increased wrinkling of the skin and a reduction in skin elasticity, firmness, and strength. 
These skin changes constitute an acceleration in skin aging and are the result of decreased collagen content, irregularities in the morphology of epidermal skin cells, decreased ground substance between skin fibers, and reduced capillaries and blood flow. The skin also becomes more dry during menopause, which is due to reduced skin hydration and surface lipids (sebum production). Along with chronological aging and photoaging, estrogen deficiency in menopause is one of the three main factors that predominantly influences skin aging. Hormone replacement therapy, consisting of systemic treatment with estrogen alone or in combination with a progestogen, has well-documented and considerable beneficial effects on the skin of postmenopausal people. These benefits include increased skin collagen content, skin thickness and elasticity, and skin hydration and surface lipids. Topical estrogen has been found to have similar beneficial effects on the skin. In addition, a study has found that topical 2% progesterone cream significantly increases skin elasticity and firmness and observably decreases wrinkles in peri- and postmenopausal people. Skin hydration and surface lipids, on the other hand, did not significantly change with topical progesterone. These findings suggest that progesterone, like estrogen, also has beneficial effects on the skin, and may be independently protective against skin aging. Sexuality Libido Progesterone and its neurosteroid active metabolite allopregnanolone appear to be importantly involved in libido in females. Homosexuality Dr. Diana Fleischman, of the University of Portsmouth, and colleagues looked for a relationship between progesterone and sexual attitudes in 92 women. Their research, published in the Archives of Sexual Behavior found that women who had higher levels of progesterone scored higher on a questionnaire measuring homoerotic motivation. 
They also found that men who had high levels of progesterone were more likely to have higher homoerotic motivation scores after affiliative priming compared to men with low levels of progesterone. Nervous system Progesterone, like pregnenolone and dehydroepiandrosterone (DHEA), belongs to an important group of endogenous steroids called neurosteroids. It can be metabolized within all parts of the central nervous system. Neurosteroids are neuromodulators, and are neuroprotective, neurogenic, and regulate neurotransmission and myelination. The effects of progesterone as a neurosteroid are mediated predominantly through its interactions with non-nuclear PRs, namely the mPRs and PGRMC1, as well as certain other receptors, such as the σ1 and nACh receptors. Brain damage Previous studies have shown that progesterone supports the normal development of neurons in the brain, and that the hormone has a protective effect on damaged brain tissue. It has been observed in animal models that females have reduced susceptibility to traumatic brain injury and this protective effect has been hypothesized to be caused by increased circulating levels of estrogen and progesterone in females. Proposed mechanism The mechanism of progesterone's protective effects may be the reduction of inflammation that follows brain trauma and hemorrhage. Damage incurred by traumatic brain injury is believed to be caused in part by mass depolarization leading to excitotoxicity. One way in which progesterone helps to alleviate some of this excitotoxicity is by blocking the voltage-dependent calcium channels that trigger neurotransmitter release. It does so by manipulating the signaling pathways of transcription factors involved in this release. Another method for reducing the excitotoxicity is by up-regulating the GABAA receptor, a widespread inhibitory neurotransmitter receptor. Progesterone has also been shown to prevent apoptosis in neurons, a common consequence of brain injury. 
It does so by inhibiting enzymes involved in the apoptosis pathway specifically concerning the mitochondria, such as activated caspase 3 and cytochrome c. Not only does progesterone help prevent further damage, it has also been shown to aid in neuroregeneration. One of the serious effects of traumatic brain injury includes edema. Animal studies show that progesterone treatment leads to a decrease in edema levels by increasing the concentration of macrophages and microglia sent to the injured tissue. This was observed in the form of reduced leakage from the blood brain barrier in secondary recovery in progesterone-treated rats. In addition, progesterone was observed to have antioxidant properties, reducing the concentration of oxygen free radicals faster than in untreated controls. There is also evidence that the addition of progesterone can help remyelinate damaged axons due to trauma, restoring some lost neural signal conduction. Another way progesterone aids in regeneration includes increasing the circulation of endothelial progenitor cells in the brain. This helps new vasculature to grow around scar tissue, which helps repair the area of insult. Addiction Progesterone enhances the function of serotonin receptors in the brain, so an excess or deficit of progesterone has the potential to result in significant neurochemical issues. This provides an explanation for why some people resort to substances that enhance serotonin activity such as nicotine, alcohol, and cannabis when their progesterone levels fall below optimal levels. Sex differences in hormone levels may induce women to respond differently than men to nicotine. When women undergo cyclic changes or different hormonal transition phases (menopause, pregnancy, adolescence), there are changes in their progesterone levels. Therefore, females have an increased biological vulnerability to nicotine's reinforcing effects compared to males and progesterone may be used to counter this enhanced vulnerability. 
This information supports the idea that progesterone can affect behavior. Similar to nicotine, cocaine also increases the release of dopamine in the brain. The neurotransmitter is involved in the reward center and is one of the main neurotransmitters involved with substance abuse and reliance. In a study of cocaine users, it was reported that progesterone reduced craving and the feeling of being stimulated by cocaine. Thus, progesterone was suggested as an agent that decreases cocaine craving by reducing the dopaminergic properties of the drug. Societal In a 2012 University of Amsterdam study of 120 women, women's luteal phase (higher levels of progesterone, and increasing levels of estrogen) was correlated with a lower level of competitive behavior in gambling and math contest scenarios, while their premenstrual phase (sharply-decreasing levels of progesterone, and decreasing levels of estrogen) was correlated with a higher level of competitive behavior. Other effects Progesterone also has a role in skin elasticity and bone strength, in respiration, in nerve tissue and in female sexuality, and the presence of progesterone receptors in certain muscle and fat tissue may hint at a role in sexually dimorphic proportions of those. During pregnancy, progesterone is said to decrease uterine irritability. During pregnancy, progesterone helps to suppress immune responses of the mother to fetal antigens, which prevents rejection of the fetus. Progesterone raises epidermal growth factor-1 (EGF-1) levels, a factor often used to induce proliferation, and used to sustain cultures, of stem cells. Progesterone increases core temperature (thermogenic function) during ovulation. Progesterone reduces spasm and relaxes smooth muscle. Bronchi are widened and mucus regulated. (PRs are widely present in submucosal tissue.) Progesterone acts as an antiinflammatory agent and regulates the immune response. Progesterone reduces gall-bladder activity. 
Progesterone normalizes blood clotting and vascular tone, zinc and copper levels, cell oxygen levels, and use of fat stores for energy. Progesterone may affect gum health, increasing risk of gingivitis (gum inflammation). Progesterone appears to prevent endometrial cancer (involving the uterine lining) by regulating the effects of estrogen. Progesterone plays an important role in the signaling of insulin release and pancreatic function, and may affect the susceptibility to diabetes or gestational diabetes. Progesterone levels in the blood were found to be lower in those who had higher weight and higher BMI among those who became pregnant through in vitro fertilization. Current data shows that micronized progesterone, which is chemically identical to the progesterone produced in people's bodies, in combination with estrogen in menopausal hormone therapy does not seem to have significant effects on venous thromboembolism (blood clots in veins) and ischemic stroke (lack of blood flow to the brain due to blockage of a blood vessel that supplies the brain). However, more studies need to be conducted to see whether or not micronized progesterone alone or in combined menopausal hormone therapy changes the risk of myocardial infarctions (heart attacks). There have not been any studies done yet on the effects of micronized progesterone on hair loss due to menopause. Despite suggestions for using hormone therapy to prevent loss of muscle mass in post-menopausal individuals (50 and older), menopausal hormone therapy involving either estrogen alone or estrogen and progesterone has not been found to preserve muscle mass. Menopausal hormone therapy also does not result in body weight reduction, BMI reduction, or change in glucose metabolism. Biochemistry Biosynthesis In mammals, progesterone, like all other steroid hormones, is synthesized from pregnenolone, which itself is derived from cholesterol. 
Cholesterol undergoes double oxidation to produce 22R-hydroxycholesterol and then 20α,22R-dihydroxycholesterol. This vicinal diol is then further oxidized with loss of the side chain starting at position C22 to produce pregnenolone. This reaction is catalyzed by cytochrome P450scc. The conversion of pregnenolone to progesterone takes place in two steps: first, the 3β-hydroxyl group is oxidized to a keto group, and second, the double bond is moved from C5 to C4 through a keto/enol tautomerization reaction. This reaction is catalyzed by 3β-hydroxysteroid dehydrogenase/δ5-4-isomerase. Progesterone in turn is the precursor of the mineralocorticoid aldosterone, and, after conversion to 17α-hydroxyprogesterone, of cortisol and androstenedione. Androstenedione can be converted to testosterone, estrone, and estradiol, making progesterone an upstream precursor in testosterone synthesis. Pregnenolone and progesterone can also be synthesized by yeast. Approximately 25 mg of progesterone is secreted from the ovaries per day, while the adrenal glands produce about 2 mg of progesterone per day. Distribution Progesterone binds extensively to plasma proteins, including albumin (50–54%) and transcortin (43–48%). Its affinity for albumin is similar to its affinity for the progesterone receptor (PR). Metabolism The metabolism of progesterone is rapid and extensive and occurs mainly in the liver, though enzymes that metabolize progesterone are also expressed widely in the brain, skin, and various other extrahepatic tissues. Progesterone has an elimination half-life of only approximately 5 minutes in circulation. The metabolism of progesterone is complex, and it may form as many as 35 different unconjugated metabolites when ingested orally. Progesterone is highly susceptible to enzymatic reduction via reductases and hydroxysteroid dehydrogenases due to its double bond (between the C4 and C5 positions) and its two ketones (at the C3 and C20 positions).
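The biosynthetic route just described can be laid out as a small ordered data structure. This is purely illustrative: the compound and enzyme names are taken from the text above, while the tuple layout and the `precursors_of` helper are hypothetical conveniences, not an established API.

```python
# Illustrative sketch of the biosynthetic route described above.
# Each step is (substrate, product, catalyst); names follow the text,
# the data structure itself is an assumption for illustration only.
PATHWAY = [
    ("cholesterol", "22R-hydroxycholesterol", "P450scc"),
    ("22R-hydroxycholesterol", "20alpha,22R-dihydroxycholesterol", "P450scc"),
    ("20alpha,22R-dihydroxycholesterol", "pregnenolone", "P450scc"),
    ("pregnenolone", "progesterone", "3beta-HSD/delta5-4-isomerase"),
]

def precursors_of(compound):
    """Walk the pathway backwards from a compound to its ultimate precursor."""
    chain = [compound]
    lookup = {product: substrate for substrate, product, _ in PATHWAY}
    while chain[-1] in lookup:
        chain.append(lookup[chain[-1]])
    return list(reversed(chain))
```

Calling `precursors_of("progesterone")` walks the chain back to cholesterol, mirroring the order of the prose.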
The major metabolic pathway of progesterone is reduction by 5α-reductase and 5β-reductase into the dihydrogenated 5α-dihydroprogesterone and 5β-dihydroprogesterone, respectively. This is followed by the further reduction of these metabolites via 3α-hydroxysteroid dehydrogenase and 3β-hydroxysteroid dehydrogenase into the tetrahydrogenated allopregnanolone, pregnanolone, isopregnanolone, and epipregnanolone. Subsequently, 20α-hydroxysteroid dehydrogenase and 20β-hydroxysteroid dehydrogenase reduce these metabolites to form the corresponding hexahydrogenated pregnanediols (eight different isomers in total), which are then conjugated via glucuronidation and/or sulfation, released from the liver into circulation, and excreted by the kidneys into the urine. The major metabolite of progesterone in the urine is the 3α,5β,20α isomer of pregnanediol glucuronide, which has been found to constitute 15 to 30% of an injected dose of progesterone. Other metabolites of progesterone formed by the enzymes in this pathway include 3α-dihydroprogesterone, 3β-dihydroprogesterone, 20α-dihydroprogesterone, and 20β-dihydroprogesterone, as well as various combination products of the enzymes aside from those already mentioned. Progesterone can also first be hydroxylated (see below) and then reduced. Endogenous progesterone is metabolized approximately 50% into 5α-dihydroprogesterone in the corpus luteum, 35% into 3β-dihydroprogesterone in the liver, and 10% into 20α-dihydroprogesterone. Relatively small portions of progesterone are hydroxylated via 17α-hydroxylase (CYP17A1) and 21-hydroxylase (CYP21A2) into 17α-hydroxyprogesterone and 11-deoxycorticosterone (21-hydroxyprogesterone), respectively, and pregnanetriols are formed secondarily to 17α-hydroxylation. Even smaller amounts of progesterone may also be hydroxylated via 11β-hydroxylase (CYP11B1), and to a lesser extent via aldosterone synthase (CYP11B2), into 11β-hydroxyprogesterone.
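The approximate split quoted above (about 50% to 5α-dihydroprogesterone, 35% to 3β-dihydroprogesterone, 10% to 20α-dihydroprogesterone) lends itself to a quick arithmetic sketch. Combining these fractions with the ~25 mg/day ovarian secretion figure from the Biosynthesis section is my own illustrative choice, not something the text itself does.

```python
# Approximate fate of endogenous progesterone, using the fractions
# quoted in the text. Applying them to the ~25 mg/day ovarian
# secretion figure is illustrative arithmetic only.
FRACTIONS = {
    "5alpha-dihydroprogesterone (corpus luteum)": 0.50,
    "3beta-dihydroprogesterone (liver)": 0.35,
    "20alpha-dihydroprogesterone": 0.10,
}

def metabolite_amounts(secreted_mg=25.0):
    """Split a daily secreted amount (mg) across the quoted fractions."""
    amounts = {name: secreted_mg * f for name, f in FRACTIONS.items()}
    # The quoted fractions sum to 95%; keep the rest explicit.
    amounts["other/unaccounted"] = secreted_mg * (1.0 - sum(FRACTIONS.values()))
    return amounts
```

On the 25 mg/day figure this puts roughly 12.5 mg into the 5α pathway, with about 5% left unaccounted by the quoted fractions.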
In addition, progesterone can be hydroxylated in the liver by other cytochrome P450 enzymes which are not steroid-specific. 6β-Hydroxylation, which is catalyzed mainly by CYP3A4, is the major transformation, and is responsible for approximately 70% of cytochrome P450-mediated progesterone metabolism. Other routes include 6α-, 16α-, and 16β-hydroxylation. However, treatment of women with ketoconazole, a strong CYP3A4 inhibitor, had minimal effects on progesterone levels, producing only a slight and non-significant increase, suggesting that cytochrome P450 enzymes play only a small role in progesterone metabolism. Levels Progesterone levels are relatively low during the preovulatory phase of the menstrual cycle, rise after ovulation, and are elevated during the luteal phase. Progesterone levels tend to be less than 2 ng/mL prior to ovulation and greater than 5 ng/mL after ovulation. If pregnancy occurs, human chorionic gonadotropin is released, sustaining the corpus luteum and allowing it to continue producing progesterone. Between 7 and 9 weeks of gestation, the placenta begins to produce progesterone in place of the corpus luteum in a process called the luteal-placental shift. After the luteal-placental shift, progesterone levels rise further and may reach 100 to 200 ng/mL at term. Whether a decrease in progesterone levels is critical for the initiation of labor has been debated and may be species-specific. After delivery of the placenta and during lactation, progesterone levels are very low. Progesterone levels are low in children and postmenopausal people. Adult males have levels similar to those in women during the follicular phase of the menstrual cycle. Ranges Blood test results should always be interpreted using the reference ranges provided by the laboratory that performed the test. Example reference ranges are listed below.
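As a rough illustration of the values just quoted (under 2 ng/mL before ovulation, over 5 ng/mL after), one could write a tiny classifier like the following. The thresholds and labels come only from the article's illustrative figures; as the text stresses, real results must be interpreted against the performing laboratory's own reference ranges, and this function is a sketch, not a clinical tool.

```python
# Hedged helper: compare a serum progesterone value (ng/mL) against
# the illustrative pre-/post-ovulation thresholds quoted above.
# Real interpretation must use the lab's own reference ranges.
def interpret_progesterone(ng_per_ml):
    if ng_per_ml < 2.0:
        return "consistent with pre-ovulatory (follicular) levels"
    if ng_per_ml > 5.0:
        return "consistent with post-ovulatory (luteal) levels"
    return "indeterminate by these illustrative thresholds"
```

For instance, a value of 1.0 ng/mL falls in the pre-ovulatory band, 8.0 ng/mL in the post-ovulatory band, and 3.0 ng/mL between the two quoted cutoffs.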
Sources Animal Progesterone is produced in high amounts in the ovaries (by the corpus luteum) from the onset of puberty to menopause, and is also produced in smaller amounts by the adrenal glands after the onset of adrenarche in both males and females. To a lesser extent, progesterone is produced in nervous tissue, especially in the brain, and in adipose (fat) tissue as well. During human pregnancy, progesterone is produced in increasingly high amounts by the ovaries and placenta. At first, the source is the corpus luteum that has been "rescued" by the presence of human chorionic gonadotropin (hCG) from the conceptus. However, after the 8th week, production of progesterone shifts to the placenta. The placenta utilizes maternal cholesterol as the initial substrate, and most of the produced progesterone enters the maternal circulation, but some is picked up by the fetal circulation and used as substrate for fetal corticosteroids. At term the placenta produces about 250 mg of progesterone per day. Milk products are an additional animal source of progesterone; after their consumption, the level of bioavailable progesterone rises. Plants In at least one plant, Juglans regia, progesterone has been detected. In addition, progesterone-like steroids are found in Dioscorea mexicana, a plant of the yam family native to Mexico. It contains the steroid diosgenin, which can be extracted and converted into progesterone. Diosgenin and progesterone are also found in other Dioscorea species, as well as in other plants that are not closely related, such as fenugreek. Another plant that contains substances readily convertible to progesterone is Dioscorea pseudojaponica, native to Taiwan. Research has shown that the Taiwanese yam contains saponins, steroids that can be converted to diosgenin and thence to progesterone.
Many other Dioscorea species of the yam family contain steroidal substances from which progesterone can be produced. Among the more notable of these are Dioscorea villosa and Dioscorea polygonoides. One study showed that Dioscorea villosa contains 3.5% diosgenin, and Dioscorea polygonoides has been found to contain 2.64% diosgenin as shown by gas chromatography-mass spectrometry. Many Dioscorea species of the yam family grow in countries with tropical and subtropical climates. Medical use Progesterone is used as a medication. It is used in combination with estrogens mainly in hormone therapy for menopausal symptoms and low sex hormone levels. It may also be used alone to treat menopausal symptoms. Studies have shown that transdermal progesterone (skin patch) and oral micronized progesterone are effective treatments for certain symptoms of menopause such as hot flashes and night sweats, otherwise referred to as vasomotor symptoms or VMS. It is also used to support pregnancy and fertility and to treat gynecological disorders. Progesterone has been shown to prevent miscarriage in those with (1) vaginal bleeding early in their current pregnancy and (2) a previous history of miscarriage. Progesterone can be taken by mouth, through the vagina, and by injection into muscle or fat, among other routes. Chemistry Progesterone is a naturally occurring pregnane steroid and is also known as pregn-4-ene-3,20-dione. It has a double bond (4-ene) between the C4 and C5 positions and two ketone groups (3,20-dione), one at the C3 position and the other at the C20 position. Synthesis Progesterone is commercially produced by semisynthesis. Two main routes are used: one from yam diosgenin, pioneered by Marker in 1940, and one based on soy phytosterols, scaled up in the 1970s. Additional (not necessarily economical) semisyntheses of progesterone have also been reported starting from a variety of steroids.
For example, cortisone can be simultaneously deoxygenated at the C-17 and C-21 positions by treatment with iodotrimethylsilane in chloroform to produce 11-keto-progesterone (ketogestin), which in turn can be reduced at position 11 to yield progesterone. Marker semisynthesis An economical semisynthesis of progesterone from the plant steroid diosgenin isolated from yams was developed by Russell Marker in 1940 for the Parke-Davis pharmaceutical company. This synthesis is known as the Marker degradation. The 16-DPA intermediate is important to the synthesis of many other medically important steroids. A very similar approach can produce 16-DPA from solanine. Soy semisynthesis Progesterone can also be made from the stigmasterol found in soybean oil (cf. the work of Percy Julian). Total synthesis A total synthesis of progesterone was reported in 1971 by W. S. Johnson. The synthesis begins by reacting the phosphonium salt 7 with phenyl lithium to produce the phosphonium ylide 8. The ylide 8 is reacted with an aldehyde to produce the alkene 9. The ketal protecting groups of 9 are hydrolyzed to produce the diketone 10, which in turn is cyclized to form the cyclopentenone 11. The ketone of 11 is reacted with methyl lithium to yield the tertiary alcohol 12, which in turn is treated with acid to produce the tertiary cation 13. The key step of the synthesis is the π-cation cyclization of 13, in which the B-, C-, and D-rings of the steroid are simultaneously formed to produce 14. This step resembles the cationic cyclization reaction used in the biosynthesis of steroids and hence is referred to as biomimetic. In the next step the enol orthoester is hydrolyzed to produce the ketone 15. The cyclopentene A-ring is then opened by oxidizing with ozone to produce 16. Finally, the diketone 17 undergoes an intramolecular aldol condensation on treatment with aqueous potassium hydroxide to produce progesterone. History George W. Corner and Willard M.
Allen discovered the hormonal action of progesterone in 1929. By 1931–1932, nearly pure crystalline material of high progestational activity had been isolated from the corpus luteum of animals, and by 1934, pure crystalline progesterone had been obtained and its chemical structure determined. This was achieved by Adolf Butenandt at the Chemisches Institut of the Technical University in Danzig, who extracted this new compound from several thousand liters of urine. Chemical synthesis of progesterone from stigmasterol and pregnanediol was accomplished later that year. Up to this point, progesterone, known generically as corpus luteum hormone, had been referred to by several groups by different names, including corporin, lutein, luteosterone, and progestin. In 1935, at the time of the Second International Conference on the Standardization of Sex Hormones in London, England, a compromise was made between the groups and the name progesterone (progestational steroidal ketone) was created. Veterinary use The use of progesterone tests in dog breeding to pinpoint ovulation is becoming more widespread. There are several tests available, but the most reliable is a blood test with blood drawn by a veterinarian and sent to a lab for processing. Results can usually be obtained within 24 to 72 hours. The rationale for using progesterone tests is that progesterone levels begin to rise close to the preovulatory surge in gonadotrophins and continue rising through ovulation and estrus. When progesterone reaches certain levels, it can signal the stage of estrus the female is in. Prediction of the birth date of the pending litter can be very accurate if the ovulation date is known. Puppies deliver within a day or two of 63 days (9 weeks) of gestation in most cases. It is not possible to determine pregnancy using progesterone tests once a breeding has taken place, however, because in dogs progesterone levels remain elevated throughout the estrus period.
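The whelping-date arithmetic described above (delivery within a day or two of 9 weeks, i.e. 63 days, after ovulation) is simple enough to sketch directly. The function name and the two-day default window are illustrative choices, not veterinary standards.

```python
from datetime import date, timedelta

# Sketch of the due-date arithmetic described above: whelping is
# expected about 63 days (9 weeks) after ovulation, give or take a
# day or two. The window width and function name are assumptions.
def whelping_window(ovulation_date, slack_days=2):
    """Return (earliest, expected, latest) whelping dates."""
    due = ovulation_date + timedelta(days=63)
    return (due - timedelta(days=slack_days), due,
            due + timedelta(days=slack_days))
```

For an ovulation date of 1 March 2024, the expected whelping date works out to 3 May 2024.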
Pricing Pricing for progesterone can vary depending on location, insurance coverage, discount coupons, quantity, shortages, manufacturers, brand or generic versions, different pharmacies, and so on. At the time of writing, 30 capsules of 100 mg generic progesterone from CVS Pharmacy cost around $40 without any discounts or insurance applied, while the brand version, Prometrium, is around $450 for 30 capsules. In comparison, Walgreens offers 30 capsules of 100 mg of the generic version for $51 without insurance or coupons applied, while the brand name costs around $431 for 30 capsules of 100 mg.
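Since all four quotes are for 30 capsules, a per-capsule comparison is straightforward arithmetic. The figures below are just the article's example prices, not current quotes.

```python
# Per-capsule comparison of the example prices quoted above
# (30 capsules of 100 mg each; figures are the article's examples).
prices = {
    ("CVS", "generic"): 40,
    ("CVS", "brand"): 450,
    ("Walgreens", "generic"): 51,
    ("Walgreens", "brand"): 431,
}

per_capsule = {k: round(total / 30, 2) for k, total in prices.items()}
```

On these figures the brand versions cost roughly ten times as much per capsule as the generics.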
https://en.wikipedia.org/wiki/Moscow%20Metro
Moscow Metro
The Moscow Metro is a metro system serving the Russian capital of Moscow as well as the neighbouring cities of Krasnogorsk, Reutov, Lyubertsy and Kotelniki in Moscow Oblast. Opened in 1935 with one line and 13 stations, it was the first underground railway system in the Soviet Union. The Moscow Metro has 297 stations, and its route length (excluding the light-rail monorail) makes it the 8th-longest system in the world, the longest in Europe and the longest outside China. It is also the only system with three circle lines. The system is mostly underground, with the deepest section at the Park Pobedy station, one of the world's deepest underground stations. It is the busiest metro system in Europe, the busiest in the world outside Asia, and is considered a tourist attraction in itself thanks to its lavish interior decoration. The Moscow Metro is a world leader in the frequency of train traffic, as intervals during peak hours often do not exceed 90 seconds. In February 2023, Moscow became the first city in the world to reduce metro train intervals to 80 seconds, though in practice intervals rarely fall below 90 seconds. Name The full legal name of the metro has been the Moscow Order of Lenin and Order of the Red Banner of Labor V.I. Lenin Metro since 1955. This is usually shortened to V.I. Lenin Metro, and this shorter official name appears on many stations. Although there were proposals to remove Lenin from the official name, it still stands. During the 1990s and 2000s, Lenin's name was excluded from the signage on newly built and reconstructed stations. In 2016, a Metro representative stated that Lenin's name would remain on station name plates as it aligns with the official name of the company, unchanged since the Soviet era. The first official name of the metro was the L. M. Kaganovich Metro, after Lazar Kaganovich (see History section). However, when the Metro was awarded the Order of Lenin, it was officially renamed the Moscow Order of Lenin L. M.
Kaganovich Metro in 1947. When the metro was renamed in 1955, the Okhotny Ryad station was renamed "Imeni Kaganovicha" in honor of Lazar Kaganovich. In 1957, the station's original Okhotny Ryad name was reinstated. Logo The first line of the Moscow Metro was launched in 1935, complete with the first logo, the capital M paired with the text "МЕТРО". There is no accurate information about the author of the logo, so it is often attributed to the architects of the first stations – Samuil Kravets, Ivan Taranov and Nadezhda Bykova. At the opening in 1935, the M letter on the logo had no definite shape. In 2014, the Moscow Metro adopted a standardized logo for the network as part of a broader rebranding of Moscow Transport. Operations The Moscow Metro, a state-owned enterprise, consists of 15 lines and 263 stations organized in a spoke-hub distribution paradigm, with the majority of rail lines running radially from the centre of Moscow to the outlying areas. The Koltsevaya Line (line 5) forms a circle which enables passenger travel between these radial lines, and the newer Moscow Central Circle (line 14) and the even newer Bolshaya Koltsevaya line (line 11) form larger circles that serve a similar purpose on the middle periphery. Most stations and lines are underground, but some lines have at-grade and elevated sections; the Filyovskaya Line, the Butovskaya Line and the Central Circle Line are the three lines that are at grade or mostly at grade. The Moscow Metro uses the same broad rail gauge as other Russian railways, and an underrunning third rail supplied with 825 volts DC, except on lines 13 and 14, the former being a monorail and the latter being directly connected to the mainlines, with the 3,000 V DC overhead lines typical of them. The shortest section is between Delovoy Tsentr and Mezhdunarodnaya, and the longest is between Krylatskoye and Strogino.
Long distances between stations allow trains to reach a high cruising speed. The Moscow Metro opens at 05:25 and closes at 01:00. The exact opening time varies at different stations according to the arrival of the first train, but all stations simultaneously close their entrances at 01:00 for maintenance, as do the transfer corridors. The minimum interval between trains is 90 seconds during the morning and evening rush hours. As of 2017, the system had an average daily ridership of 6.99 million passengers. Peak daily ridership of 9.71 million was recorded on 26 December 2014. Free Wi-Fi has been available on all lines of the Moscow Metro since 2 December 2014. Network Lines Each line is identified by a name, an alphanumeric index (usually consisting of just a number, sometimes with a letter suffix), and a colour. The colour assigned to each line is its colloquial identifier, except for the nondescript greens and blues assigned to the Bolshaya Koltsevaya, Lyublinsko-Dmitrovskaya, and Butovskaya lines (lines 11, 10, and 12, respectively). The upcoming station is announced by a male voice on inbound trains to the city center (on the Circle line, the clockwise trains), and by a female voice on outbound trains (anti-clockwise trains on the Circle line). The metro has a connection to the Moscow Monorail, a six-station monorail line between Timiryazevskaya and VDNKh which officially opened in January 2008. Prior to the official opening, the monorail had operated in "excursion mode" since 2004. Also, from 11 August 1969 to 26 October 2019, the Moscow Metro included the Kakhovskaya line, with three stations, which then closed for a lengthy reconstruction. On 7 December 2021, Kakhovskaya reopened after reconstruction as part of the Bolshaya Koltsevaya line. The renewed Varshavskaya and Kashirskaya stations also reopened as part of the Bolshaya Koltsevaya line, which became fully functional on 1 March 2023. Its new stations included Pechatniki, Nagatinsky Zaton and Klenovy Bulvar.
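The headway figures quoted above translate directly into theoretical line throughput: at a fixed interval, trains per hour is simply 3600 seconds divided by the headway. A minimal sketch:

```python
# Theoretical trains per hour per track implied by a fixed headway,
# using the intervals quoted in the text (90 s peak, 80 s record).
def trains_per_hour(headway_seconds):
    return 3600 / headway_seconds
```

A 90-second interval corresponds to 40 trains per hour per direction; the 80-second record interval would allow 45.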
Renamed lines The Sokolnicheskaya line was previously named Kirovsko-Frunzenskaya. The Zamoskvoretskaya line was previously named Gorkovsko-Zamoskvoretskaya. The Filyovskaya line was previously named Arbatsko-Filyovskaya. The Tagansko-Krasnopresnenskaya line was previously named Zhdanovsko-Krasnopresnenskaya. History The first plans for a metro system in Moscow date back to the Russian Empire but were postponed by World War I, the October Revolution and the Russian Civil War. In 1923, the Moscow City Council formed the Underground Railway Design Office at the Moscow Board of Urban Railways. It carried out preliminary studies, and by 1928 had developed a project for the first route from Sokolniki to the city centre. At the same time, an offer was made to the German company Siemens Bauunion to submit its own project for the same route. In June 1931, the decision to begin construction of the Moscow Metro was made by the Central Committee of the Communist Party of the Soviet Union. In January 1932 the plan for the first lines was approved, and on 21 March 1933 the Soviet government approved a plan for 10 lines. The first lines were built using the Moscow general plan designed by Lazar Kaganovich, along with his project managers (notably Ivan M. Kuznetsov and, later, Isaac Y. Segal) in the 1930s–1950s, and the Metro was named after him until 1955. The Moscow Metro construction engineers consulted with their counterparts from the London Underground, the world's oldest metro system, in 1936: British architect Charles Holden and administrator Frank Pick had been working on the station developments of the Piccadilly Line extension, and Soviet delegates to London were impressed by Holden's thoroughly modern redeployment of classical elements and use of high-quality materials for the circular ticket hall of Piccadilly Circus, and so engaged Pick and Holden as advisors to Moscow's metro system.
Partly because of this connection, the design of Gants Hill tube station, which was completed in 1947, is reminiscent of a Moscow Metro station. Indeed, Holden's homage to Moscow has been described as a gesture of gratitude for the USSR's helpful role in the Second World War. Soviet workers did the labour and the art work, but the main engineering designs, routes, and construction plans were handled by specialists recruited from the London Underground. The British called for tunnelling instead of the "cut-and-cover" technique and for escalators instead of lifts, and advised on the routes and the design of the rolling stock. The paranoia of the NKVD was evident when the secret police arrested numerous British engineers for espionage because they had gained an in-depth knowledge of the city's physical layout. Engineers for the Metropolitan-Vickers Electrical Company (Metrovick) were given a show trial and deported in 1933, ending the role of British business in the USSR. First four stages of construction The first line was opened to the public on 15 May 1935 at 07:00 am. It included 13 stations. The day was celebrated as a technological and ideological victory for socialism (and, by extension, Stalinism). An estimated 285,000 people rode the Metro at its debut, and its design was greeted with pride; street celebrations included parades, plays and concerts. The Bolshoi Theatre presented a choral performance by 2,200 Metro workers; 55,000 colored posters (lauding the Metro as the busiest and fastest in the world) and 25,000 copies of "Songs of the Joyous Metro Conquerors" were distributed. The Moscow Metro's average and top speeds compared favourably with those of contemporary New York City Subway trains. While the celebration was an expression of popular joy, it was also an effective propaganda display, legitimizing the Metro and declaring it a success. The initial line connected Sokolniki to Okhotny Ryad, then branched to Park Kultury and Smolenskaya.
The latter branch was extended westwards to a new station (Kiyevskaya) in March 1937, the first Metro line crossing the Moskva River over the Smolensky Metro Bridge. The second stage was completed before the war. In March 1938, the Arbatskaya branch was split and extended to the Kurskaya station (now the dark-blue Arbatsko-Pokrovskaya Line). In September 1938, the Gorkovskaya Line opened between Sokol and Teatralnaya. Here the architecture was based on that of the most popular stations in existence (Krasniye Vorota, Okhotnyi Ryad and Kropotkinskaya); while following the popular Art Deco style, it was merged with socialist themes. The first deep-level column station, Mayakovskaya, was built at the same time. Building work on the third stage was delayed (but not interrupted) during World War II, and two Metro sections were put into service: Teatralnaya–Avtozavodskaya (three stations, crossing the Moskva River through a deep tunnel) and Kurskaya–Partizanskaya (four stations) were inaugurated in 1943 and 1944 respectively. War motifs replaced socialist visions in the architectural design of these stations. During the Battle of Moscow in the fall and winter of 1941, Metro stations were used as air-raid shelters; the Council of Ministers moved its offices to the Mayakovskaya platforms, where Stalin made public speeches on several occasions. The Chistiye Prudy station was also walled off, and the headquarters of the Air Defence established there. After the war ended in 1945, construction began on the fourth stage of the Metro, which included the Koltsevaya Line, a deep part of the Arbatsko-Pokrovskaya line from Ploshchad Revolyutsii to Kievskaya, and a surface extension to Pervomaiskaya during the early 1950s. The decoration and design characteristic of the Moscow Metro is considered to have reached its zenith in these stations. The Koltsevaya Line was first planned as a line running under the Garden Ring, a wide avenue encircling the borders of Moscow's city centre.
The first part of the line – from Park Kultury to Kurskaya (1950) – follows this avenue. Plans were later changed and the northern part of the ring line runs outside the Sadovoye Koltso, thus providing service for seven (out of nine) rail terminals. The next part of the Koltsevaya Line opened in 1952 (Kurskaya–Belorusskaya), and in 1954 the ring line was completed. Stalinist ideals in the Metro's history When the Metro opened in 1935, it immediately became the centrepiece of the transportation system (as opposed to the horse-drawn carts still widely used in 1930s Moscow). It also became the prototype, the vision, for future Soviet large-scale technologies. The artwork of the 13 original stations became nationally and internationally famous. For example, the Sverdlov Square subway station featured porcelain bas-reliefs depicting the daily life of the Soviet peoples, and the bas-reliefs at the Dynamo Stadium sports complex glorified sports and the physical prowess of the powerful new "Homo Sovieticus" (Soviet man). The metro was touted as the symbol of the new social order, a sort of Communist cathedral of engineering modernity. The Metro was also iconic for showcasing Socialist Realism in public art. The method was influenced by Nikolay Chernyshevsky, Lenin's favorite 19th-century nihilist, who stated that "art is not useful unless it serves politics". This maxim sums up the reasons why the stations combined aesthetics, technology and ideology: any plan which did not incorporate all three areas cohesively was rejected. Kaganovich was in charge; he designed the subway so that citizens would absorb the values and ethos of Stalinist civilization as they rode. Without this cohesion, the Metro would not reflect Socialist Realism. If the Metro did not utilize Socialist Realism, it would fail to illustrate Stalinist values and transform Soviet citizens into socialists.
Anything less than Socialist Realism's grand artistic complexity would fail to inspire a long-lasting, nationalistic attachment to Stalin's new society. Socialist Realism was in fact a method, not exactly a style. Bright future and literal brightness in the Metro of Moscow The Moscow Metro was one of the USSR's most ambitious architectural projects. The metro's artists and architects worked to design a structure that embodied svet (literally "light", figuratively "radiance" or "brilliance") and svetloe budushchee (a well-lit/radiant/bright future). With their reflective marble walls, high ceilings and grand chandeliers, many Moscow Metro stations have been likened to an "artificial underground sun". This palatial underground environment reminded Metro users that their taxes were being spent on materializing the bright future; the design was also useful for demonstrating the extra structural strength of the underground works (the Metro doubled as bunkers and bomb shelters). The chief lighting engineer was Abram Damsky, a graduate of the Higher State Art-Technical Institute in Moscow. By 1930 he was a chief designer in Moscow's Elektrosvet Factory, and during World War II he was sent to the Metrostroi (Metro Construction) Factory as head of the lighting shop. Damsky recognized the importance of efficiency, as well as the potential for light as an expressive form. His team experimented with different materials (most often cast bronze, aluminum, sheet brass, steel, and milk glass) and methods to optimize the technology. Damsky's discourse on "Lamps and Architecture 1930–1950" describes in detail the epic chandeliers installed in the Taganskaya station and the Kaluzhskaya station (now Oktyabrskaya, not to be confused with the present-day Kaluzhskaya station on line 6). Damsky's work further publicized these ideas, in the hope that people would associate the Party with the idea of a bright future.
Industrialization Stalin's first five-year plan (1928–1932) facilitated rapid industrialization to build a socialist motherland. The plan was ambitious, seeking to reorient an agrarian society towards industrialism. It was Stalin's fanatical energy, large-scale planning, and resource distribution that kept up the pace of industrialization. The first five-year plan was instrumental in the completion of the Moscow Metro; without industrialization, the Soviet Union would not have had the raw materials necessary for the project. For example, steel was a main component of many subway stations. Before industrialization, it would have been impossible for the Soviet Union to produce enough steel to incorporate it into the metro's design; in addition, a steel shortage would have limited the size of the subway system and its technological advancement. The Moscow Metro furthered the construction of a socialist Soviet Union because the project accorded with Stalin's second five-year plan, which focused on urbanization and the development of social services. The Moscow Metro was necessary to cope with the influx of peasants who migrated to the city during the 1930s; Moscow's population had grown from 2.16 million in 1928 to 3.6 million in 1933. The Metro also bolstered Moscow's shaky infrastructure and its communal services, which hitherto had been nearly nonexistent. Mobilization The Communist Party had the power to mobilize; because the party was a single source of control, it could focus its resources. The most notable example of mobilization in the Soviet Union occurred during World War II. The country also mobilized in order to complete the Moscow Metro with unprecedented speed. One of the main motivating factors behind the mobilization was to overtake the West and prove that a socialist metro could surpass capitalist designs.
It was especially important to the Soviet Union that socialism succeed industrially, technologically, and artistically in the 1930s, since capitalism was at a low ebb during the Great Depression. The person in charge of Metro mobilization was Lazar Kaganovich. A prominent Party member, he assumed control of the project as chief overseer. Kaganovich was nicknamed the "Iron Commissar"; he shared Stalin's fanatical energy and dramatic oratorical flair, and kept workers building quickly with threats and punishment. He was determined to realise the Moscow Metro, regardless of cost. Without Kaganovich's managerial ability, the Moscow Metro might have met the same fate as the Palace of the Soviets: failure. This was a comprehensive mobilization; the project drew resources and workers from the entire Soviet Union. In his article, archeologist Mike O'Mahoney describes the scope of the Metro mobilization. Skilled engineers were scarce, and unskilled workers were instrumental to the realization of the metro. The Metrostroi (the organization responsible for the Metro's construction) conducted massive recruitment campaigns. It printed 15,000 copies of Udarnik metrostroia (Metrostroi Shock Worker, its daily newspaper) and 700 other newsletters (some in different languages) to attract unskilled laborers. Kaganovich was closely involved in the recruitment campaign, targeting the Komsomol generation because of its strength and youth. Later Soviet stations "Fifth stage" set of stations The beginning of the Cold War led to the construction of a deep section of the Arbatsko-Pokrovskaya Line. The stations on this line were planned as shelters in the event of nuclear war. After the line was finished in 1953, the upper tracks between Ploshchad Revolyutsii and Kiyevskaya were closed, and later reopened in 1958 as part of the Filyovskaya Line. The stations were also fitted with airtight gates and life-support systems so they could function as proper nuclear shelters.
In the further development of the Metro the term "stages" was no longer used, although the stations opened in 1957–1959 are sometimes referred to as the "fifth stage".

Nikita Khrushchev's era of cost cutting

During the late 1950s and throughout the 1960s, the architectural extravagance of new Metro stations was decisively rejected on the orders of Nikita Khrushchev, who preferred a utilitarian, minimalist approach to design similar to the Brutalist style. The idea behind the rejection was the same one used to create the Khrushchyovkas: cheap, easily mass-produced buildings. Stations of his era, as well as most 1970s stations, were simple in design and style, with walls covered in identical square ceramic tiles. Even stations that were almost finished at the time of the ban (such as VDNKh and Alexeyevskaya) had their final decors simplified: VDNKh's arches and portals, for example, received plain green paint that contrasts with the well-detailed decorations and panels around them. A typical layout for a cheap shallow metro station (which quickly became known as a Sorokonozhka, "centipede", from early designs with 40 concrete columns in two rows) was developed for all new stations, which were built to look almost identical, differing from each other only in the colours of the marble and ceramic tiles. Most stations were built with simpler, cheaper technology, which left the utilitarian design flawed in some ways. Stations such as the adjacent Rechnoi Vokzal and Vodny Stadion, or the sequential Leninsky Prospect, Akademicheskaya, Profsoyuznaya and Novye Cheryomushki, look much alike owing to the extensive use of same-sized white or off-white ceramic tiles with barely perceptible differences. Walls with cheap ceramic tiles were also susceptible to train-related vibration: some tiles would eventually fall off and break.
It was not always possible to replace the missing tiles with ones of the exact color and tone, which eventually left parts of the walls variegated.

Metro stations of the late USSR

The contrasting style gap between the richly decorated stations of Moscow's center and the spartan-looking stations of the 1960s was eventually filled. In the mid-1970s architectural extravagance was partially restored, although the newer design of shallow "centipede" stations (now with 26 columns, more widely spaced) continued to dominate. For example, the 1974 Kaluzhskaya "centipede" station (adjacent to Novye Cheryomushki) features non-flat tiles with a 3D effect, and Medvedkovo, from 1978, features complex decorations. Kitay-Gorod, a 1971 station ("Ploshchad Nogina" at the time), features a cross-platform interchange between Line 6 and Line 7. Although built without the "centipede" design or cheap ceramic tiles, the station uses a near-grayscale selection of colors; notably, its "southbound" and "northbound" halls look identical. Babushkinskaya, from 1978, is a column-free station (similar to Biblioteka Imeni Lenina from 1935), and the 1983 Chertanovskaya station resembles Kropotkinskaya (from 1935). Some stations, such as the deep-dug Shabolovskaya (1980), have the walls nearest the tunnels decorated with metal sheets rather than tiles. Tyoply Stan features a theme related to the name and location of the station ("Tyoply Stan" once literally meant "warm place"): its walls are covered in brick-colored ribbed panes that resemble radiators. The downtown area gained stations such as Borovitskaya (1986), with exposed red brick and gray, concrete-like colors accompanying a single gold-plated decorative panel known as the "Tree of the Peoples of the USSR", and an additional station hall for Tretyakovskaya to house the cross-platform interchange between Line 6 and Line 8.
To this day, the Tretyakovskaya metro station consists of two contrasting halls: the brutalist 1971 hall and a custom-designed hall from 1986 reminiscent of the Tretyakov Gallery museum, located within walking distance.

Post-USSR stations of the modern Russian Federation

Metro stations of the 1990s and 2000s vary in style, but many of them have their own themes. Ulitsa Akademika Yangelya station used to feature thick orange neon-like sodium lamps instead of regular white lights. Park Pobedy, the deepest station of the Moscow Metro, was built in 2003; it features extensive use of dark orange polished granite. Slavyansky Bulvar station uses a plant-inspired theme (similar to the "bionic" style); a sleeker variant of this bionic style appears in various Line 10 stations. Sretensky Bulvar station on Line 10 is decorated with paintings of nearby memorials and locations. Strogino station has huge eye-shaped light recesses, with the "eyes" occupying the station's ceiling. Troparyovo (2014) features trees made of polished metal that hold the station's diamond-shaped lights; the station, however, is noticeably dimly lit. Delovoy Tsentr (2016, an overground MCC station) has a green tint. Lomonosovsky Prospekt (Line 8A) is decorated with various equations. Olkhovaya (2019) uses further plant-inspired themes (the noun ольха means "alder") with autumn- and winter-inspired colours, while Kosino (2019) uses a high-tech style with thin LED lights. Some bleak, bland-looking "centipedes" such as Akademicheskaya and Yugo-Zapadnaya have undergone renovations in the 21st century (new blue-striped white walls at Akademicheskaya; aqua, glassy, shiny walls at Yugo-Zapadnaya).

Moscow Central Circle urban railway (Line 14)

A new circle metro line was built in Moscow relatively quickly in the 2010s. The Moscow Central Circle line (Line 14) was opened for use in September 2016 by re-purposing and upgrading the Maloe Zheleznodorozhnoe Koltso freight railway.
A proposal to convert that freight line into a metropolitan railway with frequent passenger service was announced in 2012. The original tracks had been built in pre-revolutionary Moscow, decades before the creation of the Moscow Metro, and remained in place as a non-electrified line until the 21st century; the circle route was never abandoned or cut. New track was laid alongside the existing one and all-new stations were built between 2014 and 2016. The MCC's stations received amenities such as vending machines and free restrooms. Line 14 is operated by Russian Railways and uses full-sized trains, a concept somewhat similar to the S-train. Adding to the S-train resemblance, the 1908 line now connects modern northern residential districts to the western and southern downtown area, with a station adjacent to the Moscow International Business Center. Since the introduction of the MCC there has been a noticeable relief of congestion and a decrease in usage of the formerly overcrowded Koltsevaya line. To make Line 14 attractive to frequent Koltsevaya-line interchange users, comfort upgrades beyond the Metro norm were made: the trains have small folding tables on the back of nearly every seat, convenient for small laptops, portable video players, and food, and the seats face one direction as in planes or intercity buses, unlike the longitudinal bench seating typical of the Metro. Unlike the MCD lines (D1, D2, etc.), the MCC accepts "unified" tickets and "Troika" cards just as the Moscow Metro and Moscow's buses do. Free transfers are permitted between the MCC and the Moscow Metro if the trip before the transfer is less than 90 minutes. This is made possible by using the same "Ediny" (literally "unified") tickets instead of the paper tickets used at railroads.
To interchange to Line 14 for free, passengers must keep their freshly used ticket after entering the Moscow Metro and apply it upon entering any Line 14 station (and vice versa: keep the "fresh" ticket to enter an underground Metro line after leaving Line 14 for an interchange).

MCD (D lines)

In 2019, new lines of Russian Railways were included on the Metro map as "line D1" and "line D2". Unlike Line 14, the MCD lines form true S-train lines, bypassing the vokzals, the terminus stations of the respective intercity railways. Line D3 was planned to launch in August 2023, with D4 following in September of that year. The schedule for developing the infrastructure of the Central Transport Hub in 2023 was signed by Moscow Mayor Sergei Sobyanin and the head of Russian Railways, Oleg Belozerov, in December 2022. As for fares, the MCD accepts Moscow's "Troika" cards, and every MCD station has machines that print "station X – station Y" paper tickets. Users of the D lines must keep their tickets until exiting at their destination stations: the exit terminals require the barcode of a valid "... to station Y" ticket.

Big Circle Line (line 11)

After the upgrade of the 1908 railway into a proper Metro line, the development of another circle route was re-launched, now adjusted for the pear-shaped circle of Line 14. Throughout the late 2010s, Line 11 was extended from the short Kakhovskaya line into a half-circle (from Kakhovskaya to Savyolovskaya), and in early 2023 the circle was finished. The similarly designed Shelepikha, Khoroshovskaya, CSKA and Petrovsky Park stations have an abundance of polished granite and shiny surfaces, in contrast to the Soviet "centipedes". Throughout 2018–2021, these stations were connected to Line 8A. Narodnoye Opolcheniye (2021) features many straight edges and linear decorations, such as the uninterrupted "three stripes" style of its ceiling lights and its rectangular columns.
As of spring 2023, the whole circle route is up and running, forming a circle that stretches to the southern near-MKAD residential parts of the city (Prospekt Vernadskogo, Tekstilshchiki), as opposed to the MCC's stretch towards the northern districts of Moscow. In other words, the BCL "mirrors" the MCC, avoiding forming a perfect circle around the city centre. The line is now the longest subway line in the world, ahead of the previous record holder, Line 10 of the Beijing Subway.

Expansions

Since the turn of the 21st century several projects have been completed, and more are underway. The first was the Annino–Butovo extension, which extended the Serpukhovsko-Timiryazevskaya Line from Prazhskaya to Ulitsa Akademika Yangelya in 2000, Annino in 2001 and Bulvar Dmitriya Donskogo in 2002. Its continuation, the elevated Butovskaya Line, was inaugurated in 2003. Vorobyovy Gory station, which initially opened in 1959 and was forced to close in 1983 after the concrete used to build its bridge was found to be defective, was rebuilt and reopened in 2002. Another project was a branch off the Filyovskaya Line to the Moscow International Business Center, comprising Vystavochnaya (opened in 2005) and Mezhdunarodnaya (opened in 2006). The Strogino–Mitino extension began with Park Pobedy in 2003. Its first stations (an expanded Kuntsevskaya and Strogino) opened in January 2008, and Slavyansky Bulvar followed in September. Myakinino, Volokolamskaya and Mitino opened in December 2009; Myakinino was built by a state-private financial partnership, unique in Moscow Metro history. A new terminus, Pyatnitskoye Shosse, was completed in December 2012. After many years of construction, the long-awaited Lyublinskaya Line extension was inaugurated with Trubnaya in August 2007 and Sretensky Bulvar in December of that year. In June 2010, it was extended northwards with the Dostoyevskaya and Maryina Roscha stations.
In December 2011, the Lyublinskaya Line was expanded southwards by three stations and connected to the Zamoskvoretskaya Line, with the Alma-Atinskaya station opening on the latter in December 2012. The Kalininskaya Line was extended past the Moscow Ring Road in August 2012 with Novokosino station. In 2011, work began on the Third Interchange Contour, intended to take pressure off the Koltsevaya Line; eventually the new line will form a second ring with connections to all lines except the Koltsevaya and Butovskaya. In 2013, the Tagansko-Krasnopresnenskaya Line was extended, after several delays, to the south-eastern districts of Moscow outside the Ring Road with the opening of the Zhulebino and Lermontovsky Prospekt stations. Originally scheduled for 2013, a new segment of the Kalininskaya Line between Park Pobedy and Delovoy Tsentr (separate from the main part) was opened in January 2014, while the underground extension of the Butovskaya Line northwards, offering a transfer to the Kaluzhsko-Rizhskaya Line, was completed in February. Spartak, a station on the Tagansko-Krasnopresnenskaya Line that had remained unfinished for forty years, finally opened in August 2014. The first stage of the southern extension of the Sokolnicheskaya Line, the Troparyovo station, opened in December 2014.

Current plans

In addition to major metro expansion, the Moscow Government and Russian Railways plan to upgrade more commuter railways to a metro-style service, similar to the MCC. New tracks and stations are planned in order to achieve this.

Stations

Of the metro's 250 stations, 88 are deep underground, 123 are shallow, 12 are surface-level and 5 are elevated. The deep stations comprise 55 triple-vaulted pylon stations, 19 triple-vaulted column stations, and one single-vault station.
The shallow stations comprise 79 spanned column stations (a large portion of them following the "centipede" design), 33 single-vaulted stations (Kharkov technology), and four single-spanned stations. In addition, there are 12 ground-level stations, four elevated stations, and one station (Vorobyovy Gory) on a bridge. Two stations have three tracks, and one has double halls. Seven of the stations have side platforms (only one of which is subterranean). In addition, there were two temporary stations within rail yards. The stations constructed under Stalin's regime, in the style of socialist classicism, were meant as underground "palaces of the people". Stations such as Komsomolskaya, Kiyevskaya and Mayakovskaya, and others built after 1935 in the second phase of the network's evolution, are tourist landmarks: their photogenic architecture, large chandeliers and detailed decoration are unusual for a twentieth-century urban transport system. The stations opened in the 21st century are influenced by a more neutral, international style with improved technical quality.

Rolling stock

Since the beginning, platforms have been long enough to accommodate eight-car trains. The only exceptions are on the Filyovskaya Line: Vystavochnaya, Mezhdunarodnaya, Studencheskaya, Kutuzovskaya, Fili, Bagrationovskaya, Filyovsky Park and Pionerskaya allow only six-car trains (this list includes all ground-level stations on the line except Kuntsevskaya, which allows normal-length trains). Trains on the Zamoskvoretskaya, Kaluzhsko-Rizhskaya, Tagansko-Krasnopresnenskaya, Kalininskaya, Solntsevskaya, Bolshaya Koltsevaya, Serpukhovsko-Timiryazevskaya, Lyublinsko-Dmitrovskaya and Nekrasovskaya lines have eight cars, on the Sokolnicheskaya line seven or eight cars, on the original Koltsevaya line seven cars, and on the Filyovskaya line six cars.
The Arbatsko-Pokrovskaya line also once ran seven-car trains of the 81-717 size, but now uses five-car trains of another type, and the Butovskaya line uses three-car trains of another type. Dimensions have varied subtly, but most cars fall within a similar range of length and width, with four doors per side. The 81-740/741 Rusich deviates greatly from this: a three-car Rusich is roughly the length of four normal cars, and a five-car Rusich roughly that of seven.

Trains no longer in operation

The V-type trains were former Berlin U-Bahn C-class cars, transported from the Berlin U-Bahn during the Soviet occupation; they ran from 1945 to 1969 and were completely withdrawn in 1970. A-type and B-type trains had been custom-made for the Metro since its opening.

Trains in operation

Currently, the Metro operates only 81-series trains. Rolling stock on several lines was replaced with articulated 81-740/741 Rusich trains, which were originally designed for light-rail subway lines. The Butovskaya Line was designed to different standards and has shorter platforms; it employs articulated 81-740/741 trains consisting of three cars (although the line can also use traditional four-car trains). On the Moscow Monorail, Intamin P30 trains are used, consisting of six short cars. On the Moscow Central Circle, which runs on a conventional railway line, ES2G Lastochka trains are used, consisting of five cars.

Ticketing

The Moscow Metro charges a flat fare for a single journey, regardless of distance or time travelled within the network. An exception is the Moscow Central Diameters, which operate on a zone-based fare system. The ticketing system allows free interchanges within a 90-minute window between different transport modes, including the MCC, the MCD, trams and buses.
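As an illustration, the 90-minute free-interchange window could be modelled like this (a minimal sketch under stated assumptions; the function name and data model are hypothetical and not actual Metro fare-gate software):

```python
from datetime import datetime, timedelta

# The 90-minute free-interchange window described above.
TRANSFER_WINDOW = timedelta(minutes=90)

def transfer_is_free(first_tap_in: datetime, transfer_tap_in: datetime) -> bool:
    """Return True if the second tap-in falls inside the free window.

    Both arguments are tap-in timestamps; this is an illustrative model
    of the rule, not a real implementation.
    """
    elapsed = transfer_tap_in - first_tap_in
    return timedelta(0) <= elapsed < TRANSFER_WINDOW

# Example: a rider enters the Metro at 12:00 and taps into the MCC later.
entered = datetime(2024, 5, 1, 12, 0)
print(transfer_is_free(entered, datetime(2024, 5, 1, 13, 25)))  # 85 min -> True
print(transfer_is_free(entered, datetime(2024, 5, 1, 13, 35)))  # 95 min -> False
```

The check is deliberately half-open (`< 90 minutes`), matching the rule that the trip before the transfer must be *less than* 90 minutes.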
Modern Metro turnstiles are designed to accept various forms of payment, including plastic cards like the Troika card and Moscow Resident Social Cards, bank cards, bank stickers, souvenir tickets such as Troika rings, bracelets or keychains, and disposable RFID-chip cardboard cards. Additionally, all stations are equipped to accept biometric payments. Some transport cards have usage limitations that impose a waiting period between consecutive uses (e.g., delays of 7 or 20 minutes).

History of the ticketing system

Soviet-era turnstiles simply accepted kopeck coins of a set denomination. In the early years of the Russian Federation, amid the onset of hyperinflation, plastic tokens were used. Disposable magnetic-stripe cards were introduced in 1993 on a trial basis and used as unlimited monthly tickets between 1996 and 1998. The sale of tokens ended on 1 January 1999, and they stopped being accepted in February 1999; from that time, magnetic cards were used as tickets with a fixed number of rides. On 1 September 1998, the Moscow Metro became the first metro system in Europe to fully implement "contactless" smart cards, known as Transport Cards. Transport Cards offered an unlimited number of trips over 30, 90 or 365 days, with a projected active lifetime of 3½ years; defective cards were exchanged at no extra cost. In August 2004, the city government launched the Moscow Resident Social Card program. Social Cards are free smart cards issued to the elderly and other groups of citizens officially registered as residents of Moscow or the Moscow region; they offer discounts in shops and pharmacies, and double as credit cards issued by the Bank of Moscow. Social Cards can be used for unlimited free access to the city's public-transport system, including the Moscow Metro; while they do not feature the time delay, they include a photograph and are non-transferable.
Since 2006, several banks have issued credit cards which double as Ultralight cards and are accepted at turnstiles. The fare is passed to the bank and the payment is withdrawn from the owner's bank account at the end of the calendar month, using a discount rate based on the number of trips that month (for up to 70 trips, the cost of each trip is prorated from current Ultralight rates; each additional trip costs 24.14 rubles). Partner banks include the Bank of Moscow, CitiBank, Rosbank, Alfa-Bank and Avangard Bank. In January 2007, the Moscow Metro began replacing limited magnetic cards with contactless disposable tickets based on NXP's MIFARE Ultralight technology. Ultralight tickets are available for a fixed number of trips in 1-, 2-, 5-, 10-, 20- and 60-trip denominations (valid for 5 or 90 days from the day of purchase) and as a monthly ticket, valid only for a selected calendar month and limited to 70 trips. The sale of magnetic cards ended on 16 January 2008 and magnetic cards ceased to be accepted in late 2008, making the Moscow Metro the world's first major public-transport system to run exclusively on a contactless automatic fare-collection system.

Contemporary ticketing system

On 2 April 2013, the Moscow Department of Transport introduced the Troika smartcard, which serves as the foundation of the city's modern ticketing system. Currently, passengers can use a single Troika card to pay for travel on the metro, MCC, MCD, buses, trams, river transport, suburban trains, and Aeroexpress. Approximately 80% of all trips in Moscow are paid for using Troika, with over 50 million cards sold to date. In 2023, Troika production, including its chip, was fully localized in Moscow, and in 2024 Moscow plans to launch a virtual analog of the card for smartphones. The Moscow Metro also offers Ediniy (Unified) tickets with varying durations: 1 day, 3 days, 30 days, 60 days, 90 days and 365 days. In 2015, the Moscow Metro started testing bank-card payments at ticket windows.
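The end-of-month billing scheme for the bank-issued Ultralight-compatible cards described earlier (a prorated per-trip rate for the first 70 trips, then a flat 24.14 rubles per additional trip) can be sketched as follows. The prorated base rate is a hypothetical input here, not an official figure:

```python
# Illustrative model of the monthly bank-card billing scheme described
# above.  Only the 70-trip threshold and the 24.14-ruble overflow price
# come from the text; the per-trip prorated rate is an assumed input.
DISCOUNT_TRIP_LIMIT = 70
EXTRA_TRIP_PRICE_RUB = 24.14

def monthly_charge(trips: int, prorated_rate_rub: float) -> float:
    """Total withdrawn from the cardholder's account at month's end."""
    discounted_trips = min(trips, DISCOUNT_TRIP_LIMIT)
    extra_trips = max(trips - DISCOUNT_TRIP_LIMIT, 0)
    return round(discounted_trips * prorated_rate_rub
                 + extra_trips * EXTRA_TRIP_PRICE_RUB, 2)

# 75 trips at an assumed prorated rate of 22.00 rubles per trip:
# 70 * 22.00 + 5 * 24.14 = 1660.70 rubles.
print(monthly_charge(75, 22.00))
```

Trips at or below the threshold are charged only at the prorated rate; only the overflow trips incur the fixed 24.14-ruble price.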
Currently, bank-card and bank-sticker payments are accepted at all turnstiles in the network; as of April 2024, this payment option is used approximately 900,000 times daily. In October 2021, the Moscow Metro became the first metro system in the world to implement biometric payment on a large scale. To use this system, passengers link their photo, bank card, and metro card to the service through the Moscow Metro mobile app. This allows passengers to pay for rides without taking out a phone, metro card, or bank card, increasing passenger flow at station entrances. The technology is available at all metro stations, on the MCC, and on river transport. As of April 2024, passengers had completed 100 million trips using biometric technology. Other payment methods include:

Payment via Mir Pay using an Android phone with a Mir card
Payment with a smartphone via FPS, in open testing at all metro, MCC, and river-transport turnstiles
Cash or bank-card payments at ticket offices and vending machines
The Moscow Resident Social Card

The Moscow Metro ticketing system received two international Transport Ticketing Awards, in 2020 and 2021.

Fares

The MCD network is divided into "Central" and "Suburban" zones. The Metro (with the monorail and the MCC) lies completely within the Central zone.

Passenger services

Passenger Mobility Center

The Passenger Mobility Center (PMC) was created within the Moscow Metro in October 2013 to aid passengers with reduced mobility, encompassing individuals with hearing or visual impairments, mobility limitations, senior citizens, large families, and parents with strollers. Today, PMC staff escort passengers on the metro, MCC, MCD, buses and trams. Since its inception, the PMC has assisted over 1.15 million passengers with reduced mobility. In 2023, PMC staff escorted approximately 70,000 passengers, a 9% increase compared to 2022.
Wayfinding

In 2013, the Moscow Metro started to develop new wayfinding principles, including a redesigned metro map; today these principles have been applied across all of Moscow Transport. The new system is characterized by the following features:

A single font for Moscow Transport, Moscow Sans
More concise and comprehensible signage
Geographical maps across the city that include information on surrounding streets and landmarks
Easily understandable pictograms in place of words (e.g., line numbers)
Numbered metro exits
Floor signage in stations
Accessibility information for passengers with reduced mobility
Digital wayfinding screens above the doors in the newest train models
A standardized design for temporary announcements

Digital services

Mobile app

Launched in 2017, the Moscow Metro mobile app offers a range of features for passengers:

Troika card management (ticket purchase, pass and transaction history, auto-payment setup)
Transfer of a Troika card balance to a new card in case of loss
Identification of less crowded carriages on arriving trains
Temporary suspension of annual passes (once per year, for 14 days)
Route planning
Linking of the Moscow Resident Social Card
Purchase of intercity bus tickets
Registration for the biometric payment service
Reporting of lost items
Requests for assistance from the Passenger Mobility Service
Chatbot access

As of March 2024, the app has been downloaded 13 million times and is used by 2 million people monthly.

Chatbot

In 2020, the Moscow Metro introduced Aleksandra, a chatbot that has since become the official chatbot for all types of urban transport in Moscow. As of February 2024, Aleksandra has answered over 6.8 million questions and is equipped to respond to over 58,000 inquiries related to Moscow's urban transport system.

Statistics

Notable incidents

1977 bombing

On 8 January 1977, a bomb was reported to have killed 7 and seriously injured 33.
It went off in a crowded train between the Izmaylovskaya and Pervomayskaya stations. Three Armenians were later arrested, charged and executed in connection with the incident.

1981 station fires

In June 1981, seven bodies were seen being removed from the Oktyabrskaya station during a fire there. A fire was also reported at Prospekt Mira station around the same time.

1982 escalator accident

A fatal accident occurred on 17 February 1982 when an escalator collapsed at the Aviamotornaya station on the Kalininskaya Line. Eight people were killed and 30 injured in a pileup caused by faulty emergency brakes.

1996 murder

In 1996, the American-Russian businessman Paul Tatum was murdered at the Kiyevskaya metro station, shot dead by a man carrying a concealed Kalashnikov rifle.

2000 bombing

On 8 August 2000, a powerful blast in a Metro underpass at Pushkinskaya station in the center of Moscow claimed the lives of 12 people, with 150 injured. A homemade bomb equivalent to 800 grams of TNT had been left in a bag near a kiosk.

2004 bombings

On 6 February 2004, an explosion wrecked a train between the Avtozavodskaya and Paveletskaya stations on the Zamoskvoretskaya Line, killing 41 and wounding over 100. Chechen terrorists were blamed; a later investigation concluded that a Karachay-Cherkessian resident had carried out a suicide bombing. The same group organized another attack on 31 August 2004, killing 10 and injuring more than 50 others.

2005 Moscow blackout

On 25 May 2005, a citywide blackout halted operation on some lines. The following lines, however, continued operations: Sokolnicheskaya; Zamoskvoretskaya from Avtozavodskaya to Rechnoy Vokzal; Arbatsko-Pokrovskaya; Filyovskaya; Koltsevaya; Kaluzhsko-Rizhskaya from Bitsevskiy Park to Oktyabrskaya-Radialnaya and from Prospekt Mira-Radialnaya to Medvedkovo; Tagansko-Krasnopresnenskaya; Kalininskaya; Serpukhovsko-Timiryazevskaya from Serpukhovskaya to Altufyevo; and Lyublinskaya from Chkalovskaya to Dubrovka.
There was no service on the Kakhovskaya and Butovskaya lines. The blackout severely affected the Zamoskvoretskaya and Serpukhovsko-Timiryazevskaya lines, where all service was initially disrupted because trains had halted in tunnels in the southern part of the city, the area most affected by the blackout. Later, limited service resumed and passengers stranded in tunnels were evacuated. Some lines were only slightly impacted, as the blackout mainly affected southern Moscow; the northern, eastern and western parts of the city experienced little or no disruption.

2006 billboard incident

On 19 March 2006, a construction pile from an unauthorized billboard installation was driven through a tunnel roof, hitting a train between the Sokol and Voikovskaya stations on the Zamoskvoretskaya Line. No injuries were reported.

2010 bombings

On 29 March 2010, two bombs exploded on the Sokolnicheskaya Line, killing 40 and injuring 102 others. The first bomb went off at the Lubyanka station at 7:56, during the morning rush hour, killing at least 26 people, of whom 14 were in the rail car where the explosion took place. A second explosion occurred at the Park Kultury station at 8:38, roughly forty minutes after the first, killing fourteen people. The Caucasus Emirate later claimed responsibility for the bombings.

2014 pile incident

On 25 January 2014, at 15:37, a construction pile from a Moscow Central Circle construction site was driven through a tunnel roof between the Avtozavodskaya and Kolomenskaya stations on the Zamoskvoretskaya Line. The train operator applied the emergency brakes and the train did not crash into the pile. Passengers were evacuated from the tunnel, with no injuries reported, and normal line operation resumed the same day at 19:50.

2014 derailment

On 15 July 2014, a train derailed between Park Pobedy and Slavyansky Bulvar on the Arbatsko-Pokrovskaya Line, killing 24 people and injuring dozens more.
Metro-2

Conspiracy theorists have claimed that a second, deeper metro system code-named "D-6", designed for the emergency evacuation of key city personnel in case of nuclear attack during the Cold War, exists under military jurisdiction. It is believed to consist of a single track connecting the Kremlin, the chief military headquarters (the General Staff, or Genshtab), Lubyanka (FSB headquarters), the Ministry of Defense and several other secret installations. There are alleged to be entrances to the system from several civilian buildings, such as the Russian State Library, Moscow State University (MSU) and at least two stations of the regular Metro. It is speculated that these would allow for the evacuation of a small number of randomly chosen civilians, in addition to most of the elite military personnel. A suspected junction between the secret system and the regular Metro is supposedly behind the Sportivnaya station on the Sokolnicheskaya Line. The final section of this system was supposedly completed in 1997.

In popular culture

The Moscow Metro is the central location and namesake of the Metro series, in which, during a nuclear war, Moscow's inhabitants are driven down into the Moscow Metro, designed as a fallout shelter, with the various stations turned into makeshift settlements. In 2012, an art film was released about a catastrophe in the Moscow underground.
Fern
The ferns (Polypodiopsida or Polypodiophyta) are a group of vascular plants (plants with xylem and phloem) that reproduce via spores and have neither seeds nor flowers. They differ from mosses by being vascular, i.e., having specialized tissues that conduct water and nutrients, and in having life cycles in which the branched sporophyte is the dominant phase. Ferns have complex leaves called megaphylls that are more complex than the microphylls of clubmosses. Most ferns are leptosporangiate ferns. They produce coiled fiddleheads that uncoil and expand into fronds. The group includes about 10,560 known extant species. Ferns are defined here in the broad sense, being all of the Polypodiopsida, comprising both the leptosporangiate (Polypodiidae) and eusporangiate ferns, the latter group including horsetails, whisk ferns, marattioid ferns, and ophioglossoid ferns. The fern crown group, consisting of the leptosporangiates and eusporangiates, is estimated to have originated in the late Silurian period 423.2 million years ago, but Polypodiales, the group that makes up 80% of living fern diversity, did not appear and diversify until the Cretaceous, contemporaneous with the rise of flowering plants that came to dominate the world's flora. Ferns are not of major economic importance, but some are used for food, medicine, as biofertilizer, as ornamental plants, and for remediating contaminated soil. They have been the subject of research for their ability to remove some chemical pollutants from the atmosphere. Some fern species, such as bracken (Pteridium aquilinum) and water fern (Azolla filiculoides), are significant weeds worldwide. Some fern genera, such as Azolla, can fix nitrogen and make a significant input to the nitrogen nutrition of rice paddies. They also play certain roles in folklore.

Description

Sporophyte

Extant ferns are herbaceous perennials and most lack woody growth. When woody growth is present, it is found in the stem.
Their foliage may be deciduous or evergreen, and some are semi-evergreen depending on the climate. Like the sporophytes of seed plants, those of ferns consist of stems, leaves and roots. Ferns differ from spermatophytes in that they reproduce by spores rather than having flowers and producing seeds. However, they also differ from spore-producing bryophytes in that, like seed plants, they are polysporangiophytes, their sporophytes branching and producing many sporangia. Also unlike bryophytes, fern sporophytes are free-living and only briefly dependent on the maternal gametophyte. The green, photosynthetic part of the plant is technically a megaphyll and in ferns, it is often called a frond. New leaves typically expand by the unrolling of a tight spiral called a crozier or fiddlehead into fronds. This uncurling of the leaf is termed circinate vernation. Leaves are divided into two types: sporophylls and trophophylls. Sporophylls produce spores; trophophylls do not. Fern spores are borne in sporangia, which are usually clustered to form sori. The sporangia may be covered with a protective coating called an indusium. The arrangement of the sporangia is important in classification. In monomorphic ferns, the fertile and sterile leaves look morphologically the same, and both are able to photosynthesize. In hemidimorphic ferns, just a portion of the fertile leaf differs from the sterile leaves. In dimorphic (holomorphic) ferns, the two types of leaves are morphologically distinct. The fertile leaves are much narrower than the sterile leaves, and may have no green tissue at all, as in the Blechnaceae and Lomariopsidaceae. The anatomy of fern leaves can be anywhere from simple to highly divided, or even indeterminate (e.g. Gleicheniaceae, Lygodiaceae). The divided forms are pinnate, where the leaf segments are completely separated from one another, or pinnatifid (partially pinnate), where the leaf segments are still partially connected.
When the fronds are branched more than once, the form can also be a combination of the pinnatifid and pinnate shapes. If the leaf blades are divided twice, the plant has bipinnate fronds; if they branch three times, tripinnate fronds; and so on to tetra- and pentapinnate fronds. In tree ferns, the main stalk that connects the leaf to the stem (known as the stipe) often has multiple leaflets. The leafy structures that grow from the stipe are known as pinnae and are often again divided into smaller pinnules. Fern stems are often loosely called rhizomes, even though they grow underground only in some of the species. Epiphytic species and many of the terrestrial ones have above-ground creeping stolons (e.g., Polypodiaceae), and many groups have above-ground erect semi-woody trunks (e.g., Cyatheaceae, the scaly tree ferns). These trunks can grow very tall in a few species (e.g., Cyathea brownii on Norfolk Island and Cyathea medullaris in New Zealand). Roots are underground non-photosynthetic structures that take up water and nutrients from soil. They are always fibrous and are structurally very similar to the roots of seed plants.

Gametophyte

As in all vascular plants, the sporophyte is the dominant phase or generation in the life cycle. The gametophytes of ferns, however, are very different from those of seed plants. They are free-living and resemble liverworts, whereas those of seed plants develop within the spore wall and are dependent on the parent sporophyte for their nutrition. A fern gametophyte typically consists of:

Prothallus: A green, photosynthetic structure that is one cell thick, usually heart or kidney shaped, 3–10 mm long and 2–8 mm broad. The prothallus produces gametes by means of:
Antheridia: Small spherical structures that produce flagellate antherozoids.
Archegonia: Flask-shaped structures that each produce a single egg at the bottom, reached by the sperm by swimming down the neck.
Rhizoids: Root-like structures (not true roots) that consist of single greatly elongated cells, which absorb water and mineral salts over the whole structure. Rhizoids anchor the prothallus to the soil.

Life cycle and reproduction

The lifecycle of a fern involves two stages, as in club mosses and horsetails. In stage one, the spores are produced by sporophytes in sporangia, which are clustered together in sori (singular: sorus) developing on the underside of fertile fronds. In stage two, the spores germinate into short-lived structures called gametophytes, anchored to the ground by rhizoids, which produce gametes. When a mature fertile frond bears sori and the spores are released, they settle on the soil, send out rhizoids, and develop into prothalli. The prothallus bears spherical antheridia (singular: antheridium), which produce antherozoids (male gametes), and archegonia (singular: archegonium), each of which contains a single oosphere. An antherozoid swims down the neck of an archegonium and fertilizes the oosphere, resulting in a zygote, which grows into a separate sporophyte, while the gametophyte persists only briefly as a free-living plant.

Taxonomy

Carl Linnaeus (1753) originally recognized 15 genera of ferns and fern allies, classifying them in class Cryptogamia in two groups, Filices (e.g. Polypodium) and Musci (mosses). By 1806 this had increased to 38 genera, and the number has progressively increased since. Ferns were traditionally classified in the class Filices, and later in a division of the plant kingdom named Pteridophyta or Filicophyta. Pteridophyta is no longer recognised as a valid taxon because it is paraphyletic. The ferns are also referred to as Polypodiophyta or, when treated as a subdivision of Tracheophyta (vascular plants), Polypodiopsida, although this name sometimes refers only to leptosporangiate ferns.
Traditionally, all of the spore-producing vascular plants were informally denominated the pteridophytes, rendering the term synonymous with ferns and fern allies. This can be confusing because members of the division Pteridophyta were also denominated pteridophytes (sensu stricto). Traditionally, three discrete groups have been denominated ferns: two groups of eusporangiate ferns, the families Ophioglossaceae (adder's tongues, moonworts, and grape ferns) and Marattiaceae; and the leptosporangiate ferns. The Marattiaceae are a primitive group of tropical ferns with large, fleshy rhizomes and are now thought to be a sister group to the leptosporangiate ferns. Several other groups of species were considered fern allies: the clubmosses, spikemosses, and quillworts in Lycopodiophyta; the whisk ferns of Psilotaceae; and the horsetails of Equisetaceae. Since this grouping is polyphyletic, the term fern allies should be abandoned, except in a historical context. More recent genetic studies have demonstrated that the Lycopodiophyta are more distantly related to other vascular plants, having radiated evolutionarily at the base of the vascular plant clade, while both the whisk ferns and horsetails are as closely related to leptosporangiate ferns as are the ophioglossoid ferns and Marattiaceae. In fact, the whisk ferns and ophioglossoid ferns are demonstrably a clade, and the horsetails and Marattiaceae are arguably another clade.

Molecular phylogenetics

Smith et al.
(2006) carried out the first higher-level pteridophyte classification published in the molecular phylogenetic era, and considered the ferns as monilophytes, as follows:

Division Tracheophyta (tracheophytes) – vascular plants
 Subdivision Euphyllophytina (euphyllophytes)
  Infradivision Moniliformopses (monilophytes)
  Infradivision Spermatophyta – seed plants, ~260,000 species
 Subdivision Lycopodiophyta (lycophytes) – less than 1% of extant vascular plants

Molecular data, which remain poorly constrained for many parts of the plants' phylogeny, have been supplemented by morphological observations supporting the inclusion of Equisetaceae in the ferns, notably relating to the construction of their sperm and peculiarities of their roots. The leptosporangiate ferns are sometimes called "true ferns". This group includes most plants familiarly known as ferns. Modern research supports older ideas based on morphology that the Osmundaceae diverged early in the evolutionary history of the leptosporangiate ferns; in certain ways this family is intermediate between the eusporangiate ferns and the leptosporangiate ferns. Rai and Graham (2010) broadly supported the primary groups, but queried their relationships, concluding that "at present perhaps the best that can be said about all relationships among the major lineages of monilophytes in current studies is that we do not understand them very well". Grewe et al. (2013) confirmed the inclusion of horsetails within ferns sensu lato, but also suggested that uncertainties remained in their precise placement.  Other classifications have raised Ophioglossales to the rank of a fifth class, separating the whisk ferns and ophioglossoid ferns.

Phylogeny

The ferns are related to other groups as shown in the following cladogram:

Nomenclature and subdivision

The classification of Smith et al.
in 2006 treated ferns as four classes:

Equisetopsida (Sphenopsida) – 1 order, Equisetales (horsetails), ~15 species
Psilotopsida – 2 orders (whisk ferns and ophioglossoid ferns), ~92 species
Marattiopsida – 1 order, Marattiales, ~150 species
Polypodiopsida (Filicopsida) – 7 orders (leptosporangiate ferns), ~9,000 species

In addition they defined 11 orders and 37 families. That system was a consensus of a number of studies, and was further refined. The phylogenetic relationships are shown in the following cladogram (to the level of orders). This division into four major clades was then confirmed using morphology alone. Subsequently, Chase and Reveal considered both lycopods and ferns as subclasses of a class Equisetopsida (Embryophyta) encompassing all land plants. This is referred to as Equisetopsida sensu lato to distinguish it from the narrower use to refer to horsetails alone, Equisetopsida sensu stricto. They placed the lycopods into subclass Lycopodiidae and the ferns, keeping the term monilophytes, into five subclasses, Equisetidae, Ophioglossidae, Psilotidae, Marattiidae and Polypodiidae, by dividing Smith's Psilotopsida into its two orders and elevating them to subclass rank (Ophioglossidae and Psilotidae). Christenhusz et al. (2011) followed this use of subclasses but recombined Smith's Psilotopsida as Ophioglossidae, giving four subclasses of ferns again. Christenhusz and Chase (2014) developed a new classification of ferns and lycopods. They used the term Polypodiophyta for the ferns, subdivided like Smith et al.
into four groups (shown with equivalents in the Smith system), with 21 families, approximately 212 genera and 10,535 species:

Equisetidae (=Equisetopsida) – monotypic (Equisetales, Equisetaceae, Equisetum), horsetails, ~20 species
Ophioglossidae (=Psilotopsida) – 2 monotypic orders, ~92 species
Marattiidae (=Marattiopsida) – 1 monotypic order (Marattiales, Marattiaceae, 2 subfamilies), ~130 species
Polypodiidae (=Polypodiopsida) – 7 orders

This was a considerable reduction in the number of families from the 37 in the system of Smith et al., since the approach was one of lumping rather than splitting. For instance, a number of families were reduced to subfamilies. Subsequently, a consensus group was formed, the Pteridophyte Phylogeny Group (PPG), analogous to the Angiosperm Phylogeny Group, publishing their first complete classification in November 2016. They recognise ferns as a class, the Polypodiopsida, with four subclasses as described by Christenhusz and Chase, which are phylogenetically related as in this cladogram: In the Pteridophyte Phylogeny Group classification of 2016 (PPG I), the Polypodiopsida consist of four subclasses, 11 orders, 48 families, 319 genera, and an estimated 10,578 species. Thus Polypodiopsida in the broad sense (sensu lato) as used by the PPG (Polypodiopsida sensu PPG I) needs to be distinguished from the narrower usage (sensu stricto) of Smith et al. (Polypodiopsida sensu Smith et al.). Classification of ferns remains unresolved and controversial, with competing viewpoints (splitting vs lumping) between the systems of the PPG and of Christenhusz and Chase, respectively. In 2018, Christenhusz and Chase explicitly argued against recognizing as many genera as PPG I.

Evolution and biogeography

Fern-like taxa (Wattieza) first appear in the fossil record in the middle Devonian period, ca. 390 Mya. By the Triassic, the first evidence of ferns related to several modern families appeared.
The great fern radiation occurred in the late Cretaceous, when many modern families of ferns first appeared. Ferns evolved to cope with the low-light conditions present under the canopy of angiosperms. Remarkably, the photoreceptor neochrome in the two orders Cyatheales and Polypodiales, integral to their adaptation to low-light conditions, was obtained via horizontal gene transfer from hornworts, a bryophyte lineage. Because of the very large genomes seen in most ferns, it was suspected that they might have gone through whole-genome duplications, but DNA sequencing has shown that their genome size is caused by the accumulation of mobile genetic elements, such as transposons, that infect genomes and are copied over and over again. Ferns appear to have evolved extrafloral nectaries 135 million years ago, nearly simultaneously with the trait's evolution in angiosperms. However, nectary-associated diversifications in ferns did not accelerate until nearly 100 million years later, in the Cenozoic. There is weak support for the rise of fern-feeding arthropods driving this diversification.

Distribution and habitat

Ferns are widespread in their distribution, with the greatest richness in the tropics and the least in Arctic areas. The greatest diversity occurs in tropical rainforests. New Zealand, for which the fern is a symbol, has about 230 species, distributed throughout the country. Ferns are also common plants in European forests.

Ecology

Fern species live in a wide variety of habitats, from remote mountain elevations, to dry desert rock faces, to bodies of water, to open fields. Ferns in general may be thought of as largely being specialists in marginal habitats, often succeeding in places where various environmental factors limit the success of flowering plants.
Some ferns are among the world's most serious weed species, including the bracken fern growing in the Scottish Highlands and the mosquito fern (Azolla) growing in tropical lakes, both species forming large, aggressively spreading colonies. Ferns are found in four particular types of habitat: moist, shady forests; crevices in rock faces, especially when sheltered from the full sun; acid wetlands including bogs and swamps; and tropical trees, where many species are epiphytes (something like a quarter to a third of all fern species). The epiphytic ferns especially have turned out to be hosts of a huge diversity of invertebrates. It is assumed that bird's-nest ferns alone contain up to half the invertebrate biomass within a hectare of rainforest canopy. Many ferns depend on associations with mycorrhizal fungi. Many ferns grow only within specific pH ranges; for instance, the climbing fern (Lygodium palmatum) of eastern North America will grow only in moist, intensely acid soils, while the bulblet bladder fern (Cystopteris bulbifera), with an overlapping range, is found only on limestone. The spores are rich in lipids, protein and calories, so some vertebrates eat them. The European woodmouse (Apodemus sylvaticus) has been found to eat the spores of Culcita macrocarpa, and the bullfinch (Pyrrhula murina) and the New Zealand lesser short-tailed bat (Mystacina tuberculata) also eat fern spores.

Life cycle

Ferns are vascular plants differing from lycophytes by having true leaves (megaphylls), which are often pinnate. They differ from seed plants (gymnosperms and angiosperms) in reproducing by means of spores and lacking flowers and seeds. Like all land plants, they have a life cycle referred to as alternation of generations, characterized by alternating diploid sporophytic and haploid gametophytic phases. The diploid sporophyte has 2n paired chromosomes, where n varies from species to species. The haploid gametophyte has n unpaired chromosomes, i.e.
half the number of the sporophyte. The gametophyte of ferns is a free-living organism, whereas the gametophyte of the gymnosperms and angiosperms is dependent on the sporophyte. The life cycle of a typical fern proceeds as follows:

A diploid sporophyte phase produces haploid spores by meiosis (a process of cell division which reduces the number of chromosomes by a half).
A spore grows into a free-living haploid gametophyte by mitosis (a process of cell division which maintains the number of chromosomes). The gametophyte typically consists of a photosynthetic prothallus.
The gametophyte produces gametes (often both sperm and eggs on the same prothallus) by mitosis.
A mobile, flagellate sperm fertilizes an egg that remains attached to the prothallus.
The fertilized egg is now a diploid zygote and grows by mitosis into a diploid sporophyte (the typical fern plant).

Sometimes a gametophyte can give rise to sporophyte traits such as roots or sporangia without the rest of the sporophyte.

Uses

Ferns are not as important economically as seed plants, but have considerable importance in some societies. Some ferns are used for food, including the fiddleheads of Pteridium aquilinum (bracken), Matteuccia struthiopteris (ostrich fern), and Osmundastrum cinnamomeum (cinnamon fern). Diplazium esculentum is also used in the tropics as food (for example in budu pakis, a traditional dish of Brunei). Tubers from the "para", Ptisana salicina (king fern), are a traditional food in New Zealand and the South Pacific. Fern tubers were used for food 30,000 years ago in Europe. Fern tubers were used by the Guanches to make gofio in the Canary Islands. Ferns are generally not known to be poisonous to humans. Licorice fern rhizomes were chewed by the natives of the Pacific Northwest for their flavor. Some species of ferns are carcinogenic, however, and the British Royal Horticultural Society advises against consuming any species, for the health of both humans and livestock.
Ferns of the genus Azolla, commonly known as water ferns or mosquito ferns, are very small, floating plants that do not resemble ferns. The mosquito ferns are used as a biological fertilizer in the rice paddies of southeast Asia, taking advantage of their ability to fix nitrogen from the air into compounds that can then be used by other plants. Ferns have proved resistant to phytophagous insects. The gene that expresses the protein Tma12 in an edible fern, Tectaria macrodonta, has been transferred to cotton plants, which became resistant to whitefly infestations. Many ferns are grown in horticulture as landscape plants, for cut foliage and as houseplants, especially the Boston fern (Nephrolepis exaltata) and other members of the genus Nephrolepis. The bird's nest fern (Asplenium nidus) is also popular, as are the staghorn ferns (genus Platycerium). Perennial (also known as hardy) ferns planted in gardens in the northern hemisphere also have a considerable following. Several ferns, such as bracken and Azolla species, are noxious weeds or invasive species. Further examples include the Japanese climbing fern (Lygodium japonicum), the sensitive fern (Onoclea sensibilis) and the giant water fern (Salvinia molesta), one of the world's worst aquatic weeds. The important fossil fuel coal consists of the remains of primitive plants, including ferns.

Culture

Pteridology

The study of ferns and other pteridophytes is called pteridology. A pteridologist is a specialist in the study of pteridophytes in a broader sense that includes the more distantly related lycophytes.

Pteridomania

Pteridomania was a Victorian era craze which involved fern collecting and fern motifs in decorative art including pottery, glass, metals, textiles, wood, printed paper, and sculpture, "appearing on everything from christening presents to gravestones and memorials."
The fashion for growing ferns indoors led to the development of the Wardian case, a glazed cabinet that would exclude air pollutants and maintain the necessary humidity.

Other applications

The Barnsley fern is a fractal named after the British mathematician Michael Barnsley, who first described it in his book Fractals Everywhere. The self-similar frond pattern is generated by a small set of mathematical functions applied repeatedly at different scales. The dried form of ferns was used in other arts, as a stencil or inked directly for use in a design. The botanical work The Ferns of Great Britain and Ireland is a notable example of this type of nature printing. The process, patented by the artist and publisher Henry Bradbury, impressed a specimen onto a soft lead plate. The first publication to demonstrate this was Alois Auer's The Discovery of the Nature Printing-Process. Fern bars were popular in America in the 1970s and 80s.

Folklore

Ferns figure in folklore, for example in legends about mythical flowers or seeds. In Slavic folklore, ferns are believed to bloom once a year, during the Ivan Kupala night. Although alleged to be exceedingly difficult to find, anyone who sees a fern flower is thought to be guaranteed to be happy and rich for the rest of their life. Similarly, Finnish tradition holds that one who finds the seed of a fern in bloom on Midsummer night will, by possession of it, be guided and be able to travel invisibly to the locations where eternally blazing Will o' the wisps called aarnivalkea mark the spot of hidden treasure. These spots are protected by a spell that prevents anyone but the fern-seed holder from ever knowing their locations. In Wicca, ferns are thought to have magical properties: for example, a dried fern can be thrown into the hot coals of a fire to exorcise evil spirits, and smoke from a burning fern is thought to drive away snakes and similar creatures.
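The Barnsley fern mentioned above is produced by repeatedly applying one of four affine transformations, chosen at random with fixed probabilities (the "chaos game"). The sketch below uses Barnsley's published coefficients; the function and variable names are illustrative, not from any particular source.

```python
import random

# The four affine maps of Barnsley's fern as (a, b, c, d, e, f, probability):
#   x' = a*x + b*y + e
#   y' = c*x + d*y + f
MAPS = [
    (0.0,   0.0,   0.0,  0.16, 0.0, 0.0,  0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.6,  0.85),  # successively smaller leaflets
    (0.2,  -0.26,  0.23, 0.22, 0.0, 1.6,  0.07),  # largest left-hand leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right-hand leaflet
]

def barnsley_fern(n_points=50_000, seed=0):
    """Run the chaos game: repeatedly apply a randomly chosen affine map."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        r = rng.random()
        cum = 0.0
        for a, b, c, d, e, f, p in MAPS:
            cum += p
            if r <= cum:  # pick this map with probability p
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points

pts = barnsley_fern(10_000)
```

Plotting the returned points (e.g. as a scatter plot) reveals the familiar frond shape; the attractor fits roughly within x in [-2.2, 2.7] and y in [0, 10].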
New Zealand

Ferns are the national emblem of New Zealand and feature on its passport and in the design of its national airline, Air New Zealand, and of its rugby team, the All Blacks.

Organisms confused with ferns

Misnomers

Several non-fern plants (and even animals) are called ferns and are sometimes confused with ferns. These include:

Asparagus fern – This may apply to one of several species of the monocot genus Asparagus, which are flowering plants.
Sweetfern – A flowering shrub of the genus Comptonia.
Air fern – A group of animals called hydrozoans that are distantly related to jellyfish and corals. They are harvested, dried, dyed green, and then sold as a plant that can live on air. While it may look like a fern, it is merely the skeleton of this colonial animal.
Fern bush – Chamaebatiaria millefolium – a rose family shrub with fern-like leaves.
Fern tree – Jacaranda mimosifolia – an ornamental tree of the order Lamiales.
Fern leaf tree – Filicium decipiens – an ornamental tree of the order Sapindales.

Fern-like flowering plants

Some flowering plants such as palms and members of the carrot family have pinnate leaves that somewhat resemble fern fronds. However, these plants have fully developed seeds contained in fruits, rather than the microscopic spores of ferns.
Biology and health sciences
Pteridophytes
null
66535
https://en.wikipedia.org/wiki/Alternation%20of%20generations
Alternation of generations
Alternation of generations (also known as metagenesis or heterogenesis) is the predominant type of life cycle in plants and algae. In plants both phases are multicellular: the haploid sexual phase – the gametophyte – alternates with a diploid asexual phase – the sporophyte. A mature sporophyte produces haploid spores by meiosis, a process which reduces the number of chromosomes to half, from two sets to one. The resulting haploid spores germinate and grow into multicellular haploid gametophytes. At maturity, a gametophyte produces gametes by mitosis, the normal process of cell division in eukaryotes, which maintains the original number of chromosomes. Two haploid gametes (originating from different organisms of the same species or from the same organism) fuse to produce a diploid zygote, which divides repeatedly by mitosis, developing into a multicellular diploid sporophyte. This cycle, from gametophyte to sporophyte (or equally from sporophyte to gametophyte), is the way in which all land plants and most algae undergo sexual reproduction. The relationship between the sporophyte and gametophyte phases varies among different groups of plants. In the majority of algae, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte and is largely dependent on it. Although moss and hornwort sporophytes can photosynthesise, they require additional photosynthate from the gametophyte to sustain growth and spore development and depend on it for supply of water, mineral nutrients and nitrogen. By contrast, in all modern vascular plants the gametophyte is less well developed than the sporophyte, although their Devonian ancestors had gametophytes and sporophytes of approximately equivalent complexity. 
In ferns the gametophyte is a small flattened autotrophic prothallus on which the young sporophyte is briefly dependent for its nutrition. In flowering plants, the reduction of the gametophyte is much more extreme; it consists of just a few cells which grow entirely inside the sporophyte. Animals develop differently. They produce haploid gametes. No haploid spores capable of dividing are produced, so generally there is no multicellular haploid phase. Some insects have a sex-determining system whereby haploid males are produced from unfertilized eggs; however, females produced from fertilized eggs are diploid. Life cycles of plants and algae with alternating haploid and diploid multicellular stages are referred to as diplohaplontic. The equivalent terms haplodiplontic, diplobiontic and dibiontic are also in use, as is describing such an organism as having a diphasic ontogeny. Life cycles of animals, in which there is only a diploid multicellular stage, are referred to as diplontic. Life cycles in which there is only a haploid multicellular stage are referred to as haplontic.

Definition

Alternation of generations is defined as the alternation of multicellular diploid and haploid forms in the organism's life cycle, regardless of whether these forms are free-living. In some species, such as the alga Ulva lactuca, the diploid and haploid forms are indeed both free-living independent organisms, essentially identical in appearance and therefore said to be isomorphic. In many algae, the free-swimming, haploid gametes form a diploid zygote which germinates into a multicellular diploid sporophyte. The sporophyte produces free-swimming haploid spores by meiosis that germinate into haploid gametophytes. However, in land plants, either the sporophyte or the gametophyte is very much reduced and is incapable of free living. For example, in all bryophytes the gametophyte generation is dominant and the sporophyte is dependent on it.
By contrast, in all seed plants the gametophytes are strongly reduced, although the fossil evidence indicates that they were derived from isomorphic ancestors. In seed plants, the female gametophyte develops totally within the sporophyte, which protects and nurtures it and the embryonic sporophyte that it produces. The pollen grains, which are the male gametophytes, are reduced to only a few cells (just three cells in many cases). Here the notion of two generations is less obvious; as Bateman & Dimichele say, "sporophyte and gametophyte effectively function as a single organism". The alternative term 'alternation of phases' may then be more appropriate.

History

In animals

Initially, Adelbert von Chamisso (studying salps, colonial marine animals, between 1815 and 1818) and Japetus Steenstrup (studying the development of trematodes in 1842, and also tunicates and cnidarians) described the succession of differently organized generations (sexual and asexual) in animals as "alternation of generations". Later, the phenomenon in animals became known as heterogamy, while the term "alternation of generations" was restricted to the life cycles of plants, meaning specifically the alternation of haploid gametophytes and diploid sporophytes.

In plants

In 1851, Wilhelm Hofmeister demonstrated the morphological alternation of generations in plants, between a spore-bearing generation (sporophyte) and a gamete-bearing generation (gametophyte). By that time, a debate had emerged focusing on the origin of the asexual generation of land plants (i.e., the sporophyte), conventionally characterized as a conflict between theories of antithetic (Ladislav Josef Čelakovský, 1874) and homologous (Nathanael Pringsheim, 1876) alternation of generations. In 1874, Eduard Strasburger discovered the alternation between diploid and haploid nuclear phases, also called cytological alternation of nuclear phases.
Although most often coinciding, morphological alternation and nuclear phases alternation are sometimes independent of one another; e.g., in many red algae, the same nuclear phase may correspond to two diverse morphological generations. In some ferns which have lost sexual reproduction, there is no change in nuclear phase, but the alternation of generations is maintained.

Alternation of generations in plants

Fundamental elements

The diagram above shows the fundamental elements of the alternation of generations in plants. There are many variations in different groups of plants. The processes involved are as follows:

Two single-celled haploid gametes, each containing n unpaired chromosomes, fuse to form a single-celled diploid zygote, which now contains n pairs of chromosomes, i.e. 2n chromosomes in total.
The single-celled diploid zygote germinates, dividing by the normal process (mitosis), which maintains the number of chromosomes at 2n. The result is a multi-cellular diploid organism, called the sporophyte (because at maturity it produces spores).
When it reaches maturity, the sporophyte produces one or more sporangia (singular: sporangium), which are the organs that produce diploid spore mother cells (sporocytes). These divide by a special process (meiosis) that reduces the number of chromosomes by a half. This initially results in four single-celled haploid spores, each containing n unpaired chromosomes.
The single-celled haploid spore germinates, dividing by the normal process (mitosis), which maintains the number of chromosomes at n. The result is a multi-cellular haploid organism, called the gametophyte (because it produces gametes at maturity).
When it reaches maturity, the gametophyte produces one or more gametangia (singular: gametangium), which are the organs that produce haploid gametes. At least one kind of gamete possesses some mechanism for reaching another gamete in order to fuse with it.
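The chromosome bookkeeping in the steps above can be sketched in code. This is an illustrative toy model, not from the article: it tracks only chromosome counts through one full cycle, using a hypothetical haploid number n = 7 (the actual value varies by species).

```python
# Toy model of the alternation-of-generations cycle, tracking ploidy only.
n = 7  # hypothetical haploid chromosome number (one unpaired set)

def meiosis(cell):
    """Sporangium: one diploid sporocyte -> four haploid spores."""
    assert cell == 2 * n, "meiosis starts from a diploid cell"
    return [n, n, n, n]

def mitosis(cell):
    """Normal division: the chromosome number is maintained."""
    return [cell, cell]

def fertilization(gamete_a, gamete_b):
    """Two haploid gametes fuse into one diploid zygote."""
    assert gamete_a == n and gamete_b == n
    return gamete_a + gamete_b

zygote = 2 * n                           # diploid sporophyte generation (2n)
spores = meiosis(zygote)                 # four haploid spores (n each)
gametophyte_cells = mitosis(spores[0])   # spore grows into haploid gametophyte
sperm, egg = gametophyte_cells           # gametes produced by mitosis (still n)
next_zygote = fertilization(sperm, egg)  # fusion restores 2n: the cycle closes
```

The point of the sketch is the invariant it enforces: only meiosis halves the count (2n to n) and only fertilization doubles it (n to 2n); mitosis, which builds both multicellular generations, never changes it.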
The 'alternation of generations' in the life cycle is thus between a diploid (2n) generation of multicellular sporophytes and a haploid (n) generation of multicellular gametophytes. The situation is quite different from that in animals, where the fundamental process is that a multicellular diploid (2n) individual produces haploid (n) gametes by meiosis. In animals, spores (i.e. haploid cells which are able to undergo mitosis) are not produced, so there is no asexual multicellular generation. Some insects have haploid males that develop from unfertilized eggs, but the females are all diploid.

Variations

The diagram shown above is a good representation of the life cycle of some multi-cellular algae (e.g. the genus Cladophora) which have sporophytes and gametophytes of almost identical appearance and which do not have different kinds of spores or gametes. However, there are many possible variations on the fundamental elements of a life cycle which has alternation of generations. Each variation may occur separately or in combination, resulting in a bewildering variety of life cycles. The terms used by botanists in describing these life cycles can be equally bewildering. As Bateman and Dimichele say, "[...] the alternation of generations has become a terminological morass; often, one term represents several concepts or one concept is represented by several terms." Possible variations are:

Relative importance of the sporophyte and the gametophyte.
Equal (homomorphy or isomorphy). Filamentous algae of the genus Cladophora, which are predominantly found in fresh water, have diploid sporophytes and haploid gametophytes which are externally indistinguishable. No living land plant has equally dominant sporophytes and gametophytes, although some theories of the evolution of alternation of generations suggest that ancestral land plants did.
Unequal (heteromorphy or anisomorphy).
Dominant gametophyte (gametophytic).In liverworts, mosses and hornworts, the dominant form is the haploid gametophyte. The diploid sporophyte is not capable of an independent existence, gaining most of its nutrition from the parent gametophyte, and having no chlorophyll when mature. Dominant sporophyte (sporophytic).In ferns, both the sporophyte and the gametophyte are capable of living independently, but the dominant form is the diploid sporophyte. The haploid gametophyte is much smaller and simpler in structure. In seed plants, the gametophyte is even more reduced (at the minimum to only three cells), gaining all its nutrition from the sporophyte. The extreme reduction in the size of the gametophyte and its retention within the sporophyte means that when applied to seed plants the term 'alternation of generations' is somewhat misleading: "[s]porophyte and gametophyte effectively function as a single organism". Some authors have preferred the term 'alternation of phases'. Differentiation of the gametes. Both gametes the same (isogamy).Like other species of Cladophora, C. callicoma has flagellated gametes which are identical in appearance and ability to move. Gametes of two distinct sizes (anisogamy). Both of similar motility.Species of Ulva, the sea lettuce, have gametes which all have two flagella and so are motile. However they are of two sizes: larger 'female' gametes and smaller 'male' gametes. One large and sessile, one small and motile (oogamy). The larger sessile megagametes are eggs (ova), and smaller motile microgametes are sperm (spermatozoa, spermatozoids). The degree of motility of the sperm may be very limited (as in the case of flowering plants) but all are able to move towards the sessile eggs. When (as is almost always the case) the sperm and eggs are produced in different kinds of gametangia, the sperm-producing ones are called antheridia (singular antheridium) and the egg-producing ones archegonia (singular archegonium). 
Antheridia and archegonia occur on the same gametophyte, which is then called monoicous. (Many sources, including those concerned with bryophytes, use the term 'monoecious' for this situation and 'dioecious' for the opposite. Here 'monoecious' and 'dioecious' are used only for sporophytes.)The liverwort Pellia epiphylla has the gametophyte as the dominant generation. It is monoicous: the small reddish sperm-producing antheridia are scattered along the midrib while the egg-producing archegonia grow nearer the tips of divisions of the plant. Antheridia and archegonia occur on different gametophytes, which are then called dioicous.The moss Mnium hornum has the gametophyte as the dominant generation. It is dioicous: male plants produce only antheridia in terminal rosettes, female plants produce only archegonia in the form of stalked capsules. Seed plant gametophytes are also dioicous. However, the parent sporophyte may be monoecious, producing both male and female gametophytes or dioecious, producing gametophytes of one gender only. Seed plant gametophytes are extremely reduced in size; the archegonium consists only of a small number of cells, and the entire male gametophyte may be represented by only two cells. Differentiation of the spores. All spores the same size (homospory or isospory).Horsetails (species of Equisetum) have spores which are all of the same size. Spores of two distinct sizes (heterospory or anisospory): larger megaspores and smaller microspores. When the two kinds of spore are produced in different kinds of sporangia, these are called megasporangia and microsporangia. A megaspore often (but not always) develops at the expense of the other three cells resulting from meiosis, which abort. Megasporangia and microsporangia occur on the same sporophyte, which is then called monoecious.Most flowering plants fall into this category. 
Thus the flower of a lily contains six stamens (the microsporangia) which produce microspores which develop into pollen grains (the microgametophytes), and three fused carpels which produce integumented megasporangia (ovules) each of which produces a megaspore which develops inside the megasporangium to produce the megagametophyte. In other plants, such as hazel, some flowers have only stamens, others only carpels, but the same plant (i.e. sporophyte) has both kinds of flower and so is monoecious. Megasporangia and microsporangia occur on different sporophytes, which are then called dioecious.An individual tree of the European holly (Ilex aquifolium) produces either 'male' flowers which have only functional stamens (microsporangia) producing microspores which develop into pollen grains (microgametophytes) or 'female' flowers which have only functional carpels producing integumented megasporangia (ovules) that contain a megaspore that develops into a multicellular megagametophyte. There are some correlations between these variations, but they are just that, correlations, and not absolute. For example, in flowering plants, microspores ultimately produce microgametes (sperm) and megaspores ultimately produce megagametes (eggs). However, in ferns and their allies there are groups with undifferentiated spores but differentiated gametophytes. For example, the fern Ceratopteris thalictrioides has spores of only one kind, which vary continuously in size. Smaller spores tend to germinate into gametophytes which produce only sperm-producing antheridia. A complex life cycle Plant life cycles can be complex. Alternation of generations can take place in plants which are at once heteromorphic, sporophytic, oogametic, dioicous, heterosporic and dioecious, such as in a willow tree (as most species of the genus Salix are dioecious). The processes involved are: An immobile egg, contained in the archegonium, fuses with a mobile sperm, released from an antheridium. 
The resulting zygote is either male or female. A male zygote develops by mitosis into a microsporophyte, which at maturity produces one or more microsporangia. Microspores develop within the microsporangium by meiosis.In a willow (like all seed plants) the zygote first develops into an embryo microsporophyte within the ovule (a megasporangium enclosed in one or more protective layers of tissue known as integument). At maturity, these structures become the seed. Later the seed is shed, germinates and grows into a mature tree. A male willow tree (a microsporophyte) produces flowers with only stamens, the anthers of which are the microsporangia. Microspores germinate producing microgametophytes; at maturity one or more antheridia are produced. Sperm develop within the antheridia.In a willow, microspores are not liberated from the anther (the microsporangium), but develop into pollen grains (microgametophytes) within it. The whole pollen grain is moved (e.g. by an insect or by the wind) to an ovule (megagametophyte), where a sperm is produced which moves down a pollen tube to reach the egg. A female zygote develops by mitosis into a megasporophyte, which at maturity produces one or more megasporangia. Megaspores develop within the megasporangium; typically one of the four spores produced by meiosis gains bulk at the expense of the remaining three, which disappear.Female willow trees (megasporophytes) produce flowers with only carpels (modified leaves that bear the megasporangia). Megaspores germinate producing megagametophytes; at maturity one or more archegonia are produced. Eggs develop within the archegonia. The carpels of a willow produce ovules, megasporangia enclosed in integuments. Within each ovule, a megaspore develops by mitosis into a megagametophyte. An archegonium develops within the megagametophyte and produces an egg. 
The whole of the gametophytic generation remains within the protection of the sporophyte except for pollen grains (which have been reduced to just three cells contained within the microspore wall). Life cycles of different plant groups The term "plants" is taken here to mean the Archaeplastida, i.e. the glaucophytes, red and green algae and land plants. Alternation of generations occurs in almost all multicellular red and green algae, both freshwater forms (such as Cladophora) and seaweeds (such as Ulva). In most, the generations are homomorphic (isomorphic) and free-living. Some species of red algae have a complex triphasic alternation of generations, in which there is a gametophyte phase and two distinct sporophyte phases. For further information, see Red algae: Reproduction. Land plants all have heteromorphic (anisomorphic) alternation of generations, in which the sporophyte and gametophyte are distinctly different. All bryophytes, i.e. liverworts, mosses and hornworts, have the gametophyte generation as the most conspicuous. As an illustration, consider a monoicous moss. Antheridia and archegonia develop on the mature plant (the gametophyte). In the presence of water, the biflagellate sperm from the antheridia swim to the archegonia and fertilisation occurs, leading to the production of a diploid sporophyte. The sporophyte grows up from the archegonium. Its body comprises a long stalk topped by a capsule within which spore-producing cells undergo meiosis to form haploid spores. Most mosses rely on the wind to disperse these spores, although Splachnum sphaericum is entomophilous, recruiting insects to disperse its spores. In the life cycle of ferns and their allies, including clubmosses and horsetails, the conspicuous plant observed in the field is the diploid sporophyte. The haploid spores develop in sori on the underside of the fronds and are dispersed by the wind (or in some cases, by floating on water). 
If conditions are right, a spore will germinate and grow into a rather inconspicuous plant body called a prothallus. The haploid prothallus does not resemble the sporophyte, and as such ferns and their allies have a heteromorphic alternation of generations. The prothallus is short-lived, but carries out sexual reproduction, producing the diploid zygote that then grows out of the prothallus as the sporophyte. In the spermatophytes, the seed plants, the sporophyte is the dominant multicellular phase; the gametophytes are strongly reduced in size and very different in morphology. The entire gametophyte generation, with the sole exception of pollen grains (microgametophytes), is contained within the sporophyte. The life cycle of a dioecious flowering plant (angiosperm), the willow, has been outlined in some detail in an earlier section (A complex life cycle). The life cycle of a gymnosperm is similar. However, flowering plants have in addition a phenomenon called 'double fertilization'. In the process of double fertilization, two sperm nuclei from a pollen grain (the microgametophyte), rather than a single sperm, enter the archegonium of the megagametophyte; one fuses with the egg nucleus to form the zygote, the other fuses with two other nuclei of the gametophyte to form 'endosperm', which nourishes the developing embryo. Evolution of the dominant diploid phase It has been proposed that the basis for the emergence of the diploid phase of the life cycle (sporophyte) as the dominant phase (e.g. as in vascular plants) is that diploidy allows masking of the expression of deleterious mutations through genetic complementation. Thus if one of the parental genomes in the diploid cells contained mutations leading to defects in one or more gene products, these deficiencies could be compensated for by the other parental genome (which nevertheless may have its own defects in other genes). 
As the diploid phase was becoming predominant, the masking effect likely allowed genome size, and hence information content, to increase without the constraint of having to improve accuracy of DNA replication. The opportunity to increase information content at low cost was advantageous because it permitted new adaptations to be encoded. This view has been challenged, with evidence showing that selection is no more effective in the haploid than in the diploid phases of the lifecycle of mosses and angiosperms. Similar processes in other organisms Rhizaria Some organisms currently classified in the clade Rhizaria, and thus not plants in the sense used here, exhibit alternation of generations. Most Foraminifera undergo a heteromorphic alternation of generations between haploid gamont and diploid agamont forms. The diploid form is typically much larger than the haploid form; these forms are known as the microsphere and megalosphere, respectively. Fungi Fungal mycelia are typically haploid. When mycelia of different mating types meet, they produce two multinucleate ball-shaped cells, which join via a "mating bridge". Nuclei move from one mycelium into the other, forming a heterokaryon (meaning "different nuclei"). This process is called plasmogamy. Actual fusion to form diploid nuclei is called karyogamy, and may not occur until sporangia are formed. Karyogamy produces a diploid zygote, which is a short-lived sporophyte that soon undergoes meiosis to form haploid spores. When the spores germinate, they develop into new mycelia. Slime moulds The life cycle of slime moulds is very similar to that of fungi. Haploid spores germinate to form swarm cells or myxamoebae. These fuse in a process referred to as plasmogamy and karyogamy to form a diploid zygote. The zygote develops into a plasmodium, and the mature plasmodium produces, depending on the species, one to many fruiting bodies containing haploid spores. 
Animals Alternation between a multicellular diploid and a multicellular haploid generation is never encountered in animals. In some animals, there is an alternation between parthenogenic and sexually reproductive phases (heterogamy), for instance in salps and doliolids (class Thaliacea). Both phases are diploid. This has sometimes been called "alternation of generations", but is quite different. In some other animals, such as hymenopterans, males are haploid and females diploid, but this is always the case rather than there being an alternation between distinct generations.
Biology and health sciences
Plant reproduction
null
66549
https://en.wikipedia.org/wiki/Prairie
Prairie
Prairies are ecosystems considered part of the temperate grasslands, savannas, and shrublands biome by ecologists, based on similar temperate climates, moderate rainfall, and a composition of grasses, herbs, and shrubs, rather than trees, as the dominant vegetation type. Temperate grassland regions include the Pampas of Argentina, Brazil and Uruguay, and the steppe of Ukraine, Russia, and Kazakhstan. Lands typically referred to as "prairie" tend to be in North America. The term encompasses the lower and mid-latitude of the area referred to as the Interior Plains of Canada, the United States, and Mexico. It includes all of the Great Plains as well as the wetter, hillier land to the east. From west to east, generally the drier expanse of shortgrass prairie gives way to mixed grass prairie and ultimately the richer and wetter soils of the tallgrass prairie. In the U.S., the area is constituted by most or all of the states, from north to south, of North Dakota, South Dakota, Nebraska, Kansas, and Oklahoma, and sizable parts of the states of Montana, Wyoming, Colorado, New Mexico, Texas in the west, and to the east, Minnesota, Wisconsin, Iowa, Missouri, Illinois, and Indiana. The Palouse of Washington and the Central Valley of California are also prairies. The Canadian Prairies occupy vast areas of Manitoba, Saskatchewan, and Alberta. Prairies typically support lush flora and fauna, with rich soil maintained by their biodiversity, a temperate climate, and varied scenery. Etymology Prairie is the French word for "meadow", formed ultimately from the Latin root word pratum (which has the same meaning). Formation The formation of the Canadian Prairies started with the uplift of the Rocky Mountains near Alberta. The mountains created a rain shadow which resulted in lower precipitation rates downwind. The parent material of most prairie soil was distributed during the last glacial advance that began about 110,000 years ago. 
The glaciers expanding southward scraped the landscape, picking up geologic material and leveling the terrain. As the glaciers retreated about 10,000 years ago, they deposited this material in the form of till. Wind-based loess deposits also form an important parent material for prairie soils. Tallgrass prairie evolved over tens of thousands of years with the disturbances of grazing and fire. Native ungulates such as bison, elk, and white-tailed deer roamed the expansive, diverse grasslands before European colonization of the Americas. For 10,000–20,000 years, native people used fire annually as a tool to assist in hunting, transportation, and safety. Evidence indicates that ignition sources of fire in the tallgrass prairie were overwhelmingly human rather than lightning. Humans and grazing animals were active participants in the process of prairie formation and the establishment of the diversity of graminoid and forb species. Fire has the effect on prairies of removing trees, clearing dead plant matter, and changing the availability of certain nutrients in the soil from the ash produced. Fire kills the vascular tissue of trees, but not prairie species, as up to 75% (depending on the species) of the total plant biomass is below the soil surface and will re-grow from its deep roots (upwards of 20 feet). Without disturbance, trees will encroach on a grassland and cast shade, which suppresses the understory. Prairie and widely spaced oak trees evolved to coexist in the oak savanna ecosystem. Ecology Prairie ecosystems in the United States and Canada are divided into the easternmost tallgrass prairie, the westernmost shortgrass prairie, and the central mixed-grass prairie. Tallgrass prairies receive over 30 inches of rainfall per year, whereas shortgrass prairies are much more arid, receiving only 12 inches or so, and mixed-grass prairies receive intermediate rainfall. Wet, mesic, and dry prairie ecosystems can also form more locally due to soil and terrain characteristics. 
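As a rough illustration of the rainfall gradient just described, the quoted figures can be turned into a toy classifier. The function name and exact cutoffs are ours, taken only from the approximate numbers above; real prairie boundaries also depend on soil and terrain, as noted.

```python
def prairie_type(annual_rainfall_in):
    """Rough classification by annual rainfall in inches, using the
    approximate figures quoted above (hypothetical helper, not a
    formal ecological definition)."""
    if annual_rainfall_in > 30:
        return "tallgrass"   # over 30 inches per year
    if annual_rainfall_in <= 12:
        return "shortgrass"  # around 12 inches or less
    return "mixed-grass"     # intermediate rainfall

print(prairie_type(35))  # tallgrass
print(prairie_type(10))  # shortgrass
print(prairie_type(20))  # mixed-grass
```

This mirrors the west-to-east transition described earlier: as rainfall increases eastward, shortgrass gives way to mixed-grass and then tallgrass prairie.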
Wet prairies may form in low-lying areas with poor drainage; dry prairie can be found on uplands or slopes. Dry prairie is the dominant habitat type in the Southern Canadian agricultural and climatic region which is known as Palliser's Triangle. It was once thought to be completely unarable, but is now one of the most important agricultural regions in Canada thanks to advances in irrigation technology. Biodiversity The dominant plant life in prairies consists of grasses, which may include 40 to 60 different grass species. In addition to grasses, prairies can include over 300 species of flowering plants. The Konza Tallgrass Prairie in Kansas hosts 250 species of native plants and provides habitat for 208 birds, 27 mammals, 25 reptiles, and over 3,000 insects. Some of the dominant grasses of prairies are Indian grass, big bluestem, side-oats grama, Canada wildrye, and switchgrass. Prairies are considered to be fire-dependent ecosystems. Regular controlled burning by Native Americans, practices developed through observation of non-anthropogenic fire and its effects, maintained the biodiversity of the prairie, clearing away dead vegetation and preventing trees from shading out the diverse grasses and herbaceous plants. Prairies also depend on the presence of large grazing animals, particularly bison. Bison are important to the prairie ecosystem because they shape and alter the environment by grazing, trampling areas with their hooves, wallowing, and depositing manure. Bison eat more grass than flowering plants, increasing the diversity of plants in the prairie. Cattle are thought to prefer to eat flowering plants over grasses, but it is not known if that is because of inherent differences in the species or because farmed cattle tend to be confined in smaller areas. Bison dung is a vital source of nutrients for prairie soil, spreads seeds, and supports over 1,000 insect species, including specialist dung beetles which cannot subsist on the feces of any other animal. 
Degradation In spite of long recurrent droughts and occasional torrential rains, the grasslands of the Great Plains were not subject to great soil erosion. The root systems of native prairie grasses firmly held the soil in place to prevent run-off of soil. When the plant died, the fungi and bacteria returned its nutrients to the soil. These deep roots also helped native prairie plants reach water in even the driest conditions. Native grasses suffer much less damage from dry conditions than many farm crops currently grown. When the eastern tallgrass prairies were plowed and turned into agricultural lands, the prairie grasses with their strong root systems were destroyed. In combination with severe droughts, this resulted in the Dust Bowl, a major ecological disaster in which winds picked up the dry, unprotected prairie soil and formed it into "black blizzards" of airborne dirt that blackened the skies for days at a time across 19 states and forced 400,000 people to abandon the Great Plains ecoregion. The Dust Bowl deepened the economic distress of the Great Depression. Human use Bison hunting Nomadic hunting has been the main human activity on the prairies for the majority of the archaeological record. This once included many now-extinct species of megafauna. After the other extinctions, the main hunted animal on the prairies was the plains bison. Using loud noises and waving large signals, Native peoples would drive bison into fenced pens called buffalo pounds to be killed with bows and arrows or spears, or drive them off a cliff (called a buffalo jump), to kill or injure the bison en masse. The introduction of the horse and the gun greatly expanded the killing power of the plains Natives. 
That was followed by the policy of indiscriminate killing by European Americans and Canadians for both commercial reasons and to weaken the independence of plains Natives, and caused a dramatic drop in bison numbers from millions to a few hundred in a century's time, and almost caused their extinction. Farming and ranching The very dense soil plagued the first European settlers who were using wooden plows, which were more suitable for loose forest soil. On the prairie, the plows bounced around, and the soil stuck to them. This problem was solved in 1837 by an Illinois blacksmith named John Deere who developed a steel moldboard plow that was stronger and cut the roots, making the fertile soils ready for farming. Former grasslands are now among the most productive agricultural lands on Earth. The tallgrass prairie has been converted into one of the most intensive crop producing areas in North America. Less than one tenth of one percent (<0.09%) of the original landcover of the tallgrass prairie biome remains. Much of what persists is in cemetery prairies, railroad rights-of-way, or rocky/sandy/hilly places unsuitable for agriculture. States formerly with landcover in native tallgrass prairie including Iowa, Illinois, Minnesota, Wisconsin, Nebraska, and Missouri have become valued for their highly productive soils and are included in the Corn Belt. As an example of this land use intensity, Illinois and Iowa rank 49th and 50th, out of 50 US states, in total uncultivated land remaining. Drier shortgrass prairies were once used mostly for open-range ranching. With the development of barbed wire in the 1870s and improved irrigation techniques, this region has mostly been converted to cropland and small fenced pastures. 
In southern Canada, Palliser's Triangle has been changed into one of the most important sources of wheat in the world as a result of improved methods of watering wheat fields (along with the rest of the Southern prairie provinces which also grow wheat, canola and many other grains). Despite those advances in farming technology, the area is still very prone to extended periods of drought, which can be disastrous for the industry if it is significantly prolonged. Biofuels Research by David Tilman, ecologist at the University of Minnesota, suggests "Biofuels made from high-diversity mixtures of prairie plants can reduce global warming by removing carbon dioxide from the atmosphere. Even when grown on infertile soils, they can provide a substantial portion of global energy needs, and leave fertile land for food production." Unlike corn and soybeans, which are both directly and indirectly major food crops, including livestock feed, prairie grasses are not used for human consumption. Prairie grasses can be grown in infertile soil, eliminating the cost of adding nutrients to the soil. Tilman and his colleagues estimate that prairie grass biofuels would yield 51 percent more energy per acre than ethanol from corn grown on fertile land. Some plants commonly used are lupine, big bluestem (turkey foot), blazing star, switchgrass, and prairie clover. Preservation Because rich and thick topsoil made the land well suited for agricultural use, only 1% of tallgrass prairie remains in the U.S. today. Shortgrass prairie is more abundant. 
Significant preserved areas of prairie include: Alderville Black Oak Savanna; Rice Lake, Ontario American Prairie, Phillips and Blaine counties, Montana Clymer Meadow Preserve, Hunt County, Texas Cypress Hills Interprovincial Park, Alberta and Saskatchewan Goose Lake Prairie State Natural Area, Grundy County, Illinois Grasslands National Park, Saskatchewan Hoosier Prairie, Lake County, Indiana James Woodworth Prairie Preserve, a virgin prairie owned by the University of Illinois, Glenview, Illinois Jennings Environmental Education Center, Pennsylvania Kissimmee Prairie Preserve State Park, Okeechobee County, Florida Konza Prairie, Manhattan, Kansas Midewin National Tallgrass Prairie, in Will County, Illinois Mnoké Prairie, Indiana Dunes National Park, Porter, Indiana Nachusa Grasslands, a Nature Conservancy preserve near Franklin Grove, Illinois Wichita Mountains Wildlife Refuge, in Comanche County, Oklahoma Neal Smith National Wildlife Refuge, Iowa Nine-Mile Prairie, Nebraska Ojibway prairie in Windsor, Ontario Paynes Prairie Preserve State Park, Alachua County, Florida Richard Bong State Recreation Area, in Kenosha County, Wisconsin Russell R. Kirt Prairie, College of DuPage, Illinois Tallgrass Aspen Parkland, Manitoba and Minnesota Tallgrass Prairie National Preserve, Kansas Tallgrass Prairie Preserve , Oklahoma University of Wisconsin–Madison Arboretum, University of Wisconsin–Madison, Wisconsin Zumwalt Prairie, Wallowa County, Oregon Virgin prairies Virgin prairie refers to prairie land which has never been plowed. Small virgin prairies exist in the American Midwestern states and in Canada. Restored prairie refers to a prairie that has been reseeded after plowing or other disturbance. Prairie garden A prairie garden is a garden consisting primarily of plants from a prairie. 
Physiography The originally treeless prairies of the upper Mississippi basin began in Indiana and extended westward and north-westward until they merged with the drier region known as the Great Plains. An eastward extension of the same area, originally tree-covered, extended to central Ohio. Thus, the prairies generally lie between the Ohio and Missouri rivers on the south and the Great Lakes on the north. The prairies are a contribution of the glacial period. They consist of glacial drift deposited unconformably on an underlying rock surface of moderate or small relief. Here, the rocks are an extension of the same stratified Palaeozoic formations already described as occurring in the Appalachian region and around the Great Lakes. They are usually fine-textured limestones and shales lying horizontal. The moderate or small relief they were given by mature preglacial erosion is now buried under the drift. The most significant area of the prairies, from Indiana to North Dakota, consists of till plains, that is, sheets of unstratified drift. The plains are 30, 50 or even 100 ft (up to 30 m) thick covering the underlying rock surface for thousands of square miles except where postglacial stream erosion has locally laid it bare. The plains have an extraordinarily even surface. The till is presumably made in part of preglacial soils, but it is largely composed of rock waste mechanically transported by the creeping ice sheets. Although the crystalline rocks from Canada and some of the more resistant stratified rocks south of the Great Lakes occur as boulders and stones, a great part of the till has been crushed and ground to a clayey texture. The till plains, although sweeping in broad swells of slowly changing altitude, often appear level to the eye with a view stretching to the horizon. Here and there, faint depressions occur, occupied by marshy sloughs or floored with a rich black soil of postglacial origin. 
Thus, by sub-glacial aggradation, the prairies have been leveled up to a smooth surface, in contrast to the higher and non-glaciated hilly country just to the south. The great ice sheets formed terminal moraines around their border at various stages. However, the morainic belts are of slight relief in comparison to the great area of the ice. They rise gently from the till plains to 50, 100 or more feet. They may be one, two or three miles (5 km) wide and their hilly surface, dotted over with boulders, contains many small lakes in basins or hollows, instead of streams in valleys. The morainic belts are arranged in groups of concentric loops, convex southward, because the ice sheets advanced in lobes along the lowlands of the Great Lakes. Neighboring morainic loops join each other in re-entrants (north-pointing cusps), where two adjacent glacial lobes came together and formed their moraines in largest volume. The moraines are of too small relief to be shown on any maps except of the largest scale. Small as they are, they are the chief relief of the prairie states, and, in association with the nearly imperceptible slopes of the till plains, they determine the course of many streams and rivers, which as a whole are consequent upon the surface form of the glacial deposits. The complexity of the glacial period and its subdivision into several glacial epochs, separated by interglacial epochs of considerable length (certainly longer than the postglacial epoch) has a structural consequence in the superposition of successive till sheets, alternating with non-glacial deposits. It also has a physiographic consequence in the very different amount of normal postglacial erosion suffered by the different parts of the glacial deposits. The southernmost drift sheets, as in southern Iowa and northern Missouri, have lost their initially plain surface and are now maturely dissected into gracefully rolling forms. 
Here, the valleys of even the small streams are well opened and graded, and marshes and lakes are rare. These sheets are of early Pleistocene origin. Nearer the Great Lakes, the till sheets are trenched only by the narrow valleys of the large streams. Marshy sloughs still occupy the faint depressions in the till plains and the associated moraines have abundant small lakes in their undrained hollows. These drift sheets are of late Pleistocene origin. When the ice sheets extended to the land sloping southward to the Ohio River, Mississippi River and Missouri River, the drift-laden streams flowed freely away from the ice border. As the streams escaped from their subglacial channels, they spread into broader channels and deposited some of their load, and thus aggraded their courses. Local sheets or aprons of gravel and sand are spread more or less abundantly along the outer side of the morainic belts. Long trains of gravel and sands clog the valleys that lead southward from the glaciated to the non-glaciated area. Later, when the ice retreated further and the unloaded streams returned to their earlier degrading habit, they more or less completely scoured out the valley deposits, the remains of which are now seen in terraces on either side of the present flood plains. When the ice of the last glacial epoch had retreated so far that its front border lay on a northward slope, belonging to the drainage area of the Great Lakes, bodies of water accumulated in front of the ice margin, forming glacio-marginal lakes. The lakes were small at first, and each had its own outlet at the lowest depression of land to the south. As the ice melted further back, neighboring lakes became confluent at the level of the lowest outlet of the group. 
The outflowing streams grew in the same proportion and eroded a broad channel across the height of land and far downstream, while the lake waters built sand reefs or carved shore cliffs along their margin, and laid down sheets of clay on their floors. All of these features are easily recognized in the prairie region. The present site of Chicago was determined by an Indian portage or carry across the low divide between Lake Michigan and the headwaters of the Illinois River. This divide lies on the floor of the former outlet channel of the glacial Lake Michigan. Corresponding outlets are known for Lake Erie, Lake Huron, and Lake Superior. A very large sheet of water, named Lake Agassiz, once overspread a broad till plain in northern Minnesota and North Dakota. The outlet of this glacial lake, called River Warren, eroded a large channel in which the Minnesota River flows today. The Red River of the North flows northward through a plain formerly covered by Lake Agassiz. Certain extraordinary features were produced when the retreat of the ice sheet had progressed so far as to open an eastward outlet for the marginal lakes. This outlet occurred along the depression between the northward slope of the Appalachian plateau in west-central New York and the southward slope of the melting ice sheet. When this eastward outlet came to be lower than the south-westward outlet across the height of land to the Ohio or Mississippi river, the discharge of the marginal lakes was changed from the Mississippi system to the Hudson system. Many well-defined channels, cutting across the north-sloping spurs of the plateau in the neighborhood of Syracuse, New York mark the temporary paths of the ice-bordered outlet river. Successive channels are found at lower and lower levels on the plateau slope, indicating the successive courses taken by the lake outlet as the ice melted further and further back. 
On some of the channels, deep gorges were eroded heading in temporary cataracts which exceeded Niagara in height but not in breadth. The pools excavated by the plunging waters at the head of the gorges are now occupied by little lakes. The most significant stage in this series of changes occurred when the glacio-marginal lake waters were lowered so that the long escarpment of Niagara limestone was laid bare in western New York. The previously confluent waters were then divided into two lakes. The higher one, Lake Erie, supplied the outflowing Niagara River, which poured its waters down the escarpment to the lower, Lake Ontario. That gave rise to Niagara Falls. Lake Ontario's outlet for a time ran down the Mohawk Valley to the Hudson River. At the higher elevation, it was known as Lake Iroquois. When ice melted from the northeastern end of the lake, it dropped to a lower level, and drained through the St. Lawrence area, creating a lower base level for the Niagara River and increasing its erosive capacity. In certain districts, the subglacial till was not spread out in a smooth plain, but accumulated in elliptical mounds, 100–200 feet high and long, with axes parallel to the direction of the ice motion as indicated by striae on the underlying rock floor. These hills are known by the Irish name, drumlins, used for similar hills in north-western Ireland. The most remarkable groups of drumlins occur in western New York, where their number is estimated at over 6,000, and in southern Wisconsin, where it is placed at 5,000. They completely dominate the topography of their districts. A curious deposit of an impalpably fine and unstratified silt, known by the German name löss (or loess), lies on the older drift sheets near the larger river courses of the upper Mississippi basin. It attains a thickness of or more near the rivers and gradually fades away at a distance of ten or more miles (16 or more km) on either side. 
It contains land shells, and hence cannot be attributed to marine or lacustrine submergence. The best explanation is that, during certain phases of the glacial period, it was carried as dust by the winds from the flood plains of aggrading rivers, and slowly deposited on the neighboring grass-covered plains. The glacial and eolian origin of this sediment is evidenced by the angularity of its grains (a bank of it will stand without slumping for years), whereas, if it had been transported significantly by water, the grains would have been rounded and polished. Loess is parent material for an extremely fertile, but droughty soil. Southwestern Wisconsin and parts of the adjacent states of Illinois, Iowa, and Minnesota are known as the driftless zone, because, although bordered by drift sheets and moraines, it is free from glacial deposits. It must therefore have been a sort of oasis, when the ice sheets from the north advanced past it on the east and west, and joined around its southern border. The reason for this exemption from glaciation is the converse of that for the southward convexity of the morainic loops. While they mark the paths of greatest glacial advance along lowland troughs (lake basins), the driftless zone is a district protected from ice invasion by reason of the obstruction which the highlands of northern Wisconsin and Michigan (part of the Superior upland) offered to glacial advance. The course of the upper Mississippi River is largely consequent upon glacial deposits. Its sources are in the morainic lakes in northern Minnesota. The drift deposits thereabouts are so heavy that the present divides between the drainage basins of Hudson Bay, Lake Superior, and the Gulf of Mexico evidently stand in no very definite relation to the preglacial divides. The course of the Mississippi through Minnesota is largely guided by the form of the drift cover. 
Several rapids and the Saint Anthony Falls (determining the site of Minneapolis) are signs of immaturity, resulting from superposition through the drift on the under rock. Further south, as far as the entrance of the Ohio River, the Mississippi follows a rock-walled valley deep, with a flood-plain wide. This valley seems to represent the path of an enlarged early-glacial Mississippi, when much precipitation that is today discharged to Hudson Bay and the Gulf of St. Lawrence was delivered to the Gulf of Mexico, for the curves of the present river are of distinctly smaller radii than the curves of the valley. Lake Pepin ( below St. Paul), a picturesque expansion of the river across its flood-plain, is due to the aggradation of the valley floor where the Chippewa River, coming from the northeast, brought an overload of fluvio-glacial drift. Hence, even the father of waters, like so many other rivers in the Northern states, owes many of its features more or less directly to glacial action. The fertility of the prairies is a natural consequence of their origin. During the mechanical transportation of the till, no vegetation was present to remove the minerals essential to plant growth, as is the case in the soils of normally weathered and dissected peneplains. In this, the till soils differ from those of the Appalachian piedmont, which, though not exhausted by the primeval forest cover, are by no means so rich as the till sheets of the prairies. Moreover, whatever the rocky understructure, the till soil has been averaged by a thorough mechanical mixture of rock grindings. Hence, the prairies are continuously fertile for scores of miles together. The true prairies were once covered with a rich growth of natural grass and annual flowering plants, but today, they are covered with farms.
Physical sciences
Grasslands
66556
https://en.wikipedia.org/wiki/Broccoli
Broccoli
Broccoli (Brassica oleracea var. italica) is an edible green plant in the cabbage family (family Brassicaceae, genus Brassica) whose large flowering head, stalk and small associated leaves are eaten as a vegetable. Broccoli is classified in the Italica cultivar group of the species Brassica oleracea. Broccoli has large flower heads, or florets, usually dark green, arranged in a tree-like structure branching out from a thick stalk, which is usually light green. Leaves surround the mass of flower heads. Broccoli resembles cauliflower, a different but closely related cultivar group of the same Brassica species. It can be eaten either raw or cooked. Broccoli is a particularly rich source of vitamin C and vitamin K. Contents of its characteristic sulfur-containing glucosinolate compounds, isothiocyanates and sulforaphane, are diminished by boiling but are better preserved by steaming, microwaving or stir-frying. Rapini, sometimes called "broccoli rabe", is a distinct species from broccoli, forming similar but smaller heads, and is actually a type of turnip (Brassica rapa). Taxonomy Brassica oleracea var. italica was described in 1794 by Joseph Jakob von Plenck in Icones Plantarum Medicinalium 6:29, t. 534. Like all the other brassicas, broccoli was developed from the wild cabbage (Brassica oleracea var. oleracea), also called colewort or field cabbage. Etymology The word broccoli, first used in the 17th century, comes from the Italian plural of broccolo, which means "the flowering crest of a cabbage", and is the diminutive form of brocco, meaning "small nail" or "sprout". History Broccoli resulted from the breeding of landrace Brassica crops in the northern Mediterranean starting in about the sixth century BCE. Broccoli has its origins in primitive cultivars grown in the Roman Empire and was most likely improved via artificial selection in the southern Italian Peninsula or in Sicily. 
Broccoli was spread to northern Europe by the 18th century and brought to North America in the 19th century by Italian immigrants. After the Second World War, the breeding of F1 hybrids in the United States and Japan increased yields, quality, growth speed, and regional adaptation, which produced the cultivars commonly grown since then: 'Premium Crop', 'Packman', and 'Marathon'. Description Broccoli is an annual plant which can grow up to tall. Broccoli is very similar to cauliflower, but unlike it, its floral buds are well-formed and clearly visible. The inflorescence grows at the end of a central, thick stem and is dark green. Violet, yellow, or even white heads have been created, but these varieties are rare. The flowers are yellow with four petals. The growth season for broccoli is 14–15 weeks. Broccoli is collected by hand immediately after the head is fully formed but while the flowers are still in bud. The plant develops numerous little "heads" from the lateral shoots which can be harvested later. Varieties There are three commonly grown types of broccoli. The most familiar is Calabrese broccoli, often referred to simply as "broccoli", named after Calabria in Italy. It has large green heads and thick stalks. It is a cool-season annual crop. Sprouting broccoli (white or purple) has a larger number of heads with many thin stalks. Purple cauliflower or violet cauliflower is a type of broccoli grown in Europe and North America. It has a head shaped like cauliflower but consists of many tiny flower buds. Sometimes, but not always, it has a purple cast to the tips of the flower buds. Purple cauliflower may also be white, red, green, or other colors. Beneforté is a variety of broccoli containing 2–3 times more glucoraphanin and produced by crossing broccoli with a wild Brassica variety, Brassica oleracea var villosa. 
Other cultivar groups of Brassica oleracea Other cultivar groups of Brassica oleracea include cabbage (Capitata Group), cauliflower and Romanesco broccoli (Botrytis Group), kale (Acephala Group), collard (Viridis Group), kohlrabi (Gongylodes Group), Brussels sprouts (Gemmifera Group), and kai-lan (Alboglabra Group). As these groups are the same species, they readily hybridize: for example, broccolini or "Tenderstem broccoli" is a cross between broccoli and kai-lan. Broccoli cultivars form the genetic basis of the "tropical cauliflowers" commonly grown in South and Southeastern Asia, although they produce a more cauliflower-like head in warmer conditions. Cultivation The majority of broccoli cultivars are cool-weather crops that do poorly in hot summer weather. Broccoli grows best when exposed to an average daily temperature between . When the cluster of flowers, also referred to as a "head" of broccoli, appears in the center of the plant, the cluster is generally green. Garden pruners or shears are used to cut the head about from the tip. Broccoli should be harvested before the flowers on the head bloom bright yellow. Broccoli cannot be harvested using machines, but rather is hand-harvested. Production In 2021, global production of broccoli (combined for production reports with cauliflowers) was 26 million tonnes, with China and India together accounting for 72% of the world total. Secondary producers, each having about one million tonnes or less annually, were the United States, Spain, and Mexico (table). In the United States, broccoli is grown year-round in California – which produced 92% of the crop nationally – with 95% of the total crop produced for fresh sales in 2018. Nutrition Raw broccoli is 89% water, 7% carbohydrates, 3% protein, and contains negligible fat (table). A reference amount of raw broccoli provides of food energy and is a rich source (20% or higher of the Daily Value, DV) of vitamin C (107% DV) and vitamin K (97% DV) (table). 
Raw broccoli also contains moderate amounts (10–19% DV) of several B vitamins and the dietary mineral manganese, whereas other micronutrients are low in content (less than 10% DV). Broccoli contains the dietary carotenoid, beta-carotene. Cooking Boiling substantially reduces the levels of broccoli glucosinolates, while other cooking methods, such as steaming, microwaving, and stir-frying, have no significant effect on glucosinolate levels. Taste The perceived bitterness of cruciferous vegetables, such as broccoli, results from glucosinolates and their hydrolysis products, particularly isothiocyanates and other sulfur-containing compounds. Preliminary research indicates that genetic inheritance through the gene TAS2R38 may be responsible in part for bitter taste perception in broccoli. Pests The larvae of Pieris rapae, also known as the "small white" butterfly, are a common pest in broccoli and were mostly introduced accidentally to North America, Australia, and New Zealand. Additional pests common to broccoli production include aphids, cabbage loopers, cabbage webworms, cross-striped cabbageworms, diamondback moths, imported cabbageworms, cabbage maggots, and harlequin cabbage bugs.
Biology and health sciences
Brassicales
66560
https://en.wikipedia.org/wiki/Bracken
Bracken
Bracken (Pteridium) is a genus of large, coarse ferns in the family Dennstaedtiaceae. Ferns (Pteridophyta) are vascular plants that have alternating generations, large plants that produce spores and small plants that produce sex cells (eggs and sperm). Brackens are noted for their large, highly divided leaves. They are found on all continents except Antarctica and in all environments except deserts, though their typical habitat is moorland. The genus probably has the widest distribution of any fern in the world. The word bracken is of Old Norse origin, related to Swedish bräken and Danish bregne, both meaning fern. In the past, the genus was commonly treated as having only one species, Pteridium aquilinum, but the recent trend is to subdivide it into about ten species. Like other ferns, brackens do not have seeds or fruits, but the immature fronds, known as fiddleheads, are sometimes eaten, although some are thought to be carcinogenic. Description Bracken is one of the oldest ferns, with fossil records over 55 million years old having been found. The plant sends up large, triangular fronds from a wide-creeping underground rootstock, and may form dense thickets. This rootstock may travel a meter or more underground between fronds. The fronds may grow up to long or longer with support, but typically are in the range of high. In cold environments, bracken is deciduous and, as it requires well-drained soil, is generally found growing on the sides of hills. Fern spores are contained in structures found on the underside of the leaf called sori. The linear, leaf-edge pattern of these in bracken is different from that in most other ferns, where the sori are circular and occur towards the center of the leaf. Species Distribution and habitat Pteridium aquilinum (bracken or common bracken) is the most common species with a cosmopolitan distribution, occurring in temperate and subtropical regions throughout much of the world. 
It is a prolific and abundant plant in the moorlands of Ireland, where it is limited to altitudes of below 600 metres. It does not like poorly drained marshes or fen. It has been observed growing in soils from pH 2.8 to 8.6. Exposure to cold or high pH inhibits its growth. It causes such a problem in invading pasturelands that at one time the British government had an eradication programme. Special filters have even been used on some British water supplies to filter out the bracken spores. Bracken is a characteristic moorland plant in Ireland which over the last decades has increasingly out-competed characteristic ground-cover plants such as moor grasses, cowberry, bilberry, and heathers, and now covers a considerable part of upland moorland. Once valued and gathered for use in animal bedding, tanning, soap and glass making, and as a fertiliser, bracken is now seen as a pernicious, invasive, and opportunistic plant, taking over from the plants traditionally associated with open moorland and reducing easy access by humans. It is toxic to cattle, dogs, sheep, pigs, and horses, and is also linked to cancers in humans. It can harbour high levels of sheep ticks, which can pass on Lyme disease. Grazing provided some control by stock trampling, but this has almost ceased since the 2001 foot-and-mouth disease outbreak reduced commercial livestock production. Global climatic changes have also suited bracken well and contributed to its rapid increase in land coverage. Bracken is a well-adapted pioneer plant which can colonise land quickly, with the potential to extend its area by as much as 1%–3% per year. This ability to expand rapidly at the expense of other plants and wildlife can cause major problems for land users and managers. It colonises ground with an open vegetation structure, but is slow to colonise healthy, well managed heather stands. Bracken presents a threat to biodiversity. 
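The quoted expansion rate of 1%–3% per year compounds, and a little doubling-time arithmetic makes the implication concrete. The sketch below is purely illustrative: it assumes the rate applies annually to the whole stand, which is a simplification of real patch dynamics.

```python
import math

# Illustrative compound-growth arithmetic for the 1%-3% annual expansion
# rates quoted above. Assumes the rate compounds annually across the whole
# stand -- a simplification for illustration, not field data.
for rate in (0.01, 0.03):
    doubling_years = math.log(2) / math.log(1 + rate)
    print(f"At {rate:.0%}/year, bracken cover doubles in about {doubling_years:.0f} years")
```

At the upper rate of 3% per year, cover doubles in roughly 23 years, consistent with the decades-long timescales of spread described here.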
Many plant species occur only on upland moorland, tied to unique features in the habitat. The loss and degradation of such areas due to the dominance of bracken has caused many species to become rare and isolated. Ecology Evolutionarily, bracken may be considered one of the most successful ferns. It is considered highly invasive, and can survive in acid soils. Fungal associations Woodland fungi such as Mycena epipterygia can be found growing under the bracken canopy. Both Camarographium stephensii and Typhula quisquiliaris grow primarily from dead bracken stems. Other plant associations Bracken is known to produce and release allelopathic chemicals, which is an important factor in its ability to dominate other vegetation, particularly in regrowth after fire. Its chemical emissions, shady canopy, and thick litter inhibit other plant species from establishing themselves – with the occasional exception of plants which support rare butterflies. Herb and tree seedling growth may be inhibited even after bracken is removed, apparently because active plant toxins remain in the soil. Bracken substitutes for a woodland canopy, and is important for giving shade to European plants such as common bluebell and wood anemone where the woodland does not exist. These plants are intolerant to stock trampling. Dead bracken provides a warm microclimate in which the immature stages of such butterflies can develop. Climbing corydalis, wild gladiolus, and chickweed wintergreen also seem to benefit from the conditions found under bracken stands. The high humidity in the stands helps mosses survive underneath, including Campylopus flexuosus, Hypnum cupressiforme, Polytrichum commune, Pseudoscleropodium purum and Rhytidiadelphus squarrosus. Uses Food Bracken fiddleheads have been eaten by many cultures throughout history, either fresh, cooked, or pickled. Pteridium aquilinum is especially common in East Asian cuisine. 
In Korea, bracken (sometimes referred to as 'fernbrake' in Korean recipes) is known as gosari (고사리), and is a typical ingredient in bibimbap, a popular mixed rice dish. Stir-fried bracken (gosari namul) is also a common side dish (banchan) in Korea. In Japan, bracken is known as warabi (わらび), and is steamed, boiled, or cooked in soups. Warabimochi bracken jelly, named after its resemblance to mochi rice cakes, is a popular traditional dessert, although commercial variants are often made with cheaper potato starch instead. The fiddleheads are also preserved in salt, sake, or miso. In China, bracken is known as juecai (蕨菜), and is eaten like vegetables or preserved by drying. Also called "fernbrake", it is used as a vegetable in soups and stews. Bracken rhizomes can be ground into flour to make bread. In the Canary Islands, the rhizome was historically used to make a porridge called gofio. Both fronds and rhizomes have been used to produce beer in Siberia, and among indigenous peoples of North America. Bracken leaves are used in the Mediterranean region to filter sheep's milk, and to store freshly made ricotta cheese. P. esculentum rhizomes were traditionally used by the Māori people of New Zealand as a staple food, and are known as aruhe. They were eaten by exploring or hunting groups away from permanent settlements. The plant was widely distributed across New Zealand as a result of prehistoric deforestation, and planting on rich soils, which produced the best rhizomes. The rhizomes were dried, and could be heated and softened with a pounder (patu aruhe), after which the starch could be sucked from the fibers. Patu aruhe were important ritual items, and several distinct styles were developed. Source of potash Green bracken ferns average 25% potash and can contain as much as 55%. It has advantages over other sources of plant ash, such as hardwood, due to its high potash yield as a percentage of both dry and fresh mass, abundance, growth rate, and ease of harvesting. 
Bracken has been recognized as a source of potash since at least the 10th century AD, with numerous references in European texts, typically in relation to its use for soap and glass making. The turn to mined sources of potash in the industrial age ended significant use of bracken as a source of potash, contributing to its status as a troublesome weed. Others Bracken has traditionally been used for animal bedding, which later breaks down into a rich mulch that could be used as fertilizer. It is still used this way in Wales. It is also used as a winter mulch, which has been shown to reduce the loss of potassium and nitrogen in the soil, and to lower soil pH. Toxicity Bracken contains the carcinogenic compound ptaquiloside, which causes damage to DNA, thus leading to cancers of the digestive tract. High stomach cancer rates are found in Japan and North Wales, where bracken is often eaten, but it is unclear whether bracken plays a role. Consumption of ptaquiloside-contaminated milk is thought to contribute to human gastric cancer in the Andean states of Venezuela. The spores have also been implicated as carcinogens. However, ptaquiloside is water-soluble and destroyed in heat (by cooking) and alkaline conditions (by soaking). Korean and Japanese cooks have traditionally soaked the shoots in water and ash to detoxify the plant before eating. Ptaquiloside also degenerates at room temperature, and denatures almost completely at boiling temperature. Despite this, moderation of consumption is still recommended to reduce chances of cancer formation. The British Royal Horticultural Society recommends against consumption of bracken altogether, by both humans and livestock. Ptaquiloside has been shown to leach from wild bracken plants into the water supply, which has been implicated in high rates of stomach and oesophageal cancers in areas with high bracken growth, such as Wales and South America. 
Uncooked bracken also contains the enzyme thiaminase, which breaks down thiamine (vitamin B1). Excessive consumption of bracken can lead to vitamin B1 deficiency (beriberi), especially in animals with simple stomachs. Ruminants are less vulnerable because they synthesize thiamine. In animals Ptaquiloside from bracken has been shown to be carcinogenic in some animals. Animals may ingest the plant when other sources of food are unavailable, such as during droughts or after snowfalls. In cattle, bracken poisoning can occur in acute and chronic forms, acute poisoning being the most common. Milk from cows that have eaten bracken may also contain ptaquiloside, which is especially concentrated in buttermilk. In pigs and horses, bracken poisoning induces vitamin B1 deficiency. In insects Hydrogen cyanide is released by the young fronds of bracken when eaten by mammals or insects. Two major insect moulting hormones, alpha ecdysone and 20-hydroxyecdysone, are found in bracken. These cause uncontrollable, repeated moulting in insects ingesting the fronds, leading to rapid death. Bracken is currently under investigation as a possible source of new insecticides. Archaeology Many sites have archaeological remains dating from the Neolithic and Bronze Ages through to the Industrial Revolution. The root systems of established bracken stands degrade archaeological sites by disrupting the strata and other physical evidence. These rhizomes may travel a metre or more underground between fronds and form 90% of the plant, with only the remainder being visible. Control Some small level of scattered cover can provide beneficial habitats for some wildlife, at least in the UK (as given above). However, on balance, removing bracken encourages primary habitats to re-establish, which are of greater importance for wildlife. Control is a complex question with complex answers, which need to form part of a wider approach. 
Management can be difficult and expensive; plans may need to aim at cost-effective, practical limitation and control rather than at eradication. All methods need follow-up over time, starting with the advancing areas first. Given the decades it has taken to arrive at the current levels of coverage on many sites, slowing or reversing the process will of necessity also be long-term, with consistency and persistence from all parties being key. Various techniques are recommended by Natural England and the RSPB to control bracken, either individually or in combination. Cutting — Once or twice a year, repeatedly cutting back the fronds for at least 3 years. Crushing/rolling — Using rollers, again for at least 3 years. Livestock treading — During winter, encouraging livestock to bracken areas with food. They trample the developing plants and allow frost to penetrate the rhizomes. In May and June, temporary close grazing or mob stocking on small areas away from nests, particularly using cattle, horses, pigs, or ponies, may crush emerging bracken fronds, resulting in reduced bracken cover. Sufficient fodder will be required to prevent livestock eating the bracken. This may suit steep areas where human access is difficult and herbicide undesirable. Herbicide — Asulam (also known as Asulox) is selective for ferns; glyphosate is not, but the latter has the advantage that the effects can be seen soon after application. They are applied when the fronds are fully unfurled to ensure that the chemical is fully absorbed. Rare ferns such as adder's tongue (Ophioglossum vulgatum), Killarney fern (Trichomanes speciosum) and lemon-scented fern can also be found in similar habitats and it is important that these are not destroyed in the process of bracken control. Natural England recommends that only Asulam be sprayed aerially; glyphosate requires spot treatment, e.g. using a weedwiper or knapsack spray. 
Asulam has low toxicity and has generally been highly cost-effective, but its use has been restricted in the EU since 2012, at least until specific registered uses can be defined. Selective sprays like Starane, Access, Metsulfuron 600WG, etc. work well, but only if sprayed in late autumn, when the rhizomes store food for winter and hence absorb the poison. On archaeological sites, chemical control is usually required, as mechanical methods may cause damage. Allowing other plants to grow in bracken's place, e.g. through the establishment of woodland, creates shade that inhibits bracken growth. In the UK, trees, notably rowan, have done well since grazing reduced greatly after the foot-and-mouth epidemic in 2001, but young saplings struggle in high bracken. In decades to come and if permitted, tree shade cover may increase and so may reduce bracken growth, but this is both long-term and in some cases contentious in the change it would bring to traditionally open heath or moorland, both aesthetically and as a valuable habitat. Burning — Useful for removing the litter, but may be counter-productive as bracken is considered to be a fire-adapted species. Ploughing — Late in the season, followed by sowing seed. Any bracken control programme must be completed, or bracken will re-establish. A Bracken Control Group was established in 2012 to provide best-practice guidance for all bracken control techniques. The Group has also been responsible for submitting an application for an Emergency Authorisation to secure the continued availability of Asulam for bracken control, following the decision not to register the product under new regulations in the EU. Registration has been re-applied for but this will not be available until 2017 at the earliest. Until re-registration is approved the Group will aim to keep Asulam available under the emergency provisions. In culture Bracken is commonly referred to by local populations in the north of England as 'Moorland Scrub'. 
The creature 'Bracken' from the 2023 video game Lethal Company is named after the plant.
Biology and health sciences
Ferns
Plants
66572
https://en.wikipedia.org/wiki/ENIAC
ENIAC
ENIAC (Electronic Numerical Integrator and Computer) was the first programmable, electronic, general-purpose digital computer, completed in 1945. Other computers had some of these features, but ENIAC was the first to have them all. It was Turing-complete and able to solve "a large class of numerical problems" through reprogramming. ENIAC was designed by John Mauchly and J. Presper Eckert to calculate artillery firing tables for the United States Army's Ballistic Research Laboratory (which later became a part of the Army Research Laboratory). However, its first program was a study of the feasibility of the thermonuclear weapon. ENIAC was completed in 1945 and first put to work for practical purposes on December 10, 1945. ENIAC was formally dedicated at the University of Pennsylvania on February 15, 1946, having cost $487,000, and called a "Giant Brain" by the press. It had a speed on the order of one thousand times faster than that of electro-mechanical machines. ENIAC was formally accepted by the U.S. Army Ordnance Corps in July 1946. It was transferred to Aberdeen Proving Ground in Aberdeen, Maryland in 1947, where it was in continuous operation until 1955. The 1948 Manchester Baby was the first machine to contain all the elements essential to a modern electronic digital computer, as it could be reprogrammed electronically to hold stored programs instead of requiring setting of switches to program as ENIAC did. Development and design ENIAC's design and construction was financed by the United States Army, Ordnance Corps, Research and Development Command, led by Major General Gladeon M. Barnes. The total cost was about $487,000. The conception of ENIAC began in June 1941, when Friden calculators and differential analyzers were used by the United States Army Ordnance Department to compute firing tables for artillery, which was done by graduate students under John Mauchly's supervision. 
Mauchly began to wonder whether electronics could be applied to mathematics for faster calculations. Because he was not an electronics expert himself, he partnered with research associate J. Presper Eckert to draft a design for an electronic computer that could calculate at far greater speed. In August 1942, Mauchly proposed an all-electronic calculating machine that could help the U.S. Army calculate complex ballistics tables. The U.S. Army Ordnance accepted their plan, giving the University of Pennsylvania a six-month research contract for $61,700. The construction contract was signed on June 5, 1943; work on the computer began in secret at the University of Pennsylvania's Moore School of Electrical Engineering the following month, under the code name "Project PX", with John Grist Brainerd as principal investigator. Herman H. Goldstine persuaded the Army to fund the project and was put in charge of overseeing it on the Army's behalf. Assembly of the computer began in June 1944. In September of that year, Eckert and Mauchly completed their design of the computer. Construction was completed in May 1945, and testing began at the Moore School. In November of that year, the duo, along with John Brainerd and Herman Goldstine, issued the first confidential published report on the computer, which described how it worked and how it was programmed. ENIAC was designed by Ursinus College physics professor John Mauchly and J. Presper Eckert of the University of Pennsylvania, U.S. The team of design engineers assisting the development included Robert F. Shaw (function tables), Jeffrey Chuan Chu (divider/square-rooter), Thomas Kite Sharpless (master programmer), Frank Mural (master programmer), Arthur Burks (multiplier), Harry Huskey (reader/printer) and Jack Davis (accumulators). 
Significant development work was undertaken by the female mathematicians who handled the bulk of the ENIAC programming: Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty. In 1946, the researchers resigned from the University of Pennsylvania and formed the Eckert–Mauchly Computer Corporation. ENIAC was a large, modular computer, composed of individual panels to perform different functions. Twenty of these modules were accumulators that could not only add and subtract, but hold a ten-digit decimal number in memory. Numbers were passed between these units across several general-purpose buses (or trays, as they were called). In order to achieve its high speed, the panels had to send and receive numbers, compute, save the answer and trigger the next operation, all without any moving parts. Key to its versatility was the ability to branch; it could trigger different operations, depending on the sign of a computed result. Components By the end of its operation in 1956, ENIAC contained 18,000 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and approximately 5,000,000 hand-soldered joints. It weighed more than , was roughly tall, deep, and long, occupied and consumed 150 kW of electricity. Input was possible from an IBM card reader, and an IBM card punch was used for output. These cards could be used to produce printed output offline using an IBM accounting machine, such as the IBM 405. Although ENIAC had no system for internal data storage at its inception, these punched cards could be used for external memory storage. In 1953, a 100-word magnetic-core memory built by the Burroughs Corporation was added to ENIAC. ENIAC used ten-position ring counters to store digits; each digit required 36 vacuum tubes, 10 of which were the dual triodes making up the flip-flops of the ring counter. 
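The decade ring counter just described can be sketched in software. The class below is a hypothetical illustration (the real hardware used rings of vacuum-tube flip-flops, not code): a pulse advances the active position, and wrapping past 9 emits a carry, which is how ENIAC performed addition by counting pulses.

```python
class DecadeCounter:
    """One decimal digit stored as a 10-position ring counter.

    Mimics ENIAC's digit storage: each pulse advances the ring by one
    position, and wrapping past 9 signals a carry to the next digit.
    """

    def __init__(self):
        self.position = 0  # which of the 10 ring positions is active

    def pulse(self):
        """Advance one position; return True if the counter wrapped (carry)."""
        self.position = (self.position + 1) % 10
        return self.position == 0


# Adding 7 + 5 by counting pulses, as the hardware did:
units = DecadeCounter()
tens = DecadeCounter()
for _ in range(7):
    if units.pulse():
        tens.pulse()
for _ in range(5):
    if units.pulse():
        tens.pulse()
print(tens.position, units.position)  # 1 2  -> i.e. 12
```

The carry propagation shown here is exactly the electronic analogue of a mechanical adding machine's digit wheels, as the article notes.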
Arithmetic was performed by "counting" pulses with the ring counters and generating carry pulses if the counter "wrapped around", the idea being to electronically emulate the operation of the digit wheels of a mechanical adding machine. ENIAC had 20 ten-digit signed accumulators, which used ten's complement representation and could perform 5,000 simple addition or subtraction operations between any of them and a source (e.g., another accumulator or a constant transmitter) per second. It was possible to connect several accumulators to run simultaneously, so the peak speed of operation was potentially much higher, due to parallel operation. It was possible to wire the carry of one accumulator into another accumulator to perform arithmetic with double the precision, but the accumulator carry circuit timing prevented the wiring of three or more for even higher precision. ENIAC used four of the accumulators (controlled by a special multiplier unit) to perform up to 385 multiplication operations per second; five of the accumulators were controlled by a special divider/square-rooter unit to perform up to 40 division operations per second or three square root operations per second. The other nine units in ENIAC were the initiating unit (started and stopped the machine), the cycling unit (used for synchronizing the other units), the master programmer (controlled loop sequencing), the reader (controlled an IBM punch-card reader), the printer (controlled an IBM card punch), the constant transmitter, and three function tables. Operation times The references by Rojas and Hashagen (or Wilkes) give more details about the times for operations, which differ somewhat from those stated above. The basic machine cycle was 200 microseconds (20 cycles of the 100 kHz clock in the cycling unit), or 5,000 cycles per second for operations on the 10-digit numbers. In one of these cycles, ENIAC could write a number to a register, read a number from a register, or add/subtract two numbers. 
A multiplication of a 10-digit number by a d-digit number (for d up to 10) took d+4 cycles, so the multiplication of a 10-digit number by 10-digit number took 14 cycles, or 2,800 microseconds—a rate of 357 per second. If one of the numbers had fewer than 10 digits, the operation was faster. Division and square roots took 13(d+1) cycles, where d is the number of digits in the result (quotient or square root). So a division or square root took up to 143 cycles, or 28,600 microseconds—a rate of 35 per second. (Wilkes 1956:20 states that a division with a 10-digit quotient required 6 milliseconds.) If the result had fewer than ten digits, it was obtained faster. ENIAC was able to process about 500 FLOPS, compared to modern supercomputers' petascale and exascale computing power. Reliability ENIAC used common octal-base radio tubes of the day; the decimal accumulators were made of 6SN7 flip-flops, while 6L7s, 6SJ7s, 6SA7s and 6AC7s were used in logic functions. Numerous 6L6s and 6V6s served as line drivers to drive pulses through cables between rack assemblies. Several tubes burned out almost every day, leaving ENIAC nonfunctional about half the time. Special high-reliability tubes were not available until 1948. Most of these failures, however, occurred during the warm-up and cool-down periods, when the tube heaters and cathodes were under the most thermal stress. Engineers reduced ENIAC's tube failures to the more acceptable rate of one tube every two days. According to an interview in 1989 with Eckert, "We had a tube fail about every two days and we could locate the problem within 15 minutes." In 1954, the longest continuous period of operation without a failure was 116 hours—close to five days. Programming ENIAC could be programmed to perform complex sequences of operations, including loops, branches, and subroutines. 
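The operation times quoted above can be reproduced with a short calculation. This is an illustrative sketch; the cycle length and cycle-count formulas are those stated in the text.

```python
CYCLE_US = 200  # one machine cycle: 20 ticks of the 100 kHz clock = 200 microseconds

def multiply_cycles(d):
    """Cycles to multiply a 10-digit number by a d-digit number (d up to 10)."""
    return d + 4

def divide_cycles(d):
    """Cycles for a division or square root with a d-digit result."""
    return 13 * (d + 1)

mult_us = multiply_cycles(10) * CYCLE_US
div_us = divide_cycles(10) * CYCLE_US
print(mult_us, round(1e6 / mult_us))  # 2800 357  (2,800 us -> 357 multiplications/s)
print(div_us, round(1e6 / div_us))    # 28600 35  (28,600 us -> 35 divisions/s)
```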
However, instead of the stored-program computers that exist today, ENIAC was just a large collection of arithmetic machines, which originally had programs set up into the machine by a combination of plugboard wiring and three portable function tables (containing 1,200 ten-way switches each). The task of taking a problem and mapping it onto the machine was complex, and usually took weeks. Due to the complexity of mapping programs onto the machine, programs were only changed after huge numbers of tests of the current program. After the program was figured out on paper, the process of getting the program into ENIAC by manipulating its switches and cables could take days. This was followed by a period of verification and debugging, aided by the ability to execute the program step by step. A programming tutorial for the modulo function using an ENIAC simulator gives an impression of what a program on the ENIAC looked like. ENIAC's six primary programmers, Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman, not only determined how to input ENIAC programs, but also developed an understanding of ENIAC's inner workings. The programmers were often able to narrow bugs down to an individual failed tube which could be pointed to for replacement by a technician. Programmers During World War II, while the U.S. Army needed to compute ballistics trajectories, many women were interviewed for this task. At least 200 women were hired by the Moore School of Engineering to work as "computers" and six of them were chosen to be the programmers of ENIAC. Betty Holberton, Kay McNulty, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas, programmed the ENIAC to perform calculations for ballistics trajectories electronically for the Army's Ballistic Research Laboratory. 
While men with the same education and experience were designated as "professionals", these women were unreasonably designated as "subprofessionals", though they held professional degrees in mathematics and were highly trained mathematicians. These women were not "refrigerator ladies", i.e., models posing in front of the machine for press photography, as Kathryn Kleiman, then a computer science undergraduate, discovered in her own research, contrary to what a computing historian had told her. However, some of the women never received recognition for their work on the ENIAC during their lifetimes. After the war ended, the women continued to work on the ENIAC. Their expertise made their positions difficult to fill with returning soldiers. In the 1990s, Kleiman learned that most of the ENIAC programmers had not been invited to the ENIAC's 50th anniversary event, so she made it her mission to track them down and record their oral histories. The resulting documentary was intended to inspire young women and men to get involved in programming. "They were shocked to be discovered," Kleiman says. "They were thrilled to be recognized, but had mixed impressions about how they felt about being ignored for so long." Kleiman released a book on the six female ENIAC programmers in 2022. These early programmers were drawn from a group of about two hundred women employed as computers at the Moore School of Electrical Engineering at the University of Pennsylvania. The job of computers was to produce the numeric results of mathematical formulas needed for a scientific study or an engineering project. They usually did so with a mechanical calculator. The women studied the machine's logic, physical structure, operation, and circuitry in order to understand not only the mathematics of computing, but also the machine itself. This was one of the few technical job categories available to women at that time. 
Betty Holberton (née Snyder) continued on to help write the first generative programming system (SORT/MERGE) and help design the first commercial electronic computers, the UNIVAC and the BINAC, alongside Jean Jennings. McNulty developed the use of subroutines in order to help increase ENIAC's computational capability. Herman Goldstine selected the programmers, whom he called operators, from the computers who had been calculating ballistics tables with mechanical desk calculators, and a differential analyzer prior to and during the development of ENIAC. Under Herman and Adele Goldstine's direction, the computers studied ENIAC's blueprints and physical structure to determine how to manipulate its switches and cables, as programming languages did not yet exist. Though contemporaries considered programming a clerical task and did not publicly recognize the programmers' effect on the successful operation and announcement of ENIAC, McNulty, Jennings, Snyder, Wescoff, Bilas, and Lichterman have since been recognized for their contributions to computing. Three of the current (2020) Army supercomputers Jean, Kay, and Betty are named after Jean Bartik (Betty Jennings), Kay McNulty, and Betty Snyder respectively. The "programmer" and "operator" job titles were not originally considered professions suitable for women. The labor shortage created by World War II helped enable the entry of women into the field. However, the field was not viewed as prestigious, and bringing in women was viewed as a way to free men up for more skilled labor. Essentially, women were seen as meeting a need in a temporary crisis. For example, the National Advisory Committee for Aeronautics said in 1942, "It is felt that enough greater return is obtained by freeing the engineers from calculating detail to overcome any increased expenses in the computers' salaries. The engineers admit themselves that the girl computers do the work more rapidly and accurately than they would. 
This is due in large measure to the feeling among the engineers that their college and industrial experience is being wasted and thwarted by mere repetitive calculation." Following the initial six programmers, an expanded team of a hundred scientists was recruited to continue work on the ENIAC. Among these were several women, including Gloria Ruth Gordon. Adele Goldstine wrote the original technical description of the ENIAC. Programming languages Several language systems were developed to describe programs for the ENIAC. Role in the hydrogen bomb Although the Ballistic Research Laboratory was the sponsor of ENIAC, one year into this three-year project John von Neumann, a mathematician working on the hydrogen bomb at Los Alamos National Laboratory, became aware of the ENIAC. In December 1945, the ENIAC was used to run calculations of thermonuclear reactions. The data was used to support research on building a hydrogen bomb. Role in development of the Monte Carlo methods Related to ENIAC's role in the hydrogen bomb was its role in the popularization of the Monte Carlo method. Scientists involved in the original nuclear bomb development used massive groups of people doing huge numbers of calculations ("computers" in the terminology of the time) to investigate the distance that neutrons would likely travel through various materials. John von Neumann and Stanislaw Ulam realized the speed of ENIAC would allow these calculations to be done much more quickly. The success of this project showed the value of Monte Carlo methods in science. Later developments A press conference was held on February 1, 1946, and the completed machine was announced to the public the evening of February 14, 1946, featuring demonstrations of its capabilities. Elizabeth Snyder and Betty Jean Jennings were responsible for developing the demonstration trajectory program, although Herman and Adele Goldstine took credit for it. 
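The Monte Carlo approach that von Neumann and Ulam ran on ENIAC, estimating physical quantities by averaging many random trials, can be illustrated with a toy example. This is not the original neutron-transport code; it merely samples exponentially distributed free paths, a standard textbook simplification, with hypothetical function names.

```python
import random

def mean_penetration_depth(mean_free_path, n_trials=100_000, seed=1):
    """Toy Monte Carlo: average depth at which particles are absorbed,
    assuming exponentially distributed free paths (illustrative only;
    not the original Los Alamos neutron calculations)."""
    rng = random.Random(seed)
    total = sum(rng.expovariate(1.0 / mean_free_path) for _ in range(n_trials))
    return total / n_trials

print(mean_penetration_depth(2.0))  # close to 2.0 by the law of large numbers
```

The point the ENIAC work demonstrated is that a machine can run enough such trials for the average to converge, something impractical for rooms of human computers.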
The machine was formally dedicated the next day at the University of Pennsylvania. None of the women involved in programming the machine or creating the demonstration were invited to the formal dedication nor to the celebratory dinner held afterwards. The original contract amount was $61,700; the final cost was almost $500,000. It was formally accepted by the U.S. Army Ordnance Corps in July 1946. ENIAC was shut down on November 9, 1946, for a refurbishment and a memory upgrade, and was transferred to Aberdeen Proving Ground, Maryland in 1947. There, on July 29, 1947, it was turned on and was in continuous operation until 11:45 p.m. on October 2, 1955, when it was retired in favor of the more efficient EDVAC and ORDVAC computers. Role in the development of the EDVAC A few months after ENIAC's unveiling in the summer of 1946, as part of "an extraordinary effort to jump-start research in the field", the Pentagon invited "the top people in electronics and mathematics from the United States and Great Britain" to a series of forty-eight lectures given in Philadelphia, Pennsylvania, collectively titled The Theory and Techniques for Design of Digital Computers—more often named the Moore School Lectures. Half of these lectures were given by the inventors of ENIAC. ENIAC was a one-of-a-kind design and was never repeated. The freeze on design in 1943 meant that it lacked some innovations that soon became well-developed, notably the ability to store a program. Eckert and Mauchly started work on a new design, later to be called the EDVAC, which would be both simpler and more powerful. In particular, in 1944 Eckert wrote his description of a memory unit (the mercury delay line) which would hold both the data and the program. John von Neumann, who was consulting for the Moore School on the EDVAC, sat in on the Moore School meetings at which the stored program concept was elaborated. 
Von Neumann wrote up an incomplete set of notes (First Draft of a Report on the EDVAC) which were intended to be used as an internal memorandum—describing, elaborating, and couching in formal logical language the ideas developed in the meetings. ENIAC administrator and security officer Herman Goldstine distributed copies of this First Draft to a number of government and educational institutions, spurring widespread interest in the construction of a new generation of electronic computing machines, including Electronic Delay Storage Automatic Calculator (EDSAC) at Cambridge University, England and SEAC at the U.S. Bureau of Standards. Improvements A number of improvements were made to ENIAC after 1947, including a primitive read-only stored programming mechanism using the function tables as program ROM, after which programming was done by setting the switches. The idea has been worked out in several variants by Richard Clippinger and his group, on the one hand, and the Goldstines, on the other, and it was included in the ENIAC patent. Clippinger consulted with von Neumann on what instruction set to implement. Clippinger had thought of a three-address architecture while von Neumann proposed a one-address architecture because it was simpler to implement. Three digits of one accumulator (#6) were used as the program counter, another accumulator (#15) was used as the main accumulator, a third accumulator (#8) was used as the address pointer for reading data from the function tables, and most of the other accumulators (1–5, 7, 9–14, 17–19) were used for data memory. In March 1948 the converter unit was installed, which made possible programming through the reader from standard IBM cards. The "first production run" of the new coding techniques on the Monte Carlo problem followed in April. After ENIAC's move to Aberdeen, a register panel for memory was also constructed, but it did not work. A small master control unit to turn the machine on and off was also added. 
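The one-address scheme von Neumann favored can be illustrated with a toy interpreter. The opcodes below are hypothetical (the actual 1948 ENIAC instruction set differed), but each instruction names a single memory address and all arithmetic flows through one accumulator, mirroring the converted machine's use of accumulator #15 as the main accumulator and three digits of accumulator #6 as the program counter.

```python
def run(program, data):
    """Toy one-address machine: a single accumulator, and each
    instruction names at most one memory address.
    Opcodes are hypothetical, for illustration only."""
    acc, pc = 0, 0  # cf. ENIAC's use of accumulator digits as a program counter
    while pc < len(program):
        op, addr = program[pc]
        pc += 1
        if op == "LOAD":       # accumulator <- memory[addr]
            acc = data[addr]
        elif op == "ADD":      # accumulator <- accumulator + memory[addr]
            acc += data[addr]
        elif op == "STORE":    # memory[addr] <- accumulator
            data[addr] = acc
        elif op == "HALT":
            break
    return data

# Compute data[2] = data[0] + data[1]
mem = {0: 7, 1: 5, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
print(mem[2])  # 12
```

A three-address design would pack both operands and the destination into each instruction; the one-address form trades longer programs for far simpler decoding hardware, which is why it was easier to implement on the function tables.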
The programming of the stored program for ENIAC was done by Betty Jennings, Clippinger, Adele Goldstine and others. It was first demonstrated as a stored-program computer in April 1948, running a program by Adele Goldstine for John von Neumann. This modification reduced the speed of ENIAC by a factor of 6 and eliminated the ability of parallel computation, but as it also reduced the reprogramming time to hours instead of days, it was considered well worth the loss of performance. Also analysis had shown that due to differences between the electronic speed of computation and the electromechanical speed of input/output, almost any real-world problem was completely I/O bound, even without making use of the original machine's parallelism. Most computations would still be I/O bound, even after the speed reduction imposed by this modification. Early in 1952, a high-speed shifter was added, which improved the speed for shifting by a factor of five. In July 1953, a 100-word expansion core memory was added to the system, using binary-coded decimal, excess-3 number representation. To support this expansion memory, ENIAC was equipped with a new Function Table selector, a memory address selector, pulse-shaping circuits, and three new orders were added to the programming mechanism. Comparison with other early computers Mechanical computing machines have been around since Archimedes' time (see: Antikythera mechanism), but the 1930s and 1940s are considered the beginning of the modern computer era. ENIAC was, like the IBM Harvard Mark I and the German Z3, able to run an arbitrary sequence of mathematical operations, but did not read them from a tape. Like the British Colossus, it was programmed by plugboard and switches. ENIAC combined full, Turing-complete programmability with electronic speed. The Atanasoff–Berry Computer (ABC), ENIAC, and Colossus all used thermionic valves (vacuum tubes). 
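The excess-3, binary-coded-decimal representation used by the 1953 expansion memory stores each decimal digit d as the four-bit binary pattern for d + 3. A minimal sketch, with hypothetical helper names:

```python
def to_excess3(n, digits=10):
    """Encode a non-negative integer as excess-3 BCD: each decimal
    digit d becomes the 4-bit pattern for d + 3."""
    out = []
    for _ in range(digits):
        out.append(format(n % 10 + 3, "04b"))
        n //= 10
    return " ".join(reversed(out))

def from_excess3(code):
    """Decode an excess-3 BCD string back to an integer."""
    return int("".join(str(int(b, 2) - 3) for b in code.split()))

code = to_excess3(42, digits=4)
print(code)                # 0011 0011 0111 0101
print(from_excess3(code))  # 42
```

One attraction of excess-3 over plain BCD is that nine's complementing a digit reduces to inverting its four bits, which simplified complement-based subtraction circuits.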
ENIAC's registers performed decimal arithmetic, rather than binary arithmetic like the Z3, the ABC and Colossus. Like the Colossus, ENIAC required rewiring to reprogram until April 1948. In June 1948, the Manchester Baby ran its first program and earned the distinction of first electronic stored-program computer. Though the idea of a stored-program computer with combined memory for program and data was conceived during the development of ENIAC, it was not initially implemented in ENIAC because World War II priorities required the machine to be completed quickly, and ENIAC's 20 storage locations would be too small to hold data and programs. Public knowledge The Z3 and Colossus were developed independently of each other, and of the ABC and ENIAC during World War II. Work on the ABC at Iowa State University was stopped in 1942 after John Atanasoff was called to Washington, D.C., to do physics research for the U.S. Navy, and it was subsequently dismantled. The Z3 was destroyed by the Allied bombing raids of Berlin in 1943. As the ten Colossus machines were part of the UK's war effort their existence remained secret until the late 1970s, although knowledge of their capabilities remained among their UK staff and invited Americans. ENIAC, by contrast, was put through its paces for the press in 1946, "and captured the world's imagination". Older histories of computing may therefore not be comprehensive in their coverage and analysis of this period. All but two of the Colossus machines were dismantled in 1945; the remaining two were used to decrypt Soviet messages by GCHQ until the 1960s. The public demonstration for ENIAC was developed by Snyder and Jennings who created a demo that would calculate the trajectory of a missile in 15 seconds, a task that would have taken several weeks for a human computer. 
Patent For a variety of reasons (including Mauchly's June 1941 examination of the Atanasoff–Berry computer (ABC), prototyped in 1939 by John Atanasoff and Clifford Berry), the patent for ENIAC, applied for in 1947 and granted in 1964, was voided by the 1973 decision of the landmark federal court case Honeywell, Inc. v. Sperry Rand Corp. The decision found that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff, gave legal recognition to Atanasoff as the inventor of the first electronic digital computer, and put the invention of the electronic digital computer in the public domain. Main parts The main parts were 40 panels and three portable function tables (named A, B, and C). The layout of the panels was (clockwise, starting with the left wall): Left wall Initiating Unit Cycling Unit Master Programmer – panel 1 and 2 Function Table 1 – panel 1 and 2 Accumulator 1 Accumulator 2 Divider and Square Rooter Accumulator 3 Accumulator 4 Accumulator 5 Accumulator 6 Accumulator 7 Accumulator 8 Accumulator 9 Back wall Accumulator 10 High-speed Multiplier – panel 1, 2, and 3 Accumulator 11 Accumulator 12 Accumulator 13 Accumulator 14 Right wall Accumulator 15 Accumulator 16 Accumulator 17 Accumulator 18 Function Table 2 – panel 1 and 2 Function Table 3 – panel 1 and 2 Accumulator 19 Accumulator 20 Constant Transmitter – panel 1, 2, and 3 Printer – panel 1, 2, and 3 An IBM card reader was attached to Constant Transmitter panel 3 and an IBM card punch was attached to Printer Panel 2. The Portable Function Tables could be connected to Function Table 1, 2, and 3. Parts on display Pieces of ENIAC are held by the following institutions: The School of Engineering and Applied Science at the University of Pennsylvania has four of the original forty panels (Accumulator #18, Constant Transmitter Panel 2, Master Programmer Panel 2, and the Cycling Unit) and one of the three function tables (Function Table B) of ENIAC (on loan from the Smithsonian). 
The Smithsonian has five panels (Accumulators 2, 19, and 20; Constant Transmitter panels 1 and 3; Divider and Square Rooter; Function Table 2 panel 1; Function Table 3 panel 2; High-speed Multiplier panels 1 and 2; Printer panel 1; Initiating Unit) in the National Museum of American History in Washington, D.C. (but apparently not currently on display). The Science Museum in London has a receiver unit on display. The Computer History Museum in Mountain View, California has three panels (Accumulator #12, Function Table 2 panel 2, and Printer Panel 3) and portable function table C on display (on loan from the Smithsonian Institution). The University of Michigan in Ann Arbor has four panels (two accumulators, High-speed Multiplier panel 3, and Master Programmer panel 2), salvaged by Arthur Burks. The United States Army Ordnance Museum at Aberdeen Proving Ground, Maryland, where ENIAC was used, has Portable Function Table A. The U.S. Army Field Artillery Museum in Fort Sill, as of October 2014, obtained seven panels of ENIAC that were previously housed by The Perot Group in Plano, Texas. These are accumulators #7, #8, #11, and #17; panels #1 and #2 that connected to function table #1, and the back of a panel showing its tubes. A module of tubes is also on display. The United States Military Academy at West Point, New York, has one of the data entry terminals from the ENIAC. The Heinz Nixdorf Museum in Paderborn, Germany, has three panels (Printer panel 2 and High-speed Function Table) (on loan from the Smithsonian Institution). In 2014 the museum decided to rebuild one of the accumulator panels – the reconstructed part has the look and feel of a simplified counterpart from the original machine. Recognition ENIAC was named an IEEE Milestone in 1987. 
In 1996, in honor of the ENIAC's 50th anniversary, the University of Pennsylvania sponsored a project named "ENIAC-on-a-Chip", in which a very small silicon computer chip measuring 7.44 mm by 5.29 mm was built with the same functionality as ENIAC. Although this 20 MHz chip was many times faster than ENIAC, it had but a fraction of the speed of its contemporary microprocessors in the late 1990s. In 1997, the six women who did most of the programming of ENIAC were inducted into the Women in Technology International Hall of Fame. The role of the ENIAC programmers is treated in a 2010 documentary film titled Top Secret Rosies: The Female "Computers" of WWII by LeAnn Erickson. A 2014 documentary short, The Computers by Kate McMahon, tells the story of the six programmers; this was the result of 20 years' research by Kathryn Kleiman and her team as part of the ENIAC Programmers Project. In 2022 Grand Central Publishing released Proving Ground by Kathy Kleiman, a hardcover biography about the six ENIAC programmers and their efforts to translate block diagrams and electronic schematics of the ENIAC, then under construction, into programs that would be loaded into and run on ENIAC once it was available for use. In 2011, in honor of the 65th anniversary of the ENIAC's unveiling, the city of Philadelphia declared February 15 as ENIAC Day. The ENIAC celebrated its 70th anniversary on February 15, 2016.
Technology
Computer hardware
null
66575
https://en.wikipedia.org/wiki/Nutrient
Nutrient
A nutrient is a substance used by an organism to survive, grow and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted into smaller molecules in the process of releasing energy, as with carbohydrates, lipids, proteins and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host. Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential to humans and some animal species but most other animals and many plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include iron, selenium, and zinc, while organic nutrients include protein, fats, sugars and vitamins. A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiological roles in cellular processes, like vascular functions or nerve conduction. 
Inadequate amounts of essential nutrients, or diseases that interfere with absorption, result in a deficiency state that compromises growth, survival and reproduction. Consumer advisories for dietary nutrient intakes, such as the United States Dietary Reference Intake, are based on the amount required to prevent deficiency and provide macronutrient and micronutrient guides for both lower and upper limits of intake. In many countries, regulations require that food product labels display information about the amount of any macronutrients and micronutrients present in the food in significant quantities. Nutrients in larger quantities than the body needs may have harmful effects. Edible plants also contain thousands of compounds generally called phytochemicals, which have unknown effects on disease or health, including a diverse class with non-nutrient status called polyphenols, which remain poorly understood as of 2024. Types Macronutrients Macronutrients are defined in several ways. The chemical elements humans consume in the largest quantities are carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, summarized as CHNOPS. The chemical compounds that humans consume in the largest quantities and provide bulk energy are classified as carbohydrates, proteins, and fats. Water must also be consumed in large quantities but does not provide caloric value. Calcium, sodium, potassium, magnesium, and chloride ions, along with phosphorus and sulfur, are listed with macronutrients because they are required in large quantities compared to micronutrients, i.e., vitamins and other minerals, the latter often described as trace or ultratrace minerals. Macronutrients provide energy: Carbohydrates are compounds made up of types of sugar. Carbohydrates are classified according to their number of sugar units: monosaccharides (such as glucose and fructose), disaccharides (such as sucrose and lactose), oligosaccharides, and polysaccharides (such as starch, glycogen, and cellulose). 
Proteins are organic compounds that consist of amino acids joined by peptide bonds. Since the body cannot manufacture some of the amino acids (termed essential amino acids), the diet must supply them. Through digestion, proteins are broken down by proteases back into free amino acids. Fats consist of a glycerin molecule with three fatty acids attached. Fatty acid molecules contain a -COOH group attached to unbranched hydrocarbon chains connected by single bonds alone (saturated fatty acids) or by both double and single bonds (unsaturated fatty acids). Fats are needed for construction and maintenance of cell membranes, to maintain a stable body temperature, and to sustain the health of skin and hair. Because the body does not manufacture certain fatty acids (termed essential fatty acids), they must be obtained through one's diet. Ethanol is not an essential nutrient, but it does provide calories. The United States Department of Agriculture uses a figure of per gram of alcohol ( per ml) for calculating food energy. For distilled spirits, a standard serving in the U.S. is , which at 40% ethanol (80 proof) would be 14 grams and 98 calories.

Micronutrients

Micronutrients are essential dietary elements required in varying quantities throughout life to serve metabolic and physiological functions. Dietary minerals, such as potassium, sodium, and iron, are elements native to Earth, and cannot be synthesized. They are required in the diet in microgram or milligram amounts. As plants obtain minerals from the soil, dietary minerals derive directly from plants consumed or indirectly from edible animal sources. Vitamins are organic compounds required in microgram or milligram amounts. The importance of each dietary vitamin was first established when it was determined that a disease would develop if that vitamin was absent from the diet. 
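As a rough illustration of how the macronutrient amounts above translate into food energy, the sketch below uses the standard Atwater general factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). These conversion values are the usual food-label conventions, assumed here for illustration rather than taken from the text, and the example food is hypothetical.

```python
# Atwater general factors (kcal per gram) -- standard food-label
# conversion values, assumed here for illustration.
ATWATER_KCAL_PER_G = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def food_energy_kcal(grams_by_macronutrient: dict) -> float:
    """Estimate food energy (kcal) from macronutrient masses in grams."""
    return sum(ATWATER_KCAL_PER_G[name] * grams
               for name, grams in grams_by_macronutrient.items())

# A hypothetical snack: 20 g carbohydrate, 3 g protein, 5 g fat
energy = food_energy_kcal({"carbohydrate": 20, "protein": 3, "fat": 5})
# 20*4 + 3*4 + 5*9 = 137 kcal
```

Fat's factor being more than double that of carbohydrate and protein is why fats dominate the energy content of many foods despite modest masses.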
Essentiality

Essential nutrients

An essential nutrient is a nutrient required for normal physiological function that cannot be synthesized in the body – either at all or in sufficient quantities – and thus must be obtained from a dietary source. Apart from water, which is universally required for the maintenance of homeostasis in mammals, essential nutrients are indispensable for various cellular metabolic processes and for the maintenance and function of tissues and organs. The nutrients considered essential for humans comprise nine amino acids, two fatty acids, thirteen vitamins, fifteen minerals and choline. In addition, there are several molecules that are considered conditionally essential nutrients since they are indispensable in certain developmental and pathological states.

Amino acids

An essential amino acid is an amino acid that is required by an organism but cannot be synthesized de novo by it, and therefore must be supplied in its diet. Out of the twenty standard protein-producing amino acids, nine cannot be endogenously synthesized by humans: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine.

Fatty acids

Essential fatty acids (EFAs) are fatty acids that humans and other animals must ingest because the body requires them for good health but cannot synthesize them. Only two fatty acids are known to be essential for humans: alpha-linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid).

Vitamins and vitamers

Vitamins occur in a variety of related forms known as vitamers. The vitamers of a given vitamin perform the functions of that vitamin and prevent symptoms of deficiency of that vitamin. Vitamins are those essential organic molecules that are not classified as amino acids or fatty acids. They commonly function as enzymatic cofactors, metabolic regulators or antioxidants. Humans require thirteen vitamins in their diet, most of which are actually groups of related molecules (e.g. 
vitamin E includes tocopherols and tocotrienols): vitamins A, C, D, E, K, thiamine (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12). The requirement for vitamin D is conditional, as people who get sufficient exposure to ultraviolet light, either from the sun or an artificial source, synthesize vitamin D in the skin.

Minerals

Minerals are the exogenous chemical elements indispensable for life. Although the four elements carbon, hydrogen, oxygen, and nitrogen (CHON) are essential for life, they are so plentiful in food and drink that these are not considered nutrients and there are no recommended intakes for these as minerals. The need for nitrogen is addressed by requirements set for protein, which is composed of nitrogen-containing amino acids. Sulfur is essential, but again does not have a recommended intake. Instead, recommended intakes are identified for the sulfur-containing amino acids methionine and cysteine. The essential mineral elements for humans, listed in order of Recommended Dietary Allowance (expressed as a mass), are potassium, chloride, sodium, calcium, phosphorus, magnesium, iron, zinc, manganese, copper, iodine, chromium, molybdenum, and selenium. Additionally, cobalt is a component of vitamin B12, which is essential. There are other minerals which are essential for some plants and animals, but may or may not be essential for humans, such as boron and silicon.

Choline

Choline is an essential nutrient. The cholines are a family of water-soluble quaternary ammonium compounds. Choline is the parent compound of the cholines class, consisting of ethanolamine having three methyl substituents attached to the amino function. Healthy humans fed artificially composed diets that are deficient in choline develop fatty liver, liver damage, and muscle damage. 
Choline was not initially classified as essential because the human body can produce choline in small amounts through phosphatidylcholine metabolism.

Conditionally essential

Conditionally essential nutrients are certain organic molecules that can normally be synthesized by an organism, but under certain conditions in insufficient quantities. In humans, such conditions include premature birth, limited nutrient intake, rapid growth, and certain disease states. Inositol, taurine, arginine, glutamine and nucleotides are classified as conditionally essential and are particularly important in neonatal diet and metabolism.

Non-essential

Non-essential nutrients are substances within foods that can have a significant impact on health. Dietary fiber is not absorbed in the human digestive tract. Soluble fiber is metabolized to butyrate and other short-chain fatty acids by bacteria residing in the large intestine. Soluble fiber is marketed as serving a prebiotic function with claims for promoting "healthy" intestinal bacteria.

Non-nutrients

Ethanol (C2H5OH) is not an essential nutrient, but it does supply approximately of food energy per gram. For spirits (vodka, gin, rum, etc.) a standard serving in the United States is , which at 40% ethanol (80 proof) would be 14 grams and . At 50% alcohol, 17.5 g and . Wine and beer contain a similar amount of ethanol in servings of , respectively, but these beverages also contribute to food energy intake from components other than ethanol. A serving of wine contains . A serving of beer contains . According to the U.S. Department of Agriculture, based on NHANES 2013–2014 surveys, women ages 20 and up consume on average 6.8 grams of alcohol per day and men consume on average 15.5 grams per day. Ignoring the non-alcohol contribution of those beverages, the average ethanol contributions to daily food energy intake are , respectively. 
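The serving arithmetic above can be reproduced with a short sketch. It assumes an ethanol density of about 0.789 g/ml and the commonly cited figure of roughly 7 kcal per gram of alcohol (the article's exact USDA figures did not survive extraction), together with an assumed 1.5 US fl oz (~44.4 ml) serving of 80-proof spirits.

```python
ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of ethanol (assumed)
KCAL_PER_G_ETHANOL = 7.0           # commonly cited value, assumed here

def ethanol_grams(volume_ml: float, abv: float) -> float:
    """Grams of ethanol in a drink of the given volume and alcohol fraction."""
    return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML

# A 1.5 US fl oz (~44.36 ml) serving of 80-proof (40% ABV) spirits:
grams = ethanol_grams(44.36, 0.40)      # ~14 g of ethanol
kcal = grams * KCAL_PER_G_ETHANOL       # ~98 kcal, matching the text

# Average daily intakes cited above (NHANES 2013-2014):
women_kcal = 6.8 * KCAL_PER_G_ETHANOL   # ~48 kcal/day
men_kcal = 15.5 * KCAL_PER_G_ETHANOL    # ~109 kcal/day
```

The 14 g / 98 kcal result agrees with the figures quoted in the text for a standard U.S. serving of distilled spirits.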
Alcoholic beverages are considered empty calorie foods because, while providing energy, they contribute no essential nutrients. By definition, phytochemicals include all nutritional and non-nutritional components of edible plants. Included as nutritional constituents are provitamin A carotenoids, whereas those without nutrient status are diverse polyphenols, flavonoids, resveratrol, and lignans that are present in numerous plant foods. Some phytochemical compounds are under preliminary research for their potential effects on human diseases and health. However, the qualification for nutrient status of compounds with poorly defined properties in vivo is that they must first be defined with a Dietary Reference Intake level to enable accurate food labeling, a condition not established for most phytochemicals that are claimed to provide antioxidant benefits.

Deficiencies and toxicity

See Vitamin, Mineral (nutrient), Protein (nutrient)

An inadequate amount of a nutrient is a deficiency. Deficiencies can be due to several causes, including an inadequacy in nutrient intake, called a dietary deficiency, or any of several conditions that interfere with the utilization of a nutrient within an organism. Some of the conditions that can interfere with nutrient utilization include problems with nutrient absorption, substances that cause a greater-than-normal need for a nutrient, conditions that cause nutrient destruction, and conditions that cause greater nutrient excretion. Nutrient toxicity occurs when excess consumption of a nutrient does harm to an organism. 
In the United States and Canada, recommended dietary intake levels of essential nutrients are based on the minimum level that "will maintain a defined level of nutriture in an individual", a definition somewhat different from that used by the World Health Organization and Food and Agriculture Organization of a "basal requirement to indicate the level of intake needed to prevent pathologically relevant and clinically detectable signs of a dietary inadequacy". In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity. For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher-than-average needs. Adequate Intakes (AIs) are set when there is insufficient information to establish EARs and RDAs. Countries establish tolerable upper intake levels, also referred to as upper limits (ULs), based on amounts that cause adverse effects. Governments are slow to revise information of this nature. For the U.S. values, except calcium and vitamin D, all data date from 1997 to 2004. * The daily recommended amounts of niacin and magnesium are higher than the tolerable upper limit because, for both nutrients, the ULs identify the amounts which will not increase risk of adverse effects when the nutrients are consumed as a serving of a dietary supplement. Magnesium supplementation above the UL may cause diarrhea. Supplementation with niacin above the UL may cause flushing of the face and a sensation of body warmth. 
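The relationship described above — RDAs set above EARs, with a tolerable upper limit capping safe intake — can be sketched as a simple banding of a daily intake against the three reference values. The function and its thresholds are illustrative only (the example uses a hypothetical nutrient with EAR 60, RDA 75, and UL 2000 mg/day, not figures from the missing table), and real dietary assessment is probabilistic rather than a hard cutoff.

```python
def band_intake(amount, ear, rda, ul):
    """Band a daily intake against the Estimated Average Requirement (EAR),
    Recommended Dietary Allowance (RDA), and tolerable upper limit (UL).
    Illustrative sketch only -- real assessment is probabilistic."""
    if not ear <= rda <= ul:
        raise ValueError("expected EAR <= RDA <= UL")
    if amount < ear:
        return "below EAR: inadequate for most people"
    if amount < rda:
        return "between EAR and RDA: may be inadequate for some"
    if amount <= ul:
        return "meets RDA without exceeding UL"
    return "above UL: risk of adverse effects"

# Hypothetical nutrient with EAR 60, RDA 75, UL 2000 (mg/day):
status = band_intake(100, ear=60, rda=75, ul=2000)
```

Because RDAs are deliberately set above EARs to cover people with higher-than-average needs, an intake between the two thresholds is adequate for many individuals but not all.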
Each country or regional regulatory agency decides on a safety margin below the level at which symptoms may occur, so the ULs may differ based on source.

EAR: U.S. Estimated Average Requirements.
RDA: U.S. Recommended Dietary Allowances; higher for adults than for children, and may be even higher for women who are pregnant or lactating.
AI: U.S. Adequate Intake; AIs established when there is not sufficient information to set EARs and RDAs.
PRI: Population Reference Intake, the European Union equivalent of RDA; higher for adults than for children, and may be even higher for women who are pregnant or lactating. For thiamin and niacin, the PRIs are expressed as amounts per megajoule (239 kilocalories) of food energy consumed.
Upper Limit: Tolerable upper intake levels.
ND: ULs have not been determined.
NE: EARs, PRIs or AIs have not yet been established or will not be (the EU does not consider chromium an essential nutrient).

Plant

Plants absorb carbon, hydrogen, and oxygen from air and soil as carbon dioxide and water. Other nutrients are absorbed from soil (exceptions include some parasitic or carnivorous plants). Counting these, there are 17 important nutrients for plants: the macronutrients nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg), carbon (C), oxygen (O) and hydrogen (H), and the micronutrients iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo) and nickel (Ni). In addition to carbon, hydrogen, and oxygen, nitrogen, phosphorus, and sulfur are also needed in relatively large quantities. Together, these six are the elemental macronutrients for all organisms. They are sourced from inorganic matter (for example, carbon dioxide, water, nitrates, phosphates, sulfates, and diatomic molecules of nitrogen and, especially, oxygen) and organic compounds such as carbohydrates, lipids, and proteins.
Biology and health sciences
Health and fitness: General
Health
66607
https://en.wikipedia.org/wiki/Vine
Vine
A vine is any plant with a growth habit of trailing or scandent (that is, climbing) stems, lianas, or runners. The word vine can also refer to such stems or runners themselves, for instance, when used in wicker work. In parts of the world, including the British Isles, the term "vine" usually applies exclusively to grapevines, while the term "climber" is used for all climbing plants.

Growth forms

Certain plants always grow as vines, while a few grow as vines only part of the time. For instance, poison ivy and bittersweet can grow as low shrubs when support is not available, but will become vines when support is available. A vine displays a growth form based on very long stems. This has two purposes. A vine may use rock exposures, other plants, or other supports for growth rather than investing energy in a lot of supportive tissue, enabling the plant to reach sunlight with a minimum investment of energy. This has been a highly successful growth form for plants such as kudzu and Japanese honeysuckle, both of which are invasive exotics in parts of North America. There are some tropical vines that develop skototropism, and grow away from the light, a type of negative phototropism. Growth away from light allows the vine to reach a tree trunk, which it can then climb to brighter regions. The vine growth form may also enable plants to colonize large areas quickly, even without climbing high. This is the case with periwinkle and ground ivy. It is also an adaptation to life in areas where small patches of fertile soil are adjacent to exposed areas with more sunlight but little or no soil. A vine can root in the soil but have most of its leaves in the brighter, exposed area, getting the best of both environments. The evolution of a climbing habit has been implicated as a key innovation associated with the evolutionary success and diversification of a number of taxonomic groups of plants. 
It has evolved independently in several plant families, using many different climbing methods, such as:
twining the stem around a support (e.g., morning glories, Ipomoea species)
by way of adventitious, clinging roots (e.g., ivy, Hedera species)
with twining petioles (e.g., Clematis species)
using tendrils, which can be specialized shoots (Vitaceae), leaves (Bignoniaceae), or even inflorescences (Passiflora)
using tendrils which also produce adhesive pads at the end that attach themselves quite strongly to the support (Parthenocissus)
using thorns (e.g. climbing rose) or other hooked structures, such as hooked branches (e.g. Artabotrys hexapetalus)

The climbing fetterbush (Pieris phillyreifolia) is a woody shrub-vine which climbs without clinging roots, tendrils, or thorns. It directs its stem into a crevice in the bark of fibrous barked trees (such as bald cypress) where the stem adopts a flattened profile and grows up the tree underneath the host tree's outer bark. The fetterbush then sends out branches that emerge near the top of the tree. Most vines are flowering plants. These may be divided into woody vines or lianas, such as akebia, wisteria, kiwifruit, and common ivy, and herbaceous (nonwoody) vines, such as morning glory. One odd group of vining plants is the fern genus Lygodium, called climbing ferns. The stem does not climb, but rather the fronds (leaves) do. The fronds unroll from the tip, and theoretically never stop growing; they can form thickets as they unroll over other plants, rockfaces, and fences.

Twining vines

A twining vine, also known as a bine, is one that climbs by its shoots growing in a helix, in contrast to vines that climb using tendrils or suckers. Many bines have rough stems or downward-pointing bristles to aid their grip. Hops (used in flavoring beer) are a commercially important example of a bine. 
The direction of rotation of the shoot tip during climbing is autonomous and does not (as sometimes imagined) derive from the shoot's following the sun around the sky – the direction of twist therefore does not depend upon which side of the equator the plant is growing on. This is shown by the fact that some bines always twine clockwise, including runner bean (Phaseolus coccineus) and bindweed (Convolvulus species), while others twine anticlockwise, including black bryony (Dioscorea communis) and climbing honeysuckles (Lonicera species). The contrasting rotations of bindweed and honeysuckle were the theme of the satirical song "Misalliance", written and sung by Michael Flanders and Donald Swann (but the lyrics confuse the direction of twining, describing honeysuckle as right-handed and bindweed as left-handed).

Horticultural climbing plants

The term "vine" also applies to Cucurbitaceae such as cucumbers, where botanists speak of creeping vines; in commercial agriculture, the natural tendency of coiling tendrils to attach themselves to pre-existing structures or espaliers is optimized by the installation of trellis netting. Gardeners can use the tendency of climbing plants to grow quickly. If a plant display is wanted quickly, a climber can achieve this. Climbers can be trained over walls, pergolas, fences, etc. Climbers can be grown over other plants to provide additional attraction. Artificial support can also be provided. Some climbers climb by themselves; others need work, such as tying them in and training them.

Scientific description

Vines widely differ in size, form and evolutionary origin. Darwin classified climbing groups based on their climbing method. He distinguished five classes of vines – twining plants, leaf climbers, tendril bearers, root climbers and hook climbers. Vines are remarkable in that they have multiple evolutionary origins. They usually reside in tropical locations and have the unique ability to climb. 
Vines are able to grow in both deep shade and full sun due to their uniquely wide range of phenotypic plasticity. This climbing action prevents shading by neighbors and allows the vine to grow out of reach of herbivores. The environment where a vine can grow successfully is determined by the climbing mechanism of a vine and how far it can spread across supports. There are many theories supporting the idea that photosynthetic responses are closely related to climbing mechanisms. Temperate twining vines, which twist tightly around supports, are typically poorly adapted for climbing beneath closed canopies due to their smaller support diameter and shade intolerance. In contrast, tendril vines usually grow on the forest floor and onto trees until they reach the surface of the canopy, suggesting that they have greater physiological plasticity. It has also been suggested that twining vines' revolving growth is driven by changes in turgor pressure arising from volume changes in the epidermal cells of the bending zone. Climbing vines can take on many unique characteristics in response to changes in their environments. Climbing vines can induce chemical defenses and modify their biomass allocation in response to herbivores. In particular, the twining vine Convolvulus arvensis increases its twining in response to herbivore-associated leaf damage, which may lead to reduced future herbivory. Additionally, the tendrils of the perennial vine Cayratia japonica are more likely to coil around nearby plants of another species than nearby plants of the same species in natural and experimental settings. This ability, previously documented only in roots, demonstrates the vine's ability to distinguish whether another plant is of the same species as itself or a different one. In tendrilled vines, the tendrils are highly sensitive to touch and the coiling action is mediated by the hormones octadecanoids, jasmonates and indole-3-acetic acid. 
The touch stimulus and hormones may interact via volatile compounds or internal oscillation patterns. Research has found the presence of ion-translocating ATPases in the Bryonia dioica species of plants, which has implications for a possible ion-mediated tendril-curling mechanism. In response to a touch stimulus, vanadate-sensitive K+, Mg2+ ATPase and Ca2+-translocating ATPases rapidly increase their activity. This increases transmembrane ion fluxes that appear to be involved in the early stages of tendril coiling.

Example vine taxa

(Image: Ficus pumila's vigorous wall growth)

Actinidia arguta, the tara vine
Actinidia polygama, the silver vine
Adlumia fungosa, the Allegheny vine
Aeschynanthus radicans, the lipstick vine
Akebia quinata, five leafed chocolate vine
Akebia trifoliata, three leafed chocolate vine
Allamanda cathartica, common trumpet vine
Ampelocissus acetosa, known as wild grape or djabaru
Ampelopsis glandulosa var. brevipedunculata, known as wild grape or porcelain berry
Anredera cordifolia, Madeira-vine
Antigonon, the coral vine
Antigonon leptopus, the Confederate vine
Aptenia cordifolia, the heart-leaved aptenia
Araujia sericifera, moth vine
Asparagus asparagoides, bridal creeper, bridal-veil creeper
Banisteriopsis caapi, ayahuasca, also known as caapi, yage, and soul vine
Berchemia scandens, the rattan vine
Betel
Bignonia, the cross vine
Bougainvillea, a genus of thorny ornamental vines, bushes, and trees
Callerya megasperma, native wisteria
Calystegia sepium, hedge bindweed
Campsis, the trumpet vine
Campsis grandiflora, the Chinese trumpet vine
Cardiospermum halicacabum, the balloon vine
Celastrus, the staff vine
Ceropegia woodii, string of hearts
Clematis vitalba, traveller's joy
Clerodendrum thomsoniae, bleeding-heart vine
Clitoria ternatea, butterfly pea
Ceropegia linearis, the rosary vine or sweetheart vine
Cissus antarctica, the kangaroo vine
Cissus hypoglauca, the water vine
Citrullus lanatus var. lanatus, the watermelon
Cobaea scandens, cup-and-saucer vine, cathedral bells, Mexican ivy
Cochliasanthus, known as corkscrew vine, snail vine, snail creeper
Cucumis sativus, the cucumber
Cyphostemma juttae, known as wild grape
Delairea odorata, German ivy
Dolichandra unguis-cati, cat's claw creeper, funnel creeper, or cat's claw trumpet
Epipremnum aureum, known as golden pothos and devil's ivy
Fallopia baldschuanica, the Russian vine
Ficus pumila, known as the climbing fig
Hardenbergia violacea, lilac vine
Hedera helix, known as common ivy, English ivy, European ivy, or ivy
Hibbertia scandens, climbing guinea flower, golden guinea vine, gold guinea plant
Hoya, a genus of about 300 species of climbing or creeping plants
Humulus lupulus, common hop
Hydrangea petiolaris, climbing hydrangea
Ipomoea cairica, known as Cairo morning glory, coast morning glory and railroad creeper
Ipomoea indica, known as ocean blue morning glory
Jasminum polyanthum, pink jasmine
Kadsura japonica, kadsura vine
Kennedia coccinea, the common coral vine
Kennedia nigricans, black coral pea
Lagenaria siceraria, known as the bottle gourd, calabash, opo squash, or long melon
Lathyrus odoratus, the sweet pea
Lonicera japonica, known as Suikazura or Japanese honeysuckle
Luffa, a genus of tropical and subtropical vines classified in the cucumber family, Cucurbitaceae
Lygodium, a genus of about 40 species of ferns, known as climbing ferns
Mandevilla, rocktrumpet, Brazilian jasmine
Momordica charantia, the bitter gourd
Mikania scandens, the hemp vine
Muehlenbeckia adpressa, the macquarie vine
Nepenthes, a genus of carnivorous plants known as tropical pitcher plants or monkey cups
Pandorea jasminoides, bower vine
Pandorea pandorana, the wonga wonga vine
Parthenocissus henryana, Chinese Virginia-creeper, silver vein creeper
Parthenocissus quinquefolia, known as the Virginia creeper, Victoria creeper, five-leaved ivy, or five-finger
Parthenocissus tricuspidata, Boston ivy, Japanese ivy
Passiflora edulis, the passion fruit
Periploca graeca, the silk vine
Philodendron hederaceum, heartleaf philodendron
Podranea ricasoliana, the pink trumpet vine
Pueraria lobata, the kudzu vine
Pyrostegia venusta, flamevine or orange trumpet vine
Pseudogynoxys chenopodioides, Mexican flamevine
Rosa banksiae, Lady Banks' rose
Rosa filipes, climbing rose
Schizophragma, hydrangea vine
Scindapsus pictus, the silver vine
Sechium edule, known as chayote, christophene, or several other names
Senecio angulatus, known as Cape ivy
Solandra, a genus of flowering plants in the nightshade family
Solanum laxum, the potato vine
Stephania japonica, snake vine
Stephanotis floribunda, known as Madagascar jasmine
Strongylodon macrobotrys, the jade vine
Syngonium, the goosefoot vine
Syngonium podophyllum, the arrowhead vine
Thunbergia alata, black-eyed Susan
Thunbergia grandiflora, known as the Bengal clock vine or blue trumpet vine
Thunbergia erecta, the bush clock vine
Toxicodendron radicans, known as poison ivy
Trachelospermum asiaticum, Asiatic jasmine
Trachelospermum jasminoides, Confederate jasmine, star jasmine
Vitis, any of about sixty species of grape
Wisteria, a genus of flowering plants in the pea family
Xerosicyos, silver dollar vine
Biology and health sciences
Plant anatomy and morphology: General
Biology
66633
https://en.wikipedia.org/wiki/Moth
Moth
Moths are a group of insects that includes all members of the order Lepidoptera that are not butterflies. They were previously classified as the suborder Heterocera, but the group is paraphyletic with respect to butterflies (suborder Rhopalocera) and neither subordinate taxon is used in modern classifications. Moths make up the vast majority of the order. There are approximately 160,000 species of moth, many of which have yet to be described. Most species of moth are nocturnal, although there are also crepuscular and diurnal species.

Differences between butterflies and moths

While the butterflies form a monophyletic group, the moths, comprising the rest of the Lepidoptera, do not. Many attempts have been made to group the superfamilies of the Lepidoptera into natural groups, most of which fail because one of the two groups is not monophyletic: Microlepidoptera and Macrolepidoptera, Heterocera and Rhopalocera, Jugatae and Frenatae, Monotrysia and Ditrysia. Although the rules for distinguishing moths from butterflies are not well established, one very good guiding principle is that butterflies have thin antennae and (with the exception of the family Hedylidae) have small balls or clubs at the end of their antennae. Moth antennae are usually feathery with no ball on the end. The divisions are named by this principle: "club-antennae" (Rhopalocera) or "varied-antennae" (Heterocera). Lepidoptera first evolved during the Carboniferous period, but only evolved their characteristic proboscis alongside the rise of angiosperms in the Cretaceous period.

Etymology

The modern English word moth comes from Old English (cf. Northumbrian ) from Common Germanic (compare Old Norse , Dutch , and German all meaning 'moth'). Its origins are possibly related to the Old English meaning 'maggot' or from the root of midge, which until the 16th century was used mostly to indicate the larva, usually in reference to devouring clothes. 
Caterpillar

Moth larvae, or caterpillars, make cocoons from which they emerge as fully grown moths with wings. Some moth caterpillars dig holes in the ground, where they live until they are ready to turn into adult moths.

History

Moths evolved long before butterflies; moth fossils have been found that may be 190 million years old. Both types of Lepidoptera are thought to have co-evolved with flowering plants, mainly because most modern species, both as adults and larvae, feed on flowering plants. One of the earliest known species that is thought to be an ancestor of moths is Archaeolepis mane. Its fossil fragments show scaled wings that are similar to caddisflies in their veining.

Economics

Significance to humans

Some moths, particularly their caterpillars, can be major agricultural pests in many parts of the world. Examples include corn borers and bollworms. The caterpillar of the spongy moth (Lymantria dispar) causes severe damage to forests in the northeastern United States, where it is an invasive species. In temperate climates, the codling moth causes extensive damage, especially to fruit farms. In tropical and subtropical climates, the diamondback moth (Plutella xylostella) is perhaps the most serious pest of brassicaceous crops. Also in sub-Saharan Africa, the African sugarcane borer is a major pest of sugarcane, maize, and sorghum. Several moths in the family Tineidae are commonly regarded as pests because their larvae eat fabric such as clothes and blankets made from natural proteinaceous fibers such as wool or silk. They are less likely to eat mixed materials containing some artificial fibers. There are some reports that they may be repelled by the scent of wood from juniper and cedar, by lavender, or by other natural oils; however, many consider this unlikely to prevent infestation. Naphthalene (the chemical used in mothballs) is considered more effective, but there are concerns over its effects on human health. 
Although fabric-eating is commonly attributed to all moths, only the larvae of several moth species eat animal fibres, creating holes in articles of clothing, in particular those made of wool. Most species do not eat fabrics, and some moth adults do not eat at all. Some, like the Luna, Polyphemus, Atlas, Promethea, cecropia, and other large moths, do not have mouth parts. This is possible because they live off the food stores from when they were a caterpillar, and only live a short time as an adult (roughly a week for some species). Many species of adult moths do, however, eat: for instance, many will drink nectar. Items of fabric infested by clothes moth larvae may be treated by freezing them for several days at a temperature below . Some moths are farmed for their economic value. The most notable of these is the silkworm, the larva of the domesticated moth Bombyx mori. It is farmed for the silk with which it builds its cocoon. , the silk industry produces more than 130 million kilograms of raw silk, worth about 250 million U.S. dollars, each year. Not all silk is produced by Bombyx mori. There are several species of Saturniidae that are also farmed for their silk, such as the ailanthus moth (Samia cynthia group of species), the Chinese oak silkmoth (Antheraea pernyi), the Assam silkmoth (Antheraea assamensis), and the Japanese silk moth (Antheraea yamamai). The larvae of many species are used as food, particularly in Africa, where they are an important source of nutrition. The mopane worm, the caterpillar of Gonimbrasia belina, from the family Saturniidae, is a significant food resource in southern Africa. Another saturniid used as food is the cavorting emperor (Usta terpsichore). In one country alone, Congo, more than 30 species of moth larvae are harvested. Some are sold not only in the local village markets, but are shipped by the ton from one country to another. 
Predators and parasites

Nocturnal insectivores often feed on moths; these include some bats, some species of owls and other species of birds. Moths also are eaten by some species of lizards, amphibians, cats, dogs, rodents, and some bears. Moth larvae are vulnerable to being parasitized by Ichneumonidae. Baculoviruses are parasitic double-stranded DNA insect viruses that are used mostly as biological control agents. They are members of the Baculoviridae, a family that is restricted to insects. Most baculovirus isolates have been obtained from insects, in particular from Lepidoptera. There is evidence that ultrasound in the range emitted by bats causes flying moths to make evasive maneuvers. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop a few centimeters or inches in its flight to evade attack, and tiger moths can emit clicks to foil bats' echolocation. The fungus Ophiocordyceps sinensis infects the larvae of many different species of moths.

Ecological importance

Moths, like butterflies, bees and other more popularly recognized pollinating insects, serve an essential role as pollinators for many flowering plants, including species that bees do not visit. Nocturnal moths fly from flower to flower to feed on nectar during the night, much as their diurnal relatives do during the day. A study conducted in the UK found moths dusted with pollen from 47 different plant species, including seven species largely ignored by bees. Some studies indicate that certain species of moths, such as those belonging to the families Erebidae and Sphingidae, may be the key pollinators for some flowering plants in the Himalayan ecosystem. The roles of moths as pollinators have been studied less frequently than those of diurnal pollinators, but recent studies have established that moths are important, but often overlooked, nocturnal pollinators of a wide range of plants. 
Some researchers say it is likely that many plants thought to be dependent on bees for pollination also rely on moths, which have historically been less observed because they pollinate mainly at night. Attraction to light Moths frequently appear to circle artificial lights. The reason for this behavior (positive phototaxis) is currently unknown. One hypothesis is called celestial or transverse orientation. By maintaining a constant angular relationship to a bright celestial light, such as the moon, a moth can fly in a straight line. Celestial objects are so far away that, even after travelling great distances, the change in angle between the moth and the light source is negligible; further, the moon will always be in the upper part of the visual field, or on the horizon. When a moth encounters a much closer artificial light and uses it for navigation, the angle changes noticeably after only a short distance, and the light is often below the horizon as well. The moth instinctively attempts to correct by turning toward the light, causing airborne moths to plummet downward and resulting in a spiral flight path that draws closer and closer to the light source. Studies have found that light pollution caused by the increasing use of artificial lights has either led to a severe decline in moth populations in some parts of the world or has severely disrupted nocturnal pollination. 
Noteworthy moths Atlas moth (Attacus atlas), one of the largest moths in the world Hercules moth (Coscinocera hercules), largest moth in Australia White witch moth (Thysania agrippina), the Lepidopteran with the longest wingspan Madagascan sunset moth (Chrysiridia rhipheus), considered to be one of the most impressive and beautiful Lepidoptera Death's-head hawkmoth (Acherontia spp.), associated with the supernatural and evil and featured in art and movies Peppered moth (Biston betularia), the subject of a well-known study in natural selection Luna moth (Actias luna) Grease moth (Aglossa cuprina), known to have fed on the rendered fat of humans Emperor gum moth (Opodiphthera eucalypti) Polyphemus moth (Antheraea polyphemus) Bogong moth (Agrotis infusa), known to have been a food source for southeastern indigenous Australians Ornate moth (Utetheisa ornatrix), the subject of numerous behavioral studies regarding sexual selection Moth species that may cause significant economic damage Spongy moth (Lymantria dispar), an invasive species pest of hardwood trees in North America Winter moth (Operophtera brumata), an invasive species pest of hardwood trees, cranberry and blueberry in northeastern North America Corn earworm or cotton bollworm (Helicoverpa zea), a major agricultural pest Indianmeal moth (Plodia interpunctella), a major pest of grain and flour Codling moth (Cydia pomonella), a pest mostly of apple, pear and walnut trees Light brown apple moth (Epiphyas postvittana), a highly polyphagous pest Wax moths (Galleria mellonella, Achroia grisella), pests of bee hives Duponchelia fovealis, a new invasive pest of vegetables and ornamental plants in the United States
Biology and health sciences
Lepidoptera
null
66668
https://en.wikipedia.org/wiki/Bullhead%20shark
Bullhead shark
The bullhead sharks are members of the genus Heterodontus, the only members of the family Heterodontidae and the only living members of the order Heterodontiformes. All are relatively small, with the largest species reaching just in maximum length. They are bottom feeders in tropical and subtropical waters. The Heterodontiforms appear in the fossil record in the Early Jurassic; the oldest fossils of the modern genus date to the Late Jurassic. Despite the very ancient origins of this genus and its abundance in the fossil record, phylogenetic evidence indicates that all extant species in the genus arose from a single common ancestor that survived the Cretaceous-Paleogene extinction, with diversification into modern species only starting around the mid-Eocene. Description Bullhead sharks have tapered bodies, with most species reaching around in length. Their bodies vary in colour, including shades of grey, brown, and red, as well as paler colours, and are covered in a variety of patterns, including spots and stripes. They have blunt, proportionally large heads with relatively small mouths and large nostrils, with pronounced ridges above their eyes. They have two dorsal fins, both substantial in size, the first larger than the second, each with a rigid fin spine at the front, along with an anal fin. The tail fin is also large, with upper and lower lobes separated by a notch. Bullhead sharks have differentiated teeth, with cusped grasping teeth at the front of the mouth and flattened teeth at the back of the mouth. Their egg cases have spiral collarettes running along their length. Ecology Bullhead sharks live in coastal littoral environments, generally shallower than , and are usually primarily active at night. Bullhead sharks ingest prey via suction feeding. They feed on invertebrate prey, including both hard prey such as crustaceans and sea urchins and soft-bodied prey such as octopuses, as well as preying on fish. 
They use their flattened teeth at the back of the mouth to crush hard-shelled prey. Juveniles generally take softer prey than adults. The sharp fin spines deter predators. Bullhead shark egg cases are shaped like an auger, with two spiral flanges. This allows the egg cases to become wedged in the crevices of rocky sea floors, where the eggs are protected from predators; however, some bullhead sharks deposit their eggs on sponges or seaweed. Hatchlings are considered large for sharks, reaching over 14 cm in length by the time they leave the egg case. Bullhead shark eggs typically hatch after 7 to 12 months, depending on the species. Female Japanese bullhead sharks have been known to deposit their eggs in one location along with other females, forming what is called a "nest". The egg case of the Mexican hornshark features a tendril and more rigid flanges, suggesting that the egg case design of this species relies primarily on anchoring with tendrils rather than wedging into crevices. Species Ten living species of bullhead shark have been described: Heterodontus francisci (Girard, 1855) (horn shark) Heterodontus galeatus (Günther, 1870) (crested bullhead shark) Heterodontus japonicus (Maclay & W. J. Macleay, 1884) (Japanese bullhead shark) Heterodontus marshallae White, Mollen, O'Neill, Yang & Naylor, 2023 (painted hornshark) Heterodontus mexicanus (L. R. Taylor & Castro-Aguirre, 1972) (Mexican hornshark) Heterodontus omanensis (Z. H. Baldwin, 2005) (Oman bullhead shark) Heterodontus portusjacksoni (F. A. A. Meyer, 1793) (Port Jackson shark) Heterodontus quoyi (Fréminville, 1840) (Galapagos bullhead shark) Heterodontus ramalheira (J. L. B. Smith, 1949) (whitespotted bullhead shark) Heterodontus zebra (J. E. Gray, 1831) (zebra bullhead shark)
Biology and health sciences
Sharks
Animals
66675
https://en.wikipedia.org/wiki/Light%20curve
Light curve
In astronomy, a light curve is a graph of the light intensity of a celestial object or region as a function of time, typically with the magnitude of light received on the y-axis and time on the x-axis. The light is usually measured in a particular frequency interval or band. Light curves can be periodic, as in the case of eclipsing binaries, Cepheid variables, other periodic variables, and transiting extrasolar planets; or aperiodic, like the light curve of a nova, cataclysmic variable star, supernova, microlensing event, or binary as observed during occultation events. The study of the light curve, together with other observations, can yield considerable information about the physical process that produces it or constrain the physical theories about it. Variable stars Graphs of the apparent magnitude of a variable star over time are commonly used to visualise and analyse its behaviour. Although the categorisation of variable star types is increasingly done from their spectral properties, the amplitudes, periods, and regularity of their brightness changes are still important factors. Some types, such as Cepheids, have extremely regular light curves with exactly the same period, amplitude, and shape in each cycle. Others, such as Mira variables, have somewhat less regular light curves with large amplitudes of several magnitudes, while the semiregular variables are less regular still and have smaller amplitudes. The shapes of variable star light curves give valuable information about the underlying physical processes producing the brightness changes. For eclipsing variables, the shape of the light curve indicates the degree of totality, the relative sizes of the stars, and their relative surface brightnesses. It may also show the eccentricity of the orbit and distortions in the shape of the two stars. 
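Magnitude, the usual y-axis of a light curve, is a logarithmic measure of received flux. As a minimal illustrative sketch (the function name and sample values here are invented for the example, not taken from any catalogue), the conversion from a flux ratio to a magnitude difference follows Pogson's relation:

```python
import math

def magnitude_difference(flux, reference_flux):
    """Pogson's relation: m - m_ref = -2.5 * log10(flux / reference_flux)."""
    return -2.5 * math.log10(flux / reference_flux)

# A 50% drop in received light (e.g. a deep eclipse) corresponds to a
# dimming of about 0.75 magnitudes; magnitudes increase as objects fade.
print(round(magnitude_difference(0.5, 1.0), 2))  # 0.75
```

Because the scale is logarithmic, equal magnitude steps on a light curve correspond to equal flux ratios, not equal flux differences.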
For pulsating stars, the amplitude or period of the pulsations can be related to the luminosity of the star, and the light curve shape can be an indicator of the pulsation mode. Supernovae Light curves from supernovae can be indicative of the type of supernova. Although supernova types are defined on the basis of their spectra, each has a typical light curve shape. Type I supernovae have light curves with a sharp maximum followed by a gradual decline, while Type II supernovae have less sharp maxima. Light curves are helpful for the classification of faint supernovae and for the determination of sub-types. For example, type II-P (for plateau) supernovae have similar spectra to type II-L (linear) supernovae but are distinguished by a light curve in which the decline flattens out for several weeks or months before resuming its fade. Planetary astronomy In planetary science, a light curve can be used to derive the rotation period of a minor planet, moon, or comet nucleus. From the Earth there is often no way to resolve a small object in the Solar System, even with the most powerful telescopes, since the apparent angular size of the object is smaller than one pixel in the detector. Thus, astronomers measure the amount of light produced by an object as a function of time (the light curve). The time separation of peaks in the light curve gives an estimate of the rotational period of the object. The difference between the maximum and minimum brightnesses (the amplitude of the light curve) can be due to the shape of the object, or to bright and dark areas on its surface. For example, an asymmetrical asteroid's light curve generally has more pronounced peaks, while a more spherical object's light curve will be flatter. This allows astronomers to infer information about the shape and spin (but not the size) of asteroids. 
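The period estimate described above, reading off the time separation of brightness peaks, can be sketched as follows. This is a naive illustration on synthetic data; the helper names are invented for the example, and real asteroid work uses Fourier-based period fitting rather than raw peak counting:

```python
import math

def local_maxima(values):
    """Indices where a sample exceeds both neighbours.
    (Naive peak finder; real light curves need smoothing or period fits.)"""
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

def rotation_period_estimate(times, flux, peaks_per_rotation=2):
    """Estimate a rotation period from the mean peak-to-peak spacing.
    An elongated asteroid typically shows two brightness maxima per
    rotation, hence the default peaks_per_rotation=2."""
    peaks = local_maxima(flux)
    spacings = [times[b] - times[a] for a, b in zip(peaks, peaks[1:])]
    return peaks_per_rotation * sum(spacings) / len(spacings)

# Synthetic double-peaked light curve: rotation period 6 h, so brightness
# peaks arrive every 3 h.
times = [i * 0.1 for i in range(200)]                                # hours
flux = [1.0 + 0.2 * math.cos(2 * math.pi * t / 3.0) for t in times]
print(round(rotation_period_estimate(times, flux), 1))  # 6.0 hours
```

The `peaks_per_rotation` default reflects the point made in the text: an elongated body presents its broad side twice per rotation, so the peak spacing is half the rotation period.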
Asteroid lightcurve database Light curve quality code The Asteroid Lightcurve Database (LCDB) of the Collaborative Asteroid Lightcurve Link (CALL) uses a numeric code to assess the quality of a period solution for minor planet light curves (it does not necessarily assess the actual underlying data). Its quality code parameter U ranges from 0 (incorrect) to 3 (well-defined): U = 0 → Result later proven incorrect U = 1 → Result based on fragmentary light curve(s); may be completely wrong. U = 2 → Result based on less than full coverage; the period may be wrong by 30 percent or ambiguous. U = 3 → Secure result within the precision given; no ambiguity. U = n.a. → Not available; incomplete or inconclusive result. A trailing plus sign (+) or minus sign (−) indicates slightly better or worse quality than the unsigned value. Occultation light curves The occultation light curve is often characterised as binary: the light from the star is terminated instantaneously, remains constant for the duration, and is reinstated instantaneously. The duration is equivalent to the length of a chord across the occulting body. The transitions are not instantaneous in several circumstances: when either the occulting or occulted body is double, e.g. a double star or double asteroid, a step light curve is observed; when the occulted body is large, e.g. a star like Antares, the transitions are gradual; and when the occulting body has an atmosphere, e.g. the moon Titan. The observations are typically recorded using video equipment, with the disappearance and reappearance timed using a GPS-disciplined Video Time Inserter (VTI). Occultation light curves are archived at the VizieR service. Exoplanet discovery Periodic dips in a star's light curve could be due to an exoplanet passing in front of the star that it is orbiting. When an exoplanet passes in front of its star, light from that star is temporarily blocked, resulting in a dip in the star's light curve. 
These dips are periodic, as planets periodically orbit a star. Many exoplanets have been discovered via this method, which is known as the astronomical transit method. Light curve inversion Light curve inversion is a mathematical technique used to model the surfaces of rotating objects from their brightness variations. This can be used to effectively image starspots or asteroid surface albedos. Microlensing Microlensing is a process where relatively small and low-mass astronomical objects cause a brief small increase in the brightness of a more distant object. This is caused by the small relativistic effect as larger gravitational lenses, but allows the detection and analysis of otherwise-invisible stellar and planetary mass objects. The properties of these objects can be inferred from the shape of the lensing light curve. For example, PA-99-N2 is a microlensing event that may have been due to a star in the Andromeda Galaxy that has an exoplanet.
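The transit method described above can be sketched as follows. This is a simplified illustration on synthetic data; the threshold and function names are invented for the example, and real surveys use matched filtering such as box least squares rather than a fixed flux cut:

```python
def transit_period(times, flux, threshold=0.995):
    """Estimate an orbital period from the spacing of flux dips.
    Naive sketch: group consecutive below-threshold samples into
    transits, take each transit's midpoint, and average the gaps."""
    in_transit = [f < threshold for f in flux]
    mids, start = [], None
    for i, flag in enumerate(in_transit):
        if flag and start is None:
            start = i                     # a transit begins
        elif not flag and start is not None:
            mids.append((times[start] + times[i - 1]) / 2)
            start = None                  # the transit ends
    gaps = [b - a for a, b in zip(mids, mids[1:])]
    return sum(gaps) / len(gaps)

# Synthetic star: 1%-deep, 0.2-day transits repeating every 3 days.
times = [i * 0.02 for i in range(500)]   # days
flux = [0.99 if (t % 3.0) < 0.2 else 1.0 for t in times]
print(round(transit_period(times, flux), 1))  # 3.0 days
```

The recovered spacing between dip midpoints is the planet's orbital period, which is why the periodicity of the dips, rather than any single dip, is the signature of a transiting planet.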
Physical sciences
Basics
Astronomy
66687
https://en.wikipedia.org/wiki/Quercus%20rubra
Quercus rubra
Quercus rubra, the northern red oak, is an oak tree in the red oak group (Quercus section Lobatae). It is a native of North America, in the eastern and central United States and southeast and south-central Canada. It has been introduced to small areas in Western Europe, where it can frequently be seen cultivated in gardens and parks. It prefers good soil that is slightly acidic. Often simply called red oak, northern red oak is so named to distinguish it from southern red oak (Q. falcata), also known as the Spanish oak. Northern red oak is sometimes called champion oak. Description In many forests, Quercus rubra grows straight and tall, to , exceptionally to tall, with a trunk of up to in diameter. Open-grown trees do not get as tall, but can develop a stouter trunk, up to in diameter. It has stout branches growing at right angles to the stem, forming a narrow round-topped head. Under optimal conditions and full sun, northern red oak is fast growing and a 10-year-old tree can be tall. Trees may live up to 400 years; a living example of 326 years was noted in 2001. Northern red oak is easy to recognize by its bark, which features ridges that appear to have shiny stripes down the center. A few other oaks have bark with this kind of appearance in the upper tree, but the northern red oak is the only tree with the striping all the way down the trunk. As with most other deciduous oaks, leafout takes place in spring when day length has reached 13 hours—it is tied entirely to photoperiod and will take place regardless of air temperature. As a consequence (see below), in cooler regions, northern red oaks often lose their flowers to late spring frosts, resulting in no seed crop for the year. The catkins and leaves emerge at the same time. The acorns develop on the tree for two growing seasons and are released from the tree in early October, and leaf drop begins when day length falls under 11 hours. 
The timing of leafout and leaf drop can vary by as much as three weeks in the northern and southern US. Seedlings emerge in spring when soil temperatures reach . Bark: Dark reddish gray brown, with broad, thin, rounded ridges, scaly. On young trees and large stems, smooth and light gray. Rich in tannin. Branchlets slender, at first bright green, shining, then dark red, finally dark brown. Bark is brownish gray, becoming dark brown on old trees. Wood: Pale reddish brown, sapwood darker, heavy, hard, strong, coarse-grained. Cracks in drying, but when carefully treated could be successfully used for furniture. Also used in construction and for interior finish of houses. Sp. gr., 0.6621; weight of cu. ft., 41.25 lbs. Winter buds: Dark chestnut brown (reddish brown), ovate, acute, generally long Leaves and acorns: Alternate, seven to nine-lobed, oblong-ovate to oblong, five to ten inches long, four to six inches broad; seven to eleven lobes tapering gradually from broad bases, acute, and usually repandly dentate and terminating with long bristle-pointed teeth; the second pair of lobes from apex are largest; midrib and primary veins conspicuous. Lobes are often less deeply cut than most other oaks of the red oak group. Leaves emerge from the bud convolute, pink, covered with soft silky down above, coated with thick white tomentum below. When full grown are dark green and smooth, sometimes shining above, yellow green, smooth or hairy on the axils of the veins below. In autumn they turn a rich red, sometimes brown. Often the petiole and midvein are a rich red color in midsummer and early autumn, though this is not true of all red oaks. 
The acorns mature about 18 months after pollination; solitary or in pairs, sessile or stalked; nut oblong-ovoid with broad flat base, full, with acute apex, one half to one and one-fourth of an inch long, first green, maturing nut-brown; cup saucer-shaped and shallow, wide, usually covering only the base, sometimes one-fourth of the nut, thick, shallow, reddish brown, somewhat downy within, covered with thin imbricated reddish brown scales. Its kernel is white and very bitter. Red oak acorns, unlike those of the white oak group, display epigeal dormancy and will not germinate without a minimum of three months' exposure to temperatures below . They also take two years of growing on the tree before development is completed. Distribution and habitat The species grows from the north end of the Great Lakes, east to Nova Scotia, south as far as Georgia, Mississippi, Alabama, and Louisiana, and west to Oklahoma, Kansas, Nebraska, and Minnesota. It grows rapidly and is tolerant of many soils and varied situations, although it prefers glacial drift and the well-drained borders of streams. In the southeastern United States, it is frequently a part of the canopy in an oak-heath forest, but generally not as important as some other oaks. Northern red oak is the most common species of oak in the northeastern US after the closely related pin oak (Q. palustris). The red oak group as a whole is more abundant today than when European settlement of North America began, as forest clearing and exploitation for lumber much reduced the population of the formerly dominant white oaks. Reproduction “Northern red oak (Quercus rubra) is monoecious, dichogamous, wind-pollinated, and self-incompatible”. Pollination occurs in the first growing season, but fertilization and acorn maturation occur during the second growing season. 
Ecology Over the last few decades, the northern red oak has dealt with several environmental stresses, mainly disease, predation by insects, and limited opportunities for dispersal. These stresses have impacted the species' ability to proliferate in both the Northeast and Europe. The varied environmental responses observed in Quercus rubra across several temperate environmental conditions have allowed it to serve as a model organism for studying symbiotic relationships, dispersal, and habituation between tree species. Pests and diseases The canker pathogen Diplodia corticola has become a major pathogen of the species over the last decade, causing leaf browning, bark cracking and bleeding, and high rates of tree mortality across the northeastern United States. The northern red oak is also characterized as one of the species most susceptible to the plant pathogens Phytophthora cinnamomi and Phytophthora ramorum, which have caused severe, red-black cankers in the trunk region of the species. Both P. cinnamomi and P. ramorum grow under warmer temperature conditions; as a result, northern red oak trees found in California, France, and northern Spain all have a higher incidence of infection. Oak wilt, caused by the fungus Bretziella fagacearum, is a major pathogen found in eastern North America that can kill trees quickly. There has been a recent northern red oak decline in Arkansas which is “unique in that it is associated with increases in red oak borer” (Enaphalodes rufulus), which “is native to the eastern United States and usually occurs in mixed oak forests”. “It damages the phloem, sapwood, and heartwood which means the ability for growth and repair is attacked as well as the stability of the tree”. Abiotic stresses Northern red oak seedlings have been known to have a high mortality rate in northeast regions prone to spring freeze, particularly in Massachusetts. 
Acorns produced by oaks in this region are typically smaller in size as an adaptation to frost at high latitudes; however, the resulting smaller seedlings have produced limited opportunities for animal consumption and dispersal. Flooding along the continental United States has been shown to be a major issue for the northern red oak, in which decreased phloem transport and photosynthetic activity have been observed, but only after multiple days of flooding, indicating that the northern red oak has moderate resistance to excess water exposure. The northern red oak has also developed tolerance mechanisms for heat stress, particularly observed in deciduous forests in the Southeastern United States, where, during summer heat waves, temperatures can exceed . Acclimation of Rubisco activase activity has been observed in the leaves of the northern red oak with repeated exposure to heat waves. Consistent photosynthetic activity in the red oak has also been observed in the presence of the high carbon dioxide levels that often accompany elevated temperatures. Animals Northern red oak kernels contain highly concentrated amounts of bitter-tasting tannin, a biochemical classified as a predator deterrent, which limits their appeal to animals. Despite this, the acorns are eaten by deer, squirrels, and birds. In Europe, the acorns are consumed by several moth species, particularly Cydia fagiglandana and Cydia splendana, which increases their niche breadths and reduces their competition with Curculio weevils. Due to this, germination rates among northern red oak acorns have decreased significantly, resulting in less seed dispersal by animals within Poland. In addition, limited opportunities for dispersal have become costly for the northern red oak in Europe. 
European animals known for dispersing tendencies, such as the European jay and wood mouse, have been found to be more attracted to local oak species. Fungi Quercus rubra has effective ectomycorrhizal relationships that have been correlated with increased growth rates. Northern red oak trees have been shown to increase growth in the presence of various ascomycetes that coil at the base of the oak trunk. The fungi, which eventually proliferate at the stumps of deciduous trees, have been found to be host-specific to both Quercus rubra and Quercus montana and primarily promote growth upon infection. Invasiveness in Europe It was introduced to Europe in the 1700s and has naturalized throughout most of western and central Europe, where it has become the fourth-most significant invasive species, colonizing several regions across Belgium, Germany, northern Italy, Lithuania, Poland, Ukraine, European Russia, the Urals, and western Siberia. The northern red oak is primarily found on the edges of woodland reserves in Europe, where light availability, tannin concentration, and animal dispersal are the components most necessary for the species' longevity and survival. The high influx of the species in Europe is primarily based on its economic productivity as a fast-growing source of timber; however, it has been linked to lower percentages of trace elements and minerals in the surrounding soil and reduced richness among native oak species such as Quercus robur. Uses The northern red oak is one of the most important oaks for timber production in North America. Quality red oak is of high value as lumber and veneer, while defective logs are used as firewood. Other related oaks are also cut and marketed as red oak, although their wood is not always of as high a quality. These include eastern black oak, scarlet oak, pin oak, Shumard oak, southern red oak, and other species in the red oak group. 
Construction uses include flooring, veneer, interior trim, and furniture. It is also used for lumber, railroad ties, and fence posts. Red oak wood grain is so open that smoke can be blown through it from end-grain to end-grain on a flat-sawn board. For this reason, it is subject to moisture infiltration and is unsuitable for outdoor uses such as boatbuilding or exterior trim. The acorns can be collected in autumn, shelled, tied up in a cloth, and leached to remove bitterness. They can then be eaten whole or ground into meal. Ornamental use Quercus rubra is grown in parks and large gardens as a specimen tree. It is not planted as often as the closely related pin oak because it develops a taproot and quickly becomes difficult to transplant; however, modern growing pots have made starting seedlings with taproots easier than in the past. Culture It is the state tree of New Jersey and the provincial tree of Prince Edward Island. Famous specimens Ashford Oak – A very large Northern Red Oak in Ashford, Connecticut. The tree has suffered falling limbs because of its great age. However, this tree is still a sight to behold; the trunk is in circumference and the root-knees are also particularly impressive. The oak is located on Giant Oak Lane off U.S. Highway 44. There are several other large oaks in the area. Chase Creek Red Oak – This forest tree is located on a very rich steep slope in Anne Arundel County, Maryland. It is a high-stump coppice with three leads. It was the state champion oak in Maryland in 2002. The circumference at breast height is , the height and the spread Shera-Blair Red Oak – This majestic red oak tree is located on Shelby Street in the South Frankfort neighborhood in Franklin County, Kentucky, and is the largest red oak tree in the oldest neighborhood in Frankfort, Kentucky. It is in the backyard of a house built in 1914 by architect Arthur Raymond Smith, who at one time worked for D.X. 
Murphy & Bros., famed architects that designed the twin spires at Churchill Downs. The circumference at breast height is , with the trunk reaching higher than before the branches begin and an estimated height of . Zhelevo – At over 250 years old, this tree is among the oldest in Toronto. The trunk has a circumference of and the canopy is over tall. The lot where the tree stands has been purchased by the City of Toronto to be turned into a public park.
Biology and health sciences
Fagales
Plants
66706
https://en.wikipedia.org/wiki/Fraxinus
Fraxinus
Fraxinus (), commonly called ash, is a genus of plants in the olive and lilac family, Oleaceae, comprising 45–65 species of usually medium-to-large trees, most of which are deciduous, although some subtropical species are evergreen. The genus is widespread throughout much of Europe, Asia, and North America. The leaves are opposite (rarely in whorls of three), and mostly pinnately compound, though simple in a few species. The seeds, popularly known as "keys" or "helicopter seeds", are a type of fruit known as a samara. Some Fraxinus species are dioecious, having male and female flowers on separate plants, but sex in ash is expressed as a continuum between male and female individuals, dominated by unisexual trees. With age, ashes may change their sexual function from predominantly male and hermaphrodite towards femaleness; if grown as an ornamental and both sexes are present, ashes can cause a considerable litter problem with their seeds. Rowans, or mountain ashes, have leaves and buds superficially similar to those of true ashes, but belong to the unrelated genus Sorbus in the rose family. Etymology The tree's common English name, "ash", traces back to the Old English æsc, which relates to the Proto-Indo-European name for the tree, while the generic name originated in Latin from a Proto-Indo-European word for birch. Both words are also used to mean "spear" in their respective languages, as the wood is good for shafts. Selected species Species are arranged into sections supported by phylogenetic analysis: Section Dipetalae Fraxinus anomala Torr. ex S.Watson – singleleaf ash Fraxinus dipetala Hook. & Arn. – California ash or two-petal ash Fraxinus parryi Moran – Chaparral ash Fraxinus quadrangulata Michx. – blue ash Fraxinus trifoliolata Section Fraxinus Fraxinus angustifolia Vahl – narrow-leaved ash Fraxinus angustifolia subsp. oxycarpa – Caucasian ash Fraxinus angustifolia subsp. syriaca Fraxinus excelsior L. – European ash Fraxinus mandshurica Rupr. 
– Manchurian ash Fraxinus nigra Marshall – black ash Fraxinus pallisiae Wilmott – Pallis' ash Fraxinus sogdiana – Tianshan ash Section Melioides sensu lato Fraxinus chiisanensis – Jirisan ash Fraxinus cuspidata Torr. – fragrant ash Fraxinus platypoda – Chinese red ash Fraxinus spaethiana Lingelsh. – Späth's ash Section Melioides sensu stricto Fraxinus albicans Buckley – Texas ash Fraxinus americana L. – white ash or American ash Fraxinus berlandieriana DC. – Mexican ash Fraxinus caroliniana Mill. – Carolina ash Fraxinus latifolia Benth. – Oregon ash Fraxinus papillosa Lingelsh. – Chihuahua ash Fraxinus pennsylvanica Marshall – green ash Fraxinus profunda (Bush) Bush – pumpkin ash Fraxinus uhdei (Wenz.) Lingelsh. – Shamel ash or Tropical ash Fraxinus velutina Torr. – velvet ash or Arizona ash Section Ornus Fraxinus apertisquamifera Fraxinus baroniana Fraxinus bungeana DC. – Bunge's ash Fraxinus chinensis Roxb. – Chinese ash or Korean ash Fraxinus floribunda Wall. – Himalayan manna ash Fraxinus griffithii C.B.Clarke – Griffith's ash Fraxinus insularis Hemsl. – Chinese flowering ash Fraxinus japonica – Japanese ash Fraxinus lanuginosa – Japanese ash Fraxinus longicuspis Fraxinus malacophylla Fraxinus micrantha Lingelsh. Fraxinus ornus L. – manna ash or flowering ash Fraxinus paxiana Lingelsh. Fraxinus sieboldiana Blume – Japanese flowering ash Section Pauciflorae Fraxinus dubia Fraxinus gooddingii – Goodding's ash Fraxinus greggii A.Gray – Gregg's ash Fraxinus purpusii Fraxinus rufescens Section Sciadanthus Fraxinus dimorpha Fraxinus hubeiensis Ch'u & Shang & Su – 湖北梣, Hubei qin Fraxinus xanthoxyloides (G.Don) Wall. ex DC. – Afghan ash Ecology North American native ash tree species are a critical food source for North American frogs, as their fallen leaves are particularly suitable for tadpoles to feed upon in ponds (both temporary and permanent), large puddles, and other water bodies. 
The lack of tannins in the American ash makes its leaves a good food source for the frogs, but also reduces its resistance to the ash borer. Species with higher leaf tannin levels (including maples and non-native ash species) are taking the place of native ash, thanks to their greater resistance to the ash borer, but they produce much less suitable food for the tadpoles, resulting in poor survival rates and small frog sizes. Ash species native to North America also provide important habitat and food for various other creatures native to North America, including the larvae of multiple long-horn beetles, as well as other insects such as those in the genus Tropidosteptes, lace bugs, aphids, larvae of gall flies, and caterpillars. Birds also take an interest in black, green, and white ash trees. The black ash alone provides habitat and food for wood ducks, wild turkeys, cardinals, pine grosbeaks, cedar waxwings, and yellow-bellied sapsuckers (whose interest includes the sap), among others. Many mammalian species, from meadow voles eating the seeds to white-tailed deer eating the foliage to silver-haired bats nesting, will also make use of ash trees. Ash is used as a food plant by the larvae of some Lepidoptera species (butterflies and moths). Threats North America The emerald ash borer (Agrilus planipennis), also called EAB, is a wood-boring beetle accidentally introduced to North America from eastern Asia via solid wood packing material in the late 1980s to early 1990s. It has killed tens of millions of trees in 22 states in the United States and adjacent Ontario and Quebec in Canada. It threatens some seven billion ash trees in North America. Research is being conducted to determine whether three native Asian wasps that are natural predators of EAB could be used as a biological control for the management of EAB populations in the United States. 
The public is being cautioned to avoid transporting unfinished wood products, such as firewood, in order to slow the spread of this insect pest. Damage occurs when emerald ash borer larvae feed on the inner bark (phloem) inside branches and tree trunks; feeding on the phloem prevents the transport of nutrients and water. In an attacked ash, branches can die and eventually the whole tree can as well. Signs of emerald ash borer infestation include bark peeling off, vertical cracks in the bark, galleries within the tree containing a powdery substance, and D-shaped exit holes on the branches or trunk. Not all of these may be present, but any of them could be an indication of possible infestation.

Europe

The European ash, Fraxinus excelsior, has been affected by the fungus Hymenoscyphus fraxineus, causing ash dieback in a large number of trees since the mid-1990s, particularly in eastern and northern Europe. The disease has infected about 90% of Denmark's ash trees. At the end of October 2012 in the UK, the Department for Environment, Food and Rural Affairs (Defra) reported that ash dieback had been discovered in mature woodland in Suffolk; previous occurrences had been on young trees imported from Europe. In 2016, the ash tree was reported as in danger of extinction in Europe.

Uses

Ash is a hardwood and is dense (within 20% of 670 kg/m3 for Fraxinus americana, and higher at 710 kg/m3 for Fraxinus excelsior), tough and very strong but elastic, extensively used for making bows, tool handles, baseball bats, hurleys, and other uses demanding high strength and resilience. Ash is a tonewood commonly used in the manufacture of electric guitars, exhibiting a pronounced bright tone with a scooped midrange. It is lightweight, easy to work and sand, accepts glue, stain, paint and finish very well, and is inexpensive, all of which has made it a favourite of large factories mass-producing instruments.
The Fender musical instrument company has used ash continuously to make electric guitars since 1956. Swamp ash is widely used in guitar building because of its figure. It is a choice material for electric guitar bodies and, less commonly, for acoustic guitar bodies, known for its bright, cutting tone and sustaining quality. Some Fender Stratocasters and Telecasters are made of ash (such as Bruce Springsteen's Telecaster on the Born to Run album cover) as an alternative to alder. Ash is also used for making drum shells. Woodworkers generally consider ash a "poor cousin" to the other major open-pore wood, oak, but it is useful in any furniture application. Ash veneers are extensively used in office furniture. Ash is not used much outdoors because the heartwood has low durability in ground contact, meaning it will typically perish within five years. The F. japonica species is favored as a material for making baseball bats by Japanese sporting-goods manufacturers. Its robust structure, good looks, and flexibility make ash ideal for staircases. Ash stairs are extremely hard-wearing, which is particularly important for treads. Due to its elasticity, ash can also be steamed and bent to produce curved stair parts such as volutes (curled sections of handrail) and intricately shaped balusters. However, a reduction in the supply of healthy trees, especially in Europe, is making ash an increasingly expensive option. Ash was commonly used for the structural members of car bodies made by carriage builders. Early cars had frames intended to flex as part of the suspension system, to simplify construction. The Morgan Motor Company of Great Britain still manufactures sports cars with frames made from ash. It was also widely used by early aviation pioneers for aircraft construction.
It lights and burns easily, so it is used for starting fires and barbecues, and it can also maintain a fire, though it produces only moderate heat. The two most economically important species for wood production are white ash, in eastern North America, and European ash in Europe. The green ash (F. pennsylvanica) is widely planted as a street tree in the United States. The inner bark of the blue ash (F. quadrangulata) has been used as a source of blue dye. In Sicily, Italy, sugars are obtained by evaporating the sap of the manna ash, extracted by making small cuts in the bark. The manna ash, native to southern Europe and southwest Asia, produces a blue-green sap which has medicinal value as a mild laxative, demulcent, and weak expectorant. The young seedpods, known as "keys", are edible; in Britain, they are traditionally pickled with vinegar, sugar and spices.

Mythology and folklore

In Greek mythology, the Meliae are nymphs associated with the ash, perhaps specifically the manna ash (Fraxinus ornus), as dryads were nymphs associated with the oak. They appear in Hesiod's Theogony, which states that they were born when drops of Ouranos's blood fell on the earth (Gaia). In Norse mythology, a vast, evergreen ash tree, Yggdrasil ("the steed (gallows) of Odin"), watered by three magical springs, serves as axis mundi, sustaining the nine worlds of the cosmos in its roots and branches. Askr, the first man in Norse myth, literally means 'ash'. In Italian folklore, an ash stake could be used to kill a vampire.
https://en.wikipedia.org/wiki/Respiratory%20system
Respiratory system
The respiratory system (also respiratory apparatus, ventilatory system) is a biological system consisting of specific organs and structures used for gas exchange in animals and plants. The anatomy and physiology that make this happen vary greatly, depending on the size of the organism, the environment in which it lives and its evolutionary history. In land animals, the respiratory surface is internalized as linings of the lungs. Gas exchange in the lungs occurs in millions of small air sacs; in mammals and reptiles, these are called alveoli, and in birds, they are known as atria. These microscopic air sacs have a very rich blood supply, thus bringing the air into close contact with the blood. These air sacs communicate with the external environment via a system of airways, or hollow tubes, of which the largest is the trachea, which branches in the middle of the chest into the two main bronchi. These enter the lungs where they branch into progressively narrower secondary and tertiary bronchi that branch into numerous smaller tubes, the bronchioles. In birds, the bronchioles are termed parabronchi. It is the bronchioles, or parabronchi, that generally open into the microscopic alveoli in mammals and atria in birds. Air has to be pumped from the environment into the alveoli or atria by the process of breathing, which involves the muscles of respiration. In most fish, and a number of other aquatic animals (both vertebrates and invertebrates), the respiratory system consists of gills, which are either partially or completely external organs, bathed in the watery environment. This water flows over the gills by a variety of active or passive means. Gas exchange takes place in the gills, which consist of thin or very flat filaments and lamellae that expose a very large surface area of highly vascularized tissue to the water.
Other animals, such as insects, have respiratory systems with very simple anatomical features, and in amphibians, even the skin plays a vital role in gas exchange. Plants also have respiratory systems, but the directionality of gas exchange can be opposite to that in animals. The respiratory system in plants includes anatomical features such as stomata, which are found in various parts of the plant.

Mammals

Anatomy

In humans and other mammals, the anatomy of a typical respiratory system is the respiratory tract. The tract is divided into an upper and a lower respiratory tract. The upper tract includes the nose, nasal cavities, sinuses, pharynx and the part of the larynx above the vocal folds. The lower tract (Fig. 2.) includes the lower part of the larynx, the trachea, bronchi, bronchioles and the alveoli. The branching airways of the lower tract are often described as the respiratory tree or tracheobronchial tree (Fig. 2). The intervals between successive branch points along the various branches of the "tree" are often referred to as branching "generations", of which there are, in the adult human, about 23. The earlier generations (approximately generations 0–16), consisting of the trachea and the bronchi as well as the larger bronchioles, simply act as air conduits, bringing air to the respiratory bronchioles, alveolar ducts and alveoli (approximately generations 17–23), where gas exchange takes place. Bronchioles are defined as the small airways lacking any cartilaginous support. The first bronchi to branch from the trachea are the right and left main bronchi. Second only in diameter to the trachea (1.8 cm), these bronchi (1–1.4 cm in diameter) enter the lungs at each hilum, where they branch into narrower secondary bronchi known as lobar bronchi, and these branch into narrower tertiary bronchi known as segmental bronchi.
Further divisions of the segmental bronchi (1 to 6 mm in diameter) are known as 4th order, 5th order, and 6th order segmental bronchi, or grouped together as subsegmental bronchi. Compared with the average of 23 branchings of the respiratory tree in the adult human, the mouse has only about 13 such branchings. The alveoli are the dead-end terminals of the "tree", meaning that any air that enters them has to exit via the same route. A system such as this creates dead space, a volume of air (about 150 ml in the adult human) that fills the airways after exhalation and is breathed back into the alveoli before environmental air reaches them. At the end of inhalation, the airways are filled with environmental air, which is exhaled without coming in contact with the gas exchanger.

Ventilatory volumes

The lungs expand and contract during the breathing cycle, drawing air in and out of the lungs. The volume of air moved in or out of the lungs under normal resting circumstances (the resting tidal volume of about 500 ml), and the volumes moved during maximally forced inhalation and maximally forced exhalation, are measured in humans by spirometry. A typical adult human spirogram, with the names given to the various excursions in volume the lungs can undergo, is illustrated below (Fig. 3): Not all the air in the lungs can be expelled even during maximally forced exhalation (i.e. beyond the expiratory reserve volume, ERV). This is the residual volume of about 1.0–1.5 liters (the volume of air remaining even after a forced exhalation), which cannot be measured by spirometry. Volumes that include the residual volume (i.e. the functional residual capacity of about 2.5–3.0 liters, and the total lung capacity of about 6 liters) can therefore also not be measured by spirometry; their measurement requires special techniques. The rates at which air is breathed in or out, either through the mouth or nose or into or out of the alveoli, are tabulated below, together with how they are calculated.
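The volumes and capacities described above are related by simple sums. A minimal sketch in Python: the tidal volume, residual volume range, FRC range and ~6 L total lung capacity come from the text, while the IRV and ERV figures are textbook-typical assumptions introduced only to make the arithmetic concrete.

```python
# Standard spirometric volumes, in liters (typical resting adult).
TV = 0.5    # tidal volume (text: about 500 ml)
IRV = 3.0   # inspiratory reserve volume (assumed textbook-typical value)
ERV = 1.3   # expiratory reserve volume (assumed textbook-typical value)
RV = 1.2    # residual volume (text: about 1.0-1.5 L; not measurable by spirometry)

# Capacities are sums of volumes; any capacity containing RV
# cannot be measured by spirometry either.
FRC = ERV + RV        # functional residual capacity (text: about 2.5-3.0 L)
VC = TV + IRV + ERV   # vital capacity (measurable by spirometry)
TLC = VC + RV         # total lung capacity (text: about 6 L)

print(f"FRC = {FRC:.1f} L, VC = {VC:.1f} L, TLC = {TLC:.1f} L")
```

With these assumed reserve volumes the derived capacities land inside the ranges quoted in the text (FRC 2.5 L, TLC 6.0 L), which is the point of the exercise rather than a measurement.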
The number of breath cycles per minute is known as the respiratory rate. An average healthy human breathes 12–16 times a minute.

Mechanics of breathing

In mammals, inhalation at rest is primarily due to the contraction of the diaphragm. This is an upwardly domed sheet of muscle that separates the thoracic cavity from the abdominal cavity. When it contracts, the sheet flattens (i.e. moves downwards as shown in Fig. 7), increasing the volume of the thoracic cavity in the vertical (cranio-caudal) axis. The contracting diaphragm pushes the abdominal organs downwards. But because the pelvic floor prevents the lowermost abdominal organs from moving in that direction, the pliable abdominal contents cause the belly to bulge outwards to the front and sides, because the relaxed abdominal muscles do not resist this movement (Fig. 7). This entirely passive bulging (and shrinking during exhalation) of the abdomen during normal breathing is sometimes referred to as "abdominal breathing", although it is, in fact, "diaphragmatic breathing", which is not visible on the outside of the body. Mammals use their abdominal muscles only during forceful exhalation, never during any form of inhalation (see Fig. 8 and the discussion below). As the diaphragm contracts, the rib cage is simultaneously enlarged by the ribs being pulled upwards by the intercostal muscles as shown in Fig. 4. All the ribs slant downwards from the rear to the front (as shown in Fig. 4); but the lowermost ribs also slant downwards from the midline outwards (Fig. 5). Thus the rib cage's transverse diameter can be increased by the so-called bucket handle movement, in the same way as the antero-posterior diameter is increased by the so-called pump handle movement shown in Fig. 4. The enlargement of the thoracic cavity's vertical dimension by the contraction of the diaphragm, and its two horizontal dimensions by the lifting of the front and sides of the ribs, causes the intrathoracic pressure to fall.
The lungs' interiors are open to the outside air and, being elastic, therefore expand to fill the increased space; the pleural fluid between the two layers of pleura covering the lungs reduces friction as the lungs expand and contract. The inflow of air into the lungs occurs via the respiratory airways (Fig. 2). In a healthy person, these airways begin with the nose. (It is possible to begin with the mouth, which is the backup breathing system; however, chronic mouth breathing leads to, or is a sign of, illness.) The airways end in the microscopic dead-end sacs called alveoli, which are always open, though the diameters of the various sections can be changed by the sympathetic and parasympathetic nervous systems. The alveolar air pressure is therefore always close to atmospheric air pressure (about 100 kPa at sea level) at rest, with the pressure gradients that move air in and out of the lungs during breathing rarely exceeding 2–3 kPa. During exhalation, the diaphragm and intercostal muscles relax. This returns the chest and abdomen to a position determined by their anatomical elasticity. This is the "resting mid-position" of the thorax and abdomen (Fig. 7) when the lungs contain their functional residual capacity of air (the light blue area in the right hand illustration of Fig. 7), which in the adult human has a volume of about 2.5–3.0 liters (Fig. 3). Resting exhalation lasts about twice as long as inhalation because the diaphragm relaxes passively more gently than it contracts actively during inhalation. The volume of air that moves in or out (at the nose or mouth) during a single breathing cycle is called the tidal volume. In a resting adult human, it is about 500 ml per breath. At the end of exhalation, the airways contain about 150 ml of alveolar air which is the first air that is breathed back into the alveoli during inhalation.
This volume of air, breathed out of the alveoli and back in again, is known as dead space ventilation; it has the consequence that, of the 500 ml breathed into the alveoli with each breath, only 350 ml (500 ml – 150 ml = 350 ml) is fresh, warm and moistened air. Since this 350 ml of fresh air is thoroughly mixed and diluted by the air that remains in the alveoli after a normal exhalation (i.e. the functional residual capacity of about 2.5–3.0 liters), it is clear that the composition of the alveolar air changes very little during the breathing cycle (see Fig. 9). The oxygen tension (or partial pressure) remains close to 13–14 kPa (about 100 mm Hg), and that of carbon dioxide very close to 5.3 kPa (or 40 mm Hg). This contrasts with the composition of the dry outside air at sea level, where the partial pressure of oxygen is 21 kPa (or 160 mm Hg) and that of carbon dioxide 0.04 kPa (or 0.3 mmHg). During heavy breathing (hyperpnea), as, for instance, during exercise, inhalation is brought about by a more powerful and greater excursion of the contracting diaphragm than at rest (Fig. 8). In addition, the "accessory muscles of inhalation" exaggerate the actions of the intercostal muscles (Fig. 8). These accessory muscles of inhalation are muscles that extend from the cervical vertebrae and base of the skull to the upper ribs and sternum, sometimes through an intermediary attachment to the clavicles. When they contract, the rib cage's internal volume is increased to a far greater extent than can be achieved by contraction of the intercostal muscles alone. Seen from outside the body, the lifting of the clavicles during strenuous or labored inhalation is sometimes called clavicular breathing, seen especially during asthma attacks and in people with chronic obstructive pulmonary disease. During heavy breathing, exhalation is caused by relaxation of all the muscles of inhalation.
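The dead-space arithmetic above can be made explicit. An illustrative check using the figures quoted in the text (the only assumption is taking the midpoint of the 2.5–3.0 L functional residual capacity range):

```python
tidal_volume = 500   # ml moved per breath (text figure)
dead_space = 150     # ml of airway air re-inhaled before fresh air arrives (text figure)
frc = 2750           # ml; assumed midpoint of the quoted 2.5-3.0 L range

# Fresh air actually reaching the alveoli per breath: 500 - 150 = 350 ml.
fresh_air = tidal_volume - dead_space

# Fraction of the alveolar gas renewed in one breath: the 350 ml of fresh
# air is diluted into the large FRC, so alveolar composition barely changes.
dilution = fresh_air / (frc + fresh_air)

print(fresh_air)            # 350
print(f"{dilution:.1%}")    # on the order of 11% per breath
```

The small per-breath renewal fraction is exactly why the alveolar gas tensions stay so stable across the breathing cycle.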
But now, the abdominal muscles, instead of remaining relaxed (as they do at rest), contract forcibly, pulling the lower edges of the rib cage downwards (front and sides) (Fig. 8). This not only drastically decreases the size of the rib cage, but also pushes the abdominal organs upwards against the diaphragm, which consequently bulges deeply into the thorax (Fig. 8). The end-exhalatory lung volume is now well below the resting mid-position and contains far less air than the resting "functional residual capacity". However, in a normal mammal, the lungs cannot be emptied completely. In an adult human, there is always still at least 1 liter of residual air left in the lungs after maximum exhalation. The automatic rhythmical breathing in and out can be interrupted by coughing, sneezing (forms of very forceful exhalation), by the expression of a wide range of emotions (laughing, sighing, crying out in pain, exasperated intakes of breath) and by such voluntary acts as speech, singing, whistling and the playing of wind instruments. All of these actions rely on the muscles described above, and their effects on the movement of air in and out of the lungs. Although not a form of breathing, the Valsalva maneuver involves the respiratory muscles. It is, in fact, a very forceful exhalatory effort against a tightly closed glottis, so that no air can escape from the lungs. Instead, abdominal contents are evacuated in the opposite direction, through orifices in the pelvic floor. The abdominal muscles contract very powerfully, causing the pressure inside the abdomen and thorax to rise to extremely high levels. The Valsalva maneuver can be carried out voluntarily, but is more generally a reflex elicited when attempting to empty the abdomen during, for instance, difficult defecation, or during childbirth. Breathing ceases during this maneuver.
Gas exchange

The primary purpose of the respiratory system is the equalizing of the partial pressures of the respiratory gases in the alveolar air with those in the pulmonary capillary blood (Fig. 11). This process occurs by simple diffusion across a very thin membrane (known as the blood–air barrier), which forms the walls of the pulmonary alveoli (Fig. 10). It consists of the alveolar epithelial cells, their basement membranes and the endothelial cells of the alveolar capillaries (Fig. 10). This blood gas barrier is extremely thin (in humans, on average, 2.2 μm thick). It is folded into about 300 million small air sacs called alveoli (each between 75 and 300 μm in diameter) branching off from the respiratory bronchioles in the lungs, thus providing an extremely large surface area (approximately 145 m2) for gas exchange to occur. The air contained within the alveoli has a semi-permanent volume of about 2.5–3.0 liters which completely surrounds the alveolar capillary blood (Fig. 12). This ensures that equilibration of the partial pressures of the gases in the two compartments is very efficient and occurs very quickly. The blood leaving the alveolar capillaries, which is eventually distributed throughout the body, therefore has a partial pressure of oxygen of 13–14 kPa (100 mmHg), and a partial pressure of carbon dioxide of 5.3 kPa (40 mmHg) (i.e. the same oxygen and carbon dioxide gas tensions as in the alveoli). As mentioned in the section above, the corresponding partial pressures of oxygen and carbon dioxide in the ambient (dry) air at sea level are 21 kPa (160 mmHg) and 0.04 kPa (0.3 mmHg) respectively.
This marked difference between the composition of the alveolar air and that of the ambient air can be maintained because the functional residual capacity is contained in dead-end sacs connected to the outside air by fairly narrow and relatively long tubes (the airways: nose, pharynx, larynx, trachea, bronchi and their branches down to the bronchioles), through which the air has to be breathed both in and out (i.e. there is no unidirectional through-flow as there is in the bird lung). This typical mammalian anatomy combined with the fact that the lungs are not emptied and re-inflated with each breath (leaving a substantial volume of air, of about 2.5–3.0 liters, in the alveoli after exhalation), ensures that the composition of the alveolar air is only minimally disturbed when the 350 ml of fresh air is mixed into it with each inhalation. Thus the animal is provided with a very special "portable atmosphere", whose composition differs significantly from the present-day ambient air. It is this portable atmosphere (the functional residual capacity) to which the blood and therefore the body tissues are exposed – not to the outside air. The resulting arterial partial pressures of oxygen and carbon dioxide are homeostatically controlled. A rise in the arterial partial pressure of CO2 and, to a lesser extent, a fall in the arterial partial pressure of O2, will reflexly cause deeper and faster breathing until the blood gas tensions in the lungs, and therefore the arterial blood, return to normal. The converse happens when the carbon dioxide tension falls, or, again to a lesser extent, the oxygen tension rises: the rate and depth of breathing are reduced until blood gas normality is restored. 
Since the blood arriving in the alveolar capillaries has a partial pressure of O2 of, on average, 6 kPa (45 mmHg), while the pressure in the alveolar air is 13–14 kPa (100 mmHg), there will be a net diffusion of oxygen into the capillary blood, changing the composition of the 3 liters of alveolar air slightly. Similarly, since the blood arriving in the alveolar capillaries has a partial pressure of CO2 of also about 6 kPa (45 mmHg), whereas that of the alveolar air is 5.3 kPa (40 mmHg), there is a net movement of carbon dioxide out of the capillaries into the alveoli. The changes brought about by these net flows of individual gases into and out of the alveolar air necessitate the replacement of about 15% of the alveolar air with ambient air every 5 seconds or so. This is very tightly controlled by the monitoring of the arterial blood gases (which accurately reflect the composition of the alveolar air) by the aortic and carotid bodies, as well as by the blood gas and pH sensor on the anterior surface of the medulla oblongata in the brain. There are also oxygen and carbon dioxide sensors in the lungs, but they primarily determine the diameters of the bronchioles and pulmonary capillaries, and are therefore responsible for directing the flow of air and blood to different parts of the lungs. It is only as a result of accurately maintaining the composition of the 3 liters of alveolar air that with each breath some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air. If more carbon dioxide than usual has been lost by a short period of hyperventilation, respiration will be slowed down or halted until the alveolar partial pressure of carbon dioxide has returned to 5.3 kPa (40 mmHg). It is therefore, strictly speaking, untrue that the primary function of the respiratory system is to rid the body of carbon dioxide "waste".
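The "about 15% every 5 seconds" figure follows from numbers given earlier in the section (350 ml of fresh air per breath, a 12–16 breaths/min resting rate, and an FRC of about 2.5–3.0 liters). A rough consistency check, with the specific rate and FRC chosen from within those quoted ranges:

```python
fresh_per_breath = 350   # ml of fresh air reaching the alveoli per breath (500 - 150)
frc_ml = 2500            # ml; lower end of the quoted 2.5-3.0 L range (assumption)
breaths_per_min = 13     # chosen from within the quoted 12-16/min resting range

seconds_per_breath = 60 / breaths_per_min     # about 4.6 s per breath
breaths_in_5s = 5 / seconds_per_breath        # slightly more than one breath per 5 s

# Fraction of the alveolar air replaced by ambient air every 5 seconds.
replaced = fresh_per_breath * breaths_in_5s / frc_ml
print(f"{replaced:.0%} of alveolar air replaced per 5 s")
```

With these within-range inputs the result comes out close to the ~15% stated in the text; the exact figure depends on where in the quoted ranges the rate and FRC fall.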
The carbon dioxide that is breathed out with each breath could probably more correctly be seen as a byproduct of the body's extracellular fluid carbon dioxide and pH homeostats. If these homeostats are compromised, then a respiratory acidosis or a respiratory alkalosis will occur. In the long run, these can be compensated by renal adjustments to the H+ and HCO3− concentrations in the plasma; but since this takes time, the hyperventilation syndrome can, for instance, occur when agitation or anxiety cause a person to breathe fast and deeply, thus causing a distressing respiratory alkalosis through the blowing off of too much CO2 from the blood into the outside air. Oxygen has a very low solubility in water, and is therefore carried in the blood loosely combined with hemoglobin. The oxygen is held on the hemoglobin by four ferrous iron-containing heme groups per hemoglobin molecule. When all the heme groups carry one O2 molecule each, the blood is said to be "saturated" with oxygen, and no further increase in the partial pressure of oxygen will meaningfully increase the oxygen concentration of the blood. Most of the carbon dioxide in the blood is carried as bicarbonate ions (HCO3−) in the plasma. However, the conversion of dissolved CO2 into HCO3− (through the addition of water) is too slow for the rate at which the blood circulates through the tissues on the one hand, and through the alveolar capillaries on the other. The reaction is therefore catalyzed by carbonic anhydrase, an enzyme inside the red blood cells. The reaction can go in both directions depending on the prevailing partial pressure of CO2. A small amount of carbon dioxide is carried on the protein portion of the hemoglobin molecules as carbamino groups. The total concentration of carbon dioxide (in the form of bicarbonate ions, dissolved CO2, and carbamino groups) in arterial blood (i.e.
after it has equilibrated with the alveolar air) is about 26 mM (or 58 ml/100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml/100 ml blood).

Control of ventilation

Ventilation of the lungs in mammals occurs via the respiratory centers in the medulla oblongata and the pons of the brainstem. These areas form a series of neural pathways which receive information about the partial pressures of oxygen and carbon dioxide in the arterial blood. This information determines the average rate of ventilation of the alveoli of the lungs, to keep these pressures constant. The respiratory center does so via motor nerves which activate the diaphragm and other muscles of respiration. The breathing rate increases when the partial pressure of carbon dioxide in the blood increases. This is detected by central blood gas chemoreceptors on the anterior surface of the medulla oblongata. The aortic and carotid bodies are the peripheral blood gas chemoreceptors, which are particularly sensitive to the arterial partial pressure of O2, though they also respond, but less strongly, to the partial pressure of CO2. At sea level, under normal circumstances, the breathing rate and depth are determined primarily by the arterial partial pressure of carbon dioxide rather than by the arterial partial pressure of oxygen, which is allowed to vary within a fairly wide range before the respiratory centers in the medulla oblongata and pons respond to it by changing the rate and depth of breathing. Exercise increases the breathing rate due to the extra carbon dioxide produced by the enhanced metabolism of the exercising muscles. In addition, passive movements of the limbs also reflexively produce an increase in the breathing rate. Information received from stretch receptors in the lungs limits tidal volume (the depth of inhalation and exhalation).
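The blood gas contents quoted at the start of this passage are given both in millimolar and in ml of gas per 100 ml of blood. The two units are related through the molar volume of a gas; a sketch of the conversion, assuming the ideal-gas molar volume of about 22.4 L/mol at STP (the article itself does not spell out this conversion):

```python
MOLAR_VOLUME_ML_PER_MMOL = 22.4   # ml of gas per mmol at STP (ideal-gas approximation)

def mM_to_ml_per_100ml(conc_mM):
    """Convert a blood gas content in mmol/L to ml of gas per 100 ml of blood."""
    ml_per_liter = conc_mM * MOLAR_VOLUME_ML_PER_MMOL
    return ml_per_liter / 10   # per liter -> per 100 ml

print(f"CO2: {mM_to_ml_per_100ml(26):.0f} ml/100 ml")  # close to the quoted 58
print(f"O2:  {mM_to_ml_per_100ml(9):.0f} ml/100 ml")   # close to the quoted 20
```

Both quoted pairs (26 mM / 58 ml per 100 ml, and 9 mM / 20 ml per 100 ml) are consistent under this approximation.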
Responses to low atmospheric pressures

The alveoli are open (via the airways) to the atmosphere, with the result that alveolar air pressure is exactly the same as the ambient air pressure at sea level, at altitude, or in any artificial atmosphere (e.g. a diving chamber, or decompression chamber) in which the individual is breathing freely. With expansion of the lungs the alveolar air occupies a larger volume, and its pressure falls proportionally, causing air to flow in through the airways, until the pressure in the alveoli is again at the ambient air pressure. The reverse happens during exhalation. This process (of inhalation and exhalation) is exactly the same at sea level, as on top of Mt. Everest, or in a diving chamber or decompression chamber. However, as one rises above sea level the density of the air decreases exponentially (see Fig. 14), halving approximately with every 5500 m rise in altitude. Since the composition of the atmospheric air is almost constant below 80 km, as a result of the continuous mixing effect of the weather, the concentration of oxygen in the air (mmols O2 per liter of ambient air) decreases at the same rate as the fall in air pressure with altitude. Therefore, in order to breathe in the same amount of oxygen per minute, the person has to inhale a proportionately greater volume of air per minute at altitude than at sea level. This is achieved by breathing deeper and faster (i.e. hyperpnea) than at sea level (see below). There is, however, a complication that increases the volume of air that needs to be inhaled per minute (respiratory minute volume) to provide the same amount of oxygen to the lungs at altitude as at sea level. During inhalation, the air is warmed and saturated with water vapor during its passage through the nose passages and pharynx. Saturated water vapor pressure is dependent only on temperature. At a body core temperature of 37 °C it is 6.3 kPa (47.0 mmHg), irrespective of any other influences, including altitude.
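The "halving approximately with every 5500 m" rule amounts to exponential decay and can be written as a one-line formula. A sketch checking it against the pressures used in this section (the rule and the 100 kPa sea-level figure are the article's; the code and the Everest altitude used for the check are illustrative):

```python
SEA_LEVEL_KPA = 100.0
HALVING_ALTITUDE_M = 5500.0   # pressure halves roughly every 5500 m of ascent

def pressure_at(altitude_m):
    """Approximate atmospheric pressure (kPa), assuming halving every 5500 m."""
    return SEA_LEVEL_KPA * 0.5 ** (altitude_m / HALVING_ALTITUDE_M)

print(f"{pressure_at(5500):.0f} kPa")   # 50 kPa, as the halving rule implies
print(f"{pressure_at(8848):.0f} kPa")   # ~33 kPa; the section quotes 33.7 kPa on Everest
```

The rough rule reproduces the quoted Everest pressure to within about 1 kPa, which is as close as a single-parameter exponential can be expected to get.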
Thus at sea level, where the ambient atmospheric pressure is about 100 kPa, the moistened air that flows into the lungs from the trachea consists of water vapor (6.3 kPa), nitrogen (74.0 kPa), oxygen (19.7 kPa) and trace amounts of carbon dioxide and other gases (a total of 100 kPa). In dry air the partial pressure of O2 at sea level is 21.0 kPa (i.e. 21% of 100 kPa), compared to the 19.7 kPa of oxygen entering the alveolar air. (The tracheal partial pressure of oxygen is 21% of [100 kPa – 6.3 kPa] = 19.7 kPa). At the summit of Mt. Everest (at an altitude of 8,848 m or 29,029 ft), the total atmospheric pressure is 33.7 kPa, of which 7.1 kPa (or 21%) is oxygen. The air entering the lungs also has a total pressure of 33.7 kPa, of which 6.3 kPa is, unavoidably, water vapor (as it is at sea level). This reduces the partial pressure of oxygen entering the alveoli to 5.8 kPa (or 21% of [33.7 kPa – 6.3 kPa] = 5.8 kPa). The reduction in the partial pressure of oxygen in the inhaled air is therefore substantially greater than the reduction of the total atmospheric pressure at altitude would suggest (on Mt Everest: 5.8 kPa vs. 7.1 kPa). A further minor complication exists at altitude. If the volume of the lungs were to be instantaneously doubled at the beginning of inhalation, the air pressure inside the lungs would be halved. This happens regardless of altitude. Thus, halving of the sea level air pressure (100 kPa) results in an intrapulmonary air pressure of 50 kPa. Doing the same at 5500 m, where the atmospheric pressure is only 50 kPa, the intrapulmonary air pressure falls to 25 kPa. Therefore, the same change in lung volume at sea level results in a 50 kPa difference in pressure between the ambient air and the intrapulmonary air, whereas it results in a difference of only 25 kPa at 5500 m. The driving pressure forcing air into the lungs during inhalation is therefore halved at this altitude.
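The tracheal oxygen arithmetic in this passage reduces to a single formula: the inspired PO2 is 21% of the barometric pressure minus the fixed 6.3 kPa of water vapor. A minimal sketch reproducing both worked examples from the text:

```python
FIO2 = 0.21    # oxygen fraction of dry air
PH2O = 6.3     # kPa; saturated water vapor pressure at 37 degrees C body temperature

def inspired_po2(barometric_kpa):
    """Partial pressure of O2 (kPa) in fully humidified inspired (tracheal) air."""
    return FIO2 * (barometric_kpa - PH2O)

print(f"Sea level: {inspired_po2(100):.1f} kPa")    # 19.7 kPa, as in the text
print(f"Everest:   {inspired_po2(33.7):.1f} kPa")   # 5.8 kPa, as in the text
```

Because PH2O is a fixed subtraction rather than a percentage, its relative cost grows as barometric pressure falls, which is exactly why the inspired PO2 on Everest (5.8 kPa) is disproportionately lower than the ambient 7.1 kPa would suggest.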
The rate of inflow of air into the lungs during inhalation at sea level is therefore twice that which occurs at 5500 m. However, in reality, inhalation and exhalation occur far more gently and less abruptly than in the example given. The differences between the atmospheric and intrapulmonary pressures, driving air in and out of the lungs during the breathing cycle, are in the region of only 2–3 kPa. A doubling or more of these small pressure differences could be achieved only by very major changes in the breathing effort at high altitudes. All of the above influences of low atmospheric pressures on breathing are accommodated primarily by breathing deeper and faster (hyperpnea). The exact degree of hyperpnea is determined by the blood gas homeostat, which regulates the partial pressures of oxygen and carbon dioxide in the arterial blood. This homeostat prioritizes the regulation of the arterial partial pressure of carbon dioxide over that of oxygen at sea level. That is to say, at sea level the arterial partial pressure of CO2 is maintained at very close to 5.3 kPa (or 40 mmHg) under a wide range of circumstances, at the expense of the arterial partial pressure of O2, which is allowed to vary within a very wide range of values, before eliciting a corrective ventilatory response. However, when the atmospheric pressure (and therefore the partial pressure of O2 in the ambient air) falls to below 50-75% of its value at sea level, oxygen homeostasis is given priority over carbon dioxide homeostasis. This switch-over occurs at an elevation of about 2500 m (or about 8000 ft). If this switch occurs relatively abruptly, the hyperpnea at high altitude will cause a severe fall in the arterial partial pressure of carbon dioxide, with a consequent rise in the pH of the arterial plasma. This is one contributor to high altitude sickness. 
On the other hand, if the switch to oxygen homeostasis is incomplete, then hypoxia may complicate the clinical picture with potentially fatal results. There are oxygen sensors in the smaller bronchi and bronchioles. In response to low partial pressures of oxygen in the inhaled air these sensors reflexively cause the pulmonary arterioles to constrict. (This is the exact opposite of the corresponding reflex in the tissues, where low arterial partial pressures of O2 cause arteriolar vasodilation.) At altitude this causes the pulmonary arterial pressure to rise, resulting in a much more even distribution of blood flow to the lungs than occurs at sea level. At sea level, the pulmonary arterial pressure is very low, with the result that the tops of the lungs receive far less blood than the bases, which are relatively over-perfused with blood. It is only in the middle of the lungs that the blood and air flow to the alveoli are ideally matched. At altitude, this variation in the ventilation/perfusion ratio of alveoli from the tops of the lungs to the bottoms is eliminated, with all the alveoli perfused and ventilated in more or less the physiologically ideal manner. This is a further important contributor to the acclimatization to high altitudes and low oxygen pressures. The kidneys measure the oxygen content (mmol O2/liter blood, rather than the partial pressure of O2) of the arterial blood. When the oxygen content of the blood is chronically low, as at high altitude, the oxygen-sensitive kidney cells secrete erythropoietin (EPO) into the blood. This hormone stimulates the red bone marrow to increase its rate of red cell production, which leads to an increase in the hematocrit of the blood, and a consequent increase in its oxygen carrying capacity (due to the now high hemoglobin content of the blood).
In other words, at the same arterial partial pressure of O2, a person with a high hematocrit carries more oxygen per liter of blood than a person with a lower hematocrit does. High altitude dwellers therefore have higher hematocrits than sea-level residents.
Other functions of the lungs
Local defenses
Irritation of nerve endings within the nasal passages or airways can induce a cough reflex and sneezing. These responses cause air to be expelled forcefully from the trachea or nose, respectively. In this manner, irritants caught in the mucus which lines the respiratory tract are expelled or moved to the mouth where they can be swallowed. During coughing, contraction of the smooth muscle in the airway walls narrows the trachea by pulling the ends of the cartilage plates together and by pushing soft tissue into the lumen. This increases the expired airflow rate to dislodge and remove any irritant particle or mucus. The respiratory epithelium can secrete a variety of molecules that aid in the defense of the lungs. These include secretory immunoglobulins (IgA), collectins, defensins and other peptides and proteases, reactive oxygen species, and reactive nitrogen species. These secretions can act directly as antimicrobials to help keep the airway free of infection. A variety of chemokines and cytokines are also secreted that recruit the traditional immune cells and others to the site of infections. The immune function of surfactant is primarily attributed to two proteins: SP-A and SP-D. These proteins can bind to sugars on the surface of pathogens and thereby opsonize them for uptake by phagocytes. Surfactant also regulates inflammatory responses and interacts with the adaptive immune response. Surfactant degradation or inactivation may contribute to enhanced susceptibility to lung inflammation and infection. Most of the respiratory system is lined with mucous membranes that contain mucosa-associated lymphoid tissue, which produces white blood cells such as lymphocytes.
Prevention of alveolar collapse
The lungs make a surfactant, a surface-active lipoprotein complex (phospholipoprotein) formed by type II alveolar cells. It floats on the surface of the thin watery layer which lines the insides of the alveoli, reducing the water's surface tension. The surface tension of a watery surface (the water-air interface) tends to make that surface shrink. When that surface is curved as it is in the alveoli of the lungs, the shrinkage of the surface decreases the diameter of the alveoli. The more acute the curvature of the water-air interface, the greater the tendency for the alveolus to collapse. This has three effects. Firstly, the surface tension inside the alveoli resists expansion of the alveoli during inhalation (i.e. it makes the lung stiff, or non-compliant). Surfactant reduces the surface tension and therefore makes the lungs more compliant, or less stiff, than if it were not there. Secondly, the diameters of the alveoli increase and decrease during the breathing cycle. This means that the alveoli have a greater tendency to collapse (i.e. cause atelectasis) at the end of exhalation than at the end of inhalation. Since surfactant floats on the watery surface, its molecules are more tightly packed together when the alveoli shrink during exhalation. This causes them to have a greater surface tension-lowering effect when the alveoli are small than when they are large (as at the end of inhalation, when the surfactant molecules are more widely spaced). The tendency for the alveoli to collapse is therefore almost the same at the end of exhalation as at the end of inhalation. Thirdly, the surface tension of the curved watery layer lining the alveoli tends to draw water from the lung tissues into the alveoli. Surfactant reduces this danger to negligible levels, and keeps the alveoli dry. Pre-term babies who are unable to manufacture surfactant have lungs that tend to collapse each time they breathe out.
Unless treated, this condition, called respiratory distress syndrome, is fatal. Basic scientific experiments, carried out using cells from chicken lungs, support the potential for using steroids as a means of furthering the development of type II alveolar cells. In fact, once a premature birth is threatened, every effort is made to delay the birth, and a series of steroid injections is frequently administered to the mother during this delay in an effort to promote lung maturation.
Contributions to whole body functions
The lung vessels contain a fibrinolytic system that dissolves clots that may have arrived in the pulmonary circulation by embolism, often from the deep veins in the legs. They also release a variety of substances that enter the systemic arterial blood, and they remove other substances from the systemic venous blood that reach them via the pulmonary artery. Some prostaglandins are removed from the circulation, while others are synthesized in the lungs and released into the blood when lung tissue is stretched. The lungs activate one hormone. The physiologically inactive decapeptide angiotensin I is converted to the aldosterone-releasing octapeptide, angiotensin II, in the pulmonary circulation. The reaction occurs in other tissues as well, but it is particularly prominent in the lungs. Angiotensin II also has a direct effect on arteriolar walls, causing arteriolar vasoconstriction, and consequently a rise in arterial blood pressure. Large amounts of the angiotensin-converting enzyme responsible for this activation are located on the surfaces of the endothelial cells of the alveolar capillaries. The converting enzyme also inactivates bradykinin. Circulation time through the alveolar capillaries is less than one second, yet 70% of the angiotensin I reaching the lungs is converted to angiotensin II in a single trip through the capillaries. Four other peptidases have been identified on the surface of the pulmonary endothelial cells.
Vocalization
The movement of gas through the larynx, pharynx and mouth allows humans to speak, or phonate. Vocalization, or singing, in birds occurs via the syrinx, an organ located at the base of the trachea. The vibration of air flowing across the larynx (vocal cords), in humans, and the syrinx, in birds, results in sound. Because of this, gas movement is vital for communication purposes.
Temperature control
Panting in dogs, cats, birds and some other animals provides a means of reducing body temperature, by evaporating saliva in the mouth (instead of evaporating sweat on the skin).
Clinical significance
Disorders of the respiratory system can be classified into several general groups:
Airway obstructive conditions (e.g., emphysema, bronchitis, asthma)
Pulmonary restrictive conditions (e.g., fibrosis, sarcoidosis, alveolar damage, pleural effusion)
Vascular diseases (e.g., pulmonary edema, pulmonary embolism, pulmonary hypertension)
Infectious, environmental and other "diseases" (e.g., pneumonia, tuberculosis, asbestosis, particulate pollutants)
Primary cancers (e.g. bronchial carcinoma, mesothelioma)
Secondary cancers (e.g. cancers that originated elsewhere in the body, but have seeded themselves in the lungs)
Insufficient surfactant (e.g. respiratory distress syndrome in pre-term babies)
Disorders of the respiratory system are usually treated by a pulmonologist and respiratory therapist. Where there is an inability to breathe or insufficiency in breathing, a medical ventilator may be used.
Exceptional mammals
Cetaceans
Horses
Horses are obligate nasal breathers: unlike many other mammals, they do not have the option of breathing through their mouths and must take in air through their noses. A flap of tissue called the soft palate blocks off the pharynx from the mouth (oral cavity) of the horse, except when swallowing.
This helps to prevent the horse from inhaling food, but means that, even in respiratory distress, a horse can only breathe through its nostrils.
Elephants
The elephant is the only mammal known to have no pleural space. Instead, the parietal and visceral pleura are both composed of dense connective tissue and joined to each other via loose connective tissue. This lack of a pleural space, along with an unusually thick diaphragm, are thought to be evolutionary adaptations allowing the elephant to remain underwater for long periods while breathing through its trunk, which it can raise above the surface like a snorkel. In the elephant the lungs are attached to the diaphragm, and breathing relies mainly on the diaphragm rather than the expansion of the ribcage.
Birds
The respiratory system of birds differs significantly from that found in mammals. Firstly, they have rigid lungs which do not expand and contract during the breathing cycle. Instead an extensive system of air sacs (Fig. 15) distributed throughout their bodies acts as the bellows, drawing environmental air into the sacs and expelling the spent air after it has passed through the lungs (Fig. 18). Birds also do not have diaphragms or pleural cavities. Bird lungs are smaller than those in mammals of comparable size, but the air sacs account for 15% of the total body volume, compared to the 7% devoted to the alveoli, which act as the bellows in mammals. Inhalation and exhalation are brought about by alternately increasing and decreasing the volume of the entire thoraco-abdominal cavity (or coelom) using both their abdominal and costal muscles. During inhalation the muscles attached to the vertebral ribs (Fig. 17) contract, angling them forwards and outwards. This pushes the sternal ribs, to which they are attached at almost right angles, downwards and forwards, taking the sternum (with its prominent keel) in the same direction (Fig. 17).
This increases both the vertical and transverse diameters of the thoracic portion of the trunk. The forward and downward movement of, particularly, the posterior end of the sternum pulls the abdominal wall downwards, increasing the volume of that region of the trunk as well. The increase in volume of the entire trunk cavity reduces the air pressure in all the thoraco-abdominal air sacs, causing them to fill with air as described below. During exhalation the external oblique muscle, which is attached to the sternum and vertebral ribs anteriorly and to the pelvis (pubis and ilium in Fig. 17) posteriorly (forming part of the abdominal wall), reverses the inhalatory movement, while compressing the abdominal contents, thus increasing the pressure in all the air sacs. Air is therefore expelled from the respiratory system in the act of exhalation. During inhalation air enters the trachea via the nostrils and mouth, and continues to just beyond the syrinx, at which point the trachea branches into two primary bronchi, going to the two lungs (Fig. 16). The primary bronchi enter the lungs to become the intrapulmonary bronchi, which give off a set of parallel branches called ventrobronchi and, a little further on, an equivalent set of dorsobronchi (Fig. 16). The ends of the intrapulmonary bronchi discharge air into the posterior air sacs at the caudal end of the bird. Each pair of dorsobronchi and ventrobronchi is connected by a large number of parallel microscopic air capillaries (or parabronchi) where gas exchange occurs (Fig. 16). As the bird inhales, tracheal air flows through the intrapulmonary bronchi into the posterior air sacs, as well as into the dorsobronchi, but not into the ventrobronchi (Fig. 18). This is due to the bronchial architecture, which directs the inhaled air away from the openings of the ventrobronchi, into the continuation of the intrapulmonary bronchus towards the dorsobronchi and posterior air sacs.
From the dorsobronchi the inhaled air flows through the parabronchi (and therefore the gas exchanger) to the ventrobronchi, from where the air can only escape into the expanding anterior air sacs. So, during inhalation, both the posterior and anterior air sacs expand, the posterior air sacs filling with fresh inhaled air, while the anterior air sacs fill with "spent" (oxygen-poor) air that has just passed through the lungs. During exhalation the pressure in the posterior air sacs (which were filled with fresh air during inhalation) increases due to the contraction of the oblique muscle described above. The aerodynamics of the interconnecting openings from the posterior air sacs to the dorsobronchi and intrapulmonary bronchi ensures that the air leaves these sacs in the direction of the lungs (via the dorsobronchi), rather than returning down the intrapulmonary bronchi (Fig. 18). From the dorsobronchi the fresh air from the posterior air sacs flows through the parabronchi (in the same direction as occurred during inhalation) into the ventrobronchi. The air passages connecting the ventrobronchi and anterior air sacs to the intrapulmonary bronchi direct the "spent", oxygen-poor air from these two organs to the trachea, from where it escapes to the exterior. Oxygenated air therefore flows constantly (during the entire breathing cycle) in a single direction through the parabronchi. The blood flow through the bird lung is at right angles to the flow of air through the parabronchi, forming a cross-current flow exchange system (Fig. 19). The partial pressure of oxygen in the parabronchi declines along their lengths as O2 diffuses into the blood. The blood capillaries leaving the exchanger near the entrance of airflow take up more O2 than do the capillaries leaving near the exit end of the parabronchi.
When the contents of all the capillaries mix, the final partial pressure of oxygen of the mixed pulmonary venous blood is higher than that of the exhaled air, but nevertheless less than half that of the inhaled air, thus achieving roughly the same systemic arterial blood partial pressure of oxygen as mammals do with their bellows-type lungs. The trachea is an area of dead space: the oxygen-poor air it contains at the end of exhalation is the first air to re-enter the posterior air sacs and lungs. The dead space volume of a bird is, on average, 4.5 times greater than that of mammals of the same size. Birds with long necks inevitably have long tracheae, and must therefore take deeper breaths than mammals do to make allowance for their greater dead space volumes. In some birds (e.g. the whooper swan, Cygnus cygnus, the white spoonbill, Platalea leucorodia, the whooping crane, Grus americana, and the helmeted curassow, Pauxi pauxi) the trachea, which in some cranes can be 1.5 m long, is coiled back and forth within the body, drastically increasing the dead space ventilation. The purpose of this extraordinary feature is unknown.
Reptiles
The anatomical structure of the lungs is less complex in reptiles than in mammals, with reptiles lacking the very extensive airway tree structure found in mammalian lungs. Gas exchange in reptiles, however, still occurs in alveoli. Reptiles do not possess a diaphragm. Thus, breathing occurs via a change in the volume of the body cavity, which is controlled by contraction of intercostal muscles in all reptiles except turtles. In turtles, contraction of specific pairs of flank muscles governs inhalation and exhalation.
Amphibians
Both the lungs and the skin serve as respiratory organs in amphibians. The ventilation of the lungs in amphibians relies on positive pressure ventilation.
Muscles lower the floor of the oral cavity, enlarging it and drawing in air through the nostrils into the oral cavity. With the nostrils and mouth closed, the floor of the oral cavity is then pushed up, which forces air down the trachea into the lungs. The skin of these animals is highly vascularized and moist, with moisture maintained via secretion of mucus from specialised cells, and is involved in cutaneous respiration. While the lungs are the primary organs for gas exchange between the blood and the environmental air (when out of the water), the skin's unique properties aid rapid gas exchange when amphibians are submerged in oxygen-rich water. Some amphibians have gills in the early stages of their development (e.g. tadpoles of frogs), while others retain them into adulthood (e.g. some salamanders).
Fish
Oxygen is poorly soluble in water. Fully aerated fresh water therefore contains only 8–10 ml O2/liter compared to the O2 concentration of 210 ml/liter in the air at sea level. Furthermore, the coefficient of diffusion (i.e. the rate at which a substance diffuses from a region of high concentration to one of low concentration, under standard conditions) of the respiratory gases is typically 10,000 times faster in air than in water. Thus oxygen, for instance, has a diffusion coefficient of 17.6 mm2/s in air, but only 0.0021 mm2/s in water. The corresponding values for carbon dioxide are 16 mm2/s in air and 0.0016 mm2/s in water. This means that when oxygen is taken up from the water in contact with a gas exchanger, it is replaced considerably more slowly by the oxygen from the oxygen-rich regions a small distance away from the exchanger than it would be in air. Fish have developed gills to deal with these problems. Gills are specialized organs containing filaments, which further divide into lamellae. The lamellae contain a dense, thin-walled capillary network that exposes a large gas exchange surface area to the very large volumes of water passing over them.
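The practical consequence of the diffusion coefficients quoted above can be illustrated with the standard one-dimensional estimate t ≈ x²/(2D) for the characteristic time a molecule takes to diffuse a distance x (a back-of-the-envelope figure, not a model of a real gill):

```python
D_O2_AIR = 17.6      # mm^2/s, diffusion coefficient of O2 in air (from the text)
D_O2_WATER = 0.0021  # mm^2/s, diffusion coefficient of O2 in water (from the text)

def diffusion_time(distance_mm, d_coeff):
    """Characteristic 1-D diffusion time, t ~ x^2 / (2D), in seconds."""
    return distance_mm ** 2 / (2 * d_coeff)

# Time for O2 to cross a 1 mm layer:
print(diffusion_time(1.0, D_O2_AIR))    # a few hundredths of a second in air
print(diffusion_time(1.0, D_O2_WATER))  # several minutes in water
print(D_O2_WATER and D_O2_AIR / D_O2_WATER)  # ratio of the coefficients
```

The ratio of the two coefficients (roughly 8,400) is the origin of the "typically 10,000 times faster" order-of-magnitude figure, and the minutes-long diffusion time in water is why gills must constantly renew the water in contact with the exchange surface.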
Gills use a countercurrent exchange system that increases the efficiency of oxygen-uptake from the water. Fresh oxygenated water taken in through the mouth is uninterruptedly "pumped" through the gills in one direction, while the blood in the lamellae flows in the opposite direction, creating the countercurrent blood and water flow (Fig. 22), on which the fish's survival depends. Water is drawn in through the mouth by closing the operculum (gill cover) and enlarging the mouth cavity (Fig. 23). Simultaneously the gill chambers enlarge, producing a lower pressure there than in the mouth, causing water to flow over the gills. The mouth cavity then contracts, inducing the closure of the passive oral valves, thereby preventing the back-flow of water from the mouth (Fig. 23). The water in the mouth is, instead, forced over the gills, while the gill chambers contract, emptying the water they contain through the opercular openings (Fig. 23). Back-flow into the gill chamber during the inhalatory phase is prevented by a membrane along the ventroposterior border of the operculum (diagram on the left in Fig. 23). Thus the mouth cavity and gill chambers act alternately as suction pump and pressure pump to maintain a steady flow of water over the gills in one direction. Since the blood in the lamellar capillaries flows in the opposite direction to that of the water, the consequent countercurrent flow of blood and water maintains steep concentration gradients for oxygen and carbon dioxide along the entire length of each capillary (lower diagram in Fig. 22). Oxygen is, therefore, able to continually diffuse down its gradient into the blood, and the carbon dioxide down its gradient into the water. Although countercurrent exchange systems theoretically allow an almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood.
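The advantage of countercurrent over cocurrent (parallel) flow can be illustrated with a toy steady-state model: the lamella is divided into segments, each stream gives up or takes on oxygen in proportion to the local concentration difference, and the sweeps are repeated until the profiles settle. All names and parameter values here are illustrative, not physiological measurements:

```python
def gill_exchange(countercurrent=True, n=50, k=0.1, sweeps=2000):
    """Toy model of O2 transfer across a gill lamella.

    Water enters segment 0 fully oxygenated (concentration 1.0);
    blood enters deoxygenated (0.0) at the opposite end if the flow
    is countercurrent, or at the same end if cocurrent.  Returns the
    O2 concentration of the blood leaving the exchanger.
    """
    water = [0.0] * n
    blood = [0.0] * n
    for _ in range(sweeps):
        prev = 1.0                       # fresh water at its inlet
        for i in range(n):
            water[i] = prev - k * (prev - blood[i])
            prev = water[i]
        order = range(n - 1, -1, -1) if countercurrent else range(n)
        prev = 0.0                       # deoxygenated blood at its inlet
        for i in order:
            blood[i] = prev + k * (water[i] - prev)
            prev = blood[i]
    return blood[0] if countercurrent else blood[-1]

counter = gill_exchange(countercurrent=True)
parallel = gill_exchange(countercurrent=False)
print(counter, parallel)
```

With cocurrent flow the two streams can only equilibrate at about the average of the two inlet values (roughly 0.5 here), whereas countercurrent flow maintains a gradient along the whole lamella, so the exiting blood approaches the concentration of the incoming water.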
In certain active pelagic sharks, water passes through the mouth and over the gills while they are moving, in a process known as "ram ventilation". While at rest, most sharks pump water over their gills, as most bony fish do, to ensure that oxygenated water continues to flow over their gills. But a small number of species have lost the ability to pump water through their gills and must swim without rest. These species are obligate ram ventilators and would presumably asphyxiate if unable to move. Obligate ram ventilation also occurs in some pelagic bony fish species. A few fish can obtain oxygen for brief periods of time from air swallowed above the surface of the water. Thus, lungfish possess one or two lungs, and the labyrinth fish have developed a special "labyrinth organ", which characterizes this suborder of fish. The labyrinth organ is a much-folded suprabranchial accessory breathing organ. It is formed by a vascularized expansion of the epibranchial bone of the first gill arch, and is used for respiration in air. This organ allows labyrinth fish to take in oxygen directly from the air, instead of taking it from the water in which they reside through the use of gills. The labyrinth organ helps the oxygen in the inhaled air to be absorbed into the bloodstream. As a result, labyrinth fish can survive for a short period of time out of water, as they can inhale the air around them, provided they stay moist. Labyrinth fish are not born with functional labyrinth organs. The development of the organ is gradual, and most juvenile labyrinth fish breathe entirely with their gills, developing the labyrinth organs as they grow older.
Invertebrates
Arthropods
Some species of crab use a respiratory organ called a branchiostegal lung. Its gill-like structure increases the surface area for gas exchange, making it more suited to taking oxygen from air than from water.
Some of the smallest spiders and mites can breathe simply by exchanging gas through the surface of the body. Larger spiders, scorpions and other arthropods use a primitive book lung.
Insects
Most insects breathe passively through their spiracles (special openings in the exoskeleton), and the air reaches every part of the body by means of a series of smaller and smaller tubes called 'tracheae' when their diameters are relatively large, and 'tracheoles' when their diameters are very small. The tracheoles make contact with individual cells throughout the body. They are partially filled with fluid, which can be withdrawn from the individual tracheoles when the tissues, such as muscles, are active and have a high demand for oxygen, bringing the air closer to the active cells. This is probably brought about by the buildup of lactic acid in the active muscles causing an osmotic gradient, moving the water out of the tracheoles and into the active cells. Diffusion of gases is effective over small distances but not over larger ones; this is one of the reasons insects are all relatively small. Insects which do not have spiracles and tracheae, such as some Collembola, breathe directly through their skins, also by diffusion of gases. The number of spiracles an insect has varies between species; however, they always come in pairs, one on each side of the body, and usually one pair per segment. Some of the Diplura have eleven, with four pairs on the thorax, but in most of the ancient forms of insects, such as dragonflies and grasshoppers, there are two thoracic and eight abdominal spiracles. However, in most of the remaining insects, there are fewer. It is at the level of the tracheoles that oxygen is delivered to the cells for respiration. Insects were once believed to exchange gases with the environment continuously by the simple diffusion of gases into the tracheal system.
More recently, however, large variation in insect ventilatory patterns has been documented and insect respiration appears to be highly variable. Some small insects do not demonstrate continuous respiratory movements and may lack muscular control of the spiracles. Others, however, utilize muscular contraction of the abdomen along with coordinated spiracle contraction and relaxation to generate cyclical gas exchange patterns and to reduce water loss into the atmosphere. The most extreme form of these patterns is termed discontinuous gas exchange cycles.
Molluscs
Molluscs generally possess gills that allow gas exchange between the aqueous environment and their circulatory systems. These animals also possess a heart that pumps blood containing hemocyanin as its oxygen-capturing molecule. Hence, this respiratory system is similar to that of vertebrate fish. The respiratory system of gastropods can include either gills or a lung.
Plants
Plants use carbon dioxide gas in the process of photosynthesis, and exhale oxygen gas as waste. The overall chemical equation of photosynthesis is 6 CO2 (carbon dioxide) + 6 H2O (water) → C6H12O6 (glucose) + 6 O2 (oxygen), driven by the energy of sunlight. Photosynthesis uses electrons on the carbon atoms as the repository for the energy obtained from sunlight. Respiration is the opposite of photosynthesis. It reclaims the energy to power chemical reactions in cells. In so doing the carbon atoms and their electrons are combined with oxygen, forming CO2, which is easily removed from both the cells and the organism. Plants use both processes: photosynthesis to capture the energy and oxidative metabolism to use it. Plant respiration is limited by the process of diffusion. Plants take in carbon dioxide through holes, known as stomata, that can open and close on the undersides of their leaves and sometimes other parts of their anatomy. Most plants require some oxygen for catabolic processes (break-down reactions that release energy).
But the quantity of O2 used per hour is small as they are not involved in activities that require high rates of aerobic metabolism. Their requirement for air, however, is very high as they need CO2 for photosynthesis, which constitutes only 0.04% of the environmental air. Thus, to make 1 g of glucose requires the removal of all the CO2 from about 1,870 liters of air. But inefficiencies in the photosynthetic process cause considerably greater volumes of air to be used.
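The air-volume figure follows from the stoichiometry of photosynthesis: each mole of glucose fixes six moles of CO2. A quick check, using an ideal-gas molar volume of 22.4 L/mol and the 0.04% CO2 fraction quoted above (constant names are our own):

```python
M_GLUCOSE = 180.16     # g/mol, molar mass of C6H12O6
MOLAR_VOLUME = 22.414  # L/mol for an ideal gas at 0 degrees C, 101.325 kPa
CO2_FRACTION = 0.0004  # ~0.04% of ambient air by volume

def air_volume_per_gram_glucose():
    """Liters of air whose entire CO2 content is needed to build 1 g of glucose."""
    mol_co2 = 6.0 / M_GLUCOSE         # 6 CO2 fixed per glucose molecule
    vol_co2 = mol_co2 * MOLAR_VOLUME  # liters of pure CO2
    return vol_co2 / CO2_FRACTION     # liters of ambient air containing that CO2

print(round(air_volume_per_gram_glucose()))  # roughly 1,870 L of air per gram
```

About 0.033 mol of CO2 (0.75 L of pure gas) must be scavenged from nearly two cubic meters of air for every gram of glucose formed.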
Trail
A trail, also known as a path or track, is an unpaved lane or a small paved road not intended for usage by motorized vehicles, usually passing through a natural area. In the United Kingdom and Ireland, a path or footpath is the preferred term for a pedestrian or hiking trail. The term is also applied in North America to accompanying routes along rivers, and sometimes to highways. In the US, the term was historically used for a route into or through wild territory used by explorers and migrants (e.g. the Oregon Trail). In the United States, "trace" is a synonym for trail, as in Natchez Trace. Some trails are dedicated to a single use such as walking, cycling, horse riding, snowshoeing or cross-country skiing; others, as in the case of a bridleway in the UK, are shared-use and can be used by pedestrians, cyclists and equestrians alike. Although most trails are for low-traffic, non-motorized usage, there are also unpaved trails used by dirt bikes, quad bikes and other off-road vehicles, usually for extreme sports and rally races. In some places, like the Alps, trails are used by alpine agrarian communities for moving cattle and other livestock.
Usage
In Australia, the term track can be used interchangeably with trail or walk, and can refer to anything from a dirt road to an unpaved pedestrian path. In New Zealand, the terms track or walkway are used almost exclusively except when referring to cross-country skiing: "walkways vary enormously in nature, from short urban strolls, to moderate coastal locations, to challenging tramps [hikes] in the high country [mountains]". Walkway is used similarly in St. John's, Newfoundland, Canada, where the "Grand Concourse" is an integrated walkway system. In the United Kingdom, the term trail is in common usage. Longer distance walking routes, and government-promoted long-distance paths, collectively known as National Trails, are also frequently called ways, as in the Pennine Way and South Downs Way.
Generally, the term footpath is preferred for pedestrian routes, including long-distance trails, and is used for urban paths and sometimes in place of pavement. Track is used for wider paths (wide enough for vehicles), often used for hiking. The terms bridleway, byway and restricted byway are all recognised legal terms and are, to a greater or lesser extent, in general usage. The increased popularity of mountain biking has led to a proliferation of mountain bike trails in many countries. Often these will be grouped to form larger complexes, known as trail centers. In the early years of the 20th century, the term auto trail was used for a marked highway route, and trail is now used to designate routes, including highway routes, promoted for tourist interest, like the Cabot Trail, Nova Scotia, Canada, and the Quilt Trails in the US. The term trail has also been used by developers and urban planners for a variety of modern paved roads, highways, and boulevards, and some highways continue to be officially called a trail, such as the Susquehanna Trail in Pennsylvania, a designation that varies from a two-lane road to a four-lane freeway. An unusual use of the term is in the Canadian province of Alberta, which has multi-lane freeways called trails.
History
Animals created the first trails, which were "later adapted by humans". Subsequently, farmers moved cattle to market along drove roads and between winter and summer grazing, creating trails. More recently, former industrial routes, such as railway rights of way and canal towpaths, have been turned into recreational trails. Many historic routes, like the Silk Road, the Amber Road and the Royal Road of the Persian Empire, existed before the Christian era and covered great distances. The Post Track, a prehistoric causeway in the valley of the River Brue in the Somerset Levels, England, is one of the oldest known constructed trackways and dates from around 3838 BC.
The idea of following a path or track for exercise or pleasure developed during the 18th century in Europe, and arose because of changing attitudes to the landscape and nature associated with the Romantic movement. In earlier times, walking generally indicated poverty and was associated with vagrancy. In previous centuries long walks were undertaken as part of religious pilgrimages, and this tradition continues throughout the world. The first footpath built specifically for recreational hiking in America, and likely the world, is the Crawford Path in the White Mountains of New Hampshire. The path was blazed in 1819 by Abel Crawford and his son, Ethan Allen. Originally 8.25 miles in length (now 8.5 miles), the trail leads to the summit of Mt. Washington.
Types
Trails can be located in different settings for various uses. These can include:
Segregated
Trail segregation, the practice of designating certain trails as having a specific preferred or exclusive use, is increasingly common and diverse. For example, bike trails are used not only on roads open to motor vehicles but also in trail systems open to other trail users. Some trails are segregated for shared use by equestrians and mountain bikers, or reserved for one of those uses alone. Designated "wilderness area" trails may be segregated for non-wheeled use, permitting backpacking and horses but not mountain bikes or motorized vehicles. Often, trail segregation for a particular use is accompanied by prohibitions against that use on other trails within the trail system. Trail segregation may be supported by signage, markings, trail design and construction (especially the selection of tread materials), and by separation between parallel treads. Separation may be achieved by "natural" barriers including distance, ditching, banking, grading, and vegetation, and by "artificial" barriers including fencing, curbing, and walls.
Bicycle

Bicycle trails encompass a wide variety of trail types, including shared-use paths used for commuting, off-road cross-country trails and downhill mountain bike trails. The number of off-road cycle trails has increased significantly, along with the popularity of mountain bikes. Off-road bicycle trails are generally function-specific and most commonly waymarked along their route. They may take the form of single routes or form part of larger complexes, known as trail centers. Off-road trails often incorporate a mix of challenging terrain: singletrack, smooth fireroads, and even paved paths. Trails with an easy or moderate technical complexity are generally deemed cross-country trails, while trails that are difficult even for experienced riders are more often dubbed all-mountain, freeride, or downhill. Downhilling is popular at ski resorts like Mammoth Mountain in California or Whistler Blackcomb in British Columbia, where ski lifts are used to get bikes and riders to the top of the mountain. EuroVelo bicycle routes are a network of (currently 17) long-distance cycling routes criss-crossing Europe in various stages of completion. EuroVelo is a project of the European Cyclists' Federation (ECF). EuroVelo routes can be used for bicycle touring across the continent, and by local people making short journeys. The routes comprise both existing national bike routes, such as the Dutch LF-Routes, the German D-Routes, and the British National Cycle Network, and existing general-purpose roads, together with new stretches of cycle routes to connect them. Off-road cycling can cause soil erosion and habitat destruction if not carried out on established trails, particularly when trails are wet, though overall, cycling may not have more of an impact than other trail users.

Cross-country skiing

In cross-country skiing, a trail is also called a track or piste. Recreational cross-country skiing is also called touring, especially in Europe.
Some skiers stay out for extended periods using tents and equipment similar to that of bushwalkers and hikers, whereas others take shorter trips from ski resorts on maintained trails. In some countries, organizations maintain a network of huts for use by cross-country skiers in wintertime. For example, the Norwegian Mountain Touring Association maintains over 400 huts stretching across hundreds of kilometres of trails that hikers use in the summer and skiers in the winter.

Equestrian

Horse riding and other equestrian uses of trails continue to be a popular activity for many trail users. Horses can usually, though not always, negotiate much the same grades as hikers, and can more easily clear obstacles in the path such as logs. The Bicentennial National Trail (BNT) in Australia is one of the longest marked multi-use trails in the world, stretching from Cooktown, Queensland, through New South Wales to Healesville, Victoria. This trail runs the length of the rugged Great Dividing Range through national parks, private property and alongside wilderness areas. One of the objectives was to develop a trail that linked up the brumby tracks, mustering and stock routes along the Great Dividing Range, thus providing an opportunity to legally ride the routes of the stockmen and drovers who once travelled these areas with pack horses. The trail provides access to some of the wildest, most remote country in the world. The Bicentennial National Trail is suitable for self-reliant horse riders, fit walkers and mountain bike riders. Within the United States National Trail Classification System, equestrian trails include simple day-use bridle paths and others built to accommodate long strings of pack animals on journeys lasting many days. Trail design parameters for these uses include trail base width and material, trail clear width, trail clear height, access to water suitable for stock (not human) use, and trail routing.
Pedestrian

A footpath is a type of thoroughfare intended for use only by pedestrians, either within an urban area or through the countryside. An urban footpath is usually called an alley or lane and is often paved (see also: sidewalk and pavement). Other public rights of way, such as bridleways, byways, towpaths, and green lanes, are also used by pedestrians. In England and Wales, there are rights of way on which pedestrians have a legally protected right to travel. National parks, nature preserves, conservation areas and other protected wilderness areas may have trails that are restricted to pedestrians. Footpaths can be connected to form a long-distance trail or way, which can be used by both day hikers and backpackers. In the US and Canada, where urban sprawl has reached rural communities, developers and local leaders are currently striving to make their communities more conducive to non-motorized transportation through the use of less traditional trails. The Robert Wood Johnson Foundation in the US has established the Active Living by Design program to improve the livability of communities in part through developing trails. The Upper Valley Trails Alliance in Vermont has done similar work on traditional trails, while the Somerville Community Path in Somerville, Massachusetts, and related paths, are examples of urban initiatives. In St. John's, Newfoundland, Canada, the "Grand Concourse" is an integrated walkway system whose walkways link every major park, river, pond and green space in six municipalities.

Motor

A motorized trail is a trail intended for off-road vehicles, for example 4×4 cars, dirt bikes, and all-terrain vehicles (ATVs). Motorized trail use remains very popular with some people, particularly in the US.
The Recreational Trails Program, defined as part of the Intermodal Surface Transportation Efficiency Act of 1991, mandates that states use a minimum of 30 percent of the program's funds for motorized trail uses. Some members of the US government and environmental organizations, including the Sierra Club and The Wilderness Society, have criticized off-road vehicle use on public land. They have noted several consequences of illegal ORV use, such as pollution, trail damage, erosion, land degradation, possible species extinction, and habitat destruction, which can leave hiking trails impassable. ORV proponents argue that legal use taking place under planned access, along with the multiple environmental and trail conservation efforts by ORV groups, will mitigate these issues. Groups such as the BlueRibbon Coalition advocate Treadlightly, the responsible use of public lands used for off-road activities. Noise pollution is also a concern, and several studies conducted by Montana State University, California State University, the University of Florida and others have cited possible negative behavioral changes in wildlife as the result of some ORV use. Several US states, such as Washington, have laws to reduce noise generated by off-road and non-highway vehicles.

Water

Water trails, also referred to as blueways or paddling trails, are marked routes on navigable waterways such as rivers, lakes, canals and coastlines for people using small non-motorized boats such as kayaks, canoes, rafts, or rowboats. Some trails may be suitable for float tubing or developed in concert with motorized use. They include signs and route markers, maps, facilities for parking, boat ramps or docks, and places to camp and picnic. There are also state programs and other promotion for water trails in the United States. The American Canoe Association has compiled a database of water trails in the United States.
The National Park Service Rivers, Trails, and Conservation Assistance Program has compiled a list of water trail resources, success stories, and statewide contacts for water trails.

Shared-use

Shared use may be achieved by sharing a trail easement, but maintaining segregated and sometimes also separated trail treads within it. This is common with rail trails. Shared use may also refer to alternate-day arrangements, whereby two uses are segregated by being permitted every other day. This is increasingly common on long-distance trails shared by equestrians and mountain bike users; these two user communities have similar trail requirements but may experience difficult encounters with each other on the trail. The Trans Canada Trail can be used by cyclists, hikers, horseback riders, and walkers, as well as cross-country skiers, snowmobilers and snowshoers in winter. In the United States, the East Coast Greenway from Key West to the Canadian border and the September 11th National Memorial Trail, a triangular loop connecting the three 9/11 memorial sites, are two long-distance multi-use paths for cyclists, runners, walkers, and even equestrians. In Belgium, RAVeL, French for réseau autonome de voies lentes (autonomous network of slow ways), is a Walloon initiative aimed at creating a network of route itineraries reserved for pedestrians, cyclists, horse riders and people with reduced mobility. The network makes use of towpaths on river banks and disused railway or vicinal tramway lines (narrow-gauge tramways). Old railway lines have been leased by the Walloon Government for 99 years using emphyteutic lease contracts. Where necessary, new paths are created to link parts of the network.
In England and Wales a bridleway is a trail intended for use by equestrians, but walkers also have a right of way, and Section 30 of the Countryside Act 1968 permits the riding of bicycles (but not motor-cycles) on public bridleways, though the act says it "shall not create any obligation to facilitate the use of the bridleway by cyclists". Thus the right to cycle exists even though it may be difficult to exercise on occasion, especially in winter. Cyclists using a bridleway must give way to other users on foot or horseback. The seawall in Stanley Park, Vancouver, British Columbia, Canada is popular for walking, running, cycling, and inline skating. There are two paths, one for skaters and cyclists and the other for pedestrians. The lane for cyclists and skaters goes one way, in a counterclockwise loop. Foreshoreway (also oceanway) is a term used in Australia for a type of greenway that provides a public right-of-way along the edge of the sea, open to both walkers and cyclists.

Forest road

A forest road is a type of rudimentary access road, built mainly for the forest industry. In some cases forest roads are used for backcountry recreation access. There is open access to most Forestry Commission roads and land in Great Britain for walkers, cyclists and horse riders and, since the Countryside Act 1968, the commission has become the largest provider of outdoor recreation in Britain. The commission works with associations involved in rambling, cycling, mountain biking and horse riding to promote the use of its land for recreation. The trails open to the public are not just forest roads. A notable example of the commission's promotion of outdoor activity is the 7stanes project in Scotland, where seven purpose-built areas of mountain bike trails have been laid, including facilities for disabled cyclists.
Holloway

A holloway (also hollow way) is a sunken path or lane, i.e., a road or track that is significantly lower than the land on either side, not formed by the (recent) engineering of a road cutting but possibly of much greater age. Various mechanisms have been proposed for how holloways may have been formed, including erosion by water or traffic, the digging of embankments to assist with the herding of livestock, and the digging of double banks to mark the boundaries of estates. These mechanisms are all possible and could apply in different cases.

Rail

Rail trails or paths are shared-use paths that take advantage of abandoned railway corridors. They can be used for walking, cycling and horseback riding, and exist throughout the world. RailTrails Australia describes them as: "Following the route of the railways, they cut through hills, under roads, over embankments and across gullies and creeks. Apart from being great places to walk, cycle or horse ride, rail trails are linear conservation corridors protecting native plants and animals. They often link remnant vegetation in farming areas and contain valuable flora and fauna habitat. Wineries and other attractions are near many trails as well as B&Bs and other great places to stay. Most trails have a gravel or dirt surface suitable for walking, mountain bikes and horses." In the USA the Cheshire Rail Trail, in New Hampshire, can be used by hikers, horseback riders, snowmobilers, cross-country skiers, cyclists, and even dogsledders. In Canada, following the abandonment of the Prince Edward Island Railway in 1989, the government of Prince Edward Island purchased the right-of-way to the entire railway system. The Confederation Trail was developed as a tip-to-tip walking/cycling gravel rail trail, which doubles as a monitored and groomed snowmobile trail during the winter months, operated by the PEI Snowmobile Association.
A considerable part of the Trans Canada Trail consists of repurposed defunct rail lines donated to provincial governments by the Canadian Pacific and Canadian National railways and rebuilt as walking trails. Much of the Trans Canada Trail development emulated the successful Rails-to-Trails initiative in the United States. The trail is multi-use and, depending on the section, may allow hikers, bicyclists, horseback riders, cross-country skiers and snowmobilers.

Towpath

A towpath is a road or path on the bank of a river, canal, or other inland waterway. The original purpose of a towpath was to allow a horse, or a team of human pullers, to tow a boat, often a barge. Towpaths can be paved or unpaved and are popular with cyclists and walkers; some are suitable for equestrians. Equestrians have legal access to all towpaths in Scotland, and there is a campaign for similar rights in England and Wales. In snowy winters in the USA they are popular with cross-country skiers and snowmobile users. In Britain most canals were owned by private companies, and the towpaths were deemed to be private, for the benefit of legitimate users of the canal. The nationalisation of the canal system in 1948 did not result in the towpaths becoming public rights of way, and subsequent legislation, such as the Transport Act of 1968, which defined the government's obligations to the maintenance of the inland waterways for which it was now responsible, did not include any commitment to maintain towpaths for use by anyone. Ten years later British Waterways started to relax the rule that a permit was required for access to a towpath, and began to encourage leisure usage by walkers, anglers and, in some areas, cyclists.
The British Waterways Act 1995 still did not enshrine any right of public access, although it did encourage recreational access of all kinds to the network. The steady development of the leisure use of the canals and the decline of commercial traffic had, however, resulted in a general acceptance that towpaths are open to everyone, and not just boat users. The concept of free access to towpaths is enshrined in the legislation which transferred responsibility for the English and Welsh canals from British Waterways to the Canal & River Trust in 2012. Not all towpaths are suitable for use by cyclists, but where they are, and the canal is owned by British Waterways, a permit is required. There is no charge for a permit, but it acts as an opportunity to inform cyclists about safe and unsafe areas to cycle. Some areas, including London, are exempt from this policy, but are covered instead by the London Towpath Code of Conduct, and cyclists must have a bell, which they ring twice when approaching pedestrians. Parts of some towpaths have been incorporated into the National Cycle Network, and in most cases this has resulted in the surface being improved. In France it is possible to cycle, rollerblade, and hike along the banks of the Canal du Midi. A paved stretch from Toulouse to Avignonet-Lauragais and another between Béziers and Portiragnes are particularly suited to cycling and rollerblading. It is possible to cycle or walk the entire Canal des Deux Mers from Sète to Bordeaux. Other French canals provide walkers "with many excellent routes, as they are always accompanied by a towpath, which makes a pleasant off-road track, and have the added virtues of flatness, shade and an abundance of villages along the way", though walking a canal can be monotonous, so that "a long trip beside a canal is better done by bicycle".
Urban

An urban trail is a citywide network of non-motorized, multi-use pathways that are used by bicyclists, walkers and runners for both transportation and recreation. Urban trails average ten feet in width and are surfaced with asphalt or concrete. Some are striped like roads to designate two-way traffic. Urban trails are designed with connections to neighborhoods, businesses, places of employment and public transport stops.

Alley

Urban pedestrian footpaths are sometimes called alleys or lanes and, in older cities and towns in Europe, are often what is left of a medieval street network, rights-of-way or ancient footpaths. Similar paths also exist in some older North American towns and cities, like Charleston, South Carolina, New Castle, Delaware, and Pittsburgh, Pennsylvania. Such urban trails or footpaths are narrow, usually paved and often run between the walls of buildings. This type is usually short and straight, and on steep ground can consist partially or entirely of steps. Some are named. Because of geography, steps are a common form of footpath in hilly cities and towns. This includes Pittsburgh (see Steps of Pittsburgh), Cincinnati (see Steps of Cincinnati), Seattle, and San Francisco in the United States, as well as Hong Kong, Quebec City, Quebec, Canada, and Rome. Stairway trails are found in a number of hilly American cities, including the Stairway Trails in Bernal Heights, East San Francisco.

System layout

Linear

A linear trail goes from one point to another without connecting trails. These trails are also known as "out-and-back" or "destination" trails. Rail trails and long-distance trails are examples of linear trails, and often cover long distances. A shorter linear trail is a spur trail, which takes a user to a particular point of interest, such as a waterfall or mountain summit.

Looped

A looped trail allows a user to end up where they started with minimal or no repetition of the route.
Looped-trail systems come in many permutations. A single-looped trail system is often used around lakes, wetlands, and other geological features. A series of interconnected looped trails is a stacked-loop trail system, which creates an efficient, compact design with many route options. In a multiple-loop system, each loop extends from a single trailhead. Trail systems often combine linear trails with looped trails. In a spoked-wheel system, linear trails connect a central trailhead with an outer loop. In a primary-and-secondary loop system, linear trails connect a primary loop with secondary loops. Finally, a maze system incorporates both loops and linear trails. Maze systems provide users with many choices, though some users may find navigation difficult.

Administration

Europe

A group of public and private organisations from the eight Alpine countries in Europe created the Via Alpina in 2000, receiving EU funding from 2001 until 2008. It was initiated by the Association Grande Traversée des Alpes in Grenoble, which hosted the Via Alpina international secretariat until January 2014, when it was transferred to the International Commission for the Protection of the Alps (CIPRA) in Liechtenstein. There are national secretariats (hosted by public administrations or hiking associations) in each country. Its aim is to support sustainable development in remote mountain areas and promote the Alpine cultures and cultural exchanges. The Grande Randonnée (French), Grote Routepaden or Lange-afstand-wandelpaden (Dutch), Grande Rota (Portuguese) or Gran Recorrido (Spanish) is a network of long-distance footpaths in Europe, mostly in France, Belgium, the Netherlands and Spain. Many GR routes make up part of the longer European walking routes which cross several countries.
In France, the network is maintained by the Fédération Française de la Randonnée Pédestre (French Hiking Federation), and in Spain by the Spanish Mountain Sports Federation.

UK and Ireland

In England and Wales, many trails and footpaths are of ancient origin and are protected under law as rights of way. In Ireland, the Keep Ireland Open organization is campaigning for similar rights. Local highway authorities in England and Wales (usually county councils or unitary authorities) are required to maintain the definitive map of all public rights of way in their areas, and these can be inspected at council offices. If a path is shown on the definitive map, and no subsequent order (e.g. a stopping up) exists, then the right of way is conclusive in law. However, a path's absence from the definitive map does not mean it is not a public path, as the rights may simply not have been recorded. The Countryside Agency estimated that over 10% of public paths were not yet listed on the definitive map. The Countryside and Rights of Way Act 2000 provides that paths that are not recorded on the definitive map by 2026 and that were in use prior to 1949 will automatically be deemed stopped-up on 1 January 2026. In Scotland, a right of way is a route over which the public has passed unhindered for at least 20 years. The route must link two "public places", such as villages, churches or roads. Unlike in England and Wales, there is no obligation on Scottish local authorities to signpost or mark a right of way. The charity Scotways, formed in 1845 to protect rights of way, records and signs the routes. There is no legal distinction between footpaths and bridleways in Scotland, as there is in England and Wales, though it is generally accepted that cyclists and horseback riders may follow rights of way with suitable surfaces.
The Land Reform (Scotland) Act 2003 established a general presumption of access to all land in Scotland, making the existence of rights of way less important in terms of access to land there. Certain categories of land are excluded from this presumption of open access, such as railway land, airfields and private gardens. Northern Ireland has very few public rights of way, and access to land there is more restricted than in other parts of the UK; in many areas, walkers can enjoy the countryside only because of the goodwill and tolerance of landowners. Northern Ireland shares the same legal system as England, including concepts about the ownership of land and public rights of way, but it has its own court structure, system of precedents and specific access legislation. In England and Wales a National Trails system of long-distance footpaths also exists, administered by Natural England and Natural Resources Wales, statutory agencies of the UK and Welsh governments respectively. These include Hadrian's Wall Path, the Pembrokeshire Coast Path, the Pennine Bridleway (a bridleway), the South West Coast Path (South West Way) (the longest), the Thames Path, and many more. In Scotland, the equivalent trails are called Scotland's Great Trails and are administered by NatureScot. The first, and probably the most popular, is the West Highland Way, opened in 1980. Sustrans is a British charity that promotes sustainable transport and works on projects to encourage people to walk, cycle, and use public transport, to give people the choice of "travelling in ways that benefit their health and the environment". Sustrans' flagship project is the National Cycle Network, which has created an extensive network of signed cycle routes throughout the UK.
United States

In 1968, the United States' National Trails System, which includes National Scenic Trails, National Historic Trails and National Recreation Trails, was created under the National Trails System Act. The most famous American long trails are the Appalachian National Scenic Trail, generally known as the Appalachian Trail, and the Pacific Crest Trail. The Appalachian Trail is a marked hiking route in the eastern United States extending between Springer Mountain, Georgia, and Mount Katahdin, Maine. The Pacific Crest Trail is a long-distance hiking and equestrian trail closely aligned with the highest portion of the Sierra Nevada and Cascade mountain ranges, which lie east of the US Pacific coast. The trail's southern terminus is on the US border with Mexico and its northern terminus on the US-Canada border on the edge of Manning Park in British Columbia, Canada; its corridor through the US is in the states of California, Oregon, and Washington. The land management agency in charge of a trail writes and enforces the rules and regulations for it. A trail may be completely contained within one administration (e.g. a state park) or it may pass through multiple administrations, leading to a confusing array of regulations: allowing dogs or mountain bikes in one segment but not in another, for example, or requiring wilderness permits for a portion of the trail but not everywhere. In the United States, agencies administering trails include the National Park Service, the US Forest Service, the Bureau of Land Management, state park systems, county parks, cities, private organizations such as land trusts, businesses and individual property owners. New trail construction by an agency must often be assessed for its environmental impact and conformance with state or federal laws. For example, in California new trails must undergo reviews specified by the California Environmental Quality Act (CEQA).
Universal access

All trails and shared use paths—indeed, any areas open to pedestrians—that are owned or operated by a public or private entity covered by the Americans with Disabilities Act of 1990 are subject to federal regulations on Other Power-Driven Mobility Devices ("OPDMDs"). These rules potentially greatly expand the types of vehicular devices that must be permitted on trails, shared use paths, other routes, and other areas open to the public. There are many types of non-motorized, land-based recreational trails and shared use paths: hiker and pedestrian trails, mountain biking trails, equestrian trails, and multi-use trails designed for several user types. The publication Universal Access Trails and Shared Use Paths: Design, Management, Ethical and Legal Considerations focuses on the accessibility aspects of the most commonly constructed types and discusses ways to manage access by these vehicles. Its companion guide, the 2013 Pennsylvania Trail Design and Development Principles: Guidelines for Sustainable, Non-Motorized Trails (the "Pennsylvania Trail Design Manual"), provides guidance and detailed information about the characteristics of the various types of trails and paths, and is a resource to help evaluate, plan, design, construct, and manage a route on a site.

Construction

While most trails have arisen through common usage, the design and construction of good-quality new paths is a complex process that requires certain skills. When a trail passes across a flat area that is not wet, brush, tree limbs and undergrowth are removed to create a clear, walkable trail. A bridge is built when a stream or river is sufficiently deep to make it necessary. Other options are culverts, stepping stones, and shallow fords. For equestrian use, shallow fords may be preferred.
In wet areas an elevated trailway with fill or a boardwalk is often used, though boardwalks require frequent maintenance and replacement, because boards in poor condition can become slippery and hazardous.

Slopes

Trail gradients are determined based on a site-specific assessment of soils and geology, drainage patterns of the slope, surrounding vegetation types, position on the slope of a given trail segment (bottom, mid-slope, ridgeline), average precipitation, storm intensities, types of use, volume and intensity of use, and a host of other factors affecting the ability of the trail substrate to resist erosion and provide a navigable surface. Trails that ascend steep slopes may use switchbacks, but switchback design and construction is a specialized topic. Trails that are accessible by users with disabilities are mandated by the U.S. Federal Government to have a slope of less than 12%, with no more than 30% of the trail having a slope greater than 8.33%. Trails outside of wilderness areas have outward side-to-side gradients of less than 8%. A flat or inward-sloping trail collects water and causes extra trail maintenance; the ideal path is built almost, but not quite, level in cross-section. To achieve a proper slope in hilly terrain, a sidehill trail is excavated. This type of trailway is created by establishing a line of a suitable slope across a hillside, which is then dug out by means of a mattock or similar tool. This may be a full-bench trail, where the treadway is only on the firm ground surface after the overlying soil is removed and sidecast (thrown to the side as waste), or a half-bench trail, where soil is removed and packed to the side so that the treadway is half on firm old ground and half on new packed fill. In areas near drainages, creeks and other waterways, excavation spoils are taken away in bulk and deposited in an environmentally benign area. In problem areas trails may be established entirely on fill.
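The accessibility slope criteria above lend themselves to a simple check. The sketch below is a hypothetical helper, not any agency's official tool: given trail segments as (length, elevation change) pairs, it computes each segment's percent grade and applies the two thresholds described in the text.

```python
def is_accessible(segments):
    """Check a trail against the accessibility criteria described above:
    every segment's grade must be under 12%, and no more than 30% of the
    total trail length may be steeper than 8.33%.

    `segments` is a list of (length_m, elevation_change_m) tuples.
    Hypothetical illustration only; consult the actual federal guidelines
    for real projects.
    """
    total_length = sum(length for length, _ in segments)
    steep_length = 0.0
    for length, rise in segments:
        grade = abs(rise) / length * 100  # percent grade of this segment
        if grade >= 12:
            return False  # any segment at or above 12% fails outright
        if grade > 8.33:
            steep_length += length
    # At most 30% of the trail may exceed the 8.33% grade
    return steep_length <= 0.30 * total_length

# Three segments with grades of 5%, 10% and 2%: only 50 m of 350 m
# exceeds 8.33%, so the trail passes.
print(is_accessible([(100, 5), (50, 5), (200, 4)]))  # prints True
```

The same shape of check extends naturally to the other gradients mentioned here (e.g. the sub-8% outslope outside wilderness areas) by adding further thresholds.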
In such cases, the soil is packed down firmly and the site is periodically checked to maintain the stability of the trail. Cycle trails built for commuting may be built to a different set of standards than pedestrian-only trails and, in some cases, may require a harder surface, fewer changes in grade and slope, increased sight visibility, and fewer sharp changes in direction. On the other hand, the cross-slope of a bicycle trail may be significantly greater than that of a foot trail, and the path may be narrower in some cases. The American Association of State Highway and Transportation Officials recommends different widths for different types of bicycle facilities; a bidirectional shared-use path, for example, should be significantly wider than a one-directional path to accommodate two-way traffic. The US Department of Transportation provides additional guidance on recreational bicycle and pedestrian trail planning and design standards. A well-designed recreational mountain bike path for the exclusive use of bicycles has an average grade of less than 10% and generally follows a contour line, rather than heading straight downhill.

Drainage

Trail construction requires proper drainage. If drainage is inadequate, three issues may occur: water may accumulate on flat terrain to the point that the trail becomes unusable; moving water can erode trails on slopes; or inadequate drainage may create local mud spots. Mountain bike trails are outsloped 3–5% across the trail bed to encourage water to run off the side, rather than down the trail. To remedy the first problem, water accumulation on flat terrain, raised walkways are often built. These include turnpikes, causeways, embankments, stepping stones, and bridges (or deckwalks). The earthen approaches are often made by cutting poles from the woods, staking parallel poles in place on the ground, then filling between them with whatever material is available to create the raised walkway.
The more elaborate option of the deckwalk is by necessity reserved for shorter stretches in very high-traffic areas. Water accumulation is particularly common in the North Country of England. The second problem, water erosion, arises because trails, by their nature, tend to become drainage channels and eventually gullies when the drainage is poorly controlled. Where a trail is near the top of a hill or ridge, this is usually a minor issue, but when it is farther down, it can become a very major one. In areas of heavy water flow along a trail, a ditch is often dug on the uphill side of the trail, with drainage points across the trail. The cross-drainage is accomplished by means of culverts cleared on a semi-annual basis, or by means of cross-channels, often created by placing logs or timbers across the trail in a downhill direction, called "thank-you-marms", "deadmen", or waterbars. Timbers or rocks are also used for this purpose to create erosion barriers. Rock paving in the bottom of these channels and in the trailside ditches is sometimes used to maintain stability. The creation of water bars, with or without ditching, at major points of water flow on or along the trail, and in conjunction with existing drainage channels below the trail, is a technique that can be applied. Another technique is the construction of coweeta dips, or drain dips: points where the trail falls briefly (for a meter or so) and then rises again, providing positive drainage points that are almost never clogged by debris. The third type of problem can occur on bottomlands, on ridgetops and in a variety of other spots. A local spot or short stretch of the trail may be chronically wet; if the trail is not directly on rock, a mud pit forms. Trail users go to the side of the trail to avoid the mud pit, and the trail becomes widened. A "corduroy" is a technique used when an area cannot be drained.
This ranges from random sticks to split logs being laid across the path. Some early turnpikes in the United States were corduroy roads, and these can still be found in third-world forested areas. With recreational trails, it is common for the sticks to be one to three inches thick and laid in place, close together. Sometimes, a short bridge is used. Maintenance Natural surface, single-track trails will require some ongoing maintenance. If the trail is properly designed and constructed, maintenance should be limited to clearing downed trees, trimming back brush and clearing drainages. Depending on location, if the trail is properly designed, there should be no need for major rework such as grading or erosion control efforts. Mountain trails which see both significant rainfall and human traffic may require "trail hardening" efforts to prevent further erosion. Most of the seemingly natural rock steps on the mountain trails of the northeast United States are the work of professional and volunteer trail crews. Navigation For long-distance trails, or trails where there is any possibility of someone taking a wrong turn, blazing or signage is provided (the term waymarking is used in Britain). This is accomplished by using either paint on natural surfaces or by placing pre-made medallions or sometimes cairns. Horseshoe-shaped blazes are used frequently for bridle trails. The Appalachian Trail is blazed with white rectangles, and blue is often used for side trails. European long-distance walking paths are blazed with yellow points encircled with red. Other walking paths in European countries are blazed in a variety of manners. Where bike trails intersect with pedestrian or equestrian trails, signage at the intersections and high visibility onto the intersecting trails are needed to prevent collisions between fast-moving cyclists and slower moving hikers and horses. Bicycles and horses can share the same trails where the trail is wide enough with good visibility. 
The US Department of Transportation provides standards and guidelines for traffic control, including signage and striping, for bicycle facilities. Classification A simple colored-symbol system for classifying a trail's difficulty in the USA was first used for ski trails and is now also used for hiking, bicycle, and other trails:
Green circle – easy
Blue square – moderate
Black diamond – difficult
Other systems may be used in different locations. In Switzerland, paths are classified by three levels of difficulty: hiking paths (yellow markers), mountain paths (white-red-white markers), and alpine paths (white-blue-white markers).
Oberon (moon)
Oberon, also designated , is the outermost and second-largest major moon of the planet Uranus. It is the second-most massive of the Uranian moons, and the tenth-largest moon in the Solar System. Discovered by William Herschel in 1787, Oberon is named after the mythical king of the fairies who appears as a character in Shakespeare's A Midsummer Night's Dream. Its orbit lies partially outside Uranus's magnetosphere. Oberon likely formed from the accretion disk that surrounded Uranus just after the planet's formation. The moon consists of approximately equal amounts of ice and rock, and is probably differentiated into a rocky core and an icy mantle. A layer of liquid water may be present at the boundary between the mantle and the core. The surface of Oberon, which is dark and slightly red in color, appears to have been primarily shaped by asteroid and comet impacts. It is covered by numerous impact craters reaching 210 km in diameter. Oberon possesses a system of chasmata (graben or scarps) formed during crustal extension as a result of the expansion of its interior during its early evolution. The Uranian system has been studied up close only once: the spacecraft Voyager 2 took several images of Oberon in January 1986, allowing 40% of the moon's surface to be mapped. Discovery and naming Oberon was discovered by William Herschel on January 11, 1787; on the same day, he discovered Uranus's largest moon, Titania. He later reported the discoveries of four more satellites, although they were subsequently revealed as spurious. For nearly fifty years following their discovery, Titania and Oberon would not be observed by any instrument other than William Herschel's, although the moon can be seen from Earth with a present-day high-end amateur telescope. All of the moons of Uranus are named after characters created by William Shakespeare or Alexander Pope. The name Oberon was derived from Oberon, the King of the Fairies in A Midsummer Night's Dream.
The names of all four satellites of Uranus then known were suggested by Herschel's son John in 1852, at the request of William Lassell, who had discovered the other two moons, Ariel and Umbriel, the year before. It is uncertain if Herschel devised the names, or if Lassell did so and then sought Herschel's permission. The adjectival form of the name is Oberonian. Oberon was initially referred to as "the second satellite of Uranus" and in 1848 was given the designation by Lassell, although he sometimes used Herschel's numbering (where Titania and Oberon are II and IV). In 1851, Lassell eventually numbered all four known satellites in order of their distance from the planet by Roman numerals, and since then Oberon has been designated . Orbit Oberon orbits Uranus at a distance of about 584,000 km, being the farthest from the planet among its five major moons. Oberon's orbit has a small orbital eccentricity and inclination relative to the equator of Uranus. Its orbital period is around 13.5 days, coincident with its rotational period. In other words, Oberon is tidally locked, with one face always pointing toward the planet. Oberon spends a significant part of its orbit outside the Uranian magnetosphere. As a result, its surface is directly struck by the solar wind. This is important, because the trailing hemispheres of satellites orbiting inside a magnetosphere are struck by the magnetospheric plasma, which co-rotates with the planet. This bombardment may lead to the darkening of the trailing hemispheres, which is actually observed for all Uranian moons except Oberon (see below). Because Uranus orbits the Sun almost on its side, and its moons orbit in the planet's equatorial plane, they (including Oberon) are subject to an extreme seasonal cycle. Both northern and southern poles spend 42 years in complete darkness, and another 42 years in continuous sunlight, with the sun rising close to the zenith over one of the poles at each solstice.
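The orbital distance (about 584,000 km) and period (around 13.5 days) quoted above are mutually consistent under Kepler's third law. A minimal check, assuming a standard reference value for Uranus's gravitational parameter, which is not given in this article:

```python
import math

GM_URANUS = 5.794e6   # km^3/s^2, standard gravitational parameter of Uranus (reference value)
a = 583_500           # km, Oberon's semi-major axis ("about 584,000 km" above)

# Kepler's third law for a near-circular orbit: T = 2*pi*sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_URANUS)
period_days = period_s / 86_400
print(f"orbital period ≈ {period_days:.1f} days")   # ≈ 13.5 days, as stated above
```

The computed period of roughly 13.5 days matches the article's figure, which is also Oberon's rotational period because the moon is tidally locked.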
The Voyager 2 flyby coincided with the southern hemisphere's 1986 summer solstice, when nearly the entire northern hemisphere was in darkness. Once every 42 years, when Uranus has an equinox and its equatorial plane intersects the Earth, mutual occultations of Uranus's moons become possible. One such event, which lasted for about six minutes, was observed on May 4, 2007, when Oberon occulted Umbriel. Composition and internal structure Oberon is the second-largest and second-most massive of the Uranian moons after Titania, and the ninth-most massive moon in the Solar System. By size, however, it is only the tenth-largest moon: Rhea, the second-largest moon of Saturn and the ninth-largest moon overall, is nearly the same size as Oberon, about 0.4% larger, even though Oberon is more massive than Rhea. Oberon's density of 1.68 g/cm3, which is higher than the typical density of Saturn's satellites, indicates that it consists of roughly equal proportions of water ice and a dense non-ice component. The latter could be made of rock and carbonaceous material including heavy organic compounds. The presence of water ice is supported by spectroscopic observations, which have revealed crystalline water ice on the surface of the moon. Water ice absorption bands are stronger on Oberon's trailing hemisphere than on the leading hemisphere. This is the opposite of what is observed on other Uranian moons, where the leading hemisphere exhibits stronger water ice signatures. The cause of this asymmetry is not known, but it may be related to impact gardening (the creation of soil via impacts) of the surface, which is stronger on the leading hemisphere. Meteorite impacts tend to sputter (knock out) ice from the surface, leaving dark non-ice material behind. The dark material itself may have formed as a result of radiation processing of methane clathrates or radiation darkening of other organic compounds. Oberon may be differentiated into a rocky core surrounded by an icy mantle.
If this is the case, the radius of the core (480 km) is about 63% of the radius of the moon, and its mass is around 54% of the moon's mass; the proportions are dictated by the moon's composition. The pressure in the center of Oberon is about 0.5 GPa (5 kbar). The current state of the icy mantle is unclear. If the ice contains enough ammonia or other antifreeze, Oberon may possess a liquid ocean layer at the core–mantle boundary. The thickness of this ocean, if it exists, is up to 40 km and its temperature is around 180 K (close to the water–ammonia eutectic temperature of 176 K). However, the internal structure of Oberon depends heavily on its thermal history, which is poorly known at present. More recent publications, however, seem to favour active subsurface oceans throughout the larger moons of Uranus. Surface features and geology Oberon is the second-darkest large moon of Uranus after Umbriel. Its surface shows a strong opposition surge: its reflectivity decreases from 31% at a phase angle of 0° (geometrical albedo) to 22% at an angle of about 1°. Oberon has a low Bond albedo of about 14%. Its surface is generally red in color, except for fresh impact deposits, which are neutral or slightly blue. Oberon is, in fact, the reddest among the major Uranian moons. Its trailing and leading hemispheres are asymmetrical: the latter is much redder than the former, because it contains more dark red material. The reddening of the surfaces is often a result of space weathering caused by bombardment of the surface by charged particles and micrometeorites over the age of the Solar System. However, the color asymmetry of Oberon is more likely caused by accretion of a reddish material spiraling in from outer parts of the Uranian system, possibly from irregular satellites, which would occur predominately on the leading hemisphere, similar to Saturn's moon Iapetus.
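The step from the bulk density of 1.68 g/cm3 to "roughly equal proportions" of water ice and rock can be illustrated with a simple two-component mixture model. The component densities below are illustrative round numbers, not values from this article, and the model ignores porosity and self-compression:

```python
def ice_mass_fraction(rho_bulk, rho_ice=0.95, rho_rock=3.5):
    """Ice mass fraction f solving 1/rho_bulk = f/rho_ice + (1 - f)/rho_rock.

    Assumes an ideal two-component mixture with no porosity or compression.
    The component densities (g/cm^3) are assumed illustrative values.
    """
    return (1 / rho_bulk - 1 / rho_rock) / (1 / rho_ice - 1 / rho_rock)

f = ice_mass_fraction(1.68)
print(f"ice mass fraction ≈ {f:.0%}")   # roughly 40%
```

Depending on the densities assumed for the ice and the rock-plus-carbonaceous component, the result lands in the rough 40–50% range, consistent with the article's "roughly equal proportions".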
Two primary classes of geological features dominate Oberon's surface: impact craters and chasmata ('canyons'—deep, elongated, steep-sided depressions which would probably be described as rift valleys or escarpments if on Earth). Oberon's surface is the most heavily cratered of all the Uranian moons, with a crater density approaching saturation—when the formation of new craters is balanced by destruction of old ones. This high number of craters indicates that Oberon has the most ancient surface among Uranus's moons. The crater diameters range up to 206 kilometers for the largest known crater, Hamlet. Many large craters are surrounded by bright impact ejecta (rays) consisting of relatively fresh ice. The largest craters, Hamlet, Othello and Macbeth, have floors made of a very dark material deposited after their formation. A peak with a height of about 11 km was observed in some Voyager images near the south-eastern limb of Oberon, which may be the central peak of a large impact basin with a diameter of about 375 km. Oberon's surface is intersected by a system of canyons, which, however, are less widespread than those found on Titania. The canyons' sides are probably scarps produced by normal faults which can be either old or fresh: the latter transect the bright deposits of some large craters, indicating that they formed later. The most prominent Oberonian canyon is Mommur Chasma. The geology of Oberon was influenced by two competing forces: impact crater formation and endogenic resurfacing. The former acted over the moon's entire history and is primarily responsible for its present-day appearance. The latter processes were active for a period following the moon's formation. The endogenic processes were mainly tectonic in nature and led to the formation of the canyons, which are actually giant cracks in the ice crust. The canyons obliterated parts of the older surface. 
The cracking of the crust was caused by the expansion of Oberon by about 0.5%, which occurred in two phases corresponding to the old and young canyons. The nature of the dark patches, which mainly occur on the leading hemisphere and inside craters, is not known. Some scientists hypothesized that they are of cryovolcanic origin (analogs of lunar maria), while others think that the impacts excavated dark material buried beneath the pure ice (crust). In the latter case Oberon should be at least partially differentiated, with the ice crust lying atop the non-differentiated interior. Origin and evolution Oberon is thought to have formed from an accretion disc or subnebula: a disc of gas and dust that either existed around Uranus for some time after its formation or was created by the giant impact that most likely gave Uranus its large obliquity. The precise composition of the subnebula is not known; however, the relatively high density of Oberon and other Uranian moons compared to the moons of Saturn indicates that it may have been relatively water-poor. Significant amounts of carbon and nitrogen may have been present in the form of carbon monoxide and N2 instead of methane and ammonia. The moons that formed in such a subnebula would contain less water ice (with CO and N2 trapped as clathrate) and more rock, explaining the higher density. Oberon's accretion probably lasted for several thousand years. The impacts that accompanied accretion caused heating of the moon's outer layer. The maximum temperature of around 230 K was reached at the depth of about 60 km. After the end of formation, the subsurface layer cooled, while the interior of Oberon heated due to decay of radioactive elements present in its rocks. The cooling near-surface layer contracted, while the interior expanded. This caused strong extensional stresses in the moon's crust leading to cracking. 
The present-day system of canyons may be a result of this process, which lasted for about 200 million years, implying that any endogenous activity from this cause ceased billions of years ago. The initial accretional heating together with continued decay of radioactive elements were probably strong enough to melt the ice if some antifreeze like ammonia (in the form of ammonia hydrate) or some salt was present. Further melting may have led to the separation of ice from rocks and formation of a rocky core surrounded by an icy mantle. A layer of liquid water ('ocean') rich in dissolved ammonia may have formed at the core–mantle boundary. The eutectic temperature of this mixture is 176 K. If the temperature dropped below this value the ocean would have frozen by now. Freezing of the water would have led to expansion of the interior, which may have also contributed to the formation of canyon-like graben. Still, present knowledge of the evolution of Oberon is very limited, although recent analysis has concluded that it is more likely that the larger moons of Uranus have active subsurface oceans. Exploration So far the only close-up images of Oberon have been from the Voyager 2 probe, which photographed the moon during its flyby of Uranus in January 1986. Since the closest approach of Voyager 2 to Oberon was 470,600 km, the best images of this moon have spatial resolution of about 6 km. The images cover about 40% of the surface, but only 25% of the surface was imaged with a resolution that allows geological mapping. At the time of the flyby the southern hemisphere of Oberon was pointed towards the Sun, so the dark northern hemisphere could not be studied. No other spacecraft has ever visited the Uranian system.
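The relation between flyby distance and image resolution quoted above is simply distance times the camera's angular resolution. A small sketch back-computing the implied per-pixel angle; the microradian figure is derived here for illustration, not given in the article:

```python
closest_approach_km = 470_600   # Voyager 2's closest approach to Oberon (from the text)
best_resolution_km = 6          # best spatial resolution of the images (from the text)

# Small-angle approximation: angular resolution theta = ground size / distance
theta_rad = best_resolution_km / closest_approach_km
print(f"implied angular resolution ≈ {theta_rad * 1e6:.0f} microradians")
```

The implied value, on the order of 13 microradians, shows why only a closer approach (or a larger camera) could have produced sharper images of the surface.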
Fibroblast
A fibroblast is a type of biological cell typically with a spindle shape that synthesizes the extracellular matrix and collagen, produces the structural framework (stroma) for animal tissues, and plays a critical role in wound healing. Fibroblasts are the most common cells of connective tissue in animals. Structure Fibroblasts have a branched cytoplasm surrounding an elliptical, speckled nucleus having two or more nucleoli. Active fibroblasts can be recognized by their abundant rough endoplasmic reticulum (RER). Inactive fibroblasts, called 'fibrocytes', are smaller, spindle-shaped, and have less RER. Although disjointed and scattered when covering large spaces, fibroblasts often locally align in parallel clusters when crowded together. Unlike the epithelial cells lining the body structures, fibroblasts do not form flat monolayers and are not restricted by a polarizing attachment to a basal lamina on one side, although they may contribute to basal lamina components in some situations (e.g. subepithelial myofibroblasts in intestine may secrete the α-2 chain-carrying component of the laminin, which is absent only in regions of follicle-associated epithelia which lack the myofibroblast lining). Fibroblasts can also migrate slowly over substratum as individual cells, again in contrast to epithelial cells. While epithelial cells form the lining of body structures, fibroblasts and related connective tissues sculpt the "bulk" of an organism. The life span of a fibroblast, as measured in chick embryos, is 57 ± 3 days. Relationship with fibrocytes Fibroblasts and fibrocytes are two states of the same cells, the former being the activated state, the latter the less active state, concerned with maintenance and tissue metabolism. Currently, there is a tendency to call both forms fibroblasts. The suffix "-blast" is used in cellular biology to denote a stem cell or a cell in an activated state of metabolism. 
Fibroblasts are morphologically heterogeneous with diverse appearances depending on their location and activity. Though morphologically inconspicuous, ectopically transplanted fibroblasts can often retain positional memory of the location and tissue context where they had previously resided, at least over a few generations. This remarkable behavior may lead to discomfort in the rare event that they stagnate there excessively. Development The main function of fibroblasts is to maintain the structural integrity of connective tissues by continuously secreting precursors of the extracellular matrix (ECM), providing all such components, primarily the ground substance and a variety of fibers. The composition of the ECM determines the physical properties of connective tissues. Like other cells of connective tissue, fibroblasts are derived from primitive mesenchyme. Hence, they express the intermediate filament protein vimentin, a feature used as a marker to distinguish their mesodermal origin. However, this test is not specific, as epithelial cells cultured in vitro on an adherent substratum may also express vimentin after some time. In certain situations, epithelial cells can give rise to fibroblasts, a process called epithelial–mesenchymal transition. Conversely, fibroblasts in some situations may give rise to epithelia by undergoing a mesenchymal-to-epithelial transition and organizing into a condensed, polarized, laterally connected true epithelial sheet. This process is seen in many developmental situations (e.g. nephron and notochord development), as well as in wound healing and tumorigenesis. Function Fibroblasts make collagen fibers, glycosaminoglycans, reticular and elastic fibers. The fibroblasts of growing individuals divide and synthesize ground substance. Tissue damage stimulates fibrocytes and induces the production of fibroblasts.
Inflammation Besides their commonly known role as structural components, fibroblasts play a critical role in the immune response to tissue injury. They are early players in initiating inflammation in the presence of invading microorganisms. They induce chemokine synthesis through the presentation of receptors on their surface. Immune cells then respond and initiate a cascade of events to clear the invasive microorganisms. Receptors on the surface of fibroblasts also allow the regulation of hematopoietic cells and provide a pathway for immune cells to regulate fibroblasts. Tumour mediation Fibroblasts, like tumor-associated host fibroblasts (TAF), play a crucial role in immune regulation through TAF-derived ECM components and modulators. TAF are known to be significant in the inflammatory response as well as immune suppression in tumors. TAF-derived ECM components cause alterations in ECM composition and initiate ECM remodeling. ECM remodeling is described as changes in the ECM as a result of enzyme activity, which can lead to degradation of the ECM. Immune regulation of tumors is largely determined by ECM remodeling because the ECM is responsible for regulating a variety of functions, such as proliferation, differentiation, and morphogenesis of vital organs. In many tumor types, especially those related to epithelial cells, ECM remodeling is common. Examples of TAF-derived ECM components include tenascin and thrombospondin-1 (TSP-1), which can be found in sites of chronic inflammation and carcinomas, respectively. Immune regulation of tumors can also occur through TAF-derived modulators. Although these modulators may sound similar to the TAF-derived ECM components, they differ in that they are responsible for the variation and turnover of the ECM. Cleaved ECM molecules can play a critical role in immune regulation. Proteases such as matrix metalloproteinases and the uPA system are known to cleave the ECM.
These proteases are derived from fibroblasts. Use of fibroblasts as feeder cells Mouse embryonic fibroblasts (MEFs) are often used as supportive "feeder cells" in research using human embryonic stem cells, induced pluripotent stem cells and primary epithelial cell culture. However, many researchers are trying to phase out MEFs in favor of culture media with precisely defined ingredients in order to facilitate the development of clinical-grade products. In view of the potential clinical applications of stem cell-derived tissues or primary epithelial cells, the use of human fibroblasts as an alternative to MEF feeders has been studied. Whereas the fibroblasts are usually used to maintain pluripotency of the stem cells, they can also be used to facilitate development of the stem cells into specific type of cells such as cardiomyocytes. Host immune response Fibroblasts from different anatomical sites in the body express many genes that code for immune mediators and proteins. These mediators of immune response enable the cellular communication with hematopoietic immune cells. The immune activity of non-hematopoietic cells, such as fibroblasts, is referred to as “structural immunity”. In order to facilitate a fast response to immunological challenges, fibroblasts encode crucial aspects of the structural cell immune response in the epigenome.
Seasonal affective disorder
Seasonal affective disorder (SAD) is a mood disorder subset in which people who typically have normal mental health throughout most of the year exhibit depressive symptoms at the same time each year. It is commonly, but not always, associated with the reductions or increases in total daily sunlight hours that occur during the winter or summer. Common symptoms include sleeping too much, having little to no energy, and overeating. The condition in the summer can include heightened anxiety. In the DSM-IV and DSM-5, its status as a standalone condition was changed: it is no longer classified as a unique mood disorder but is now a specifier (called "with seasonal pattern") for recurrent major depressive disorder that occurs at a specific time of the year and fully remits otherwise. Although experts were initially skeptical, this condition is now recognized as a common disorder. The validity of SAD was called into question, however, by a 2016 analysis by the Centers for Disease Control in which no links were detected between depression and seasonality or sunlight exposure. In the United States, the percentage of the population affected by SAD ranges from 1.4% in Florida to 9.9% in Alaska. SAD was formally described and named in 1984 by Norman E. Rosenthal and colleagues at the National Institute of Mental Health. History SAD was first systematically reported and named in the early 1980s by Norman E. Rosenthal and his colleagues at the National Institute of Mental Health (NIMH). The initial investigation was motivated by observations of depression occurring during the dark winter months in northern regions of the United States, known as polar night. Rosenthal proposed that the reduction in available natural light during winter could contribute to this phenomenon. Subsequently, he and his colleagues conducted a placebo-controlled study that utilized light therapy to document the effects of the condition.
A paper based on Rosenthal's research was published in 1984. Although Rosenthal's ideas were initially greeted with skepticism, SAD has become well recognized, and his 1993 book Winter Blues has become the standard introduction to the subject. Research on SAD in the United States began in 1979 when Herb Kern, a research engineer, noticed he felt depressed during the winter months. Kern suspected that scarcer natural light in winter was the cause and discussed the idea with NIMH scientists working on bodily rhythms. They were intrigued and responded by inventing a lightbox to treat Kern's depression, which improved. Signs and symptoms SAD is a type of major depressive disorder, and those with the condition may exhibit any of the associated symptoms, such as feelings of hopelessness and worthlessness, thoughts of suicide, loss of interest in activities, withdrawal from social interaction, sleep and appetite problems, difficulty with concentrating and making decisions, decreased libido, a lack of energy, or agitation. Symptoms of winter SAD often include falling asleep earlier or in less than 5 minutes in the evening, oversleeping or difficulty waking up in the morning, nausea, and a tendency to overeat, often with a craving for carbohydrates, which leads to weight gain. SAD is typically associated with winter depression, but springtime lethargy or other seasonal mood patterns are not uncommon. Although each individual case is different, in contrast to winter SAD, people who experience spring and summer depression may be more likely to show symptoms such as insomnia, decreased appetite and weight loss, and agitation or anxiety. Bipolar disorder With seasonal pattern is a specifier for bipolar and related disorders, including bipolar I disorder and bipolar II disorder. Most people with SAD experience major depressive disorder, but as many as 20% may have a bipolar disorder. 
It is important to distinguish between diagnoses because there are important treatment differences. In these cases, people who have the With seasonal pattern specifier may experience a depressive episode either due to major depressive disorder or as part of bipolar disorder during the winter and remit in the summer. Around 25% of patients with bipolar disorder may present with a depressive seasonal pattern, which is associated with bipolar II disorder, rapid cycling, eating disorders, and more depressive episodes. The sexes display distinct clinical characteristics associated with seasonal pattern: males present more often with bipolar II disorder and a higher number of depressive episodes, while females present more often with rapid cycling and eating disorders. ADHD A study by the National Institutes of Health published findings in 2016 that concluded, "seasonal and circadian rhythm disturbances are significantly associated with ADHD symptoms." Participants in the study who had ADHD were three times more likely to have SAD symptoms (9.9% vs 3.3%), and about 2.7 times more likely to have s-SAD symptoms (12.5% vs 4.6%). Cause In many species, activity is diminished during the winter months in response to the reduction in available food, the reduction of sunlight (especially for diurnal animals), and the difficulties of surviving in cold weather. Hibernation is an extreme example, but even species that do not hibernate often exhibit changes in behavior during the winter. Various proximate causes have been proposed. One possibility is that SAD is related to a lack of serotonin, and serotonin polymorphisms could play a role in SAD, although this has been disputed. Mice incapable of turning serotonin into N-acetylserotonin (by serotonin N-acetyltransferase) appear to express "depression-like" behavior, and antidepressants such as fluoxetine increase the amount of the enzyme serotonin N-acetyltransferase, resulting in an antidepressant-like effect.
Another theory is that the cause may be related to melatonin, which is produced in dim light and darkness by the pineal gland, since there are direct connections, via the retinohypothalamic tract and the suprachiasmatic nucleus, between the retina and the pineal gland. Melatonin secretion is controlled by the endogenous circadian clock, but can also be suppressed by bright light. One study looked at whether some people could be predisposed to SAD based on personality traits. Certain personality traits (higher levels of neuroticism, agreeableness, and openness) and an avoidance-oriented coping style appeared to be common in those with SAD. Pathophysiology Seasonal mood variations are believed to be related to light. An argument for this view is the effectiveness of bright-light therapy. SAD is measurably present at latitudes in the Arctic region, such as northern Finland (64°00′N), where the rate of SAD is 9.5%. Cloud cover may contribute to the negative effects of SAD. There is evidence that many patients with SAD have a delay in their circadian rhythm, and that bright light treatment corrects these delays, which may be responsible for the improvement in patients. Its symptoms mimic those of dysthymia or even major depressive disorder. There is also a potential risk of suicide in some patients experiencing SAD. One study reports that 6–35% of people with the condition required hospitalization during one period of illness. At times, patients may not feel depressed, but rather lack the energy to perform everyday activities. Subsyndromal seasonal affective disorder (s-SAD or SSAD) is a milder form of SAD experienced by an estimated 14.3% (vs. 6.1% SAD) of the U.S. population. The blue feeling experienced by both those with SAD and those with SSAD can usually be dampened or extinguished by exercise and increased outdoor activity, particularly on sunny days, resulting in increased solar exposure.
Connections between human mood, as well as energy levels, and the seasons are well documented, even in healthy individuals. Diagnosis According to the American Psychiatric Association DSM-IV criteria, Seasonal Affective Disorder is not regarded as a separate disorder. It is called a "course specifier" and may be applied as an added description to the pattern of major depressive episodes in patients with major depressive disorder or patients with bipolar disorder. The "Seasonal Pattern Specifier" must meet four criteria: depressive episodes at a particular time of the year; remissions or mania/hypomania at a characteristic time of year; these patterns must have lasted two years with no nonseasonal major depressive episodes during that same period; and these seasonal depressive episodes outnumber other depressive episodes throughout the patient's lifetime. The Mayo Clinic describes three types of SAD, each with its own set of symptoms. Management Treatments for classic (winter-based) seasonal affective disorder include light therapy, medication, ionized-air administration, cognitive-behavioral therapy, and carefully timed supplementation of the hormone melatonin. Light therapy Photoperiod-related alterations of the duration of melatonin secretion may affect the seasonal mood cycles of SAD. This suggests that light therapy may be an effective treatment for SAD. Light therapy uses a lightbox, which emits far more lumens than a customary incandescent lamp. Bright white "full spectrum" light at 10,000 lux, blue light at a wavelength of 480 nm at 2,500 lux or green (actually cyan or blue-green) light at a wavelength of 500 nm at 350 lux are used, with the first-mentioned historically preferred. Bright light therapy is effective with the patient sitting a prescribed distance, commonly 30–60 cm, in front of the box with their eyes open, but not staring at the light source, for 30–60 minutes. 
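Manufacturers typically rate a lightbox's illuminance at a specified distance, which is why the prescribed sitting distance above matters. A rough inverse-square estimate; this is a deliberate simplification, since a lightbox is an extended source and real falloff with distance is somewhat slower than a point-source model predicts:

```python
def illuminance_at(distance_cm, rated_lux=10_000, rated_distance_cm=30):
    """Point-source inverse-square estimate of illuminance at a new distance.

    Assumes the box is rated at `rated_lux` when measured at `rated_distance_cm`;
    these defaults are illustrative, matching the 10,000 lux figure in the text.
    """
    return rated_lux * (rated_distance_cm / distance_cm) ** 2

print(illuminance_at(30))   # 10000.0 lux at the rated distance
print(illuminance_at(60))   # 2500.0 lux: doubling the distance quarters the estimate
```

Even as a rough model, this shows why sitting at the far end of the common 30–60 cm range can cut the delivered illuminance by a large factor.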
A study published in May 2010 suggests that the blue light often used for SAD treatment should perhaps be replaced by green or white illumination. Discovering the best schedule is essential. One study has shown that up to 69% of patients find lightbox treatment inconvenient, and as many as 19% stop use because of this. Dawn simulation has also proven to be effective; in some studies, there is an 83% better response when compared to other bright light therapy. In a study that also included negative air ionization, bright light was shown to be 57% effective versus 50% for dawn simulation. Patients using light therapy can experience improvement during the first week, but results improve further when treatment is continued for several weeks. Certain symptoms like hypersomnia, early insomnia, social withdrawal, and anxiety resolve more rapidly with light therapy than with cognitive behavioral therapy. Most studies have found it effective not as a year-round treatment, but rather as a seasonal treatment lasting for several weeks, until frequent light exposure is naturally obtained. Light therapy can also consist of exposure to sunlight, either by spending more time outside or using a computer-controlled heliostat to reflect sunlight into the windows of a home or office. Although light therapy is the leading treatment for seasonal affective disorder, prolonged direct sunlight or artificial lights that do not block the ultraviolet range should be avoided, due to the threat of skin cancer. The evidence base for light therapy as a preventive treatment for seasonal affective disorder is limited. The decision to use light therapy to treat people with a history of winter depression before depressive symptoms begin should be based on a person's preference of treatment. Medication SSRI (selective serotonin reuptake inhibitor) antidepressants have proven effective in treating SAD. Effective antidepressants include fluoxetine, sertraline, and paroxetine. 
Both fluoxetine and light therapy are 67% effective in treating SAD, according to direct head-to-head trials conducted during the 2006 Can-SAD study. Subjects using the light therapy protocol showed earlier clinical improvement, generally within one week of beginning the clinical treatment. Bupropion extended-release has been shown to prevent SAD for one in four people, but has not been compared directly to other preventive options in trials. In a 2021 updated Cochrane review of second-generation antidepressant medications for the treatment of SAD, a definitive conclusion could not be drawn, due to lack of evidence, and the need for larger randomized controlled trials. Modafinil may be an effective and well-tolerated treatment in patients with seasonal affective disorder/winter depression. Another explanation is that vitamin D levels are too low when people do not get enough ultraviolet-B on their skin. An alternative to using bright lights is to take vitamin D supplements. However, studies did not show a link between vitamin D levels and depressive symptoms in elderly Chinese people, nor among elderly British women given only 800 IU when 6,000 IU is needed. 5-HTP (an amino acid that helps to produce serotonin, and is often used to help those with depression) has also been suggested as a supplement that may help treat the symptoms of SAD, by lifting mood, and regulating sleep schedule for those with the condition. However, those who take antidepressants are not advised to take 5-HTP, as antidepressant medications may combine with the supplement to create dangerously high levels of serotonin – potentially resulting in serotonin syndrome. Other treatments Depending upon the patient, one treatment (e.g., lightbox) may be used in conjunction with another (e.g., medication). Negative air ionization, which involves releasing charged particles into the sleep environment, has been found effective, with a 47.9% improvement if the negative ions are in sufficient density (quantity). 
Physical exercise has been shown to be an effective form of depression therapy, particularly when added to another form of treatment for SAD. One particular study noted marked effectiveness in treating depressive symptoms when regular exercise was combined with bright light therapy. Patients who added exercise to their treatments, in 20-minute intervals on an aerobic bike during the day along with the same amount of time under the UV light, were seen to make a quick recovery. Of all the psychological therapies aimed at the prevention of SAD, cognitive behavioral therapy, typically involving thought records, activity schedules and a positive data log, has been the subject of the most empirical work. However, evidence for cognitive behavioral therapy or any of the psychological therapies aimed at preventing SAD remains inconclusive. Epidemiology Nordic countries Winter depression is a common slump in the mood of some inhabitants of most of the Nordic countries. Iceland, however, seems to be an exception. A study of more than 2000 people there found the prevalence of seasonal affective disorder and seasonal changes in anxiety and depression to be unexpectedly low in both sexes. The study's authors suggested that propensity for SAD may differ due to some genetic factor within the Icelandic population. A study of Canadians of wholly Icelandic descent also showed low levels of SAD. It has more recently been suggested that this may be attributed to the large amount of fish traditionally eaten by Icelandic people (in 2007, about 90 kilograms of fish per person per year in Iceland, as opposed to about 24 kilograms in the US and Canada) rather than to genetic predisposition; a similar anomaly is noted in Japan, where annual fish consumption in recent years averages about 60 kilograms per capita. Fish are high in vitamin D. Fish also contain docosahexaenoic acid (DHA), which helps with a variety of neurological dysfunctions. 
Other countries In the United States, a diagnosis of seasonal affective disorder was first proposed by Norman E. Rosenthal, M.D. in 1984. Rosenthal wondered why he became sluggish during the winter after moving from sunny South Africa to (cloudy in winter) New York. He started experimenting with increasing exposure to artificial light, and found this made a difference. In Alaska it has been established that there is a SAD rate of 8.9%, and an even greater rate of 24.9% for subsyndromal SAD. Around 20% of Irish people are affected by SAD, according to a survey conducted in 2007. The survey also shows women are more likely to be affected by SAD than men. An estimated 3% of the population in the Netherlands experience winter SAD.
Biology and health sciences
Mental disorders
Health
66926
https://en.wikipedia.org/wiki/Demodulation
Demodulation
Demodulation is the process of extracting the original information-bearing signal from a carrier wave. A demodulator is an electronic circuit (or computer program in a software-defined radio) that is used to recover the information content from the modulated carrier wave. There are many types of modulation, and there are many types of demodulators. The signal output from a demodulator may represent sound (an analog audio signal), images (an analog video signal) or binary data (a digital signal). These terms are traditionally used in connection with radio receivers, but many other systems use many kinds of demodulators. For example, in a modem, which is a contraction of the terms modulator/demodulator, a demodulator is used to extract a serial digital data stream from a carrier signal which is used to carry it through a telephone line, coaxial cable, or optical fiber. History Demodulation was first used in radio receivers. In the wireless telegraphy radio systems used during the first three decades of radio (1884–1914) the transmitter did not communicate audio (sound) but transmitted information in the form of pulses of radio waves that represented text messages in Morse code. Therefore, the receiver merely had to detect the presence or absence of the radio signal, and produce a click sound. The device that did this was called a detector. The term detector stuck; it was used for other types of demodulators and continues to be used to the present day for a demodulator in a radio receiver. The first detectors were coherers, simple devices that acted as a switch. The first type of modulation used to transmit sound over radio waves was amplitude modulation (AM), invented by Reginald Fessenden around 1900. An AM radio signal can be demodulated by rectifying it to remove one side of the carrier, and then filtering to remove the radio-frequency component, leaving only the modulating audio component. This is equivalent to peak detection with a suitably long time constant. 
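The rectify-and-filter operation just described can be written out symbolically. This uses a generic AM signal model; the symbols are illustrative, not taken from any particular source:

```latex
\begin{aligned}
s(t) &= A_c\,[1 + m(t)]\cos(2\pi f_c t), \qquad |m(t)| \le 1,\\
s_{+}(t) &= \max\{s(t),\,0\} \qquad \text{(half-wave rectification)},\\
\text{low-pass}\{s_{+}\}(t) &\approx \frac{A_c}{\pi}\,[1 + m(t)].
\end{aligned}
```

A series capacitor (DC block) then removes the constant term, leaving a signal proportional to the audio m(t).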
The amplitude of the recovered audio frequency varies with the modulating audio signal, so it can drive an earphone or an audio amplifier. Fessenden invented the first AM demodulator in 1904, called the electrolytic detector, consisting of a short needle dipping into a cup of dilute acid. The same year John Ambrose Fleming invented the Fleming valve or thermionic diode which could also rectify an AM signal. Techniques There are several methods of demodulation, depending on which parameters of the baseband signal, such as amplitude, frequency or phase, are transmitted in the carrier signal. For example, for a signal modulated with a linear modulation like amplitude modulation (AM), we can use a synchronous detector. On the other hand, for a signal modulated with an angular modulation, we must use a frequency modulation (FM) demodulator or a phase modulation (PM) demodulator. Different kinds of circuits perform these functions. Many techniques such as carrier recovery, clock recovery, bit slip, frame synchronization, rake receiver, pulse compression, Received Signal Strength Indication, error detection and correction, etc., are only performed by demodulators, although any specific demodulator may perform only some or none of these techniques. Many components can act as a demodulator if they pass the radio signal through a nonlinearity. AM radio An AM signal encodes the information into the carrier wave by varying its amplitude in direct sympathy with the analogue signal to be sent. There are two methods used to demodulate AM signals: The envelope detector is a very simple method of demodulation that does not require a coherent demodulator. It consists of a rectifier (anything that will pass current in one direction only) or other non-linear component that enhances one half of the received signal over the other and a low-pass filter. The rectifier may be in the form of a single diode or may be more complex. 
Many natural substances exhibit this rectification behaviour, which is why it was the earliest modulation and demodulation technique used in radio. The filter is usually an RC low-pass type but the filter function can sometimes be achieved by relying on the limited frequency response of the circuitry following the rectifier. The crystal set exploits the simplicity of AM modulation to produce a receiver with very few parts, using the crystal as the rectifier and the limited frequency response of the headphones as the filter. The product detector multiplies the incoming signal by the signal of a local oscillator with the same frequency and phase as the carrier of the incoming signal. After filtering, the original audio signal will result. SSB is a form of AM in which the carrier is reduced or suppressed entirely, which requires coherent demodulation. For further reading, see sideband. FM radio Frequency modulation (FM) has numerous advantages over AM such as better fidelity and noise immunity. However, it is much more complex to both modulate and demodulate a carrier wave with FM, and AM predates it by several decades. There are several common types of FM demodulators: The quadrature detector, which phase shifts the signal by 90 degrees and multiplies it with the unshifted version. One of the terms that drops out from this operation is the original information signal, which is selected and amplified. The phase-locked loop detector, in which the signal is fed into a PLL and the error signal is used as the demodulated signal. The most common is a Foster–Seeley discriminator. This is composed of an electronic filter which decreases the amplitude of some frequencies relative to others, followed by an AM demodulator. If the filter response changes linearly with frequency, the final analog output will be proportional to the input frequency, as desired. 
A variant of the Foster–Seeley discriminator called the ratio detector. Another method uses two AM demodulators, one tuned to the high end of the band and the other to the low end, and feeds the outputs into a difference amplifier. Using a digital signal processor, as in software-defined radio. PM QAM QAM demodulation requires a coherent receiver. It uses two product detectors whose local reference signals are a quarter cycle apart in phase: one for the in-phase component and one for the quadrature component. The demodulator keeps these product detectors tuned to a continuous or intermittent pilot signal.
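The AM detectors described above, and the in-phase/quadrature (I/Q) arrangement of two product detectors a quarter cycle apart used for FM and QAM, can be sketched numerically. This is a toy illustration, not a real receiver design; the sample rate, carrier frequency, filter, and deviation values are all invented:

```python
import numpy as np

# Toy parameters -- all values invented for illustration.
fs = 100_000                                  # sample rate, Hz
fc = 10_000                                   # carrier frequency, Hz
t = np.arange(0, 0.1, 1 / fs)
msg = 0.5 * np.sin(2 * np.pi * 100 * t)       # 100 Hz baseband tone

def lowpass(x, alpha=0.1):
    """Two cascaded one-pole IIR stages: a crude stand-in for the RC filter."""
    y = np.asarray(x, dtype=float).copy()
    for _ in range(2):
        acc = 0.0
        for i, v in enumerate(y):
            acc += alpha * (v - acc)
            y[i] = acc
    return y

# Envelope detector: rectify (here full-wave, via abs), then smooth.
am = (1 + msg) * np.cos(2 * np.pi * fc * t)
envelope = lowpass(np.abs(am))                # proportional to 1 + msg

# Product detector: multiply by a synchronous local oscillator, then filter.
product = lowpass(2 * am * np.cos(2 * np.pi * fc * t))   # ~ 1 + msg

# I/Q demodulation of an FM signal: two product detectors a quarter cycle
# apart recover the in-phase and quadrature components; the instantaneous
# frequency is the derivative of the recovered phase.
dev = 300.0                                   # peak frequency deviation, Hz
phase = 2 * np.pi * np.cumsum(dev * np.sin(2 * np.pi * 100 * t)) / fs
fm = np.cos(2 * np.pi * fc * t + phase)
i_bb = lowpass(2 * fm * np.cos(2 * np.pi * fc * t))      # in-phase
q_bb = lowpass(-2 * fm * np.sin(2 * np.pi * fc * t))     # quadrature
inst_freq = np.diff(np.unwrap(np.arctan2(q_bb, i_bb))) * fs / (2 * np.pi)
recovered = lowpass(inst_freq)                # ~ dev * sin(2*pi*100*t)
```

Once the initial filter transients settle, `envelope` and `product` track 1 + msg, and `recovered` tracks the modulating tone scaled by the deviation.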
Technology
Telecommunications
66962
https://en.wikipedia.org/wiki/Typha
Typha
Typha is a genus of about 30 species of monocotyledonous flowering plants in the family Typhaceae. These plants have a variety of common names, in British English as bulrush or (mainly historically) reedmace, in American English as cattail, or punks, in Australia as cumbungi or bulrush, in Canada as bulrush or cattail, and in New Zealand as reed, cattail, bulrush or raupo. Other taxa of plants may be known as bulrush, including some sedges in Scirpus and related genera. The genus is largely distributed in the Northern Hemisphere, where it is found in a variety of wetland habitats. The rhizomes are edible, though at least some species are known to accumulate toxins and so must first undergo treatment before being eaten. Evidence of preserved starch grains on grinding stones suggests they were already eaten in Europe 30,000 years ago. Description Typha are aquatic or semi-aquatic, rhizomatous, herbaceous perennial plants. The leaves are glabrous (hairless), linear, alternate and mostly basal on a simple, jointless stem that bears the flowering spikes. The plants are monoecious, with unisexual flowers that develop in dense racemes. The numerous male flowers form a narrow spike at the top of the vertical stem. Each male (staminate) flower is reduced to a pair of stamens and hairs, and withers once the pollen is shed. Large numbers of tiny female flowers form a dense, sausage-shaped spike on the stem below the male spike. In larger species this can be up to long and thick. The seeds are minute, long, and attached to fine hairs. When ripe, the heads disintegrate into a cottony fluff from which the seeds disperse by wind. Fruits of Typha have been found as long ago as 69 mya in modern Central Europe. General ecology Typha are often among the first wetland plants to colonize areas of newly exposed wet mud, with their abundant wind-dispersed seeds. Buried seeds can survive in the soil for long periods of time. 
They germinate best with sunlight and fluctuating temperatures, which is typical of many wetland plants that regenerate on mud flats. The plants also spread by rhizomes, forming large, interconnected stands. Typha are considered to be dominant competitors in wetlands in many areas, and they often exclude other plants with their dense canopy. In the bays of the Great Lakes, for example, they are among the most abundant wetland plants. Different species of cattails are adapted to different water depths. Well-developed aerenchyma make the plants tolerant of submersion. Even the dead stalks are capable of transmitting oxygen to the rooting zone. Although Typha are native wetland plants, they can be aggressive in their competition with other native species. They have been problematic in many regions in North America, from the Great Lakes to the Everglades. Native sedges are displaced and wet meadows shrink, likely as a response to altered hydrology of the wetlands and increased nutrient levels. An introduced or hybrid species may be contributing to the problem. Control is difficult. The most successful strategy appears to be mowing or burning to remove the aerenchymous stalks, followed by prolonged flooding. It may be more important to prevent invasion by preserving water level fluctuations, including periods of drought, and to maintain infertile conditions. Typha are frequently eaten by wetland mammals such as muskrats, which also use them to construct feeding platforms and dens, thereby also providing nesting and resting places for waterfowl. Accepted species and natural hybrids The following species and hybrids are currently accepted: The most widespread species is Typha latifolia, which is distributed across the entire temperate northern hemisphere. It has also been introduced to Australia. T. angustifolia is nearly as widespread, but does not extend as far north; it may be introduced and invasive in North America. T. 
domingensis has a more southern American distribution, and it occurs in Australia. T. orientalis is widespread in Asia, Australia, and New Zealand. T. laxmannii, T. minima, and T. shuttleworthii are largely restricted to Asia and southern Europe. Uses Culinary Many parts of the Typha plant are edible to humans. Before the plant flowers, the tender inside of the shoots can be squeezed out and eaten raw or cooked. The starchy rhizomes are nutritious with a protein content comparable to that of maize or rice. They can be processed into a flour with 266 kcal per 100 grams, and are most often harvested from late autumn to early spring. They are fibrous, and the starch must be scraped or sucked from the tough fibers. Baby shoots emerging from the rhizomes, which are sometimes subterranean, can be picked and eaten raw. Also underground is a carbohydrate lump which can be peeled and eaten raw or cooked like a potato. The plant is one championed by survival experts because various parts can be eaten throughout the year. Plants growing in polluted water can accumulate lead and pesticide residues in their rhizomes, and these should not be eaten. The rind of young stems can be peeled off, and the tender white heart inside can be eaten raw or boiled and eaten like asparagus. This food has been popular among the Cossacks in Russia, and has been called "Cossack asparagus". The leaf bases can be eaten raw or cooked, especially in late spring when they are young and tender. In early summer the sheath can be removed from the developing green flower spike, which can then be boiled and eaten like corn on the cob. In mid-summer when the male flowers are mature, the pollen can be collected and used as a flour supplement or thickener; the Māori of New Zealand have a special bread called pungapunga made from the pollen of T. orientalis. Agriculture The seeds have a high linoleic acid content and can be used to feed cattle and chickens. 
They can also be found in African countries like Ghana. Harvesting cattail removes nutrients from the wetland that would otherwise return via the decomposition of decaying plant matter. Floating mats of cattails remove nutrients from eutrophied bodies of freshwater. Building material For local native tribes around Lake Titicaca in Peru and Bolivia, Typha were among the most important plants and every part of the plant had multiple uses. For example, they were used to construct rafts and other boats. During World War II, the United States Navy used the down of Typha as a substitute for kapok in life vests and aviation jackets. Tests showed that even after 100 hours of submersion, the buoyancy was still effective. Typha are used as thermal insulation in buildings as an organic alternative to conventional insulating materials such as glass wool or stone wool. Paper Typha stems and leaves can be used to make paper. It is strong with a heavy texture and it is hard to bleach, so it is not suitable for industrial production of graphical paper. In 1853, considerable amounts of cattail paper were produced in New York, due to a shortage of raw materials. In 1948, French scientists tested methods for annual harvesting of the leaves. Because of the high cost, these methods were abandoned and no further research was done. Today Typha is used to make decorative paper. Fiber Fibers up to 4 meters long can be obtained from the stems when they are treated mechanically or chemically with sodium hydroxide. The stem fibers resemble jute and can be used to produce raw textiles. The leaf fibers can be used as an alternative to cotton and linen in clothing. The yield of leaf fiber is 30 to 40 percent and Typha glauca can produce 7 to 10 tons per hectare annually. Biofuel Typha can be used as a source of starch to produce ethanol. Because of their high productivity in northern latitudes, Typha are considered to be a bioenergy crop. 
Other The seed hairs were used by some indigenous peoples of the Americas as tinder for starting fires. Some tribes also used Typha down to line moccasins, and for bedding, diapers, baby powder, and cradleboards. One Native American word for Typha meant "fruit for papoose's bed". Typha down is still used in some areas to stuff clothing items and pillows. Typha can be dipped in wax or fat and then lit as a candle, the stem serving as a wick. Without the use of wax or fat it will smolder slowly, somewhat like incense, and may repel insects. The flower stalks can be made into chopsticks. The leaves can be treated to weave into baskets, mats, or sandals. The rushes are harvested and the leaves often dried for later use in chair seats. Re-wetted, the leaves are twisted and wrapped around the chair rungs to form a densely woven seat that is then stuffed (usually with the leftover rush). Small-scale experiments have indicated that Typha are able to remove arsenic from drinking water. The boiled rootstocks have been used as a diuretic for increasing urination, or mashed to make a jelly-like paste for sores, boils, wounds, burns, scabs, and smallpox pustules. Cattail pollen is used as a banker source of food for predatory insects and mites (such as Amblyseius swirskii) in greenhouses. The cattail, or, as it is commonly referred to in the American Midwest, the sausage tail, has been the subject of multiple artist renditions, gaining popularity in the mid-twentieth century. The term sausage tail derives from the similarity that cattails have with sausages, a name given to the plant by the Midwest Polish community who had noticed a striking similarity between the plant and a common Polish dish, kiełbasa.
Biology and health sciences
Poales
66966
https://en.wikipedia.org/wiki/Vascular%20plant
Vascular plant
Vascular plants (), also called tracheophytes (, ) or collectively tracheophyta (; ), are plants that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. The group includes most land plants ( accepted known species) other than mosses. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). They are contrasted with nonvascular plants such as mosses and green algae. Scientific names for the vascular plants group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones. Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific. Characteristics Botanists define vascular plants by three primary characteristics: Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes. In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). 
(By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid, with one set of chromosomes per cell.) Vascular plants have true roots, leaves, and stems, even if some groups have secondarily lost one or more of these traits. Cavalier-Smith (1998) treated the Tracheophyta as a phylum or botanical division encompassing two of these characteristics defined by the Latin phrase "facies diploida xylem et phloem instructa" (diploid phase with xylem and phloem). One possible mechanism for the presumed evolution from emphasis on haploid generation to emphasis on diploid generation is the greater efficiency in spore dispersal with more complex diploid structures. Elaboration of the spore stalk enabled the production of more spores and the development of the ability to release them higher and to broadcast them further. Such developments may include more photosynthetic area for the spore-bearing structure, the ability to grow independent roots, woody structure for support, and more branching. Phylogeny A proposed phylogeny of the vascular plants after Kenrick and Crane 1997 is as follows, with modification to the gymnosperms from Christenhusz et al. (2011a), Pteridophyta from Smith et al. and lycophytes and ferns by Christenhusz et al. (2011b). The cladogram distinguishes the rhyniophytes from the "true" tracheophytes, the eutracheophytes. This phylogeny is supported by several molecular studies. Other researchers state that taking fossils into account leads to different conclusions, for example that the ferns (Pteridophyta) are not monophyletic. Hao and Xue presented an alternative phylogeny in 2013 for pre-euphyllophyte plants. Nutrient distribution Water and nutrients in the form of inorganic solutes are drawn up from the soil by the roots and transported throughout the plant by the xylem. Organic compounds such as sucrose produced by photosynthesis in leaves are distributed by the phloem sieve-tube elements. 
The xylem consists of vessels in flowering plants and of tracheids in other vascular plants. Xylem cells are dead, hard-walled hollow cells arranged to form files of tubes that function in water transport. A tracheid cell wall usually contains the polymer lignin. The phloem, on the other hand, consists of living cells called sieve-tube members. Between the sieve-tube members are sieve plates, which have pores to allow molecules to pass through. Sieve-tube members lack such organelles as nuclei and ribosomes, but cells next to them, the companion cells, function to keep the sieve-tube members alive. Transpiration The most abundant compound in all plants, as in all cellular organisms, is water, which has an important structural role and a vital role in plant metabolism. Transpiration is the main process of water movement within plant tissues. Plants constantly transpire water through their stomata to the atmosphere and replace that water with soil moisture taken up by their roots. When the stomata are closed at night, water pressure can build up in the plant. Excess water is excreted through pores known as hydathodes. The movement of water out of the leaf stomata sets up transpiration pull or tension in the water column in the xylem vessels or tracheids. The pull is the result of water surface tension within the cell walls of the mesophyll cells, from the surfaces of which evaporation takes place when the stomata are open. Hydrogen bonds exist between water molecules, causing them to line up; as the molecules at the top of the plant evaporate, each pulls the next one up to replace it, which in turn pulls on the next one in line. The draw of water upwards may be entirely passive and can be assisted by the movement of water into the roots via osmosis. Consequently, transpiration requires the plant to expend very little energy on water movement. Transpiration assists the plant in absorbing nutrients from the soil as soluble salts. 
Transpiration plays an important role in the absorption of nutrients from the soil as soluble salts are transported along with the water from the soil to the leaves. Plants can adjust their transpiration rate to optimize the balance between water loss and nutrient absorption. Absorption Living root cells passively absorb water via osmosis. Pressure within the root increases when transpiration demand is low and decreases when water demand is high. No water movement towards the shoots and leaves occurs when evapotranspiration is absent. This condition is associated with high temperature, high humidity, darkness, and drought. Conduction Xylem is the water-conducting tissue, and the secondary xylem provides the raw material for the forest products industry. Xylem and phloem tissues each play a part in the conduction processes within plants. Sugars are conveyed throughout the plant in the phloem; water and other nutrients pass through the xylem. Conduction occurs from a source to a sink for each separate nutrient. Sugars are produced in the leaves (a source) by photosynthesis and transported to the growing shoots and roots (sinks) for use in growth, cellular respiration or storage. Minerals are absorbed in the roots (a source) and transported to the shoots to allow cell division and growth.
Biology and health sciences
Vascular plants (except seed plants)
Plants
66973
https://en.wikipedia.org/wiki/Lycopodiopsida
Lycopodiopsida
Lycopodiopsida is a class of vascular plants also known as lycopsids, lycopods, or lycophytes. Members of the class are also called clubmosses, firmosses, spikemosses and quillworts. They have dichotomously branching stems bearing simple leaves called microphylls and reproduce by means of spores borne in sporangia on the sides of the stems at the bases of the leaves. Although living species are small, during the Carboniferous, extinct tree-like forms (Lepidodendrales) formed huge forests that dominated the landscape and contributed to coal deposits. The nomenclature and classification of plants with microphylls varies substantially among authors. A consensus classification for extant (living) species was produced in 2016 by the Pteridophyte Phylogeny Group (PPG I), which places them all in the class Lycopodiopsida, which includes the classes Isoetopsida and Selaginellopsida used in other systems. (See Table 2.) Alternative classification systems have used ranks from division (phylum) to subclass. In the PPG I system, the class is divided into three orders, Lycopodiales, Isoetales and Selaginellales. Characteristics Club-mosses (Lycopodiales) are homosporous, but the genera Selaginella (spikemosses) and Isoetes (quillworts) are heterosporous, with female spores larger than the male. As a result of fertilisation, the female gametophyte produces sporophytes. A few species of Selaginella such as S. apoda and S. rupestris are also viviparous; the gametophyte develops on the mother plant, and only when the sporophyte's primary shoot and root are developed enough for independence is the new plant dropped to the ground. Many club-moss gametophytes are mycoheterotrophic and long-lived, residing underground for several years before emerging from the ground and progressing to the sporophyte stage. Lycopodiaceae and spikemosses (Selaginella) are the only vascular plants with biflagellate sperm, an ancestral trait in land plants otherwise only seen in bryophytes. 
The only exceptions are Isoetes and Phylloglossum, which have independently evolved multiflagellated sperm cells with approximately 20 flagella (sperm of other vascular plants can bear up to roughly a thousand flagella, though the number is generally much lower, and flagella are completely absent in seed plants except for Ginkgo and cycads). Because having only two flagella places a size limit on the sperm, and hence on the genome it carries, the largest known genomes in the clade are found in Isoetes, whose multiflagellated sperm are not exposed to the same size-related selection pressure as biflagellate sperm. Taxonomy Phylogeny The extant lycophytes are vascular plants (tracheophytes) with microphyllous leaves, distinguishing them from the euphyllophytes (plants with megaphyllous leaves). The sister group of the extant lycophytes and their closest extinct relatives is generally believed to be the zosterophylls, a paraphyletic or plesion group. Ignoring some smaller extinct taxa, the evolutionary relationships are as shown below. There has been broad agreement, supported by both molecular and morphological evidence, that the extant lycophytes fall into three groups, treated as orders in PPG I, and that these, both together and individually, are monophyletic, being related as shown in the cladogram below: Classification The rank and name used for the taxon holding the extant lycophytes (and their closest extinct relatives) varies widely. Table 1 below shows some of the highest ranks that have been used. Systems may use taxa at a rank lower than the highest given in the table with the same circumscription; for example, a system that uses Lycopodiophyta as the highest ranked taxon may place all of its members in a single subclass. Some systems use a higher rank for a more broadly defined taxon of lycophytes that includes some extinct groups more distantly related to extant lycophytes, such as the zosterophylls.
For example, Kenrick & Crane (1997) use the subdivision Lycophytina for this purpose, with all extant lycophytes falling within the class Lycopsida. Other sources exclude the zosterophylls from any "lycophyte" taxon. In the Pteridophyte Phylogeny Group classification of 2016 (PPG I), the three orders are placed in a single class, Lycopodiopsida, holding all extant lycophyte species. Older systems have used either three classes, one for each order, or two classes, recognizing the closer relationship between Isoetales and Selaginellales. In these cases, a higher ranked taxon is needed to contain the classes (see Table 1). As Table 2 shows, the names "Lycopodiopsida" and "Isoetopsida" are both ambiguous. Subdivisions The PPG I system divides up the extant lycophytes as shown below.
Class Lycopodiopsida Bartl. (3 orders)
Order Lycopodiales DC. ex Bercht. & J.Presl (1 extant family)
Family Lycopodiaceae P.Beauv. (16 extant genera)
Order Isoetales Prantl (1 extant family)
Family Isoetaceae Dumort. (1 extant genus)
Order Selaginellales Prantl (1 extant family)
Family Selaginellaceae Willk. (1 extant genus)
Some extinct groups, such as zosterophylls, fall outside the limits of the taxon as defined by the classifications in Table 1 above. However, other extinct groups fall within some circumscriptions of this taxon. Taylor et al. (2009) and Mauseth (2014) include a number of extinct orders in their division (phylum) Lycophyta, although they differ on the placement of some genera. The orders included by Taylor et al. are:
Order †Drepanophycales (including Baragwanathia, Drepanophycus and Asteroxylon)
Order †Protolepidodendrales
Order †Lepidodendrales
Order †Pleuromeiales
Mauseth uses the order †Asteroxylales, placing Baragwanathia in the Protolepidodendrales. The relationship between some of these extinct groups and the extant ones was investigated by Kenrick and Crane in 1997.
When the genera they used are assigned to orders, their suggested relationship is: Evolution The Lycopodiopsida are distinguished from other vascular plants by the possession of microphylls and by their sporangia, which are lateral as opposed to terminal and which open (dehisce) transversely rather than longitudinally. In some groups, the sporangia are borne on sporophylls that are clustered into strobili. Phylogenetic analysis shows the group branching off at the base of the evolution of vascular plants, and they have a long evolutionary history. Fossils are abundant worldwide, especially in coal deposits. Fossils that can be ascribed to the Lycopodiopsida first appear in the Silurian period, along with a number of other vascular plants. The Silurian Baragwanathia longifolia is one of the earliest identifiable species. Lycopodolica is another Silurian genus which appears to be an early member of this group. The group evolved roots independently from the rest of the vascular plants. From the Devonian onwards, some species grew large and tree-like. Devonian fossil lycopsids from Svalbard, growing in equatorial regions, raise the possibility that they drew down enough carbon dioxide to change the Earth's climate significantly. During the Carboniferous, tree-like plants (such as Lepidodendron, Sigillaria, and other extinct genera of the order Lepidodendrales) formed huge forests that dominated the landscape. Unlike modern trees, leaves grew out of the entire surface of the trunk and branches, but fell off as the plant grew, leaving only a small cluster of leaves at the top. The trunks of lycopsids such as Lepidodendron bore distinctive diamond-shaped scars where leaves had once been attached.
Quillworts (order Isoetales) and Selaginella are considered their closest extant relatives and share some unusual features with these fossil lycopods, including the development of bark, cambium and wood, a modified shoot system acting as roots, bipolar and secondary growth, and an upright stance. The remains of Lepidodendron lycopods formed many fossil coal deposits. In Fossil Grove, Victoria Park, Glasgow, Scotland, fossilized lycophytes can be found in sandstone. The Lycopodiopsida reached their maximum diversity in the Pennsylvanian (Upper Carboniferous), particularly the tree-like Lepidodendron and Sigillaria, which dominated tropical wetlands. The complex ecology of these tropical rainforests collapsed during the Middle Pennsylvanian due to a change in climate. In Euramerica, tree-like species apparently became extinct in the Late Pennsylvanian, as a result of a transition to a much drier climate, giving way to conifers, ferns and horsetails. In Cathaysia (now South China), tree-like species survived into the Permian. Nevertheless, lycopodiopsids are rare in the Lopingian (latest Permian), but regained dominance in the Induan (earliest Triassic), particularly Pleuromeia. After the worldwide Permian–Triassic extinction event, members of this group pioneered the repopulation of habitats as opportunistic plants. The heterogeneity of terrestrial plant communities increased markedly during the Middle Triassic, when plant groups like horsetails, ferns, pteridosperms, cycads, ginkgos and conifers resurfaced and diversified quickly. Microbial associations Lycophytes form associations with microbes such as fungi and bacteria, including arbuscular mycorrhizal and endophytic associations. Arbuscular mycorrhizal associations have been characterized in all stages of the lycophyte lifecycle: mycoheterotrophic gametophyte, photosynthetic surface-dwelling gametophyte, young sporophyte, and mature sporophyte. Arbuscular mycorrhizae have been found in Selaginella spp.
roots and vesicles. During the mycoheterotrophic gametophyte lifecycle stage, lycophytes gain all of their carbon from subterranean glomalean fungi. In other plant taxa, glomalean networks transfer carbon from neighboring plants to mycoheterotrophic gametophytes. Something similar could be occurring in Huperzia hypogeae gametophytes, which associate with the same glomalean phenotypes as nearby Huperzia hypogeae sporophytes. Fungal endophytes have been found in many species of lycophyte; however, the function of these endophytes in host plant biology is not known. Endophytes of other plant taxa perform roles such as improving plant competitive fitness, conferring biotic and abiotic stress tolerance, promoting plant growth through phytohormone production, or producing limiting nutrients. However, some endophytic fungi in lycophytes do produce medically relevant compounds. Shiraia sp. Slf14 is an endophytic fungus present in Huperzia serrata that produces huperzine A, a biomedical compound which has been approved as a drug in China and a dietary supplement in the U.S. to treat Alzheimer's disease. This fungal endophyte can be cultivated much more easily and on a much larger scale than H. serrata itself, which could increase the availability of huperzine A as a medicine. Uses The spores of lycopods are highly flammable and so have been used in fireworks. Lycopodium powder, the dried spores of the common clubmoss, was used in Victorian theater to produce flame effects. A blown cloud of spores burned rapidly and brightly, but with little heat. (It was considered safe by the standards of the time.)
Biology and health sciences
Lycophytes
Plants
66981
https://en.wikipedia.org/wiki/Epidemic
Epidemic
An epidemic (from Greek ἐπί epi "upon or above" and δῆμος demos "people") is the rapid spread of disease to a large number of hosts in a given population within a short period of time. For example, in meningococcal infections, an attack rate in excess of 15 cases per 100,000 people for two consecutive weeks is considered an epidemic. Epidemics of infectious disease are generally caused by several factors including a change in the ecology of the host population (e.g., increased stress or increase in the density of a vector species), a genetic change in the pathogen reservoir or the introduction of an emerging pathogen to a host population (by movement of pathogen or host). Generally, an epidemic occurs when host immunity to either an established pathogen or a newly emerging novel pathogen is suddenly reduced below that found in the endemic equilibrium and the transmission threshold is exceeded. An epidemic may be restricted to one location; however, if it spreads to other countries or continents and affects a substantial number of people, it may be termed a pandemic. The declaration of an epidemic usually requires a good understanding of a baseline rate of incidence; epidemics for certain diseases, such as influenza, are defined as a specified increase in incidence above this baseline. A few cases of a very rare disease may be classified as an epidemic, while many cases of a common disease (such as the common cold) would not. An epidemic can cause enormous damage through financial and economic losses in addition to impaired health and loss of life. Definition The United States Centers for Disease Control and Prevention defines epidemic broadly: "Epidemic refers to an increase, often sudden, in the number of cases of a disease above what is normally expected in that population in that area." The term "outbreak" can also apply, but is usually restricted to smaller events. Any sudden increase in disease prevalence may generally be termed an epidemic.
This may include contagious disease (i.e. easily spread between persons) such as influenza; vector-borne diseases such as malaria; water-borne diseases such as cholera; and sexually transmitted diseases such as HIV/AIDS. The term can also be used for non-communicable health issues such as obesity. The term epidemic derives from a word form attributed to Homer's Odyssey, which later took its medical meaning from the Epidemics, a treatise by Hippocrates. Before Hippocrates, the word and its variants had meanings similar to the current definitions of "indigenous" or "endemic". Thucydides' description of the Plague of Athens is considered one of the earliest accounts of a disease epidemic. By the early 17th century, the terms endemic and epidemic referred to contrasting conditions of population-level disease, with the endemic condition a "common sicknesse" and the epidemic "hapning in some region, or countrey, at a certaine time, ....... producing in all sorts of people, one and the same kind of sicknesse". The term "epidemic" is often applied to diseases in non-human animals, although "epizootic" is technically preferable. Causes There are several factors that may contribute (individually or in combination) to causing an epidemic. There may be changes in a pathogen, in the population that it can infect, in the environment, or in the interaction between all three. Factors include the following: Antigenic Change An antigen is a protein on the virus's surface that host antibodies can recognize and attack. Changes in the antigenic characteristics of the agent make it easier for the changed virus to spread throughout a previously immune population. There are two natural mechanisms for change - antigenic drift and antigenic shift. Antigenic drift arises over a period of time as an accumulation of mutations in the virus genes, possibly through a series of hosts, and eventually gives rise to a new strain of virus which can evade existing immunity.
Antigenic shift is abrupt - in this, two or more different strains of a virus, coinfecting a single host, combine to form a new subtype having a mixture of characteristics of the original strains. The best known and best documented example of both processes is influenza. SARS-CoV-2 has demonstrated antigenic drift and possibly shift as well. Drug resistance Antibiotic resistance applies specifically to bacteria that become resistant to antibiotics. Resistance in bacteria can arise naturally by genetic mutation, or by one species acquiring resistance from another through horizontal gene transfer. Extended use of antibiotics appears to encourage selection for mutations which can render antibiotics ineffective. This is especially true of tuberculosis, with increasing occurrence of multiple drug-resistant tuberculosis (MDR-TB) worldwide. Changes in transmission Pathogen transmission is a term used to describe the mechanisms by which a disease-causing agent (virus, bacterium, or parasite) spreads from one host to another. Common modes of transmission include:
- airborne (as with influenza and COVID-19)
- fecal-oral (as with cholera and typhoid)
- vector-borne (malaria, Zika)
- sexual (syphilis, HIV)
The first three of these require that the pathogen survive away from its host for a period of time; an evolutionary change which increases survival time will result in increased virulence. Another possibility, although rare, is that a pathogen may adapt to take advantage of a new mode of transmission. Seasonality Seasonal diseases arise due to changes in environmental conditions, especially humidity and temperature, during different seasons. Many diseases display seasonality. This may be due to one or more of the following underlying factors:
- The ability of the pathogen to survive outside the host - e.g. water-borne cholera, which becomes prevalent in tropical wet seasons, or influenza, which peaks in temperate regions during winter.
- The behaviour of people susceptible to the disease - such as spending more time in close contact indoors.
- Changes in immune function during winter - one possibility is a reduction in vitamin D, and another is the effect of cold on mucous membranes in the nose.
- Abundance of vectors such as mosquitoes.
Human behaviour Changes in behaviour can affect the likelihood or severity of epidemics. The classic example is the 1854 Broad Street cholera outbreak, in which a cholera outbreak was mitigated by removing a supply of contaminated water - an event now regarded as the foundation of the science of epidemiology. Urbanisation and overcrowding (e.g. in refugee camps) increase the likelihood of disease outbreaks. A factor which contributed to the initial rapid increase in the 2014 Ebola virus epidemic was ritual bathing of (infective) corpses; one of the control measures was an education campaign to change behaviour around funeral rites. Changes in the host population The level of immunity to a disease in a population - herd immunity - is at its peak after a disease outbreak or a vaccination campaign. In the following years, immunity will decline, both within individuals and in the population as a whole as older individuals die and new individuals are born. Eventually, unless there is another vaccination campaign, an outbreak or epidemic will recur. It is also possible for a disease which is endemic in one population to become epidemic if it is introduced into a novel setting where the host population is not immune. An example of this was the introduction of European diseases such as smallpox into indigenous populations during the 16th century. Zoonosis A zoonosis is an infectious disease of humans caused by a pathogen that can jump from a non-human host to a human. Major diseases such as Ebola virus disease and salmonellosis are zoonoses.
HIV was a zoonotic disease transmitted to humans in the early part of the 20th century, though it has now evolved into a separate human-only disease. Some strains of bird flu and swine flu are zoonoses; these viruses occasionally recombine with human strains of the flu and can cause pandemics such as the 1918 Spanish flu or the 2009 swine flu. Types Common source outbreak In a common source outbreak epidemic, the affected individuals had an exposure to a common agent. If the exposure is singular and all of the affected individuals develop the disease over a single exposure and incubation course, it can be termed a point source outbreak. If the exposure was continuous or variable, it can be termed a continuous outbreak or intermittent outbreak, respectively. Propagated outbreak In a propagated outbreak, the disease spreads person-to-person. Affected individuals may become independent reservoirs leading to further exposures. Many epidemics will have characteristics of both common source and propagated outbreaks (sometimes referred to as a mixed outbreak). For example, secondary person-to-person spread may occur after a common source exposure, or an environmental vector may spread a zoonotic disease agent. Preparation Preparations for an epidemic include having a disease surveillance system; the ability to quickly dispatch emergency workers, especially local-based emergency workers; and a legitimate way to guarantee the safety and health of health workers. Effective preparations for a response to a pandemic are multi-layered. The first layer is a disease surveillance system. Tanzania, for example, operates a national lab that runs testing for 200 health sites and tracks the spread of infectious diseases. The next layer is the actual response to an emergency. According to U.S.-based columnist Michael Gerson in 2015, only the U.S. military and NATO have the global capability to respond to such an emergency.
Still, despite the most extensive preparatory measures, a fast-spreading pandemic may easily exceed and overwhelm existing health-care resources. Consequently, early and aggressive mitigation efforts, aimed at so-called "flattening of the epidemic curve", need to be taken. Such measures usually consist of non-pharmacological interventions such as social/physical distancing, aggressive contact tracing, "stay-at-home" orders, and appropriate personal protective equipment (e.g., masks, gloves, and other physical barriers to spread). Moreover, India has taken significant strides in its efforts to prepare for future respiratory pandemics through the development of the National Pandemic Preparedness Plan for Respiratory Viruses using a multisectoral approach. Preceding this national effort, a regional workshop on the Preparedness and Resilience for Emerging Threats (PRET) initiative was organized by WHO's South-East Asia Regional Office on October 12-13, 2023. Recognizing that the same capacities and capabilities can be leveraged and applied for groups of pathogens based on their mode of transmission, the workshop aimed to facilitate pandemic planning efficiency for countries in the region. The participating countries, in the aftermath of the workshop, outlined their immediate next steps and sought support from WHO and its partners to bolster regional preparedness against respiratory pathogen pandemics.
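The quantitative notions used earlier in this article - the meningococcal attack-rate criterion (more than 15 cases per 100,000 people for two consecutive weeks) and the "transmission threshold" tied to host immunity - can be sketched in code. This is only an illustration: the function names are invented here, and real surveillance definitions vary by disease and agency.

```python
def attack_rate_per_100k(cases: int, population: int) -> float:
    """Attack rate expressed as cases per 100,000 people."""
    return cases / population * 100_000

def meets_meningococcal_epidemic_rule(weekly_rates_per_100k: list) -> bool:
    """True if the rate exceeds 15 per 100,000 in two consecutive weeks,
    the meningococcal criterion quoted in the article."""
    return any(a > 15 and b > 15
               for a, b in zip(weekly_rates_per_100k, weekly_rates_per_100k[1:]))

def effective_reproduction_number(r0: float, susceptible_fraction: float) -> float:
    """R_eff = R0 x (fraction of the population susceptible).
    Sustained spread - the transmission threshold being exceeded -
    requires R_eff > 1."""
    return r0 * susceptible_fraction
```

For example, `meets_meningococcal_epidemic_rule([12.0, 16.5, 17.0])` is `True` because the last two weeks both exceed 15 per 100,000, while `effective_reproduction_number(3.0, 0.2)` gives 0.6, below the threshold at which an outbreak can grow.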
Biology and health sciences
Concepts
Health
66986
https://en.wikipedia.org/wiki/Woodland
Woodland
A woodland is, in the broad sense, land covered with woody plants (trees and shrubs), or in a narrow sense, synonymous with wood (or in the U.S., the plurale tantum woods), a low-density forest forming open habitats with plenty of sunlight and limited shade (see differences between British, American and Australian English explained below). Some savannas may also be woodlands, such as savanna woodland, where trees and shrubs form a light canopy. Woodlands may support an understory of shrubs and herbaceous plants including grasses. Woodland may form a transition to shrubland under drier conditions or during early stages of primary or secondary succession. Higher-density areas of trees with a largely closed canopy that provides extensive and nearly continuous shade are often referred to as forests. Extensive efforts by conservationist groups have been made to preserve woodlands from urbanization and agriculture. For example, the woodlands of Northwest Indiana have been preserved as part of the Indiana Dunes. Definitions United Kingdom Woodland is used in British woodland management to mean tree-covered areas which arose naturally and which are then managed. At the same time, forest is usually used in the British Isles to describe plantations, usually more extensive, or hunting Forests, which are a land use with a legal definition and may not be wooded at all. The term ancient woodland is used in British nature conservation to refer to any wooded land that has existed since 1600, and often (though not always) for thousands of years, since the last Ice Age (equivalent to the American term old-growth forest). North America Woodlot is a closely related term in American forest management, which refers to a stand of trees generally used for firewood. While woodlots often technically have closed canopies, they are so small that light penetration from the edge makes them ecologically closer to woodland than forest.
North American forests vary widely in their ecology and are greatly dependent on abiotic factors such as climate and elevation. Much of the old-growth deciduous and pine-dominated forests of the eastern United States was harvested for lumber, paper pulp, telephone poles, creosote, pitch, and tar. Australia In Australia, a woodland is defined as an area with a sparse (10–30%) cover of trees, and an open woodland has a very sparse (<10%) cover. Woodlands are also subdivided into tall woodlands or low woodlands, depending on whether their trees are over or under a specified height. This contrasts with forests, which have more than 30% of their area covered by trees. Woodland ecoregions
Tropical and subtropical grasslands, savannas, and shrublands
Afrotropical realm
Angolan miombo woodlands (Angola)
Angolan mopane woodlands (Angola, Namibia)
Central Zambezian miombo woodlands (Angola, Burundi, Democratic Republic of the Congo, Malawi, Tanzania, Zambia)
Eastern miombo woodlands (Mozambique, Tanzania)
Kalahari Acacia-Baikiaea woodlands (Botswana, Namibia, South Africa, Zimbabwe)
Zambezian and mopane woodlands (Botswana, Eswatini, Malawi, Mozambique, Namibia, South Africa, Zambia, Zimbabwe)
Zambezian Baikiaea woodlands (Angola, Botswana, Namibia, Zambia, Zimbabwe)
Nearctic realm
Madrean pine–oak woodlands (Mexico)
Neotropical realm
Cerrado woodlands and savannas (Bolivia, Brazil, Paraguay)
Temperate grasslands, savannas, and shrublands
Afrotropical realm
Al Hajar montane woodlands (Oman)
Australasian realm
Central Hunter Valley eucalypt forest and woodland (Australia)
Cumberland Plain Woodland (Australia)
Gippsland Plains Grassy Woodland (Australia)
Grey Box Grassy Woodlands (Australia)
Lowland Grassy Woodland (Australia)
New England Peppermint Grassy Woodland (Australia)
Nearctic realm
Central forest–grasslands transition (United States)
Upper Midwest forest–savanna transition (United States)
Palearctic realm
Gissaro-Alai open woodlands (Kyrgyzstan, Tajikistan, Uzbekistan)
Montane grasslands and shrublands
Afrotropical realm
Angolan Scarp savanna and woodlands (Angola)
Drakensberg alti-montane grasslands and woodlands (Lesotho, South Africa)
Drakensberg montane grasslands, woodlands and forests (Eswatini, Lesotho, South Africa)
East African montane moorlands (Kenya, Sudan, Tanzania, Uganda)
Ethiopian montane grasslands and woodlands (Ethiopia)
Nearctic realm
Pinyon–juniper woodland (United States)
Palearctic realm
Kopet Dag woodlands and forest steppe (Iran, Turkmenistan)
Mediterranean forests, woodlands, and scrub
Australasian realm
Banksia Woodlands of the Swan Coastal Plain (Australia)
Coolgardie woodlands (Australia)
Mount Lofty woodlands (Australia)
Murray-Darling woodlands and mallee (Australia)
Naracoorte woodlands (Australia)
Southwest Australia woodlands (Australia)
Swan Coastal Plain Shrublands and Woodlands (Australia)
Nearctic realm
California chaparral and woodlands (United States)
California montane chaparral and woodlands (United States)
California interior chaparral and woodlands (United States)
Palearctic realm
Canary Islands dry woodlands and forests (Spain)
Eastern Mediterranean conifer–sclerophyllous–broadleaf forests (Turkey, Syria, Israel, Jordan, Iraq, Lebanon)
Mediterranean acacia-argania dry woodlands and succulent thickets (Morocco, Canary Islands)
Mediterranean dry woodlands and steppe (Algeria, Egypt, Libya, Morocco, Tunisia)
Mediterranean woodlands and forests (Algeria, Morocco, Tunisia)
Southeastern Iberian shrubs and woodlands (Spain)
Deserts and xeric shrublands
Afrotropical realm
East Saharan montane xeric woodlands (Chad, Sudan)
Madagascar succulent woodlands (Madagascar)
Somali montane xeric woodlands (Somalia)
Southwestern Arabian montane woodlands (Saudi Arabia, Yemen)
Palearctic realm
Baluchistan xeric woodlands (Afghanistan, Pakistan)
Central Afghan Mountains xeric woodlands (Afghanistan)
Central Asian riparian woodlands (Kazakhstan)
North Saharan steppe and woodlands (Algeria, Egypt, Libya, Morocco, Tunisia, Western Sahara)
Paropamisus xeric woodlands (Afghanistan)
South Saharan steppe and woodlands (Algeria, Chad, Mali, Mauritania, Niger, Sudan)
Tibesti-Jebel Uweinat montane xeric woodlands (Chad, Egypt, Libya, Sudan)
West Saharan montane xeric woodlands (Algeria, Mali, Mauritania, Niger)
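The Australian crown-cover definitions given earlier (forest >30% cover, woodland 10–30%, open woodland <10%) can be restated as a small classifier. The function name and the handling of boundary values are choices made here for illustration, not part of any official standard, and the tall/low woodland split is omitted because it depends on a height threshold not given in the text.

```python
def australian_cover_class(tree_cover_percent: float) -> str:
    """Classify vegetation by percentage crown cover, following the
    Australian definitions described in the text (illustrative only)."""
    if not 0 <= tree_cover_percent <= 100:
        raise ValueError("cover must be a percentage between 0 and 100")
    if tree_cover_percent > 30:
        return "forest"          # more than 30% of area covered by trees
    if tree_cover_percent >= 10:
        return "woodland"        # sparse (10-30%) cover
    return "open woodland"       # very sparse (<10%) cover
```

For example, `australian_cover_class(20)` returns `"woodland"`, while a site with 50% cover is classed as forest.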
Physical sciences
Forests
Earth science
66997
https://en.wikipedia.org/wiki/Epidemiology
Epidemiology
Epidemiology is the study and analysis of the distribution (who, when, and where), patterns and determinants of health and disease conditions in a defined population, and the application of this knowledge to prevent disease. It is a cornerstone of public health, and shapes policy decisions and evidence-based practice by identifying risk factors for disease and targets for preventive healthcare. Epidemiologists help with study design, collection, and statistical analysis of data, and with the interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop methodology used in clinical research, public health studies, and, to a lesser extent, basic research in the biological sciences. Major areas of epidemiological study include disease causation, transmission, outbreak investigation, disease surveillance, environmental epidemiology, forensic epidemiology, occupational epidemiology, screening, biomonitoring, and comparisons of treatment effects such as in clinical trials. Epidemiologists rely on other scientific disciplines like biology to better understand disease processes, statistics to make efficient use of the data and draw appropriate conclusions, social sciences to better understand proximate and distal causes, and engineering for exposure assessment. Epidemiology, literally meaning "the study of what is upon the people", is derived from Greek epi "upon, among", demos "people, district", and logia "study", suggesting that it applies only to human populations. However, the term is widely used in studies of zoological populations (veterinary epidemiology), although the term "epizoology" is available, and it has also been applied to studies of plant populations (botanical or plant disease epidemiology). The distinction between "epidemic" and "endemic" was first drawn by Hippocrates, to distinguish between diseases that are "visited upon" a population (epidemic) from those that "reside within" a population (endemic).
The term "epidemiology" appears to have first been used to describe the study of epidemics in 1802 by the Spanish physician Joaquín de Villalba in Epidemiología Española. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. The term epidemiology is now widely applied to cover the description and causation of not only epidemic, infectious disease, but of disease in general, including related conditions. Some examples of topics examined through epidemiology include high blood pressure, mental illness and obesity. Epidemiology is therefore concerned with how patterns of disease alter the functioning of human populations. History The Greek physician Hippocrates, taught by Democritus and known as the father of medicine, sought a logic to sickness; he is the first person known to have examined the relationships between the occurrence of disease and environmental influences. Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors (black bile, yellow bile, blood, and phlegm). The cure to the sickness was to remove or add the humor in question to balance the body. This belief led to the application of bloodletting and dieting in medicine. He coined the terms endemic (for diseases usually found in some places but not in others) and epidemic (for diseases that are seen at some times but not others). Modern era In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose a theory that the very small, unseeable particles that cause disease were alive. They were considered to be able to spread by air, multiply by themselves and to be destroyable by fire. In this way he refuted Galen's miasma theory (poison gas in sick people). In 1543 he wrote a book De contagione et contagiosis morbis, in which he was the first to promote personal and environmental hygiene to prevent disease.
The development of a sufficiently powerful microscope by Antonie van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease. During the Ming dynasty, Wu Youke (1582–1652) developed the idea that some diseases were caused by transmissible agents, which he called Li Qi (戾气 or pestilential factors) when he observed various epidemics rage around him between 1641 and 1644. His book Wen Yi Lun (瘟疫论, Treatise on Pestilence/Treatise of Epidemic Diseases) can be regarded as the main etiological work that brought forward the concept. His concepts were still being considered in analysing the SARS outbreak by WHO in 2004 in the context of traditional Chinese medicine. Another pioneer, Thomas Sydenham (1624–1689), was the first to distinguish the fevers of Londoners in the later 1600s. His theories on cures of fevers met with much resistance from traditional physicians at the time. He was not able to find the initial cause of the smallpox fever he researched and treated. John Graunt, a haberdasher and amateur statistician, published Natural and Political Observations ... upon the Bills of Mortality in 1662. In it, he analysed the mortality rolls in London before the Great Plague, presented one of the first life tables, and reported time trends for many diseases, new and old. He provided statistical evidence for many theories on disease, and also refuted some widespread ideas on them. John Snow is famous for his investigations into the causes of the 19th-century cholera epidemics, and is also known as the father of (modern) epidemiology. He began by noticing the significantly higher death rates in two areas supplied by the Southwark Company. His identification of the Broad Street pump as the cause of the Soho epidemic is considered the classic example of epidemiology. Snow used chlorine in an attempt to clean the water and had the pump handle removed; this ended the outbreak.
This has been perceived as a major event in the history of public health and regarded as the founding event of the science of epidemiology, having helped shape public health policies around the world. However, Snow's research and preventive measures to avoid further outbreaks were not fully accepted or put into practice until after his death due to the prevailing Miasma Theory of the time, a model of disease in which poor air quality was blamed for illness. This was used to rationalize high rates of infection in impoverished areas instead of addressing the underlying issues of poor nutrition and sanitation, and was proven false by his work. Other pioneers include Danish physician Peter Anton Schleisner, who in 1849 related his work on the prevention of the epidemic of neonatal tetanus on the Vestmanna Islands in Iceland. Another important pioneer was Hungarian physician Ignaz Semmelweis, who in 1847 brought down infant mortality at a Vienna hospital by instituting a disinfection procedure. His findings were published in 1850, but his work was ill-received by his colleagues, who discontinued the procedure. Disinfection did not become widely practiced until British surgeon Joseph Lister 'discovered' antiseptics in 1865 in light of the work of Louis Pasteur. In the early 20th century, mathematical methods were introduced into epidemiology by Ronald Ross, Janet Lane-Claypon, Anderson Gray McKendrick, and others. In a parallel development during the 1920s, German-Swiss pathologist Max Askanazy and others founded the International Society for Geographical Pathology to systematically investigate the geographical pathology of cancer and other non-infectious diseases across populations in different regions. After World War II, Richard Doll and other non-pathologists joined the field and advanced methods to study cancer, a disease with patterns and mode of occurrences that could not be suitably studied with the methods developed for epidemics of infectious diseases. 
Geographical pathology eventually combined with infectious disease epidemiology to form the field that is epidemiology today. Another breakthrough was the 1954 publication of the results of the British Doctors Study, led by Richard Doll and Austin Bradford Hill, which lent very strong statistical support to the link between tobacco smoking and lung cancer. In the late 20th century, with the advancement of biomedical sciences, a number of molecular markers in blood, other biospecimens and the environment were identified as predictors of development or risk of a certain disease. Epidemiology research examining the relationship between these biomarkers, analyzed at the molecular level, and disease was broadly named "molecular epidemiology". Specifically, "genetic epidemiology" has been used for the epidemiology of germline genetic variation and disease. Genetic variation is typically determined using DNA from peripheral blood leukocytes.

21st century

Since the 2000s, genome-wide association studies (GWAS) have been commonly performed to identify genetic risk factors for many diseases and health conditions. While most molecular epidemiology studies still use conventional disease diagnosis and classification systems, it is increasingly recognized that disease progression represents an inherently heterogeneous process differing from person to person. Conceptually, each individual has a unique disease process different from any other individual ("the unique disease principle"), considering the uniqueness of the exposome (the totality of endogenous and exogenous/environmental exposures) and its unique influence on the molecular pathologic process in each individual. Studies examining the relationship between an exposure and the molecular pathologic signature of disease (particularly cancer) became increasingly common throughout the 2000s.
However, the use of molecular pathology in epidemiology posed unique challenges, including a lack of research guidelines and standardized statistical methodologies, and a paucity of interdisciplinary experts and training programs. Furthermore, the concept of disease heterogeneity appears to conflict with the long-standing premise in epidemiology that individuals with the same disease name have similar etiologies and disease processes. To resolve these issues and advance population health science in the era of molecular precision medicine, "molecular pathology" and "epidemiology" were integrated to create the new interdisciplinary field of "molecular pathological epidemiology" (MPE), defined as "epidemiology of molecular pathology and heterogeneity of disease". In MPE, investigators analyze the relationships between (A) environmental, dietary, lifestyle and genetic factors; (B) alterations in cellular or extracellular molecules; and (C) evolution and progression of disease. A better understanding of the heterogeneity of disease pathogenesis will further help to elucidate the etiologies of disease. The MPE approach can be applied not only to neoplastic diseases but also to non-neoplastic diseases. The concept and paradigm of MPE became widespread in the 2010s. By 2012, it was recognized that the evolution of many pathogens is rapid enough to be highly relevant to epidemiology, and that much could therefore be gained from an interdisciplinary approach to infectious disease that integrates epidemiology and molecular evolution to "inform control strategies, or even patient treatment." Modern epidemiological studies can use advanced statistics and machine learning to create predictive models as well as to define treatment effects. There is increasing recognition that a wide range of modern data sources, many not originating from healthcare or epidemiology, can be used for epidemiological study.
Such digital epidemiology can include data from internet searching, mobile phone records and retail sales of drugs.

Types of studies

Epidemiologists employ a range of study designs, from observational to experimental, generally categorized as descriptive (involving the assessment of data covering time, place, and person), analytic (aiming to further examine known associations or hypothesized relationships), and experimental (a term often equated with clinical or community trials of treatments and other interventions). In observational studies, nature is allowed to "take its course", as epidemiologists observe from the sidelines. Conversely, in experimental studies, the epidemiologist is the one in control of all of the factors entering a certain case study. Epidemiological studies are aimed, where possible, at revealing unbiased relationships between exposures such as alcohol or smoking, biological agents, stress, or chemicals and outcomes such as mortality or morbidity. The identification of causal relationships between these exposures and outcomes is an important aspect of epidemiology. Modern epidemiologists use informatics and infodemiology as tools. Observational studies have two components, descriptive and analytical. Descriptive observations pertain to the "who, what, where and when of health-related state occurrence", whereas analytical observations deal more with the 'how' of a health-related event. Experimental epidemiology contains three case types: randomized controlled trials (often used for testing a new medicine or drug), field trials (conducted on those at a high risk of contracting a disease), and community trials (research on diseases of social origin). The term 'epidemiologic triad' is used to describe the intersection of Host, Agent, and Environment in analyzing an outbreak.
Case series

Case-series may refer to the qualitative study of the experience of a single patient or a small group of patients with a similar diagnosis, or to a statistical technique that compares periods during which patients are exposed to a factor with periods when they are unexposed. The former type of study is purely descriptive and cannot be used to make inferences about the general population of patients with that disease. These types of studies, in which an astute clinician identifies an unusual feature of a disease or a patient's history, may lead to the formulation of a new hypothesis. Using the data from the series, analytic studies could be done to investigate possible causal factors. These can include case-control studies or prospective studies. A case-control study would involve matching comparable controls without the disease to the cases in the series. A prospective study would involve following the case series over time to evaluate the disease's natural history. The latter type, more formally described as self-controlled case-series studies, divides individual patient follow-up time into exposed and unexposed periods and uses fixed-effects Poisson regression to compare the incidence rate of a given outcome between exposed and unexposed periods. This technique has been extensively used in the study of adverse reactions to vaccination and has been shown in some circumstances to provide statistical power comparable to that available in cohort studies.

Case-control studies

Case-control studies select subjects based on their disease status; they are retrospective. A group of individuals that are disease positive (the "case" group) is compared with a group of disease negative individuals (the "control" group). The control group should ideally come from the same population that gave rise to the cases. The case-control study looks back through time at potential exposures that both groups (cases and controls) may have encountered.
A 2×2 table is constructed, displaying exposed cases (A), exposed controls (B), unexposed cases (C) and unexposed controls (D). The statistic generated to measure association is the odds ratio (OR), which is the ratio of the odds of exposure in the cases (A/C) to the odds of exposure in the controls (B/D), i.e. OR = (AD/BC). If the OR is significantly greater than 1, then the conclusion is "those with the disease are more likely to have been exposed", whereas if it is close to 1 then the exposure and disease are not likely associated. If the OR is far less than one, then this suggests that the exposure is a protective factor in the causation of the disease. Case-control studies are usually faster and more cost-effective than cohort studies but are sensitive to bias (such as recall bias and selection bias). The main challenge is to identify the appropriate control group; the distribution of exposure among the control group should be representative of the distribution in the population that gave rise to the cases. This can be achieved by drawing a random sample from the original population at risk. As a consequence, the control group can contain people with the disease under study when the disease has a high attack rate in the population. A major drawback of case-control studies is that, in order to be considered statistically significant, the minimum number of cases required at the 95% confidence interval is related to the odds ratio by an equation in which N is the ratio of cases to controls. As the odds ratio approaches 1, the number of cases required for statistical significance grows towards infinity, rendering case-control studies all but useless for low odds ratios.

Cohort studies

Cohort studies select subjects based on their exposure status.
The study subjects should be at risk of the outcome under investigation at the beginning of the cohort study; this usually means that they should be disease free when the cohort study starts. The cohort is followed through time to assess their later outcome status. An example of a cohort study would be the investigation of a cohort of smokers and non-smokers over time to estimate the incidence of lung cancer. The same 2×2 table is constructed as with the case-control study. However, the point estimate generated is the relative risk (RR), which is the probability of disease for a person in the exposed group, Pe = A / (A + B), over the probability of disease for a person in the unexposed group, Pu = C / (C + D), i.e. RR = Pe / Pu. As with the OR, an RR greater than 1 shows association, where the conclusion can be read "those with the exposure were more likely to develop the disease." Prospective studies have many benefits over case-control studies. The RR is a more powerful effect measure than the OR, as the OR is just an estimation of the RR; true incidence cannot be calculated in a case-control study, where subjects are selected based on disease status. Temporality can be established in a prospective study, and confounders are more easily controlled for. However, they are more costly, and there is a greater chance of losing subjects to follow-up over the long time period during which the cohort is followed. Cohort studies are also limited by the same equation for the number of cases as case-control studies, but, if the base incidence rate in the study population is very low, the number of cases required is reduced by .

Causal inference

Although epidemiology is sometimes viewed as a collection of statistical tools used to elucidate the associations of exposures to health outcomes, a deeper understanding of this science is that of discovering causal relationships. "Correlation does not imply causation" is a common theme for much of the epidemiological literature.
For epidemiologists, the key is in the term inference. Correlation, or at least association between two variables, is a necessary but not sufficient criterion for the inference that one variable causes the other. Epidemiologists use gathered data and a broad range of biomedical and psychosocial theories in an iterative way to generate or expand theory, to test hypotheses, and to make educated, informed assertions about which relationships are causal, and about exactly how they are causal. Epidemiologists emphasize that the "one cause – one effect" understanding is a simplistic misbelief. Most outcomes, whether disease or death, are caused by a chain or web consisting of many component causes. Causes can be distinguished as necessary, sufficient or probabilistic conditions. If a necessary condition can be identified and controlled (e.g., antibodies to a disease agent, energy in an injury), the harmful outcome can be avoided (Robertson, 2015). One tool regularly used to conceptualize the multicausality associated with disease is the causal pie model.

Bradford Hill criteria

In 1965, Austin Bradford Hill proposed a series of considerations to help assess evidence of causation, which have come to be commonly known as the "Bradford Hill criteria". In contrast to the explicit intentions of their author, Hill's considerations are now sometimes taught as a checklist to be implemented for assessing causality. Hill himself said "None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non."

Strength of Association: A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.
Consistency of Data: Consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.
Specificity: Causation is likely if a very specific population at a specific site suffers from a specific disease, with no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship.
Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).
Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.
Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).
Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that "... lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations".
Experiment: "Occasionally it is possible to appeal to experimental evidence".
Analogy: The effect of similar factors may be considered.

Legal interpretation

Epidemiological studies can only go to prove that an agent could have caused, but not that it did cause, an effect in any particular case. In United States law, epidemiology alone cannot prove that a causal association does not exist in general. Conversely, it can be (and is in some circumstances) taken by US courts, in an individual case, to justify an inference that a causal association does exist, based upon a balance of probability. The subdiscipline of forensic epidemiology is directed at the investigation of specific causation of disease or injury in individuals or groups of individuals in instances in which causation is disputed or is unclear, for presentation in legal settings.
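Returning to the study-design sections above: the two 2×2-table association measures they define, the odds ratio and the relative risk, can be sketched in a few lines of code. This is an illustrative sketch with hypothetical counts, not figures from any study cited in the text:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table: exposed cases (a), exposed controls (b),
    unexposed cases (c), unexposed controls (d). OR = (a/c) / (b/d) = ad / bc."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Relative risk from a cohort 2x2 table: risk in the exposed, a/(a+b),
    over risk in the unexposed, c/(c+d)."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical cohort: 30 of 100 exposed and 10 of 100 unexposed develop disease.
a, b, c, d = 30, 70, 10, 90
print(round(relative_risk(a, b, c, d), 2))  # 3.0
print(round(odds_ratio(a, b, c, d), 2))     # 3.86
```

Note how the OR (3.86) overstates the RR (3.0) here: as the text says, the OR is only an estimation of the RR, and the two diverge when the disease is common in the study population.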
Population-based health management

Epidemiological practice and the results of epidemiological analysis make a significant contribution to emerging population-based health management frameworks. Population-based health management encompasses the ability to:

Assess the health states and health needs of a target population;
Implement and evaluate interventions that are designed to improve the health of that population; and
Efficiently and effectively provide care for members of that population in a way that is consistent with the community's cultural, policy and health resource values.

Modern population-based health management is complex, requiring a broad set of skills (medical, political, technological, mathematical, etc.) of which epidemiological practice and analysis is a core component, unified with management science to provide efficient and effective health care and health guidance to a population. This task requires the forward-looking ability of modern risk management approaches that transform health risk factors, incidence, prevalence and mortality statistics (derived from epidemiological analysis) into management metrics that guide not only how a health system responds to current population health issues but also how a health system can be managed to better respond to future potential population health issues. Examples of organizations that use population-based health management and leverage the work and results of epidemiological practice include the Canadian Strategy for Cancer Control, the Health Canada Tobacco Control Programs, the Rick Hansen Foundation, and the Canadian Tobacco Control Research Initiative.
Each of these organizations uses a population-based health management framework called Life at Risk that combines epidemiological quantitative analysis with demographics, health agency operational research and economics to perform:

Population Life Impacts Simulations: measurement of the future potential impact of disease upon the population with respect to new disease cases, prevalence, and premature death, as well as potential years of life lost from disability and death;
Labour Force Life Impacts Simulations: measurement of the future potential impact of disease upon the labour force with respect to new disease cases, prevalence, premature death and potential years of life lost from disability and death;
Economic Impacts of Disease Simulations: measurement of the future potential impact of disease upon private-sector disposable income (wages, corporate profits, private health care costs) and public-sector disposable income (personal income tax, corporate income tax, consumption taxes, publicly funded health care costs).

Applied field epidemiology

Applied epidemiology is the practice of using epidemiological methods to protect or improve the health of a population. Applied field epidemiology can include investigating communicable and non-communicable disease outbreaks, mortality and morbidity rates, and nutritional status, among other indicators of health, with the purpose of communicating the results to those who can implement appropriate policies or disease control measures.

Humanitarian context

As the surveillance and reporting of diseases and other health factors become increasingly difficult in humanitarian crisis situations, the methodologies used to report the data are compromised. One study found that less than half (42.4%) of nutrition surveys sampled from humanitarian contexts correctly calculated the prevalence of malnutrition and only one-third (35.3%) of the surveys met the criteria for quality.
Among the mortality surveys, only 3.2% met the criteria for quality. As nutritional status and mortality rates help indicate the severity of a crisis, the tracking and reporting of these health factors is crucial. Vital registries are usually the most effective ways to collect data, but in humanitarian contexts these registries can be non-existent, unreliable, or inaccessible. As such, mortality is often inaccurately measured using either prospective demographic surveillance or retrospective mortality surveys. Prospective demographic surveillance requires much manpower and is difficult to implement in a spread-out population. Retrospective mortality surveys are prone to selection and reporting biases. Other methods are being developed, but are not common practice yet.

Characterization, validity, and bias

Epidemic wave

The concept of waves in epidemics has implications especially for communicable diseases. A working definition for the term "epidemic wave" is based on two key features: 1) it comprises periods of upward or downward trends, and 2) these increases or decreases must be substantial and sustained over a period of time, in order to distinguish them from minor fluctuations or reporting errors. The purpose of a consistent scientific definition is to provide a common language for communicating about and understanding the progression of the COVID-19 pandemic, which would aid healthcare organizations and policymakers in resource planning and allocation.

Validities

Different fields in epidemiology have different levels of validity. One way to assess the validity of findings is the ratio of false-positives (claimed effects that are not correct) to false-negatives (studies which fail to support a true effect). In genetic epidemiology, candidate-gene studies may produce over 100 false-positive findings for each false-negative.
By contrast, genome-wide association studies appear close to the reverse, with only one false positive for every 100 or more false-negatives. This ratio has improved over time in genetic epidemiology, as the field has adopted stringent criteria. By contrast, other epidemiological fields have not required such rigorous reporting and are much less reliable as a result.

Random error

Random error is the result of fluctuations around a true value because of sampling variability. Random error is just that: random. It can occur during data collection, coding, transfer, or analysis. Examples of random errors include poorly worded questions, a misunderstanding in interpreting an individual answer from a particular respondent, or a typographical error during coding. Random error affects measurement in a transient, inconsistent manner, and it is impossible to correct for it. Random error is present in all sampling procedures; this is called sampling error. Precision in epidemiological variables is a measure of random error. Precision is inversely related to random error, so that to reduce random error is to increase precision. Confidence intervals are computed to demonstrate the precision of relative risk estimates. The narrower the confidence interval, the more precise the relative risk estimate. There are two basic ways to reduce random error in an epidemiological study. The first is to increase the sample size of the study; in other words, add more subjects to the study. The second is to reduce the variability in measurement, for example by using a more precise measuring device or by increasing the number of measurements. Note that if the sample size or number of measurements is increased, or a more precise measuring tool is purchased, the costs of the study usually increase. There is usually an uneasy balance between the need for adequate precision and the practical issue of study cost.
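The relationship described above between sample size, confidence-interval width, and precision can be illustrated with the standard log-transformation confidence interval for a relative risk (the Katz method). The counts below are hypothetical, chosen only to show the interval narrowing as the sample grows:

```python
import math

def rr_confint(a, b, c, d, z=1.96):
    """Point estimate and 95% CI for the relative risk from a cohort 2x2
    table (a exposed cases, b exposed non-cases, c unexposed cases,
    d unexposed non-cases), via the log-transformation (Katz) method."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return rr, math.exp(math.log(rr) - z * se_log), math.exp(math.log(rr) + z * se_log)

# Same underlying risks at two sample sizes: the point estimate is unchanged,
# but quadrupling the sample halves the standard error on the log scale,
# giving a narrower (more precise) interval.
for scale in (1, 4):
    rr, lo, hi = rr_confint(30 * scale, 70 * scale, 10 * scale, 90 * scale)
    print(f"n={200 * scale}: RR={rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

This also illustrates the cost trade-off the text mentions: the interval shrinks only with the square root of the sample size, so each additional gain in precision requires proportionally more subjects.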
Systematic error A systematic error or bias occurs when there is a difference between the true value (in the population) and the observed value (in the study) from any cause other than sampling variability. An example of systematic error is if, unknown to you, the pulse oximeter you are using is set incorrectly and adds two points to the true value each time a measurement is taken. The measuring device could be precise but not accurate. Because the error happens in every instance, it is systematic. Conclusions you draw based on that data will still be incorrect. But the error can be reproduced in the future (e.g., by using the same mis-set instrument). A mistake in coding that affects all responses for that particular question is another example of a systematic error. The validity of a study is dependent on the degree of systematic error. Validity is usually separated into two components: Internal validity is dependent on the amount of error in measurements, including exposure, disease, and the associations between these variables. Good internal validity implies a lack of error in measurement and suggests that inferences may be drawn at least as they pertain to the subjects under study. External validity pertains to the process of generalizing the findings of the study to the population from which the sample was drawn (or even beyond that population to a more universal statement). This requires an understanding of which conditions are relevant (or irrelevant) to the generalization. Internal validity is clearly a prerequisite for external validity. Selection bias Selection bias occurs when study subjects are selected or become part of the study as a result of a third, unmeasured variable which is associated with both the exposure and outcome of interest. For instance, it has repeatedly been noted that cigarette smokers and non smokers tend to differ in their study participation rates. 
(Sackett D cites the example of Seltzer et al., in which 85% of non-smokers and 67% of smokers returned mailed questionnaires.) Such a difference in response will not lead to bias if it is not also associated with a systematic difference in outcome between the two response groups.

Information bias

Information bias is bias arising from systematic error in the assessment of a variable. An example of this is recall bias. A typical example is again provided by Sackett in his discussion of a study examining the effect of specific exposures on fetal health: "in questioning mothers whose recent pregnancies had ended in fetal death or malformation (cases) and a matched group of mothers whose pregnancies ended normally (controls) it was found that 28% of the former, but only 20% of the latter, reported exposure to drugs which could not be substantiated either in earlier prospective interviews or in other health records". In this example, recall bias probably occurred as a result of women who had had miscarriages having an apparent tendency to better recall and therefore report previous exposures.

Design-related bias

In addition to sample- and variable-related bias, bias can also arise from an imperfect study design. One example is immortal time bias, where, during the study period, there is some interval during which the outcome event cannot occur (making these individuals "immortal").

Confounding

Confounding has traditionally been defined as bias arising from the co-occurrence or mixing of effects of extraneous factors, referred to as confounders, with the main effect(s) of interest. A more recent definition of confounding invokes the notion of counterfactual effects. According to this view, when one observes an outcome of interest, say Y=1 (as opposed to Y=0), in a given population A which is entirely exposed (i.e. exposure X = 1 for every unit of the population), the risk of this event will be RA1.
The counterfactual or unobserved risk RA0 corresponds to the risk which would have been observed if these same individuals had been unexposed (i.e. X = 0 for every unit of the population). The true effect of exposure therefore is: RA1 − RA0 (if one is interested in risk differences) or RA1/RA0 (if one is interested in relative risk). Since the counterfactual risk RA0 is unobservable we approximate it using a second population B and we actually measure the following relations: RA1 − RB0 or RA1/RB0. In this situation, confounding occurs when RA0 ≠ RB0. (NB: Example assumes binary outcome and exposure variables.) Some epidemiologists prefer to think of confounding separately from common categorizations of bias since, unlike selection and information bias, confounding stems from real causal effects. The profession Few universities have offered epidemiology as a course of study at the undergraduate level. An undergraduate program exists at Johns Hopkins University in which students who major in public health can take graduate-level courses—including epidemiology—during their senior year at the Bloomberg School of Public Health. In addition to its master's and doctoral degrees in epidemiology, the University of Michigan School of Public Health has offered undergraduate degree programs since 2017 that include coursework in epidemiology. Although epidemiologic research is conducted by individuals from diverse disciplines, variable levels of training in epidemiologic methods are provided during pharmacy, medical, veterinary, social work, podiatry, nursing, physical therapy, and clinical psychology doctoral programs in addition to the formal training master's and doctoral students in public health fields receive. As public health practitioners, epidemiologists work in a number of different settings. 
Some epidemiologists work "in the field" (i.e., in the community; commonly in a public health service), and are often at the forefront of investigating and combating disease outbreaks. Others work for non-profit organizations, universities, hospitals, or larger government entities (e.g., state and local health departments in the United States), ministries of health, Doctors without Borders, the Centers for Disease Control and Prevention (CDC), the Health Protection Agency, the World Health Organization (WHO), or the Public Health Agency of Canada. Epidemiologists can also work in for-profit organizations (e.g., pharmaceutical and medical device companies) in groups such as market research or clinical development. COVID-19 An April 2020 University of Southern California article noted that, "The coronavirus epidemic... thrust epidemiology – the study of the incidence, distribution and control of disease in a population – to the forefront of scientific disciplines across the globe and even made temporary celebrities out of some of its practitioners."
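The counterfactual account of confounding given earlier can be illustrated numerically. The risks below are hypothetical, chosen only to show how substituting population B for A's unobservable counterfactual distorts the measured effect when RA0 ≠ RB0:

```python
# Hypothetical risks for the counterfactual definition of confounding.
RA1 = 0.30  # observed risk in population A, which is entirely exposed (X = 1)
RA0 = 0.10  # counterfactual risk in A had it been unexposed (unobservable)
RB0 = 0.20  # observed risk in the substitute unexposed population B

true_effect = RA1 - RA0      # the causal risk difference we want
measured_effect = RA1 - RB0  # what the study actually estimates
print(f"true risk difference:     {true_effect:.2f}")
print(f"measured risk difference: {measured_effect:.2f}")
print(f"confounded: {RA0 != RB0}")  # True: B is a poor stand-in for A's counterfactual
```

Here the study would report a risk difference of 0.10 when the true causal effect is 0.20; the discrepancy exists precisely because RA0 ≠ RB0, matching the definition in the text.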
https://en.wikipedia.org/wiki/Selaginella
Selaginella
Selaginella, also known as spikemosses or lesser clubmosses, is a genus of lycophyte. It is usually treated as the only genus in the family Selaginellaceae, with over 750 known species. This family is distinguished from Lycopodiaceae (the clubmosses) by having scale-leaves bearing a ligule and by having spores of two types. They are sometimes included in an informal paraphyletic group called the "fern allies". The species S. moellendorffii is an important model organism. Its genome has been sequenced by the United States Department of Energy's Joint Genome Institute. The name Selaginella was erected by Palisot de Beauvois solely for the species Selaginella selaginoides, which turns out (with the closely related Selaginella deflexa) to be a clade that is sister to all other Selaginellas, so any definitive subdivision of the species into separate genera leaves two taxa in Selaginella, with the hundreds of other species in new or resurrected genera. Selaginella occurs mostly in the tropical regions of the world, with a handful of species to be found in the arctic-alpine zones of both hemispheres. Fossils assignable to the modern genus are known spanning over 300 million years, from the Late Carboniferous to the present.

Description

Selaginella species are creeping or ascendant plants with simple, scale-like leaves (microphylls) on branching stems from which roots also arise. The stems are aerial, horizontally creeping on the substratum (as in Selaginella kraussiana), sub-erect (Selaginella trachyphylla) or erect (as in Selaginella erythropus). The vascular steles are polystelic protosteles. A stem section shows the presence of more than two protosteles. Each stele is made up of xylem that is diarch (having two strands of xylem) and exarch (with the protoxylem towards the outside). The steles are connected with the cortex by means of many tube-like structures called trabeculae, which are modified endodermal cells with casparian strips on their lateral walls. The stems contain no pith.
In Selaginella, each microphyll and sporophyll has a small scale-like outgrowth called a ligule at the base of the upper surface. The plants are heterosporous, with spores of two different size classes, known as megaspores and microspores. Unusually for the lycopods, which nearly always have microphylls with a single unbranched vein, the microphylls of a few Selaginella species contain a branched vascular trace. Under dry conditions, some species of Selaginella can survive dehydration. In this state, they may roll up into brown balls and be uprooted, but can rehydrate under moist conditions, become green again and resume growth. This phenomenon is known as poikilohydry, and poikilohydric plants such as Selaginella bryopteris are sometimes referred to as resurrection plants. There is no evidence of whole-genome duplication in Selaginella's evolutionary history. Instead, they have gone through tandem gene duplications, which is particularly noticeable in genes relevant for desiccation tolerance. Their chloroplasts are missing about two-thirds of their plastidial tRNA genes, which are instead found in the nuclear genome. The genus is unique among vascular plants in having species with monoplastidic cells, each containing a single giant chloroplast, located mostly in their dorsal epidermal cells, but also in the upper mesophyll of some species. This appears to be a derived trait and an adaptation to low-light conditions, having originated at least twice. Multiplastidic cells, with more than ten chloroplasts per cell, are considered the most basal condition, and are found in species exposed to more light. Oligoplastidic cells, with three to ten chloroplasts, are more adapted to weaker light, while the monoplastidic species are the most shade-loving forms. It is estimated that 70% of Selaginella species are monoplastidic. 
These receive just 0.4–2.1% of full sunlight, while species with multiple chloroplasts live in open places where they on average receive more than 40.5% of full sunlight. Taxonomy Some scientists still place the Selaginellales in the class Lycopodiopsida (often misconstrued as "Lycopsida"). Some modern authors recognize three generic divisions of Selaginella: Selaginella, Bryodesma Sojak 1992, and Lycopodioides Boehm 1760. Lycopodioides would include the North American species S. apoda and S. eclipes, while Bryodesma would include S. rupestris (as Bryodesma rupestre). Stachygynandrum is also sometimes used to include the bulk of species. The first major attempt to define and subdivide the group was by Palisot de Beauvois in 1803–1805. He established the genus Selaginella as a monotypic genus, and placed the bulk of species in Stachygynandrum. Gymnogynum was another monotypic genus, but that name is superseded by his own earlier name Didiclis. This turns out, today, to be a group of around 45–50 species also known as the Articulatae, since his genus Didiclis/Gymnogynum was based on Selaginella plumosa. He also described the genus Diplostachyum to include a group of species similar to Selaginella apoda. Four decades later, Spring expanded the genus Selaginella to hold all selaginelloid species. Phylogenetic studies by Korall & Kenrick determined that the Euselaginella group, comprising solely the type species, Selaginella selaginoides, and a closely related Hawaiian species, Selaginella deflexa, is a basal and anciently diverging sister to all other Selaginella species. Beyond this, their study split the remainder of species into two broad groups, one including the Bryodesma species, the Articulatae, section Ericetorum Jermy and others, and the other centered on the broad Stachygynandrum group. In 2023, Zhou & Zhang suggested that the genus should be broken up into 19 different genera.
Zhang & Zhou, 2015 classification:
subgenus Selaginella; type: S. selaginoides (L.) P.Beauv. ex Mart. & Schrank
subgenus Boreoselaginella; type: S. sanguinolenta (L.) Spring
subgenus Ericetorum; type: S. uliginosa (Labill.) Spring
    section Lyallia; type: S. uliginosa (Labill.) Spring
    section Myosurus; type: S. myosurus Alston
    section Megalosporarum; type: S. exaltata (Kunze) Spring
    section Articulatae; type: S. kraussiana (Kunze) A.Braun
    section Homoeophyllae; type: S. rupestris (L.) Spring (=Bryodesma Sojak or Tetragonostachys Jermy)
    section Lepidophyllae; type: S. lepidophylla (Hook. & Grev.) Spring
subgenus Pulviniella; type: S. pulvinata (Hook. & Grev.) Maxim.
subgenus Heterostachys; type: S. heterostachys Baker
    section Oligomacrosporangiatae; type: S. uncinata (Desv. ex Poir.) Spring
    section Auriculatae; type: S. douglasii (Hook. & Grev.) Spring
    section Homostachys; type: S. helvetica (L.) Link
    section Tetragonostachyae; type: S. proniflora (L.) Baker
    section Heterostachys; type: S. brachystachya (Hook. & Grev.) Spring
subgenus Stachygynandrum; type: S. flabellata (L.) Spring
    section Plagiophyllae; type: S. biformis A.Braun ex Kuhn
    section Circinatae; type: S. involvens (Sw.) Spring
    section Heterophyllae; type: S. flexuosa Spring
    section Austroamericanae; type: S. hartwegiana Spring
    section Pallescentes; type: S. pallescens (C.Presl) Spring
    section Proceres; type: S. oaxacana Spring
    section Ascendentes; type: S. alopecuroides Baker
Species There are about 750 known species of Selaginella. They show a wide range of characters; the genus is overdue for a revision which might include subdivision into several genera. 
Species of spikemoss include:
Selaginella apoda – meadow spikemoss; eastern North America
Selaginella arizonica Maxon – west Texas to Arizona and Sonora, Mexico
Selaginella asprella
Selaginella bifida – Rodrigues Island
Selaginella biformis
Selaginella bigelovii
Selaginella braunii – Braun's spikemoss; China
Selaginella bryopteris – sanjeevani; India
Selaginella canaliculata – clubmoss; southeast Asia, Maluku Islands
Selaginella carinata
Selaginella cinerascens
Selaginella densa – lesser spikemoss; western North America
Selaginella denticulata
Selaginella eclipes – hidden spikemoss; eastern North America
Selaginella elmeri
Selaginella eremophila Maxon
Selaginella erythropus
Selaginella galotteii
Selaginella gigantea – Venezuela
Selaginella hansenii
Selaginella kraussiana – Krauss's spikemoss; Africa, Azores
Selaginella lepidophylla – resurrection plant, dinosaur plant, and flower of stone; Chihuahuan Desert, North America
Selaginella martensii – variegated spikemoss
Selaginella moellendorffii
Selaginella oregana
Selaginella plana – Asian spikemoss; tropical Asia
Selaginella poulteri
Selaginella pulcherrima
Selaginella rupestris – rock spikemoss, festoon pine, and northern Selaginella; eastern North America
Selaginella rupincola Underw. – west Texas to Arizona and Sonora, Mexico
Selaginella selaginoides – lesser clubmoss; north temperate Europe, Asia and North America
Selaginella sericea A.Braun – Ecuador
Selaginella serpens
Selaginella sibirica
Selaginella stellata – starry spikemoss; Mexico, Central America
Selaginella substipitata
Selaginella tamariscina
Selaginella tortipila
Selaginella uliginosa – Australia
Selaginella umbrosa
Selaginella uncinata – peacock moss, peacock spikemoss, blue spikemoss
Selaginella underwoodii Hieron. – west Texas to Wyoming and west into Arizona
Selaginella wallacei
Selaginella watsonii
Selaginella willdenowii – Willdenow's spikemoss, peacock fern; southeast Asia
A few species of Selaginella are desert plants known as "resurrection plants", because they curl up in a tight, brown or reddish ball during dry times, and uncurl and turn green in the presence of moisture. Other species are tropical forest plants that appear at first glance to be ferns. Cultivation A number of Selaginella species are popular plants for cultivation, mostly tropical species. Some of the species popularly cultivated and actively available commercially include:
S. kraussiana: golden clubmoss
S. martensii: frosty fern
S. moellendorffii: gemmiferous spikemoss
S. erythropus: red selaginella or ruby-red spikemoss
S. uncinata: peacock moss
S. lepidophylla: resurrection plant
S. braunii: arborvitae fern
Biology and health sciences
Lycophytes
Plants
67058
https://en.wikipedia.org/wiki/Abdominal%20cavity
Abdominal cavity
The abdominal cavity is a large body cavity in humans and many other animals that contains organs. It is a part of the abdominopelvic cavity. It is located below the thoracic cavity, and above the pelvic cavity. Its dome-shaped roof is the thoracic diaphragm, a thin sheet of muscle under the lungs, and its floor is the pelvic inlet, opening into the pelvis. Structure Organs Organs of the abdominal cavity include the stomach, liver, gallbladder, spleen, pancreas, small intestine, kidneys, large intestine, and adrenal glands. Peritoneum The abdominal cavity is lined with a protective membrane termed the peritoneum. The inside wall is covered by the parietal peritoneum. The kidneys are located behind the peritoneum, in the retroperitoneum, outside the abdominal cavity. The viscera are covered by the visceral peritoneum. Between the visceral and parietal peritoneum is the peritoneal cavity, which is a potential space. It contains a serous fluid called peritoneal fluid that allows motion. This motion is most apparent in the gastrointestinal tract. The peritoneum, by virtue of its connection to the two (parietal and visceral) portions, gives support to the abdominal organs. The peritoneum divides the cavity into numerous compartments. One of these, the lesser sac, is located behind the stomach and joins the greater sac via the foramen of Winslow. Some of the organs, such as the liver, are attached to the walls of the abdomen via folds of peritoneum and ligaments, while others, such as the pancreas, use broad areas of the peritoneum. The peritoneal ligaments are actually dense folds of the peritoneum that are used to connect viscera to viscera or viscera to the walls of the abdomen. They are typically named to show what they connect: for example, the gastrocolic ligament connects the stomach and colon, and the splenocolic ligament connects the spleen and the colon. Others are named by their shape, such as the round ligament or triangular ligament. 
Mesentery Mesenteries are folds of peritoneum that are attached to the walls of the abdomen and enclose viscera completely. They are supplied with plentiful amounts of blood. The three most important mesenteries are the mesentery of the small intestine, the transverse mesocolon, which attaches the back portion of the colon to the abdominal wall, and the sigmoid mesocolon, which enfolds the sigmoid colon. Omenta The omenta are specialized folds of peritoneum that enclose nerves, blood vessels, lymph channels, fatty tissue, and connective tissue. There are two omenta. The first is the greater omentum, which hangs from the transverse colon and the greater curvature of the stomach. The other is the lesser omentum, which extends between the stomach and the liver. Clinical significance Ascites When fluid collects in the abdominal cavity, the condition is called ascites. This is usually not noticeable until enough fluid has collected to distend the abdomen. The collection of fluid will cause pressure on the viscera, veins, and thoracic cavity. Treatment is directed at the cause of the fluid accumulation. One method is to decrease the portal vein pressure, which is especially useful in treating cirrhosis. Chylous ascites heals best if the lymphatic vessel involved is closed. Heart failure can cause recurring ascites. Inflammation Another disorder is peritonitis, which usually accompanies inflammatory processes elsewhere. It can be caused by damage to an organ, by a contusion to the abdominal wall from the outside, or by surgery. Infection may also be brought in by the bloodstream or the lymphatic system. The most common origin is the gastrointestinal tract. Peritonitis can be acute or chronic, generalized or localized, and may have one origin or multiple origins. The omenta can help control the spread of infection; however, without treatment, the infection will spread throughout the cavity. An abscess may also form as a secondary reaction to an infection. 
Antibiotics have become an important tool in fighting abscesses; however, external drainage is usually also required.
Biology and health sciences
External anatomy and regions of the body
Biology
67060
https://en.wikipedia.org/wiki/Hot%20spring
Hot spring
A hot spring, hydrothermal spring, or geothermal spring is a spring produced by the emergence of geothermally heated groundwater onto the surface of the Earth. The groundwater is heated either by shallow bodies of magma (molten rock) or by circulation through faults to hot rock deep in the Earth's crust. Hot spring water often contains large amounts of dissolved minerals. The chemistry of hot springs ranges from acid sulfate springs with a pH as low as 0.8, to alkaline chloride springs saturated with silica, to bicarbonate springs saturated with carbon dioxide and carbonate minerals. Some springs also contain abundant dissolved iron. The minerals brought to the surface in hot springs often feed communities of extremophiles, microorganisms adapted to extreme conditions, and it is possible that life on Earth had its origin in hot springs. Humans have made use of hot springs for bathing, relaxation, or medical therapy for thousands of years. However, some are hot enough that immersion can be harmful, leading to scalding and, potentially, death. Definitions There is no universally accepted definition of a hot spring. For example, one can find the phrase hot spring defined as:
any spring heated by geothermal activity
a spring with water temperatures above its surroundings
a natural spring with water temperature above human body temperature (normally about )
a natural spring of water whose temperature is greater than
a type of thermal spring whose water temperature is usually or more above mean air temperature
a spring with water temperatures above
The related term "warm spring" is defined by many sources as a spring with water temperature less than that of a hot spring, although Pentecost et al. (2003) suggest that the phrase "warm spring" is not useful and should be avoided. The US NOAA Geophysical Data Center defines a "warm spring" as a spring with water between . 
Sources of heat Water issuing from a hot spring is heated geothermally, that is, with heat produced from the Earth's mantle. This takes place in two ways. In areas of high volcanic activity, magma (molten rock) may be present at shallow depths in the Earth's crust. Groundwater is heated by these shallow magma bodies and rises to the surface to emerge at a hot spring. However, even in areas that do not experience volcanic activity, the temperature of rocks within the earth increases with depth. The rate of temperature increase with depth is known as the geothermal gradient. If water percolates deeply enough into the crust, it will be heated as it comes into contact with hot rock. This generally takes place along faults, where shattered rock beds provide easy paths for water to circulate to greater depths. Much of the heat is created by decay of naturally radioactive elements. An estimated 45 to 90 percent of the heat escaping from the Earth originates from radioactive decay of elements mainly located in the mantle. The major heat-producing isotopes in the Earth are potassium-40, uranium-238, uranium-235, and thorium-232. In areas with no volcanic activity, this heat flows through the crust by a slow process of thermal conduction, but in volcanic areas, the heat is carried to the surface more rapidly by bodies of magma. A hot spring that periodically jets water and steam is called a geyser. In active volcanic zones such as Yellowstone National Park, magma may be present at shallow depths. If a hot spring is connected to a large natural cistern close to such a magma body, the magma may superheat the water in the cistern, raising its temperature above the normal boiling point. The water will not immediately boil, because the weight of the water column above the cistern pressurizes the cistern and suppresses boiling. However, as the superheated water expands, some of the water will emerge at the surface, reducing pressure in the cistern. 
This allows some of the water in the cistern to flash into steam, which forces more water out of the hot spring. This leads to a runaway condition in which a sizable amount of water and steam are forcibly ejected from the hot spring as the cistern is emptied. The cistern then refills with cooler water, and the cycle repeats. Geysers require both a natural cistern and an abundant source of cooler water to refill the cistern after each eruption of the geyser. If the water supply is less abundant, so that the water is boiled as fast as it can accumulate and only reaches the surface in the form of steam, the result is a fumarole. If the water is mixed with mud and clay, the result is a mud pot. An example of a non-volcanic warm spring is Warm Springs, Georgia (frequented for its therapeutic effects by paraplegic U.S. President Franklin D. Roosevelt, who built the Little White House there). Here the groundwater originates as rain and snow (meteoric water) falling on the nearby mountains, which penetrates a particular formation (Hollis Quartzite) to a depth of and is heated by the normal geothermal gradient. Chemistry Because heated water can hold more dissolved solids than cold water, the water that issues from hot springs often has a very high mineral content, containing everything from calcium to lithium and even radium. The overall chemistry of hot springs varies from alkaline chloride to acid sulfate to bicarbonate to iron-rich, each of which defines an end member of a range of possible hot spring chemistries. Alkaline chloride hot springs are fed by hydrothermal fluids that form when groundwater containing dissolved chloride salts reacts with silicate rocks at high temperature. These springs have nearly neutral pH but are saturated with silica (). The solubility of silica depends strongly upon temperature, so upon cooling, the silica is deposited as geyserite, a form of opal (opal-A: ). 
This process is slow enough that geyserite is not all deposited immediately around the vent, but tends to build up a low, broad platform for some distance around the spring opening. Acid sulfate hot springs are fed by hydrothermal fluids rich in hydrogen sulfide (), which is oxidized to form sulfuric acid, . The pH of the fluids is thereby lowered to values as low as 0.8. The acid reacts with rock to alter it to clay minerals, oxide minerals, and a residue of silica. Bicarbonate hot springs are fed by hydrothermal fluids that form when carbon dioxide () and groundwater react with carbonate rocks. When the fluids reach the surface, is rapidly lost and carbonate minerals precipitate as travertine, so that bicarbonate hot springs tend to form high-relief structures around their openings. Iron-rich springs are characterized by the presence of microbial communities that produce clumps of oxidized iron from iron in the hydrothermal fluids feeding the spring. Some hot springs produce fluids that are intermediate in chemistry between these extremes. For example, mixed acid-sulfate-chloride hot springs are intermediate between acid sulfate and alkaline chloride springs and may form by mixing of acid sulfate and alkaline chloride fluids. They deposit geyserite, but in smaller quantities than alkaline chloride springs. Flow rates Hot springs range in flow rate from the tiniest "seeps" to veritable rivers of hot water. Sometimes there is enough pressure that the water shoots upward in a geyser, or fountain. High-flow hot springs There are many claims in the literature about the flow rates of hot springs. There are many more high-flow non-thermal springs than geothermal springs. Springs with high flow rates include:
The Dalhousie Springs complex in Australia had a peak total flow of more than 23,000 liters/second in 1915, giving the average spring in the complex an output of more than 325 liters/second. This has now been reduced to a peak total flow of 17,370 liters/second, so the average spring has a peak output of about 250 liters/second.
The 2,850 hot springs of Beppu in Japan form the highest-flow hot spring complex in Japan. Together the Beppu hot springs produce about 1,592 liters/second, corresponding to an average flow of 0.56 liters/second per spring.
The 303 hot springs of Kokonoe in Japan produce 1,028 liters/second, which gives the average hot spring a flow of 3.39 liters/second.
Ōita Prefecture has 4,762 hot springs, with a total flow of 4,437 liters/second, so the average hot spring flow is 0.93 liters/second.
The highest-flow-rate hot spring in Japan is the Tamagawa Hot Spring in Akita Prefecture, which has a flow rate of 150 liters/second. The Tamagawa Hot Spring feeds a wide stream with a temperature of .
The most famous hot springs of Brazil's Caldas Novas ("New Hot Springs" in Portuguese) are tapped by 86 wells, from which 333 liters/second are pumped for 14 hours per day. This corresponds to a peak average flow rate of 3.89 liters/second per well.
In Florida, there are 33 recognized "magnitude one springs" (having a flow in excess of ). Silver Springs, Florida has a flow of more than .
The Excelsior Geyser Crater in Yellowstone National Park yields about .
Evans Plunge in Hot Springs, South Dakota has a flow rate of of spring water. The Plunge, built in 1890, is the world's largest natural warm-water indoor swimming pool.
The hot spring of Saturnia, Italy, flows at around 500 liters a second.
Lava Hot Springs in Idaho has a flow of 130 liters/second.
Glenwood Springs in Colorado has a flow of 143 liters/second.
Elizabeth Springs in western Queensland, Australia might have had a flow of 158 liters/second in the late 19th century, but now has a flow of about 5 liters/second.
Deildartunguhver in Iceland has a flow of 180 liters/second.
There are at least three hot springs in the Nage region south west of Bajawa in Indonesia that collectively produce more than 453.6 liters/second.
There are another three large hot springs (Mengeruda, Wae Bana and Piga) north east of Bajawa, Indonesia that together produce more than 450 liters/second of hot water.
Ecosystems Hot springs often host communities of microorganisms adapted to life in hot, mineral-laden water. These include thermophiles, which are a type of extremophile that thrives at high temperatures, between . Further from the vent, where the water has had time to cool and precipitate part of its mineral load, conditions favor organisms adapted to less extreme conditions. This produces a succession of microbial communities as one moves away from the vent, which in some respects resembles the successive stages in the evolution of early life. For example, in a bicarbonate hot spring, the community of organisms immediately around the vent is dominated by filamentous thermophilic bacteria, such as Aquifex and other Aquificales, that oxidize sulfide and hydrogen to obtain energy for their life processes. Further from the vent, where water temperatures have dropped below , the surface is covered with microbial mats thick that are dominated by cyanobacteria, such as Spirulina, Oscillatoria, and Synechococcus, and green sulfur bacteria such as Chloroflexus. These organisms are all capable of photosynthesis, though green sulfur bacteria produce sulfur rather than oxygen during photosynthesis. Still further from the vent, where temperatures drop below , conditions are favorable for a complex community of microorganisms that includes Spirulina, Calothrix, diatoms and other single-celled eukaryotes, and grazing insects and protozoans. As temperatures drop close to those of the surroundings, higher plants appear. 
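The per-spring averages quoted for the multi-spring complexes in the flow-rate list above are simple quotients of total flow and spring count. A minimal Python sketch checking three of those figures (all numbers taken from the text; the division itself is the only logic):

```python
# Per-spring average flow = total flow / number of springs.
# (name, number of springs, total flow in liters/second) -- values from the text above
complexes = [
    ("Beppu", 2850, 1592),
    ("Kokonoe", 303, 1028),
    ("Oita Prefecture", 4762, 4437),
]

for name, n_springs, total_flow in complexes:
    avg = total_flow / n_springs
    print(f"{name}: {avg:.2f} liters/second per spring")
```

Running this reproduces the averages stated above: 0.56, 3.39 and 0.93 liters/second per spring, respectively.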
Alkaline chloride hot springs show a similar succession of communities of organisms, with various thermophilic bacteria and archaea in the hottest parts of the vent. Acid sulfate hot springs show a somewhat different succession of microorganisms, dominated by acid-tolerant algae (such as members of Cyanidiophyceae), fungi, and diatoms. Iron-rich hot springs contain communities of photosynthetic organisms that oxidize reduced (ferrous) iron to oxidized (ferric) iron. Hot springs are a dependable source of water that provides a rich chemical environment. This includes reduced chemical species that microorganisms can oxidize as a source of energy. Significance to abiogenesis Hot spring hypothesis In contrast with "black smokers" (hydrothermal vents on the ocean floor), hot springs similar to the terrestrial hydrothermal fields of Kamchatka produce fluids with pH and temperature suitable for early cells and biochemical reactions. Dissolved organic compounds have been found in hot springs at Kamchatka. Metal sulfides and silica minerals in these environments would act as photocatalysts. The springs experience cycles of wetting and drying that promote the formation of biopolymers, which are then encapsulated in vesicles after rehydration. Solar UV exposure promotes the synthesis of monomeric biomolecules. The ionic composition and concentration of hot springs (K, B, Zn, P, O, S, C, Mn, N, and H) are identical to those of the cytoplasm of modern cells, and possibly to those of the LUCA or early cellular life, according to phylogenomic analysis. For these reasons, it has been hypothesized that hot springs may be the place of origin of life on Earth. The hypothesis implies a direct evolutionary pathway to land plants, in which continuous exposure to sunlight led to the development of photosynthetic properties and later to the colonization of land, while life at hydrothermal vents is suggested to be a later adaptation. 
Recent experimental studies at hot springs support this hypothesis. They show that fatty acids self-assemble into membranous structures and encapsulate synthesized biomolecules during exposure to UV light and multiple wet-dry cycles at slightly alkaline or acidic hot springs; this would not happen under saltwater conditions, as the high concentrations of ionic solutes there would inhibit the formation of membranous structures. David Deamer and Bruce Damer note that these hypothesized prebiotic environments resemble Charles Darwin's imagined "warm little pond". If life emerged not at deep-sea hydrothermal vents but in terrestrial pools, extraterrestrial quinones transported to the environment would generate redox reactions conducive to proton gradients. Without continuous wet-dry cycling to maintain the stability of primitive proteins for membrane transport and other biological macromolecules, these molecules would undergo hydrolysis in an aquatic environment. Scientists have discovered a 3.48-billion-year-old geyserite that seemingly preserved fossilized microbial life, stromatolites, and biosignatures. Researchers propose that pyrophosphite was used by early cellular life for energy storage and might have been a precursor to pyrophosphate. Phosphites, which are present at hot springs, would have bonded together into pyrophosphite through wet-dry cycling. Like alkaline hydrothermal vents, the Hakuba Happo hot spring undergoes serpentinization, suggesting that methanogenic microbial life possibly originated in similar habitats. Limitations A problem with the hot spring hypothesis for an origin of life is that phosphate has low solubility in water. Pyrophosphite could have been present within protocells; however, all modern life forms use pyrophosphate for energy storage. Kee suggests that pyrophosphate could have been utilized after the emergence of enzymes. 
Dehydrated conditions would favor phosphorylation of organic compounds and condensation of phosphate to polyphosphate. Another problem is that solar ultraviolet radiation and frequent impacts would have inhibited the habitability of hot springs for early cellular life, although biological macromolecules might have undergone selection during exposure to solar ultraviolet radiation, catalyzed by photocatalytic silica minerals and metal sulfides. Carbonaceous meteors during the Late Heavy Bombardment would not have caused cratering on Earth, as they would have produced fragments upon atmospheric entry. These meteors are estimated to have been 40 to 80 meters in diameter; larger impactors would have produced larger craters. Metabolic pathways have not yet been demonstrated in these environments, but proton gradients might have been generated by redox reactions coupled to meteoric quinones, or by protocell growth. Metabolic reactions of the Wood-Ljungdahl pathway and the reverse Krebs cycle have been produced in acidic conditions at thermophilic temperatures in the presence of metals, which is consistent with observations that RNA is mostly stable at acidic pH. Human uses History Hot springs have been enjoyed by humans for thousands of years. Even macaques are known to have extended their northern range into Japan by making use of hot springs to protect themselves from cold stress. Hot spring baths (onsen) have been in use in Japan for at least two thousand years, traditionally for cleanliness and relaxation, but increasingly for their therapeutic value. In the Homeric Age of Greece (ca. 1000 BCE), baths were primarily for hygiene, but by the time of Hippocrates (ca. 460 BCE), hot springs were credited with healing power. The popularity of hot springs has fluctuated over the centuries since, but they are now popular around the world. 
Therapeutic uses Because of both the folklore and the claimed medical value attributed to some hot springs, they are often popular tourist destinations, and locations for rehabilitation clinics for those with disabilities. However, the scientific basis for therapeutic bathing in hot springs is uncertain. Hot bath therapy for lead poisoning was common and reportedly highly successful in the 18th and 19th centuries; the benefit may have been due to diuresis (increased production of urine) from sitting in hot water, which increased excretion of lead; to better food and isolation from lead sources; and to increased intake of calcium and iron. Significant improvement in patients with rheumatoid arthritis and ankylosing spondylitis has been reported in studies of spa therapy, but these studies have methodological problems, such as the obvious impracticality of placebo-controlled studies (in which a patient does not know if they are receiving the therapy). As a result, the therapeutic effectiveness of hot spring therapy remains uncertain. Precautions Hot springs in volcanic areas are often at or near the boiling point. People have been seriously scalded and even killed by accidentally or intentionally entering these springs. Some hot spring microbiota are infectious to humans: Naegleria fowleri, an excavate amoeba, lives in warm fresh waters worldwide and causes a fatal meningitis should the organisms enter the nose. Acanthamoeba can also spread through hot springs, according to the US Centers for Disease Control; the organisms enter through the eyes or via an open wound. Legionella bacteria have been spread through hot springs. Neisseria gonorrhoeae was reported to have very likely been acquired from bathing in a hot spring in one case study, with the near-body temperature, slightly acidic, isotonic, organic-matter-containing waters thought to facilitate the survival of the pathogen. Etiquette The customs and practices observed differ depending on the hot spring. 
It is common practice that bathers wash before entering the water (with or without soap), so as not to contaminate it. In many countries, such as Japan, it is required to enter the hot spring with no clothes on, including swimwear. Often there are different facilities or times for men and women, but mixed onsen do exist. In some countries, if it is a public hot spring, swimwear is required. Examples There are hot springs in many places and on all continents of the world. Countries that are renowned for their hot springs include China, Costa Rica, Hungary, Iceland, Iran, Japan, New Zealand, Brazil, Peru, Serbia, South Korea, Taiwan, Turkey, and the United States, but there are hot springs in many other places as well: Widely renowned since a chemistry professor's report in 1918 classified them as one of the world's most electrolytic mineral waters, the Rio Hondo Hot Springs in northern Argentina have become among the most visited on earth. The Cacheuta Spa is another famous hot spring in Argentina. The springs in Europe with the highest temperatures are located in France, in a small village named Chaudes-Aigues. Located at the heart of the French volcanic region Auvergne, the thirty natural hot springs of Chaudes-Aigues have temperatures ranging from to more than . The hottest one, the "Source du Par", has a temperature of . The hot waters running under the village have provided heat for the houses and for the church since the 14th century. Chaudes-Aigues (Cantal, France) is a spa town known since the Roman Empire for the treatment of rheumatism. Carbonate aquifers in foreland tectonic settings can host important thermal springs, even though they are located in areas not commonly characterised by high regional heat-flow values. In these cases, when thermal springs are located close to or along coastlines, the subaerial and/or submarine thermal springs constitute the outflow of marine groundwater, flowing through localised fractures and karstic rock volumes. 
This is the case for springs along the south-easternmost portion of the Apulia region (southern Italy), where a few sulphurous warm waters () outflow in partially submerged caves along the Adriatic coast, supplying the historical spas of Santa Cesarea Terme. These springs have been known since ancient times (mentioned by Aristotle in the 3rd century BC), and the physical-chemical features of their thermal waters have been found to be partly influenced by sea-level variations. One of the potential geothermal energy reservoirs in India is the Tattapani thermal springs of Madhya Pradesh. The silica-rich deposits found in Nili Patera, the volcanic caldera in Syrtis Major, Mars, are thought to be the remains of an extinct hot spring system.
Physical sciences
Volcanology
Earth science
67075
https://en.wikipedia.org/wiki/Isoetes
Isoetes
Isoetes, commonly known as the quillworts, is a genus of lycopod. It is the only living genus in the family Isoetaceae and order Isoetales. There are about 200 recognized species, with a cosmopolitan distribution mostly in aquatic habitats but with the individual species often scarce to rare. Species virtually identical to modern quillworts have existed since the Jurassic period, though the timing of the origin of modern Isoetes is subject to considerable uncertainty. The name of the genus may also be spelled Isoëtes. The diaeresis (two dots over the e) indicates that the o and the e are to be pronounced in two distinct syllables. Including this in print is optional; either spelling (Isoetes or Isoëtes) is correct. Description Quillworts are mostly aquatic or semi-aquatic in clear ponds and slow-moving streams, though several (e.g. I. butleri, I. histrix and I. nuttallii) grow on wet ground that dries out in the summer. The quillworts are spore-producing plants and highly reliant on water for spore dispersal, spreading their spores in different ways depending on the environment. Quillwort leaves are hollow and quill-like, with a minute ligule at the base of the upper surface, and arise from a central corm. The sporangia are sunk deeply in the leaf bases. Each leaf bears either many small spores or fewer large spores; both types of leaf are found on each plant. Each leaf is narrow, long (exceptionally up to ) and wide; they can be either evergreen, winter deciduous, or dry-season deciduous. Only the tips of the leaves, about 4% of total biomass, are chlorophyllous. The roots broaden to a swollen base up to wide where they attach in clusters to a bulb-like, underground rhizome characteristic of most quillwort species, though a few (e.g. I. tegetiformans) form spreading mats. This swollen base also contains male and female sporangia, protected by a thin, transparent covering (velum), which is used diagnostically to help identify quillwort species. They are heterosporous. 
Quillwort species are very difficult to distinguish by general appearance. The best way to identify them is by examining their megaspores under a microscope; habitat, texture, spore size, and velum provide further features that distinguish Isoëtes taxa. They also possess a vestigial form of secondary growth in the basal portions of their cormlike stems, an indication that they evolved from larger ancestors. Biochemistry and genetics Quillworts use crassulacean acid metabolism (CAM) for carbon fixation. Some aquatic species do not have stomata, and their leaves have a thick cuticle which prevents CO2 uptake, a task that is performed by their hollow roots instead, which absorb CO2 from the sediment. This has been studied extensively in Isoetes andicola. CAM is normally considered an adaptation to life in arid environments: the plants open their stomata at night rather than in the heat of the day, allowing CO2 to enter while minimising water loss. As mostly submerged aquatic plants, quillworts do not lack water, and their use of CAM is thought instead to avoid competition with other aquatic plants for CO2 during the daytime. The first detailed quillwort genome sequence, of I. taiwanensis, showed that there were differences from CAM in terrestrial plants. CAM involves the enzyme phosphoenolpyruvate carboxylase (PEPC), and plants have two forms of the enzyme: one normally involved in photosynthesis and the other in central metabolism. From the genome sequence, it appears that in quillworts both forms are involved in photosynthesis. In addition, circadian expression of key CAM pathway genes peaked at different times of day than in angiosperms. These fundamental differences in biochemistry suggest that CAM in quillworts is probably another example of convergent evolution of CAM during the more than 300 million years since the genus diverged from other plants. However, they may also reflect differences between life in water and in air. 
The genome sequence also provided two insights into its structure. First, genes and repeated non-coding regions were fairly evenly distributed across all the chromosomes. This is similar to the genomes of other non-seed plants, but different from seed plants such as angiosperms, where there are distinctly more genes at the ends of chromosomes. Second, there was also evidence that the whole genome had been duplicated in the ancient past. Reproduction Overview Like all land plants, Isoetes undergoes an alternation of generations between a diploid sporophyte stage and a sexual haploid gametophyte stage. However, the dominance of one stage over the other has shifted over time. The development of vascular tissue and the subsequent diversification of land plants coincide with the increased dominance of the sporophyte and reduction of the gametophyte. Isoetes, as members of the class Lycopodiopsida, are part of the oldest extant lineage that reflects this shift to a sporophyte-dominant lifecycle. In closely related lineages, such as the extinct Lepidodendron, spores were dispersed by wind from large collections of sporangia, called strobili, on the sporophyte. However, Isoetes are small heterosporous semi-aquatic plants, with reproductive needs and challenges different from those of large tree-like land plants. Description Like the rest of the class Lycopodiopsida, Isoetes reproduces with spores. Among the lycophytes, both Isoetes and the Selaginellaceae (spikemosses) are heterosporous, while the remaining lycophyte family Lycopodiaceae (clubmosses) is homosporous. As heterosporous plants, fertile Isoetes sporophytes produce megaspores and microspores, which develop in the megasporangia and microsporangia. These spores are highly ornate and are the primary way by which species are identified, although no single functional purpose of the intricate surface patterns is agreed upon. 
The megasporangia occur within the outermost microphylls (single-veined leaves) of the plant while the microsporangia are found in the innermost microphylls. This pattern of development is hypothesized to improve the dispersal of the heavier megaspore. These spores then germinate and divide into mega- and micro- gametophytes. The microgametophytes have antheridia, which in turn produce sperm. The megagametophytes have archegonia, which produce egg cells. Fertilization takes place when the motile sperm from a microgametophyte locates the archegonia of a megagametophyte and swims inside to fertilize the egg. Outside of heterospory, a distinguishing feature of Isoetes (and Selaginella) from other pteridophytes, is that their gametophytes grow inside the spores. This means that the gametophytes never leave the protection of the spore that disperses them, cracking the perispore (the outer layer of the spore) just enough to allow the passage of gametes. This is fundamentally different from ferns, where the gametophyte is a photosynthetic plant exposed to the elements of its environment. However, containment creates a separate problem for Isoetes, which is that the gametophytes have no way to acquire energy on their own. Isoetes sporophytes solve this problem by provisioning starches and other nutrients to the spores as an energy reserve for the eventual gametophytes. Although not a homologous process, this provisioning is somewhat analogous to other modes of offspring resource investment in seed-plants, such as fruits and seeds. The extent to which resources provisioned to the megaspore also support the growth of the new sporophyte is unknown in Isoetes. Dispersal Spore dispersal occurs primarily in water (hydrochory) but may also occur via adherence to animals (zoochory) and as a result of ingestion (endozoochory). 
These are among the reasons suggested for the ornamentations of the spore, with some authors demonstrating that certain patterns seem well-adapted for sticking to relevant animals like waterfowl. Another critical element of dispersal is the observation that in some species of Isoetes, the outer coat of the megaspore has pockets that trap microspores, a condition known as synaptospory. Typically, heterospory makes colonization and long-distance dispersal more difficult, because a single spore cannot grow a bisexual gametophyte and thus cannot establish a new population on its own, as can happen in homosporous ferns. Isoetes may mitigate this issue via microspores stuck to megaspores, greatly increasing the possibility of successful fertilization upon dispersal. Taxonomy Compared with other genera, Isoetes is poorly known. The first critical monograph on their taxonomy, written by Norma Etta Pfeiffer, was published in 1922 and remained a standard reference into the twenty-first century. Even after studies with cytology, scanning electron microscopy, and chromatography, species are difficult to identify and their phylogeny is disputed. Vegetative characteristics commonly used to distinguish other genera, such as leaf length, rigidity, color, or shape, are variable and depend on the habitat. Most classification systems for Isoetes rely on spore characteristics, which make species identification nearly impossible without microscopy. Some botanists split the genus, separating two South American species into the genus Stylites, although molecular data place these species among other species of Isoetes, so that Stylites does not warrant taxonomic recognition. Evolution The earliest fossil that has been assigned to the genus is Isoetes beestonii from the latest Permian of New South Wales, Australia, around 252 million years ago. However, the relationships of pre-Jurassic isoetaleans to modern Isoetes have been regarded as unclear by other authors. 
Isoetites rolandii from the Late Jurassic of North America has been described as the "earliest clear example of an isoetalean lycopsid containing all the major features uniting modern Isoetes", including the loss of the elongated stem and vegetative leaves. Based on this, it has been stated that "the overall morphology of Isoetes appears to have persisted virtually unchanged since at least the Jurassic". The timing of the origin of the crown group is uncertain. Wood et al. (2020) asserted there to be no morphological features that define the major clades within Isoetes, and no fossils are known that can be definitively assigned to the crown group. While Wood et al. suggested a young origin dating to the early Cenozoic based on molecular clock estimates, the results were questioned by Wikström et al. (2023), who regarded the molecular clock as providing no firm evidence for the origin time of the genus, which could date to the Mesozoic or even the late Paleozoic, depending on the calibration method used. Extant species Plants of the World Online accepts the following extant species: I. abyssinica I. acadiensis I. aemulans I. aequinoctialis I. alcalophila I. alpina I. alstonii I. amazonica I. anatolica I. andicola I. andina I. appalachiana I. araucaniana I. asiatica I. attenuata I. australis I. azorica I. baculata I. biafrana I. bischlerae I. bolanderi I. boliviensis I. boomii I. boryana I. boyacensis I. bradei I. brasiliensis I. brevicula I. butleri I. cangae I. capensis I. caroli I. caroliniana (regarded by Plants of the World Online as a synonym of I. valida, but treated by other sources as a valid species) I. chubutiana I. coromandelina I. creussensis I. cristata I. cubana I. delilei I. dispora I. dixitii I. drummondii I. durieui I. echinospora I. ecuadoriensis I. ekmanii I. elatior I. eludens I. engelmannii I. escondidensis I. eshbaughii I. flaccida I. fluitans I. fuliginosa I. fuscomarginata I. gardneriana I. georgiana I. giessii I. gigantea I. graniticola I. 
gunnii I. gymnocarpa I. habbemensis I. hallasanensis I. haussknechtii I. hawaiiensis I. heldreichii I. hemivelata I. herzogii I. hewitsonii I. hieronymi I. histrix I. hopei I. howellii I. humilior I. hypsophila I. inflata I. jaegeri I. jamaicensis I. japonica I. jejuensis I. junciformis I. karstenii I. killipii I. kirkii I. labri-draconis I. lacustris I. laosiensis I. lechleri I. libanotica I. lithophila I. longissima I. louisianensis I. luetzelburgii I. macrospora I. malinverniana I. maritima – maritime quillwort I. martii I. mattaponica I. maxima I. melanopoda (I. virginica ) I. melanospora I. melanotheca I. mexicana (syn. Isoetes montezumae ) I. microvela I. minima I. mississippiensis I. mongerensis I. montana I. mourabaptistae I. muelleri I. naipiana I. nana I. neoguineensis I. nigritiana I. nigroreticulata I. novogranadensis I. nuttallii I. occidentalis I. olympica I. orcuttii I. organensis I. orientalis I. ovata I. pallida I. palmeri I. panamensis I. parvula I. pedersenii I. perralderiana I. perrieriana I. philippinensis I. phrygia I. piedmontana I. pitotii I. precocia I. pringlei I. prototypus I. pseudojaponica I. pusilla I. quiririensis I. ramboi I. riparia (syn I. hyemalis ) I. sabatina I. saccharata I. sahyadrii I. saracochensis I. savatieri I. schweinfurthii I. sehnemii I. septentrionalis I. serracarajensis I. setacea I. sinensis (synonym I. coreana ) I. smithii I. spannagelii I. spinulospora I. stellenbossiensis I. stephanseniae I. stevensii I. storkii I. taiwanensis I. tamaulipana I. tegetiformans I. tenella I. tennesseensis I. tenuifolia I. tenuissima I. texana I. todaroana I. toximontana I. transvaalensis I. triangula I. tripus I. truncata I. tuckermanii I. tuerckheimii I. udupiensis I. ulei I. valida I. vanensis I. vermiculata I. viridimontana I. weberi I. welwitschii I. wormaldii I. yunguiensis Many species, such as the Louisiana quillwort and the mat-forming quillwort, are endangered species. 
Several species of Isoetes are commonly called Merlin's grass, especially I. lacustris, but also the endangered species I. tegetiformans. Hybrids I. × altonharvillii I. × brittonii I. × bruntonii I. × carltaylorii I. × dodgei I. × eatonii – Eaton's quillwort I. × echtuckerii I. × fairbrothersii I. × foveolata I. × gopalkrishnae I. × harveyi (syn. I. × heterospora) I. × herb-wagneri I. × hickeyi I. × jeffreyi I. × marensis I. × michinokuana I. × novae-angliae I. × paratunica I. × pseudotruncata Fossil species †Isoetes beestonii (Permian, Australia) †Isoetes bulbiformis (Cretaceous, Australia) †Isoetes ermayinensis (Triassic, China) †Isoetes gramineoides (Triassic, US) †Isoetes hillii (Miocene, Tasmania)
Biology and health sciences
Lycophytes
Plants
67077
https://en.wikipedia.org/wiki/Equisetum
Equisetum
Equisetum (; horsetail) is the only living genus in Equisetaceae, a family of vascular plants that reproduce by spores rather than seeds. Equisetum is a "living fossil", the only living genus of the entire subclass Equisetidae, which for over 100 million years was much more diverse and dominated the understorey of late Paleozoic forests. Some equisetids were large trees reaching to tall. The genus Calamites of the family Calamitaceae, for example, is abundant in coal deposits from the Carboniferous period. The pattern of spacing of nodes in horsetails, wherein those toward the apex of the shoot are increasingly close together, is said to have inspired John Napier to invent logarithms. Modern horsetails first appeared during the Jurassic period. A superficially similar but entirely unrelated flowering plant genus, mare's tail (Hippuris), is occasionally referred to as "horsetail", and adding to confusion, the name "mare's tail" is sometimes applied to Equisetum. Etymology The name "horsetail", often used for the entire group, arose because the branched species somewhat resemble a horse's tail. Similarly, the scientific name Equisetum is derived from the Latin ('horse') + ('bristle'). Other names include candock for branching species, marestail, puzzlegrass, and snake grass or scouring-rush for unbranched or sparsely branched species. The latter name refers to the rush-like appearance of the plants and to the fact that the stems are coated with abrasive silicates, making them useful for scouring (cleaning) metal items such as cooking pots or drinking mugs, particularly those made of tin. Equisetum hyemale, rough horsetail, is still boiled and then dried in Japan to be used for the final polishing process on woodcraft to produce a smooth finish. In German, the corresponding name is ('tin-herb'). In Spanish-speaking countries, these plants are known as ('horsetail'). Description Equisetum leaves are greatly reduced and usually non-photosynthetic. 
They contain a single, non-branching vascular trace, which is the defining feature of microphylls. However, it has recently been recognised that horsetail microphylls are probably not ancestral as in lycophytes (clubmosses and relatives), but rather derived adaptations, evolved by reduction of megaphylls. The leaves of horsetails are arranged in whorls fused into nodal sheaths. The stems are usually green and photosynthetic, and are distinctive in being hollow, jointed and ridged (with sometimes 3 but usually 6–40 ridges). There may or may not be whorls of branches at the nodes. Unusually, the branches often emerge below the leaves in an internode, and grow from buds between their bases. Spores The spores are borne under sporangiophores in strobili, cone-like structures at the tips of some of the stems. In many species the cone-bearing shoots are unbranched, and in some (e.g. E. arvense, field horsetail) they are non-photosynthetic, produced early in spring. In some other species (e.g. E. palustre, marsh horsetail) they are very similar to sterile shoots, photosynthetic and with whorls of branches. Horsetails are mostly homosporous, though in the field horsetail, smaller spores give rise to male prothalli. The spores have four elaters that act as moisture-sensitive springs, assisting spore dispersal through crawling and hopping motions after the sporangia have split open longitudinally. They are photosynthetic and have a lifespan that is usually two weeks at most, but will germinate immediately under humid conditions and develop into a gametophyte. Cell walls The crude cell extracts of all Equisetum species tested contain mixed-linkage glucan : xyloglucan endotransglucosylase (MXE) activity. This is a novel enzyme and is not known to occur in any other plants. In addition, the cell walls of all Equisetum species tested contain mixed-linkage glucan (MLG), a polysaccharide which, until recently, was thought to be confined to the Poales. 
The evolutionary distance between Equisetum and the Poales suggests that each evolved MLG independently. The presence of MXE activity in Equisetum suggests that they have evolved MLG along with some mechanism of cell wall modification. Non-Equisetum land plants tested lack detectable MXE activity. An observed negative correlation between XET activity and cell age led to the suggestion that XET is catalysing endotransglycosylation in controlled wall-loosening during cell expansion. The lack of MXE in the Poales suggests that MLG must play some other, currently unknown, role in those plants. Due to the correlation between MXE activity and cell age, MXE has been proposed to promote the cessation of cell expansion. Taxonomy Species Currently, 18 species of Equisetum are accepted by Plants of the World Online. The living members are divided into three distinct lineages, which are usually treated as subgenera. The name of the type subgenus, Equisetum, means "horse hair" in Latin, while the name of the other large subgenus, Hippochaete, means "horse hair" in Greek. Hybrids are common, but hybridization has only been recorded between members of the same subgenus. Two Equisetum plants are sold commercially under the names Equisetum japonicum (barred horsetail) and Equisetum camtschatcense (Kamchatka horsetail). These are both types of E. hyemale var. hyemale, although they may also be listed as separate varieties of E. hyemale. Evolutionary history The oldest remains of modern horsetails of the genus Equisetum first appear in the Early Jurassic, represented by Equisetum dimorphum from the Early Jurassic of Patagonia and Equisetum laterale from the Early-Middle Jurassic of Australia. Silicified remains of Equisetum thermale from the Late Jurassic of Argentina exhibit all the morphological characters of modern members of the genus. The split between Equisetum bogotense and all other living Equisetum is estimated to have occurred no later than the Early Jurassic. 
Subgenus Paramochaete – Andean horsetail; upland South America up to Costa Rica; includes E. rinihuense, sometimes treated as a separate species. Previously included in subg. Equisetum, but Christenhusz et al. (2019) transfer this here, as E. bogotense appears to be sister to all the remaining species in the genus. Subgenus Equisetum – field horsetail or common horsetail; circumboreal down through temperate zones – northern giant horsetail, syn. E. telmateia subsp. braunii (Milde) Hauke.; west coast of North America – Himalayan horsetail; Himalayan India and China and adjacent nations above about – water horsetail; circumboreal down through temperate zones – marsh horsetail; circumboreal down through temperate zones – shady horsetail, meadow horsetail, shade horsetail; circumboreal except for tundra down through cool temperate zones – wood horsetail; circumboreal down through cool temperate zones, more restricted in east Asia – great horsetail; Europe to Asia Minor and north Africa. The former North American subspecies Equisetum telmateia subsp. braunii (Milde) Hauke is now treated as a separate species Subgenus Hippochaete – southern giant horsetail or giant horsetail; temperate to tropical South America and Central America north to southern Mexico – rough horsetail; most of non-tropical Old World. The former North American subspecies Equisetum hyemale subsp. affine (Engelm.) A.A.Eat. is now treated as a separate species – smooth horsetail, smooth scouringrush; western 3/4 of North America down into northwestern Mexico; also sometimes known as Equisetum kansanum – Mexican giant horsetail; from central Mexico south to Peru – scouringrush horsetail, syn. E. hyemale subsp. affine (Engelm.) A.A.Eat.; temperate North America (including E. 
debile) – branched horsetail; Asia, Europe, Africa, southwest Pacific islands – dwarf horsetail, dwarf scouringrush; northern (cool temperate) zones worldwide – variegated horsetail, variegated scouringrush; northern (cool temperate) zones worldwide, except for northeasternmost Asia – Atacama Desert giant horsetail; southern Peru, northern Chile Unplaced to subgenus Equisetum dimorphum – Early Jurassic, Argentina Equisetum laterale – Early to Middle Jurassic, Australia Equisetum thermale – Middle to Late Jurassic, Argentina Equisetum similkamense – Ypresian, British Columbia Named hybrids Hybrids between species in subgenus Equisetum Equisetum × bowmanii (Equisetum sylvaticum × Equisetum telmateia) Equisetum × dycei (Equisetum fluviatile × Equisetum palustre) Equisetum × font-queri (Equisetum palustre × Equisetum telmateia) Equisetum × litorale (Equisetum arvense × Equisetum fluviatile) Equisetum × mchaffieae (Equisetum fluviatile × Equisetum pratense) Equisetum × mildeanum (Equisetum pratense × Equisetum sylvaticum) Equisetum × robertsii (Equisetum arvense × Equisetum telmateia) Equisetum × rothmaleri (Equisetum arvense × Equisetum palustre) Equisetum × willmotii (Equisetum fluviatile × Equisetum telmateia) Hybrids between species in subgenus Hippochaete Equisetum × ferrissii (Equisetum hyemale × Equisetum laevigatum) Equisetum × moorei (Equisetum hyemale × Equisetum ramosissimum) Equisetum × nelsonii (Equisetum laevigatum × Equisetum variegatum) Equisetum × schaffneri (Equisetum giganteum × Equisetum myriochaetum) Equisetum × trachyodon (Equisetum hyemale × Equisetum variegatum) Phylogeny Distribution and ecology The genus Equisetum as a whole, while concentrated in the non-tropical northern hemisphere, is near-cosmopolitan, being absent naturally only from Antarctica, Australia, New Zealand, and the islands of the Pacific Ocean. They are most common in northern Europe, with ten species (E. arvense, E. fluviatile, E. hyemale, E. palustre, E. pratense, E. 
ramosissimum, E. scirpoides, E. sylvaticum, E. telmateia, and E. variegatum); Great Britain has nine of these species, missing only E. scirpoides of the European list. Northern North America (Canada and the northernmost United States), also has nine species (E. arvense, E. fluviatile, E. laevigatum, E. palustre, E. praealtum, E. pratense, E. scirpoides, E. sylvaticum, and E. variegatum). Only five (E. bogotense, E. giganteum, E. myriochaetum, E. ramosissimum, and E. xylochaetum) of the eighteen species are known to be native south of the Equator. They are perennial plants, herbaceous and dying back in winter in most temperate species, or evergreen as most tropical species and the temperate species E. hyemale (rough horsetail), E. ramosissimum (branched horsetail), E. scirpoides (dwarf horsetail) and E. variegatum (variegated horsetail). They typically grow 20 cm–1.5 m (8 in–5 ft) tall, though the subtropical "giant horsetails" are recorded to grow as high as (E. giganteum, southern giant horsetail) or (E. myriochaetum, Mexican giant horsetail), and allegedly even more. One species, Equisetum fluviatile, is an emergent aquatic, rooted in water with shoots growing into the air. The stalks arise from rhizomes that are deep underground and difficult to dig out. Field horsetail (E. arvense) can be a nuisance weed, readily regrowing from the rhizome after being pulled out. It is unaffected by many herbicides designed to kill seed plants. Since the stems have a waxy coat, the plant is resistant to contact weedkillers like glyphosate. However, as E. arvense prefers an acid soil, lime may be used to assist in eradication efforts to bring the soil pH to 7 or 8. Members of the genus have been declared noxious weeds in Australia and in the US state of Oregon. All the Equisetum are classed as "unwanted organisms" in New Zealand and are listed on the National Pest Plant Accord. Consumption People have regularly consumed horsetails. 
The fertile stems bearing strobili of some species can be cooked and eaten like asparagus (a dish called in Japan). Indigenous nations across Cascadia consume and use horsetails in a variety of ways, with the Squamish calling them sx̱ém'x̱em and the Lushootseed using gʷəɫik, or horsetail roots, for cedar root baskets. The young plants are eaten cooked or raw, but considerable care must be taken. If eaten over a long enough period of time, some species of horsetail can be poisonous to grazing animals, including horses. The toxicity appears to be due to thiaminase, which can cause thiamin (vitamin B1) deficiency. Equisetum species may have been a common food for herbivorous dinosaurs. With studies showing that horsetails are nutritionally of high quality, it is assumed that horsetails were an important component of herbivorous dinosaur diets. Analysis of the scratch marks on hadrosaur teeth is consistent with grazing on hard plants like horsetails. Folk medicine and safety concerns Extracts and other preparations of E. arvense have served as herbal remedies, with records dating back centuries. In 2009, the European Food Safety Authority concluded there was no evidence for the supposed health effects of E. arvense, such as for invigoration, weight control, skincare, hair health or bone health. There is insufficient scientific evidence for its effectiveness as a medicine to treat any human condition. E. arvense contains thiaminase, which metabolizes the B vitamin thiamine, potentially causing thiamine deficiency and associated liver damage if taken chronically. Horsetail might produce a diuretic effect. Further, its safety for oral consumption has not been sufficiently evaluated and it may be toxic, especially to children and pregnant women.
Biology and health sciences
Pteridophytes
null
67088
https://en.wikipedia.org/wiki/Conservation%20of%20energy
Conservation of energy
The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. In the case of a closed system, the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite. Classically, the conservation of energy was distinct from the conservation of mass. However, special relativity shows that mass is related to energy and vice versa by E = mc², the equation representing mass–energy equivalence, and science now takes the view that mass-energy as a whole is conserved. Theoretically, this implies that mass can itself be converted to energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation. Given the stationary-action principle, the conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, the conservation of energy can arguably be violated by general relativity on the cosmological scale. 
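The scale implied by the mass–energy relation E = mc² can be made concrete with a short calculation. The sketch below is illustrative only; the function name is a hypothetical helper, not part of any cited source:

```python
# Rest energy from mass-energy equivalence, E = m * c^2 (SI units).
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def rest_energy(mass_kg: float) -> float:
    """Return the rest energy in joules of the given mass in kilograms."""
    return mass_kg * C ** 2

# One gram of mass is equivalent to roughly 9 x 10^13 J.
print(f"{rest_energy(0.001):.3e} J")
```

Because c² is so large, even a tiny mass corresponds to an enormous energy, which is why conversion between the two is observed only under extreme conditions.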
In quantum mechanics, Noether's theorem is known to apply to the expected value, making any consistent conservation violation provably impossible, but whether individual conservation-violating events could ever exist or be observed is subject to some debate. History Ancient philosophers as far back as Thales of Miletus (c. 550 BCE) had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify their theories with what we know today as "mass-energy" (for example, Thales thought it was water). Empedocles (490–430 BCE) wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. Epicurus (c. 350 BCE) on the other hand believed everything in the universe to be composed of indivisible units of matter—the ancient precursor to 'atoms'—and he too had some idea of the necessity of conservation, stating that "the sum total of things was always such as it is now, and such it will ever remain." In 1605, the Flemish scientist Simon Stevin was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface. In 1669, Christiaan Huygens published a brief account on his laws of collision. 
Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta as well as the sum of their kinetic energies. However, the difference between elastic and inelastic collision was not understood at the time. This led to a dispute among later researchers as to which of these conserved quantities was the more fundamental. In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. His study of the dynamics of pendulum motion was based on a single principle, known as Torricelli's Principle: that the center of gravity of a heavy object, or collection of objects, cannot lift itself. Using this principle, Huygens was able to derive the formula for the center of oscillation by an "energy" method, without dealing with forces or torques. Between 1676 and 1689, Gottfried Leibniz first attempted a mathematical formulation of the kind of energy that is associated with motion (kinetic energy). Using Huygens's work on collision, Leibniz noticed that in many mechanical systems (of several masses mᵢ, each with velocity vᵢ), the quantity ∑ mᵢvᵢ² was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, including Isaac Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum ∑ mᵢvᵢ, was the conserved vis viva. It was later shown that both quantities are conserved simultaneously given the proper conditions, such as in an elastic collision. In 1687, Isaac Newton published his Principia, which set out his laws of motion. It was organized around the concept of force and momentum. 
However, researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required. By the 1690s, Leibniz was arguing that conservation of vis viva and conservation of momentum undermined the then-popular philosophical doctrine of interactionist dualism. (During the 19th century, when conservation of energy was better understood, Leibniz's basic argument would gain widespread acceptance. Some modern scholars continue to champion specifically conservation-based attacks on dualism, while others subsume the argument into a more general argument about causal closure.) The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his Hydrodynamica, published in 1738, on this single vis viva conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate Bernoulli's principle, which asserts the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines; and he gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas. This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as D'Alembert's principle and the Lagrangian and Hamiltonian formulations of mechanics. Émilie du Châtelet (1706–1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. 
Each ball's kinetic energy—as indicated by the quantity of material displaced—was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height from which the balls were dropped, equal to the initial potential energy. Some earlier workers, including Newton and Voltaire, had believed that "energy" was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped. In classical physics, the correct formula is E = ½mv², where E is the kinetic energy of an object, m its mass and v its speed. On this basis, du Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to consider it in different forms (kinetic, potential, heat, ...). Engineers such as John Smeaton, Peter Ewart, Gustave-Adolphe Hirn, and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. 
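The contrast the clay-ball experiment exposed can be checked numerically. A minimal sketch (masses and heights are illustrative, not 's Gravesande's actual values): for a ball falling freely from height h, the impact kinetic energy scales linearly with h, while the momentum scales only with √h.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def impact_quantities(mass_kg, height_m):
    """Speed, kinetic energy, and momentum of a ball dropped from rest."""
    v = math.sqrt(2 * g * height_m)   # v^2 = 2gh for free fall from rest
    ke = 0.5 * mass_kg * v**2         # E = (1/2) m v^2, du Chatelet's measure
    p = mass_kg * v                   # momentum, the rival "measure of motion"
    return v, ke, p

m = 0.1  # a 100 g ball (illustrative)
_, ke1, p1 = impact_quantities(m, 1.0)
_, ke2, p2 = impact_quantities(m, 4.0)

# Quadrupling the drop height quadruples the kinetic energy (clay deformation),
# but only doubles the momentum, as Newton's camp would have predicted.
print(round(ke2 / ke1, 6))  # 4.0
print(round(p2 / p1, 6))    # 2.0
```

The linear-in-height deformation observed in the experiment is thus consistent with E ∝ v² and inconsistent with a "measure of motion" proportional to v.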
Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and, importantly, that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). Vis viva then started to be known as energy, after the term was first used in that sense by Thomas Young in 1807. The recalibration of vis viva to ½ ∑ mᵢvᵢ², which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work), and both championed its use in engineering calculations. In the paper Über die Natur der Wärme (German for "On the Nature of Heat"), published in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called Kraft [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others." Mechanical equivalent of heat A key stage in the development of the modern conservation principle was the demonstration of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable. In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. 
Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid. In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt. The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy, and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them. Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In one of them, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark. Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition. In 1844, the Welsh scientist William Robert Grove postulated a relationship between mechanics, heat, light, electricity, and magnetism by treating them all as manifestations of a single "force" (energy in modern terms). 
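Joule's bookkeeping in the paddle-wheel experiment described above amounts to equating the gravitational potential energy given up by the weight, mgh, with the heat gained by the water, m_water·c·ΔT. A rough sketch with illustrative (not historical) numbers shows why the temperature rise was so hard to measure:

```python
g = 9.81          # gravitational acceleration, m/s^2
c_water = 4186.0  # specific heat capacity of water, J/(kg*K)

# Illustrative numbers: a 10 kg weight descending 2 m while stirring
# 0.5 kg of water via the paddle.
m_weight, drop = 10.0, 2.0
m_water = 0.5

pe_lost = m_weight * g * drop            # gravitational PE given up, J
delta_t = pe_lost / (m_water * c_water)  # rise if all of it heats the water

print(round(pe_lost, 1), "J")  # 196.2 J
print(round(delta_t, 3), "K")  # ~0.094 K: tiny, hence the experimental difficulty
```

A fraction of a kelvin per drop is the scale Joule had to resolve, which is consistent with the slow acceptance of his results.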
In 1846, Grove published his theories in his book The Correlation of Physical Forces. In 1847, drawing on the earlier work of Joule, Sadi Carnot, and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general modern acceptance of the principle stems from this publication. In 1850, the Scottish mathematician William Rankine first used the phrase the law of the conservation of energy for the principle. In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia Mathematica. This is now regarded as an example of Whig history. Mass–energy equivalence Matter is composed of atoms and what makes up atoms. Matter has intrinsic or rest mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of rest energy. This means that rest mass can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the total mass or total energy. All forms of energy contribute to the total mass and total energy. For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total mass nor the total energy of the system will change. 
The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass. Thus, conservation of energy (total, including material or rest energy) and conservation of mass (total, not just rest) are one (equivalent) law. In the 18th century, these had appeared as two seemingly-distinct laws. Conservation of energy in beta decay The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta-decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy. First law of thermodynamics For a closed thermodynamic system, the first law of thermodynamics may be stated as δQ = dU + δW, or equivalently, dU = δQ − δW, where δQ is the quantity of energy added to the system by a heating process, δW is the quantity of energy lost by the system due to work done by the system on its surroundings, and dU is the change in the internal energy of the system. The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently than the increment of internal energy (see Inexact differential). Work and heat refer to kinds of process which add or subtract energy to or from a system, while the internal energy is a property of a particular state of the system when it is in unchanging thermodynamic equilibrium. Thus the term "heat energy" for δQ means "that amount of energy added as a result of heating" rather than referring to a particular form of energy. 
Likewise, the term "work energy" for δW means "that amount of energy lost as a result of work". Thus one can state the amount of internal energy possessed by a thermodynamic system that one knows is presently in a given state, but one cannot tell, just from knowledge of the given present state, how much energy has in the past flowed into or out of the system as a result of its being heated or cooled, nor as a result of work being performed on or by the system. Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work. For a simple compressible system, the work performed by the system may be written δW = P dV, where P is the pressure and dV is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called quasi-static, and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, the heat energy may be written δQ = T dS, where T is the temperature and dS is a small change in the entropy of the system. Temperature and entropy are variables of the state of a system. If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written as dU = δQ − δW + ∑ᵢ hᵢ dMᵢ, where dMᵢ is the added mass of species i and hᵢ is the corresponding enthalpy per unit mass. Note that generally dS ≠ δQ/T in this case, as matter carries its own entropy. Instead, dS = δQ/T + ∑ᵢ sᵢ dMᵢ, where sᵢ is the entropy per unit mass of type i, from which we recover the fundamental thermodynamic relation dU = T dS − P dV + ∑ᵢ μᵢ dNᵢ, because the chemical potential μᵢ is the partial molar Gibbs free energy of species i and the Gibbs free energy is G = H − TS. Noether's theorem The conservation of energy is a common feature in many physical theories. 
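As a concrete application of the first law stated above, consider a quasi-static isothermal expansion of an ideal gas: the internal energy of an ideal gas depends only on temperature, so dU = 0 over the process and the heat absorbed must exactly equal the work done. The numbers below are illustrative:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

# Quasi-static isothermal expansion of 1 mol of ideal gas at 300 K,
# with the volume doubling (illustrative values).
n, T = 1.0, 300.0
V1, V2 = 1.0, 2.0

W = n * R * T * math.log(V2 / V1)  # integral of P dV with P = nRT/V
Q = W                               # dU = deltaQ - deltaW = 0, so Q = W

print(round(W, 1))  # ~1728.8 J of work done by the gas, all supplied as heat
```

The first law by itself does not say the process must be reversible; the quasi-static assumption is what lets the work be written as the integral of P dV.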
From a mathematical point of view it is understood as a consequence of Noether's theorem, developed by Emmy Noether in 1915 and first published in 1918. In any physical theory that obeys the stationary-action principle, the theorem states that every continuous symmetry has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems that are not invariant under shifts in time (e.g. systems with time-dependent potential energy) do not exhibit conservation of energy – unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time-invariant again. Conservation of energy for finite systems is valid in physical theories such as special relativity and quantum theory (including QED) in the flat space-time. Special relativity With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated). 
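The Noether-theorem statement above can be illustrated (not proven) numerically: with a time-independent potential a symplectic leapfrog integrator keeps the total energy essentially constant, while an explicitly time-dependent drive, which breaks time-translation symmetry, exchanges energy with the system. All parameters below are illustrative.

```python
import math

def simulate(force, steps=10000, dt=1e-3):
    """Leapfrog (velocity Verlet) integration of a unit-mass particle."""
    x, v = 1.0, 0.0
    energies = []
    for i in range(steps):
        t = i * dt
        v_half = v + 0.5 * dt * force(x, t)
        x = x + dt * v_half
        v = v_half + 0.5 * dt * force(x, t + dt)
        energies.append(0.5 * v**2 + 0.5 * x**2)  # KE + spring PE
    return energies

static = simulate(lambda x, t: -x)                      # V(x) = x^2/2, no t-dependence
driven = simulate(lambda x, t: -x + 0.5 * math.cos(t))  # external drive breaks the symmetry

drift_static = max(static) - min(static)
drift_driven = max(driven) - min(driven)
print(drift_static)  # tiny: integrator error only
print(drift_driven)  # order-one: energy pumped in by the drive
```

This mirrors the statement in the text: the driven system conserves energy only if one enlarges it to include whatever supplies the drive.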
The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the equation E = mc². Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation. General relativity General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form. John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. 
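The energy–momentum bookkeeping described in the special relativity section above can be sketched in a few lines, in units where c = 1: summing four-momenta and applying the energy–momentum relation E² = |p|² + m² yields the invariant mass, for a single particle or a whole system. The particle values are made up for illustration.

```python
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-momenta, in units where c = 1."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# A single particle of mass 1 with momentum 3 along x, so E = sqrt(1 + 9):
one = [(math.sqrt(10.0), 3.0, 0.0, 0.0)]
print(round(invariant_mass(one), 6))  # 1.0

# Two back-to-back photons (each massless, E = 5): individually massless,
# but the *system* has invariant mass 10, since the total momentum is zero.
photons = [(5.0, 5.0, 0.0, 0.0), (5.0, -5.0, 0.0, 0.0)]
print(round(invariant_mass(photons), 6))  # 10.0
```

The photon-pair case illustrates the text's point that momenta and energies are summed separately before the Minkowski norm is taken.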
In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor. For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero. Quantum theory In quantum mechanics, the energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, the probability distribution of each measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. Thus energy is conserved by the normal unitary evolution of a quantum system. 
However, when the non-unitary Born rule is applied, the system's measured energy can be below or above the expectation value, if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well-understood; most physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has provided definitive evidence of violations of the conservation of energy principle in quantum mechanics, but this does not rule out that some newly proposed experiments may find such evidence. Status In the context of perpetual motion machines such as the Orbo, Professor Eric Ash has argued at the BBC: "Denying [conservation of energy] would undermine not just little bits of science - the whole edifice would be no more. All of the technology on which we built the modern world would lie in ruins". It is because of conservation of energy that "we know - without having to examine details of a particular device - that Orbo cannot work." Energy conservation has been a foundational physical principle for about two hundred years. From the point of view of modern general relativity, the lab environment can be well approximated by Minkowski spacetime, where energy is exactly conserved. The entire Earth can be well approximated by the Schwarzschild metric, where again energy is exactly conserved. Given all the experimental evidence, any new theory (such as quantum gravity), in order to be successful, will have to explain why energy has appeared to always be exactly conserved in terrestrial experiments. 
In some speculative theories, corrections to quantum mechanics are too small to be detected at anywhere near the current TeV level accessible through particle accelerators. Doubly special relativity models may argue for a breakdown in energy-momentum conservation for sufficiently energetic particles; such models are constrained by observations that cosmic rays appear to travel for billions of years without displaying anomalous non-conservation behavior. Some interpretations of quantum mechanics claim that observed energy tends to increase when the Born rule is applied due to localization of the wave function. If true, objects could be expected to spontaneously heat up; thus, such models are constrained by observations of large, cool astronomical objects as well as the observation of (often supercooled) laboratory experiments. Milton A. Rothman wrote that the law of conservation of energy has been verified by nuclear physics experiments to an accuracy of one part in a thousand million million (10¹⁵). He then describes its precision as "perfect for all practical purposes".
https://en.wikipedia.org/wiki/Red%20blood%20cell
Red blood cell
Red blood cells (RBCs), referred to as erythrocytes (from Greek erythros for 'red' and kytos for 'hollow vessel', with -cyte translated as 'cell' in modern usage) in academia and medical publishing, also known as red cells, erythroid cells, and rarely haematids, are the most common type of blood cell and the vertebrate's principal means of delivering oxygen (O2) to the body tissues—via blood flow through the circulatory system. Erythrocytes take up oxygen in the lungs, or in fish the gills, and release it into tissues while squeezing through the body's capillaries. The cytoplasm of a red blood cell is rich in hemoglobin (Hb), an iron-containing biomolecule that can bind oxygen and is responsible for the red color of the cells and the blood. Each human red blood cell contains approximately 270 million hemoglobin molecules. The cell membrane is composed of proteins and lipids, and this structure provides properties essential for physiological cell function such as deformability and stability of the blood cell while traversing the circulatory system and specifically the capillary network. In humans, mature red blood cells are flexible biconcave disks. They lack a cell nucleus (which is expelled during development) and organelles, to accommodate maximum space for hemoglobin; they can be viewed as sacks of hemoglobin, with a plasma membrane as the sack. Approximately 2.4 million new erythrocytes are produced per second in human adults. The cells develop in the bone marrow and circulate for about 100–120 days in the body before their components are recycled by macrophages. Each circulation takes about 60 seconds (one minute). Approximately 84% of the cells in the human body are the 20–30 trillion red blood cells. Nearly half of the blood's volume (40% to 45%) is red blood cells. Packed red blood cells are red blood cells that have been donated, processed, and stored in a blood bank for blood transfusion. Structure Vertebrates The vast majority of vertebrates, including mammals and humans, have red blood cells. 
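The production and lifespan figures quoted above are mutually consistent: in steady state the total red-cell population should be roughly the production rate times the mean lifespan, which a quick back-of-the-envelope check confirms.

```python
# Steady-state consistency check of the figures in the text:
# total count ~ (production rate) x (average lifespan).
production_per_s = 2.4e6      # new erythrocytes per second
lifespan_s = 120 * 24 * 3600  # ~120-day lifespan, in seconds

steady_state_count = production_per_s * lifespan_s
print(f"{steady_state_count:.2e}")  # ~2.5e13, i.e. ~25 trillion cells,
                                    # consistent with the quoted 20-30 trillion
```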
Red blood cells are cells present in blood to transport oxygen. The only known vertebrates without red blood cells are the crocodile icefish (family Channichthyidae); they live in very oxygen-rich cold water and transport oxygen freely dissolved in their blood. While they no longer use hemoglobin, remnants of hemoglobin genes can be found in their genome. Vertebrate red blood cells consist mainly of hemoglobin, a complex metalloprotein containing heme groups whose iron atoms temporarily bind to oxygen molecules (O2) in the lungs or gills and release them throughout the body. Oxygen can easily diffuse through the red blood cell's cell membrane. Hemoglobin in the red blood cells also carries some of the waste product carbon dioxide back from the tissues; most waste carbon dioxide, however, is transported back to the pulmonary capillaries of the lungs as bicarbonate (HCO3−) dissolved in the blood plasma. Myoglobin, a compound related to hemoglobin, acts to store oxygen in muscle cells. The color of red blood cells is due to the heme group of hemoglobin. The blood plasma alone is straw-colored, but the red blood cells change color depending on the state of the hemoglobin: when combined with oxygen the resulting oxyhemoglobin is scarlet, and when oxygen has been released the resulting deoxyhemoglobin is of a dark red burgundy color. However, blood can appear bluish when seen through the vessel wall and skin. Pulse oximetry takes advantage of the hemoglobin color change to directly measure the arterial blood oxygen saturation using colorimetric techniques. Hemoglobin also has a very high affinity for carbon monoxide, forming carboxyhemoglobin which is a very bright red in color. Flushed, confused patients with a saturation reading of 100% on pulse oximetry are sometimes found to be suffering from carbon monoxide poisoning. 
Having oxygen-carrying proteins inside specialized cells (as opposed to oxygen carriers being dissolved in body fluid) was an important step in the evolution of vertebrates as it allows for less viscous blood, higher concentrations of oxygen, and better diffusion of oxygen from the blood to the tissues. The size of red blood cells varies widely among vertebrate species; red blood cell width is on average about 25% larger than capillary diameter, and it has been hypothesized that this improves the oxygen transfer from red blood cells to tissues. Mammals The red blood cells of mammals are typically shaped as biconcave disks: flattened and depressed in the center, with a dumbbell-shaped cross section, and a torus-shaped rim on the edge of the disk. This shape allows for a high surface-area-to-volume (SA/V) ratio to facilitate diffusion of gases. However, there are some exceptions concerning shape in the artiodactyl order (even-toed ungulates including cattle, deer, and their relatives), which displays a wide variety of bizarre red blood cell morphologies: small and highly ovaloid cells in llamas and camels (family Camelidae), tiny spherical cells in mouse deer (family Tragulidae), and cells which assume fusiform, lanceolate, crescentic, and irregularly polygonal and other angular forms in red deer and wapiti (family Cervidae). Members of this order have clearly evolved a mode of red blood cell development substantially different from the mammalian norm. Overall, mammalian red blood cells are remarkably flexible and deformable so as to squeeze through tiny capillaries, as well as to maximize their apposing surface by assuming a cigar shape, where they efficiently release their oxygen load. Red blood cells in mammals are unique amongst vertebrates as they do not have nuclei when mature. They do have nuclei during early phases of erythropoiesis, but extrude them during development as they mature; this provides more space for hemoglobin. 
The red blood cells without nuclei, called reticulocytes, subsequently lose all other cellular organelles such as their mitochondria, Golgi apparatus and endoplasmic reticulum. The spleen acts as a reservoir of red blood cells, but this effect is somewhat limited in humans. In some other mammals such as dogs and horses, the spleen sequesters large numbers of red blood cells, which are dumped into the blood during times of exertion stress, yielding a higher oxygen transport capacity. Human A typical human red blood cell has a disk diameter of approximately 6.2–8.2 μm and a maximum thickness of 2–2.5 μm and a minimum thickness in the centre of 0.8–1 μm, being much smaller than most other human cells. These cells have an average volume of about 90 fL with a surface area of about 136 μm2, and can swell up to a sphere shape containing 150 fL, without membrane distension. Adult humans have roughly 20–30 trillion red blood cells at any given time, constituting approximately 70% of all cells by number. Women have about 4–5 million red blood cells per microliter (cubic millimeter) of blood and men about 5–6 million; people living at high altitudes with low oxygen tension will have more. Red blood cells are thus much more common than the other blood particles: there are about 4,000–11,000 white blood cells and about 150,000–400,000 platelets per microliter. Human red blood cells take on average 60 seconds to complete one cycle of circulation. The blood's red color is due to the spectral properties of the hemic iron ions in hemoglobin. Each hemoglobin molecule carries four heme groups; hemoglobin constitutes about a third of the total cell volume. Hemoglobin is responsible for the transport of more than 98% of the oxygen in the body (the remaining oxygen is carried dissolved in the blood plasma). The red blood cells of an average adult human male store collectively about 2.5 grams of iron, representing about 65% of the total iron contained in the body. 
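The quoted volume (about 90 fL) and surface area (about 136 μm²) quantify how the biconcave shape favors gas exchange: compared with a sphere of equal volume, the disc carries roughly 40% more surface area. A quick check using the figures from the text:

```python
import math

# Quoted human RBC figures (1 fL = 1 cubic micrometre):
V = 90.0       # cell volume, um^3
A_rbc = 136.0  # cell surface area, um^2

# Surface area of a sphere with the same volume:
r = (3 * V / (4 * math.pi)) ** (1 / 3)  # equal-volume sphere radius
A_sphere = 4 * math.pi * r**2

print(round(A_sphere, 1))          # ~97.1 um^2 for the sphere
print(round(A_rbc / A_sphere, 2))  # the disc has ~1.4x the area per unit volume
```

This extra area, together with the thin (~1 μm) central region, shortens the diffusion path for oxygen across the cell.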
Microstructure Nucleus Red blood cells in mammals are anucleate when mature, meaning that they lack a cell nucleus. In comparison, the red blood cells of other vertebrates have nuclei; the only known exceptions are salamanders of the family Plethodontidae, where five different clades have evolved various degrees of enucleated red blood cells (most evolved in some species of the genus Batrachoseps), and fish of the genus Maurolicus. The elimination of the nucleus in vertebrate red blood cells has been offered as an explanation for the subsequent accumulation of non-coding DNA in the genome. The argument runs as follows: Efficient gas transport requires red blood cells to pass through very narrow capillaries, and this constrains their size. In the absence of nuclear elimination, the accumulation of repeat sequences is constrained by the volume occupied by the nucleus, which increases with genome size. Nucleated red blood cells in mammals consist of two forms: normoblasts, which are normal erythropoietic precursors to mature red blood cells, and megaloblasts, which are abnormally large precursors that occur in megaloblastic anemias. Membrane composition Red blood cells are deformable, flexible, are able to adhere to other cells, and are able to interface with immune cells. Their membrane plays many roles in this. These functions are highly dependent on the membrane composition. The red blood cell membrane is composed of 3 layers: the glycocalyx on the exterior, which is rich in carbohydrates; the lipid bilayer which contains many transmembrane proteins, besides its lipidic main constituents; and the membrane skeleton, a structural network of proteins located on the inner surface of the lipid bilayer. Half of the membrane mass in human and most mammalian red blood cells is protein. The other half is lipids, namely phospholipids and cholesterol. 
Membrane lipids The red blood cell membrane comprises a typical lipid bilayer, similar to what can be found in virtually all human cells. Simply put, this lipid bilayer is composed of cholesterol and phospholipids in equal proportions by weight. The lipid composition is important as it defines many physical properties such as membrane permeability and fluidity. Additionally, the activity of many membrane proteins is regulated by interactions with lipids in the bilayer. Unlike cholesterol, which is evenly distributed between the inner and outer leaflets, the 5 major phospholipids are asymmetrically disposed, as shown below:
Outer monolayer: phosphatidylcholine (PC); sphingomyelin (SM).
Inner monolayer: phosphatidylethanolamine (PE); phosphatidylserine (PS); phosphatidylinositol (PI) (small amounts).
This asymmetric phospholipid distribution among the bilayer is the result of the function of several energy-dependent and energy-independent phospholipid transport proteins. Proteins called "flippases" move phospholipids from the outer to the inner monolayer, while others called "floppases" do the opposite operation, against a concentration gradient in an energy-dependent manner. Additionally, there are also "scramblase" proteins that move phospholipids in both directions at the same time, down their concentration gradients in an energy-independent manner. There is still considerable debate ongoing regarding the identity of these membrane maintenance proteins in the red cell membrane. The maintenance of an asymmetric phospholipid distribution in the bilayer (such as the exclusive localization of PS and PIs in the inner monolayer) is critical for cell integrity and function for several reasons: Macrophages recognize and phagocytose red cells that expose PS at their outer surface. Thus the confinement of PS in the inner monolayer is essential if the cell is to survive its frequent encounters with macrophages of the reticuloendothelial system, especially in the spleen.
Premature destruction of thalassemic and sickle red cells has been linked to disruptions of lipid asymmetry leading to exposure of PS on the outer monolayer. An exposure of PS can potentiate adhesion of red cells to vascular endothelial cells, effectively preventing normal transit through the microvasculature. Thus it is important that PS is maintained only in the inner leaflet of the bilayer to ensure normal blood flow in microcirculation. Both PS and phosphatidylinositol 4,5-bisphosphate (PIP2) can regulate membrane mechanical function, due to their interactions with skeletal proteins such as spectrin and protein 4.1R. Recent studies have shown that binding of spectrin to PS promotes membrane mechanical stability. PIP2 enhances the binding of protein band 4.1R to glycophorin C but decreases its interaction with protein band 3, and thereby may modulate the linkage of the bilayer to the membrane skeleton. The presence of specialized structures named "lipid rafts" in the red blood cell membrane has been described by recent studies. These are structures enriched in cholesterol and sphingolipids associated with specific membrane proteins, namely flotillins, stomatins (band 7), G-proteins, and β-adrenergic receptors. Lipid rafts, which have been implicated in cell signaling events in nonerythroid cells, have been shown in erythroid cells to mediate β2-adrenergic receptor signaling and increase cAMP levels, thereby regulating entry of malarial parasites into normal red cells. Membrane proteins The proteins of the membrane skeleton are responsible for the deformability, flexibility and durability of the red blood cell, enabling it to squeeze through capillaries less than half the diameter of the red blood cell (7–8 μm) and to recover the discoid shape as soon as these cells stop receiving compressive forces, in a similar fashion to an object made of rubber.
There are currently more than 50 known membrane proteins, which can exist in a few hundred up to a million copies per red blood cell. Approximately 25 of these membrane proteins carry the various blood group antigens, such as the A, B and Rh antigens, among many others. These membrane proteins can perform a wide diversity of functions, such as transporting ions and molecules across the red cell membrane, adhesion and interaction with other cells such as endothelial cells, acting as signaling receptors, as well as other currently unknown functions. The blood types of humans are due to variations in surface glycoproteins of red blood cells. Disorders of the proteins in these membranes are associated with many disorders, such as hereditary spherocytosis, hereditary elliptocytosis, hereditary stomatocytosis, and paroxysmal nocturnal hemoglobinuria. The red blood cell membrane proteins, organized according to their function, include:
Transport
Band 3 – anion transporter, also an important structural component of the red blood cell membrane; makes up about 25% of the cell membrane surface, and each red cell contains approximately one million copies. Defines the Diego blood group.
Aquaporin 1 – water transporter; defines the Colton blood group.
Glut1 – glucose and L-dehydroascorbic acid transporter.
MCT1 – monocarboxylate transporter for exporting lactic acid to the liver; see Cori cycle.
Kidd antigen protein – urea transporter.
RHAG – gas transporter, probably of carbon dioxide; defines the Rh blood group and the associated unusual blood group phenotype Rhnull.
Na+/K+-ATPase.
Ca2+-ATPase.
Na+-K+-2Cl− cotransporter.
Na+-Cl− cotransporter.
Na-H exchanger.
K-Cl cotransporter.
Gardos channel.
Cell adhesion
ICAM-4 – interacts with integrins.
BCAM – a glycoprotein that defines the Lutheran blood group; also known as Lu or laminin-binding protein.
Structural role – The following membrane proteins establish linkages with skeletal proteins and may play an important role in regulating cohesion between the lipid bilayer and membrane skeleton, likely enabling the red cell to maintain its favorable membrane surface area by preventing the membrane from collapsing (vesiculating).
Ankyrin-based macromolecular complex – proteins linking the bilayer to the membrane skeleton through the interaction of their cytoplasmic domains with ankyrin.
Band 3 – also assembles various glycolytic enzymes, the presumptive CO2 transporter, and carbonic anhydrase into a macromolecular complex termed a "metabolon", which may play a key role in regulating red cell metabolism and ion and gas transport function.
RHAG – also involved in transport; defines the associated unusual blood group phenotype Rhmod.
Protein 4.1R-based macromolecular complex – proteins interacting with Protein 4.1R.
Protein 4.1R – weak expression of Gerbich antigens.
Glycophorin C and D – glycoproteins; define the Gerbich blood group.
XK – defines the Kell blood group and the McLeod unusual phenotype (lack of Kx antigen and greatly reduced expression of Kell antigens).
RhD/RhCE – defines the Rh blood group and the associated unusual blood group phenotype Rhnull.
Duffy protein – has been proposed to be associated with chemokine clearance.
Adducin – interaction with band 3.
Dematin – interaction with the Glut1 glucose transporter.
Surface electrostatic potential The zeta potential is an electrochemical property of cell surfaces that is determined by the net electrical charge of molecules exposed at the surface of the cell membrane. The normal zeta potential of the red blood cell is −15.7 millivolts (mV). Much of this potential appears to be contributed by the exposed sialic acid residues in the membrane: their removal results in a zeta potential of −6.06 mV.
Function Role in transport Recall that respiration, as illustrated schematically here with a unit of carbohydrate, produces about as many molecules of carbon dioxide, CO2, as it consumes of oxygen, O2.
CH2O + O2 → CO2 + H2O
Thus, the function of the circulatory system is as much about the transport of carbon dioxide as about the transport of oxygen. As stated elsewhere in this article, most of the carbon dioxide in the blood is in the form of bicarbonate ion. The bicarbonate provides a critical pH buffer. Thus, unlike hemoglobin for O2 transport, there is a physiological advantage to not having a specific CO2 transporter molecule. Red blood cells nevertheless play a key role in the CO2 transport process, for two reasons. First, besides hemoglobin, they contain a large number of copies of the enzyme carbonic anhydrase on the inside of their cell membrane. Carbonic anhydrase, as its name suggests, acts as a catalyst of the exchange between carbonic acid and carbon dioxide (which is the anhydride of carbonic acid). Because it is a catalyst, it can affect many CO2 molecules, so it performs its essential role without needing as many copies as are needed for O2 transport by hemoglobin. In the presence of this catalyst carbon dioxide and carbonic acid reach an equilibrium very rapidly, while the red cells are still moving through the capillary. Thus it is the RBC that ensures that most of the CO2 is transported as bicarbonate. At physiological pH the overall equilibrium strongly favors the bicarbonate side: the carbonic acid formed is mostly dissociated into bicarbonate ion.
CO2 + H2O ⇌ H2CO3 ⇌ HCO3− + H+
The H+ ions released by this rapid reaction within the RBC, while still in the capillary, act to reduce the oxygen binding affinity of hemoglobin, the Bohr effect. The second major contribution of the RBC to carbon dioxide transport is that carbon dioxide directly reacts with the globin protein components of hemoglobin to form carbaminohemoglobin compounds.
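The dominance of bicarbonate at physiological pH can be quantified with the Henderson–Hasselbalch relation; the pKa′ of ≈ 6.1 used here is a standard textbook value for the CO2/bicarbonate system, not taken from this article.

```python
# Henderson–Hasselbalch: pH = pKa' + log10([HCO3-] / [dissolved CO2]).
# Assumption (standard textbook value, not from the text): pKa' ~ 6.1
# for the CO2/bicarbonate system at body temperature.
pKa = 6.1
pH = 7.4  # physiological arterial pH

ratio = 10 ** (pH - pKa)  # [HCO3-] / [dissolved CO2]
print(f"bicarbonate : dissolved CO2 ~ {ratio:.0f} : 1")  # ~20 : 1
```

A roughly 20:1 ratio is why most blood CO2 travels as bicarbonate once carbonic anhydrase has brought the system to equilibrium.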
As oxygen is released in the tissues, more CO2 binds to hemoglobin, and as oxygen binds in the lung, it displaces the hemoglobin-bound CO2; this is called the Haldane effect. Although only a small fraction of the CO2 in venous blood is bound to hemoglobin, a greater proportion of the change in CO2 content between venous and arterial blood comes from the change in this bound CO2. That is, there is always an abundance of bicarbonate in blood, both venous and arterial, because of its aforementioned role as a pH buffer. In summary, carbon dioxide produced by cellular respiration diffuses very rapidly to areas of lower concentration, specifically into nearby capillaries. When it diffuses into a RBC, CO2 is rapidly converted by the carbonic anhydrase found on the inside of the RBC membrane into bicarbonate ion. The bicarbonate ions in turn leave the RBC in exchange for chloride ions from the plasma, facilitated by the band 3 anion transport protein colocated in the RBC membrane. The bicarbonate ion does not diffuse back out of the capillary, but is carried to the lung. In the lung the lower partial pressure of carbon dioxide in the alveoli causes carbon dioxide to diffuse rapidly from the capillary into the alveoli. The carbonic anhydrase in the red cells keeps the bicarbonate ion in equilibrium with carbon dioxide. So as carbon dioxide leaves the capillary, and CO2 is displaced by O2 on hemoglobin, sufficient bicarbonate ion converts rapidly to carbon dioxide to maintain the equilibrium. Secondary functions When red blood cells undergo shear stress in constricted vessels, they release ATP, which causes the vessel walls to relax and dilate so as to promote normal blood flow. When their hemoglobin molecules are deoxygenated, red blood cells release S-nitrosothiols, which also act to dilate blood vessels, thus directing more blood to areas of the body depleted of oxygen.
Red blood cells can also synthesize nitric oxide enzymatically, using L-arginine as substrate, as do endothelial cells. Exposure of red blood cells to physiological levels of shear stress activates nitric oxide synthase and export of nitric oxide, which may contribute to the regulation of vascular tonus. Red blood cells can also produce hydrogen sulfide, a signalling gas that acts to relax vessel walls. It is believed that the cardioprotective effects of garlic are due to red blood cells converting its sulfur compounds into hydrogen sulfide. Red blood cells also play a part in the body's immune response: when lysed by pathogens such as bacteria, their hemoglobin releases free radicals, which break down the pathogen's cell wall and membrane, killing it. Cellular processes Because they contain no mitochondria, red blood cells use none of the oxygen they transport; instead they produce the energy carrier ATP by glycolysis of glucose, followed by lactic acid fermentation of the resulting pyruvate. Furthermore, the pentose phosphate pathway plays an important role in red blood cells; see glucose-6-phosphate dehydrogenase deficiency for more information. As red blood cells contain no nucleus, protein biosynthesis is currently assumed to be absent in these cells. Because of the lack of nuclei and organelles, mature red blood cells do not contain DNA and cannot synthesize any RNA (although they do contain RNAs), and consequently cannot divide and have limited repair capabilities. The inability to carry out protein synthesis means that no virus can evolve to target mammalian red blood cells. However, infection with parvoviruses (such as human parvovirus B19) can affect erythroid precursors while they still have DNA, as recognized by the presence of giant pronormoblasts with viral particles and inclusion bodies, thus temporarily depleting the blood of reticulocytes and causing anemia.
Life cycle Human red blood cells are produced through a process named erythropoiesis, developing from committed stem cells to mature red blood cells in about 7 days. When matured, in a healthy individual these cells live in blood circulation for about 100 to 120 days (and 80 to 90 days in a full-term infant). At the end of their lifespan, they are removed from circulation. In many chronic diseases, the lifespan of the red blood cells is reduced. Creation Erythropoiesis is the process by which new red blood cells are produced; it lasts about 7 days. Through this process red blood cells are continuously produced in the red bone marrow of large bones. (In the embryo, the liver is the main site of red blood cell production.) The production can be stimulated by the hormone erythropoietin (EPO), synthesised by the kidney. Just before and after leaving the bone marrow, the developing cells are known as reticulocytes; these constitute about 1% of circulating red blood cells. Functional lifetime The functional lifetime of a red blood cell is about 100–120 days, during which time the red blood cells are continually moved by the push of blood flow in arteries, its pull in veins, and a combination of the two as they squeeze through microvessels such as capillaries. They are also recycled in the bone marrow. Senescence The aging red blood cell undergoes changes in its plasma membrane, making it susceptible to selective recognition by macrophages and subsequent phagocytosis in the mononuclear phagocyte system (spleen, liver and lymph nodes), thus removing old and defective cells and continually purging the blood. This process is termed eryptosis, red blood cell programmed death. This process normally occurs at the same rate as production by erythropoiesis, balancing the total circulating red blood cell count.
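The balance between production and removal can be sketched numerically: at steady state, the production rate is approximately the circulating count divided by the lifespan. The 25-trillion total used here is an assumed round figure within the 20–30 trillion range quoted earlier in the article.

```python
# Steady-state sketch: removal (eryptosis) balances production
# (erythropoiesis), so production rate ~ circulating count / lifespan.
# Assumption: ~25 trillion circulating cells, a round figure within
# the 20-30 trillion range quoted earlier in the article.
total_rbc = 25e12
lifespan_s = 120 * 24 * 3600  # 120-day lifespan, in seconds

production_per_s = total_rbc / lifespan_s
print(f"production ~ {production_per_s:.1e} cells/second")  # ~2.4e6
```

The result, a few million new red cells every second, is consistent with commonly quoted figures for adult marrow output.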
Eryptosis is increased in a wide variety of diseases including sepsis, haemolytic uremic syndrome, malaria, sickle cell anemia, beta-thalassemia, glucose-6-phosphate dehydrogenase deficiency, phosphate depletion, iron deficiency and Wilson's disease. Eryptosis can be elicited by osmotic shock, oxidative stress, and energy depletion, as well as by a wide variety of endogenous mediators and xenobiotics. Excessive eryptosis is observed in red blood cells lacking the cGMP-dependent protein kinase type I or the AMP-activated protein kinase AMPK. Inhibitors of eryptosis include erythropoietin, nitric oxide, catecholamines and high concentrations of urea. Most of the resulting breakdown products are recirculated in the body. The heme constituent of hemoglobin is broken down into iron (Fe3+) and biliverdin. The biliverdin is reduced to bilirubin, which is released into the plasma and recirculated to the liver bound to albumin. The iron is released into the plasma to be recirculated by a carrier protein called transferrin. Almost all red blood cells are removed in this manner from the circulation before they are old enough to hemolyze. Hemolyzed hemoglobin is bound to a protein in plasma called haptoglobin, which is not excreted by the kidney. Clinical significance Disease Blood diseases involving the red blood cells include: Anemias (or anaemias) are diseases characterized by low oxygen transport capacity of the blood, because of low red cell count or some abnormality of the red blood cells or the hemoglobin. Iron deficiency anemia is the most common anemia; it occurs when the dietary intake or absorption of iron is insufficient, and hemoglobin, which contains iron, cannot be formed. Pernicious anemia is an autoimmune disease wherein the body lacks intrinsic factor, required to absorb vitamin B12 from food. Vitamin B12 is needed for the production of red blood cells and hemoglobin. Sickle-cell disease is a genetic disease that results in abnormal hemoglobin molecules.
When these release their oxygen load in the tissues, they become insoluble, leading to misshapen red blood cells. These sickle-shaped red cells are less deformable and viscoelastic, meaning that they have become rigid and can cause blood vessel blockage, pain, strokes, and other tissue damage. Thalassemia is a genetic disease that results in the production of an abnormal ratio of hemoglobin subunits. Hereditary spherocytosis syndromes are a group of inherited disorders characterized by defects in the red blood cell's cell membrane, causing the cells to be small, sphere-shaped, and fragile instead of donut-shaped and flexible. These abnormal red blood cells are destroyed by the spleen. Several other hereditary disorders of the red blood cell membrane are known. Aplastic anemia is caused by the inability of the bone marrow to produce blood cells. Pure red cell aplasia is caused by the inability of the bone marrow to produce red blood cells specifically, while the other cell lines are unaffected. Hemolysis is the general term for excessive breakdown of red blood cells. It can have several causes and can result in hemolytic anemia. The malaria parasite spends part of its life-cycle in red blood cells, feeds on their hemoglobin and then breaks them apart, causing fever. Both sickle-cell disease and thalassemia are more common in malaria areas, because these mutations convey some protection against the parasite. Polycythemias (or erythrocytoses) are diseases characterized by a surplus of red blood cells. The increased viscosity of the blood can cause a number of symptoms. In polycythemia vera the increased number of red blood cells results from an abnormality in the bone marrow. Several microangiopathic diseases, including disseminated intravascular coagulation and thrombotic microangiopathies, present with pathognomonic (diagnostic) red blood cell fragments called schistocytes. These pathologies generate fibrin strands that sever red blood cells as they try to move past a thrombus.
Transfusion Red blood cells may be given as part of a blood transfusion. Blood may be donated from another person, or stored by the recipient at an earlier date. Donated blood usually requires screening to ensure that donors do not carry risk factors for blood-borne diseases and will not be harmed themselves by giving blood. Blood is usually collected and tested for common or serious blood-borne diseases including hepatitis B, hepatitis C and HIV. The blood type (A, B, AB, or O) of the blood product is identified and matched with the recipient's blood to minimise the likelihood of an acute hemolytic transfusion reaction, a type of transfusion reaction. This relates to the presence of antigens on the cell's surface. After this process, the blood is stored and used within a short period. Blood can be given as a whole product or the red blood cells separated as packed red blood cells. Blood is often transfused when there is known anaemia, active bleeding, or when there is an expectation of serious blood loss, such as prior to an operation. Before blood is given, a small sample of the recipient's blood is tested against the donor blood in a process known as cross-matching. In 2008 it was reported that human embryonic stem cells had been successfully coaxed into becoming red blood cells in the lab. The difficult step was to induce the cells to eject their nucleus; this was achieved by growing the cells on stromal cells from the bone marrow. It is hoped that these artificial red blood cells can eventually be used for blood transfusions. A human trial was conducted in 2022, using blood cultured from stem cells obtained from donor blood. Tests Several blood tests involve red blood cells. These include an RBC count (the number of red blood cells per volume of blood), calculation of the hematocrit (percentage of blood volume occupied by red blood cells), and the erythrocyte sedimentation rate.
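The antigen-matching logic behind blood typing and cross-matching can be sketched as a simplified model: donor red cells are compatible when they carry no ABO antigen the recipient lacks (and therefore has antibodies against), and Rh+ cells are given only to Rh+ recipients. This is an illustrative sketch; real compatibility testing covers many more antigen systems than ABO and Rh.

```python
# Simplified ABO/Rh matching: donor red cells are compatible when they
# carry no ABO antigen the recipient lacks (and so has antibodies
# against), and Rh+ cells are given only to Rh+ recipients.
# Illustrative model only; real cross-matching tests far more antigens.
ABO_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def rbc_compatible(donor: str, recipient: str) -> bool:
    """Blood types are strings like 'O-' or 'AB+'."""
    d_abo, d_rh = donor[:-1], donor[-1]
    r_abo, r_rh = recipient[:-1], recipient[-1]
    abo_ok = ABO_ANTIGENS[d_abo] <= ABO_ANTIGENS[r_abo]  # subset test
    rh_ok = d_rh == "-" or r_rh == "+"
    return abo_ok and rh_ok

print(rbc_compatible("O-", "AB+"))  # True: O- is the universal donor
print(rbc_compatible("A+", "O+"))   # False: recipient has anti-A antibodies
```

The subset test captures the rule stated in the text: a hemolytic reaction is driven by donor antigens the recipient's plasma reacts against.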
The blood type needs to be determined to prepare for a blood transfusion or an organ transplantation. Many diseases involving red blood cells are diagnosed with a blood film (or peripheral blood smear), where a thin layer of blood is smeared on a microscope slide. This may reveal poikilocytosis, which is variation in red blood cell shape. Red blood cells sometimes occur as a stack, flat side next to flat side; this is known as rouleaux formation, and it occurs more often if the levels of certain serum proteins are elevated, as for instance during inflammation. Separation and blood doping Red blood cells can be obtained from whole blood by centrifugation, which separates the cells from the blood plasma in a process known as blood fractionation. Packed red blood cells, which are made in this way from whole blood with the plasma removed, are used in transfusion medicine. During plasma donation, the red blood cells are pumped back into the body right away and only the plasma is collected. Some athletes have tried to improve their performance by blood doping: first about 1 litre of their blood is extracted, then the red blood cells are isolated, frozen and stored, to be reinjected shortly before the competition. (Red blood cells can be conserved for 5 weeks, or for over 10 years using cryoprotectants.) This practice is hard to detect but may endanger the human cardiovascular system, which is not equipped to deal with blood of the resulting higher viscosity. Another method of blood doping involves injection with erythropoietin to stimulate production of red blood cells. Both practices are banned by the World Anti-Doping Agency. History The first person to describe red blood cells was the young Dutch biologist Jan Swammerdam, who had used an early microscope in 1658 to study the blood of a frog.
Unaware of this work, Anton van Leeuwenhoek provided another microscopic description in 1674, this time providing a more precise description of red blood cells, even approximating their size, "25,000 times smaller than a fine grain of sand". In the 1740s, Vincenzo Menghini in Bologna was able to demonstrate the presence of iron by passing magnets over the powder or ash remaining from heated red blood cells. In 1901, Karl Landsteiner published his discovery of the three main blood groups—A, B, and C (which he later renamed to O). Landsteiner described the regular patterns in which reactions occurred when serum was mixed with red blood cells, thus identifying compatible and conflicting combinations between these blood groups. A year later Alfred von Decastello and Adriano Sturli, two colleagues of Landsteiner, identified a fourth blood group—AB. In 1959, by use of X-ray crystallography, Max Perutz was able to unravel the structure of hemoglobin, the red blood cell protein that carries oxygen. The oldest intact red blood cells ever discovered were found in Ötzi the Iceman, a natural mummy of a man who died around 3255 BCE. These cells were discovered in May 2012.
https://en.wikipedia.org/wiki/Blood%20cell
Blood cell
A blood cell (also called a hematopoietic cell, hemocyte, or hematocyte) is a cell produced through hematopoiesis and found mainly in the blood. Major types of blood cells include red blood cells (erythrocytes), white blood cells (leukocytes), and platelets (thrombocytes). Together, these three kinds of blood cells add up to a total of 45% of the blood tissue by volume, with the remaining 55% of the volume composed of plasma, the liquid component of blood. Red blood cells Red blood cells or erythrocytes primarily carry oxygen and collect carbon dioxide through the use of hemoglobin. Hemoglobin is an iron-containing protein that gives red blood cells their color and facilitates transportation of oxygen from the lungs to tissues and carbon dioxide from tissues to the lungs to be exhaled. Red blood cells are the most abundant cell in the blood, accounting for about 40–45% of its volume. Red blood cells are circular, biconcave, disk-shaped and deformable to allow them to squeeze through narrow capillaries. They do not have a nucleus. Red blood cells are much smaller than most other human cells. RBCs are formed in the red bone marrow from hematopoietic stem cells in a process known as erythropoiesis. In adults, about 2.4 million RBCs are produced each second. The normal RBC count is 4.5 to 5 million per cubic millimeter. RBCs have a lifespan of approximately 100–120 days. After they have completed their lifespan, they are removed from the bloodstream by the spleen. Mature red blood cells are unique among cells in the human body in that they lack a nucleus (although erythroblasts do have a nucleus). The condition of having too few red blood cells is known as anemia, while having too many is polycythemia. Erythrocyte sedimentation rate (ESR) is the rate at which RBCs sink to the bottom of a vertical column of anticoagulated blood. Normal values of ESR are:
• 3 to 5 mm per hour in males.
• 4 to 7 mm per hour in females.
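The red cell indices used clinically can be derived from the normal values quoted above (count and volume fraction); this sketch also uses an assumed typical adult hemoglobin of 15 g/dL, which is not stated in the text.

```python
# Derived red cell indices from the normal values above.
# Assumption (typical adult value, not from the text): hemoglobin 15 g/dL.
rbc_per_l = 5e6 * 1e6  # 5 million per microliter -> cells per liter
hematocrit = 0.45      # red cells occupy ~45% of blood volume
hb_g_per_l = 150.0     # 15 g/dL

mcv_fl = hematocrit / rbc_per_l * 1e15    # mean cell volume (1 L = 1e15 fL)
mch_pg = hb_g_per_l / rbc_per_l * 1e12    # mean cell hemoglobin (g -> pg)
mchc_g_dl = hb_g_per_l / hematocrit / 10  # mean cell Hb concentration

print(f"MCV ~ {mcv_fl:.0f} fL, MCH ~ {mch_pg:.0f} pg, MCHC ~ {mchc_g_dl:.1f} g/dL")
```

The resulting mean cell volume of about 90 fL agrees with the average red cell volume quoted in the preceding article.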
White blood cells White blood cells, or leukocytes, are cells of the immune system involved in defending the body against both infectious disease and foreign materials. They are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. There are a variety of types of white blood cells that serve specific roles in the human immune system. WBCs constitute approximately 1% of the blood volume. White blood cells are divided into granulocytes and agranulocytes, distinguished by the presence or absence of granules in the cytoplasm. Granulocytes include basophils, eosinophils, neutrophils, and mast cells. Agranulocytes include lymphocytes and monocytes. The condition of having too few white blood cells is leukopenia, while having too many is leukocytosis. There are individual terms for the lack or overabundance of specific types of white blood cells. The number of white blood cells in circulation is commonly increased during infection. Many hematological cancers are based on the inappropriate production of white blood cells. Platelets Platelets, or thrombocytes, are very small, irregularly shaped clear cell fragments, 2–3 μm in diameter, which derive from fragmentation of megakaryocytes. The average lifespan of a platelet is normally just 5 to 9 days. Platelets are a natural source of growth factors. They circulate in the blood of mammals and are involved in hemostasis, leading to the formation of blood clots. Platelets release thread-like fibers to form these clots. The normal range (99% of population analyzed) for platelets is 150,000 to 450,000 per cubic millimeter. If the number of platelets is too low, excessive bleeding can occur.
However, if the number of platelets is too high, blood clots can form (thrombosis), which may obstruct blood vessels and result in such events as a stroke, myocardial infarction, pulmonary embolism, or blockage of blood vessels to other parts of the body, such as the extremities of the arms or legs. An abnormality or disease of the platelets is called a thrombocytopathy, which can be either a low number of platelets (thrombocytopenia), a decrease in function of platelets (thrombasthenia), or an increase in the number of platelets (thrombocytosis). There are disorders that reduce the number of platelets, such as heparin-induced thrombocytopenia (HIT) or thrombotic thrombocytopenic purpura (TTP), that typically cause thromboses, or clots, instead of bleeding. Platelets release a multitude of growth factors including platelet-derived growth factor (PDGF), a potent chemotactic agent, and TGF beta, which stimulates the deposition of extracellular matrix. Both of these growth factors have been shown to play a significant role in the repair and regeneration of connective tissues. Other healing-associated growth factors produced by platelets include basic fibroblast growth factor (bFGF), insulin-like growth factor 1 (IGF-1), platelet-derived epidermal growth factor, and vascular endothelial growth factor (VEGF). Local application of these factors in increased concentrations through platelet-rich plasma (PRP) has been used as an adjunct to wound healing for several decades. Complete blood count A complete blood count (CBC) is a test panel requested by a doctor or other medical professional that gives information about the cells in a patient's blood. A scientist or lab technician performs the requested testing and provides the requesting medical professional with the results of the CBC. In the past, counting the cells in a patient's blood was performed manually, by viewing a slide prepared with a sample of the patient's blood under a microscope.
Today, this process is generally automated by use of an automated analyzer, with only approximately 10-20% of samples now being examined manually. Abnormally high or low counts may indicate the presence of many forms of disease, and hence blood counts are amongst the most commonly performed blood tests in medicine, as they can provide an overview of a patient's general health status. Discovery In 1658 Dutch naturalist Jan Swammerdam was the first person to observe red blood cells under a microscope, and in 1695, microscopist Antoni van Leeuwenhoek, also Dutch, was the first to draw an illustration of "red corpuscles", as they were called. No further blood cells were discovered until 1842 when French physician Alfred Donné discovered platelets. The following year leukocytes were first observed by Gabriel Andral, a French professor of medicine, and William Addison, a British physician, simultaneously. Both men believed that both red and white cells were altered in disease. With these discoveries, hematology, a new field of medicine, was established. Even though agents for staining tissues and cells were available, almost no advances were made in knowledge about the morphology of blood cells until 1879, when Paul Ehrlich published his technique for staining blood films and his method for differential blood cell counting.
https://en.wikipedia.org/wiki/Placenta
Placenta
The placenta (plural: placentas or placentae) is a temporary embryonic and later fetal organ that begins developing from the blastocyst shortly after implantation. It plays critical roles in facilitating nutrient, gas and waste exchange between the physically separate maternal and fetal circulations, and is an important endocrine organ, producing hormones that regulate both maternal and fetal physiology during pregnancy. The placenta connects to the fetus via the umbilical cord, and on the opposite aspect to the maternal uterus in a species-dependent manner. In humans, a thin layer of maternal decidual (endometrial) tissue comes away with the placenta when it is expelled from the uterus following birth (sometimes incorrectly referred to as the 'maternal part' of the placenta). Placentas are a defining characteristic of placental mammals, but are also found in marsupials and some non-mammals with varying levels of development. Mammalian placentas probably first evolved about 150 million to 200 million years ago. The protein syncytin, found in the outer barrier of the placenta (the syncytiotrophoblast) between mother and fetus, is encoded by a gene whose RNA signature has led to the hypothesis that it originated from an ancient retrovirus: essentially a virus that helped pave the transition from egg-laying to live birth. The word placenta comes from the Latin word for a type of cake, from Greek πλακόεντα/πλακοῦντα plakóenta/plakoúnta, accusative of πλακόεις/πλακούς plakóeis/plakoús, "flat, slab-like", with reference to its round, flat appearance in humans. The classical plural is placentae, but the form placentas is more common in modern English. Evolution and phylogenetic diversity The placenta has evolved independently multiple times, probably starting in fish, where it originated multiple times, including in the genus Poeciliopsis. Placentation has also evolved in some reptiles.
The mammalian placenta evolved more than 100 million years ago and was a critical factor in the explosive diversification of placental mammals. Although all mammalian placentas have the same functions, there are important differences in structure and function in different groups of mammals. For example, human, bovine, equine and canine placentas are very different at both the gross and the microscopic levels. Placentas of these species also differ in their ability to provide maternal immunoglobulins to the fetus. Structure Placental mammals, including humans, have a chorioallantoic placenta that forms from the chorion and allantois. In humans, the placenta averages 22 cm (9 inch) in length and 2–2.5 cm (0.8–1 inch) in thickness, with the center being the thickest, and the edges being the thinnest. It typically weighs approximately 500 grams (just over 1 lb). It has a dark reddish-blue or crimson color. It connects to the fetus by an umbilical cord of approximately 55–60 cm (22–24 inch) in length, which contains two umbilical arteries and one umbilical vein. The umbilical cord inserts into the chorionic plate (an eccentric attachment). Vessels branch out over the surface of the placenta and further divide to form a network covered by a thin layer of cells. This results in the formation of villous tree structures. On the maternal side, these villous tree structures are grouped into lobules called cotyledons. In humans, the placenta usually has a disc shape, but size varies vastly between different mammalian species. The placenta occasionally takes a form in which it comprises several distinct parts connected by blood vessels. The parts, called lobes, may number two, three, four, or more. Such placentas are described as bilobed/bilobular/bipartite, trilobed/trilobular/tripartite, and so on. If there is a clearly discernible main lobe and auxiliary lobe, the latter is called a succenturiate placenta. 
Sometimes the blood vessels connecting the lobes get in the way of fetal presentation during labor, which is called vasa previa. Gene and protein expression About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal mature placenta. Some 350 of these genes are more specifically expressed in the placenta and fewer than 100 genes are highly placenta specific. The corresponding specific proteins are mainly expressed in trophoblasts and have functions related to pregnancy. Examples of proteins with elevated expression in the placenta compared to other organs and tissues are PEG10 and the cancer-testis antigen PAGE4, expressed in cytotrophoblasts; CSH1 and KISS1, expressed in syncytiotrophoblasts; and PAPPA2 and PRG2, expressed in extravillous trophoblasts. Physiology Development The placenta begins to develop upon implantation of the blastocyst into the maternal endometrium, very early in pregnancy at about week 4. The outer layer of the late blastocyst is formed of trophoblasts, cells that form the outer layer of the placenta. This outer layer is divided into two further layers: the underlying cytotrophoblast layer and the overlying syncytiotrophoblast layer. The syncytiotrophoblast is a multinucleated continuous cell layer that covers the surface of the placenta. It forms as a result of differentiation and fusion of the underlying cytotrophoblasts, a process that continues throughout placental development. The syncytiotrophoblast contributes to the barrier function of the placenta. The placenta grows throughout pregnancy. Development of the maternal blood supply to the placenta is complete by the end of the first trimester of pregnancy (week 14). Placental circulation Maternal placental circulation In preparation for implantation of the blastocyst, the endometrium undergoes decidualization. Spiral arteries in the decidua are remodeled so that they become less convoluted and their diameter is increased. 
The increased diameter and straighter flow path both act to increase maternal blood flow to the placenta. There is relatively high pressure as the maternal blood fills the intervillous space through these spiral arteries, bathing the fetal villi in blood and allowing an exchange of gases to take place. In humans and other hemochorial placentals, the maternal blood comes into direct contact with the fetal chorion, though no fluid is exchanged. As the pressure decreases between pulses, the deoxygenated blood flows back through the endometrial veins. Maternal blood flow begins between days 5–12, and is approximately 600–700 ml/min at term. Fetoplacental circulation Deoxygenated fetal blood passes through umbilical arteries to the placenta. At the junction of umbilical cord and placenta, the umbilical arteries branch radially to form chorionic arteries. Chorionic arteries, in turn, branch into cotyledon arteries. In the villi, these vessels eventually branch to form an extensive arterio-capillary-venous system, bringing the fetal blood extremely close to the maternal blood; but no intermingling of fetal and maternal blood occurs ("placental barrier"). The fetoplacental circulation begins at days 17–22 of development. Endothelin and prostanoids cause vasoconstriction in placental arteries, while nitric oxide causes vasodilation. On the other hand, there is no neural vascular regulation, and catecholamines have only little effect. The fetoplacental circulation is vulnerable to persistent hypoxia or intermittent hypoxia and reoxygenation, which can lead to generation of excessive free radicals. This may contribute to pre-eclampsia and other pregnancy complications. It is proposed that melatonin plays a role as an antioxidant in the placenta. Birth Placental expulsion begins as a physiological separation from the wall of the uterus. The period from just after the child is born until just after the placenta is expelled is called the "third stage of labor". 
Placental expulsion can be managed actively, for example by giving oxytocin via intramuscular injection followed by cord traction to assist in delivering the placenta. Alternatively, it can be managed expectantly, allowing the placenta to be expelled without medical assistance. Blood loss and the risk of postpartum bleeding may be reduced in women offered active management of the third stage of labour; however, there may be adverse effects, and more research is necessary. It is common practice to cut the cord immediately after birth, but there may be no medical reason to do this; on the contrary, delaying cord clamping may help the baby adapt to extrauterine life, especially for preterm infants. Microbiome The placenta is traditionally thought to be sterile, but recent research suggests that a resident, non-pathogenic, and diverse population of microorganisms may be present in healthy tissue. However, whether these microbes exist or are clinically important is highly controversial and is the subject of active research. Physiology of placenta Nutrition and gas exchange The placenta intermediates the transfer of nutrients between mother and fetus. The perfusion of the intervillous spaces of the placenta with maternal blood allows the transfer of nutrients and oxygen from the mother to the fetus and the transfer of waste products and carbon dioxide back from the fetus to the maternal blood. Nutrient transfer to the fetus can occur via both active and passive transport. Placental nutrient metabolism has been found to play a key role in limiting the transfer of some nutrients. Adverse pregnancy situations, such as those involving maternal diabetes or obesity, can increase or decrease levels of nutrient transporters in the placenta, potentially resulting in overgrowth or restricted growth of the fetus. Excretion Waste products excreted from the fetus such as urea, uric acid, and creatinine are transferred to the maternal blood by diffusion across the placenta. 
Immunity The placenta functions as a selective barrier between maternal and fetal cells, preventing maternal blood, proteins and microbes (including bacteria and most viruses) from crossing the maternal-fetal barrier. Deterioration in placental functioning, referred to as placental insufficiency, may be related to mother-to-child transmission of some infectious diseases. A very small number of viruses including rubella virus, Zika virus and cytomegalovirus (CMV) can travel across the placental barrier, generally taking advantage of conditions at certain gestational periods as the placenta develops. CMV and Zika travel from the maternal bloodstream via placental cells to the fetal bloodstream. Beginning as early as 13 weeks of gestation, and increasing linearly, with the largest transfer occurring in the third trimester, IgG antibodies can pass through the human placenta, providing protection to the fetus in utero. This passive immunity lingers for several months after birth, providing the newborn with a carbon copy of the mother's long-term humoral immunity to see the infant through the crucial first months of extrauterine life. IgM antibodies, because of their larger size, cannot cross the placenta, one reason why infections acquired during pregnancy can be particularly hazardous for the fetus. Hormonal regulation The first hormone released by the placenta is human chorionic gonadotropin (hCG). This hormone halts the process, normally occurring at the end of a menstrual cycle, in which the corpus luteum ceases activity and atrophies. If hCG did not interrupt this process, it would lead to spontaneous abortion of the fetus. The corpus luteum also produces and releases progesterone and estrogen, and hCG stimulates it to increase the amount that it releases. hCG is the indicator of pregnancy that pregnancy tests look for. These tests can detect pregnancy around the time of a missed period, once implantation has occurred on days seven to ten. 
hCG may also have an anti-antibody effect, protecting the fetus from being rejected by the mother's body. hCG also assists the male fetus by stimulating the testes to produce testosterone, the hormone needed to allow the sex organs of the male to grow. Progesterone helps the embryo implant by assisting passage through the fallopian tubes. It also affects the fallopian tubes and the uterus by stimulating an increase in secretions necessary for fetal nutrition. Progesterone, like hCG, is necessary to prevent spontaneous abortion because it prevents contractions of the uterus and is necessary for implantation. Estrogen is a crucial hormone in the process of proliferation. This involves the enlargement of the breasts and uterus, allowing for growth of the fetus and production of milk. Estrogen is also responsible for increased blood supply towards the end of pregnancy through vasodilation. The levels of estrogen during pregnancy can increase to thirty times the mid-cycle estrogen level of a non-pregnant woman. Human placental lactogen (hPL) is a hormone active in pregnancy that supports fetal metabolism and general growth and development. Human placental lactogen works with growth hormone to stimulate insulin-like growth factor production and to regulate intermediary metabolism. In the fetus, hPL acts on lactogenic receptors to modulate embryonic development and metabolism and to stimulate production of IGF, insulin, surfactant and adrenocortical hormones. hPL values increase with multiple pregnancies, intact molar pregnancy, diabetes and Rh incompatibility. They are decreased with toxemia, choriocarcinoma, and placental insufficiency. Immunological barrier The placenta and fetus may be regarded as a foreign body inside the mother and must be protected from the normal immune response of the mother that would cause it to be rejected. The placenta and fetus are thus treated as sites of immune privilege, with immune tolerance. 
For this purpose, the placenta uses several mechanisms: It secretes neurokinin B-containing phosphocholine molecules. This is the same mechanism used by parasitic nematodes to avoid detection by the immune system of their host. Small lymphocytic suppressor cells in the fetus inhibit maternal cytotoxic T cells by inhibiting the response to interleukin 2. However, the placental barrier is not the sole means of evading the immune system, as foreign fetal cells also persist in the maternal circulation, on the other side of the placental barrier. DNA methylation The trophoblast is the outer layer of cells of the blastocyst. Placental trophoblast cells have a unique genome-wide DNA methylation pattern determined by de novo methyltransferases during embryogenesis. This methylation pattern is principally required to regulate placental development and function, which in turn is critical for embryo survival. Other The placenta also provides a reservoir of blood for the fetus, delivering blood to it in case of hypotension and vice versa, comparable to a capacitor. Clinical significance Numerous pathologies can affect the placenta. Placenta accreta, when the placenta implants too deeply, all the way to the actual muscle of the uterine wall (without penetrating it). Placenta praevia, when the placement of the placenta is too close to or blocks the cervix. Placental abruption, premature detachment of the placenta. Placentitis, inflammation of the placenta, such as by TORCH infections. Society and culture The placenta often plays an important role in various cultures, with many societies conducting rituals regarding its disposal. In the Western world, the placenta is most often incinerated. Some cultures bury the placenta for various reasons. The Māori of New Zealand traditionally bury the placenta from a newborn child to emphasize the relationship between humans and the earth. 
Likewise, the Navajo bury the placenta and umbilical cord at a specially chosen site, particularly if the baby dies during birth. In Cambodia and Costa Rica, burial of the placenta is believed to protect and ensure the health of the baby and the mother. If a mother dies in childbirth, the Aymara of Bolivia bury the placenta in a secret place so that the mother's spirit will not return to claim her baby's life. The placenta is believed by some communities to have power over the lives of the baby or its parents. The Kwakiutl of British Columbia bury girls' placentas to give the girl skill in digging clams, and expose boys' placentas to ravens to encourage future prophetic visions. In Turkey, the proper disposal of the placenta and umbilical cord is believed to promote devoutness in the child later in life. In Transylvania and Japan, interaction with a disposed placenta is thought to influence the parents' future fertility. Several cultures believe the placenta to be or have been alive, often a relative of the baby. The Nepalese think of the placenta as a friend of the baby; the Orang Asli and Malay populations of the Malay Peninsula regard it as the baby's older sibling. Native Hawaiians believe that the placenta is a part of the baby, and traditionally plant it with a tree that can then grow alongside the child. Various cultures in Indonesia, such as the Javanese and Malay, believe that the placenta has a spirit and needs to be buried outside the family house. Some Malays would bury the baby's placenta with a pencil (if it is a boy) or a needle and thread (if it is a girl). In some cultures, the placenta is eaten, a practice known as human placentophagy. In some eastern cultures, such as China, the dried placenta (ziheche, literally "purple river car") is thought to be a healthful restorative and is sometimes used in preparations of traditional Chinese medicine and various health products. 
The practice of human placentophagy has become a more recent trend in western cultures and is not without controversy; whether the practice constitutes cannibalism is debated. Some cultures have alternative uses for the placenta, including the manufacturing of cosmetics, pharmaceuticals and food.
Biology and health sciences
Reproductive system
null
67193
https://en.wikipedia.org/wiki/Testicle
Testicle
A testicle or testis (plural: testes) is the gonad in all male bilaterians, including humans, and is homologous to the ovary in females. Its primary functions are the production of sperm and the secretion of androgens, primarily testosterone. The release of testosterone is regulated by luteinizing hormone (LH) from the anterior pituitary gland. Sperm production is controlled by follicle-stimulating hormone (FSH) from the anterior pituitary gland and by testosterone produced within the gonads. Structure Appearance Males have two testicles of similar size contained within the scrotum, which is an extension of the abdominal wall. Scrotal asymmetry, in which one testicle extends farther down into the scrotum than the other, is common. This is because of differences in the anatomy of the vasculature. For 85% of men, the right testis hangs lower than the left one. Measurement and volume The volume of the testicle can be estimated by palpating it and comparing it to ellipsoids (an orchidometer) of known sizes. Another method is to use calipers, a ruler, or an ultrasound image to obtain the three measurements of the x, y, and z axes (length, depth and width). These measurements can then be used to calculate the volume, using the formula for the volume of an ellipsoid: V = 4/3 × π × (length/2) × (width/2) × (depth/2). However, a more accurate estimate of actual testicular volume is given by the empirical formula: V = 0.71 × length × width × depth. An average adult testicle measures up to about 5 cm × 2 cm × 3 cm. The Tanner scale, which is used to assess the maturity of the male genitalia, assigns a maturity stage to the calculated volume ranging from stage I, a volume of less than 1.5 cm3; to stage V, a volume greater than 20 cm3. Normal volume is 15 to 25 cm3; the average is 18 cm3 per testis (range 12–30 cm3). The number of spermatozoa an adult human male produces is directly proportional to testicular volume, as larger testicles contain more seminiferous tubules and Sertoli cells as a result. 
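The two volume estimates described above can be sketched in Python. This is an illustrative sketch, not clinical guidance; the 0.71 empirical coefficient (often attributed to Lambert's formula in the urology literature) and the example measurements are assumptions, not values taken from this text:

```python
import math

def ellipsoid_volume(length_cm, width_cm, depth_cm):
    """Volume of an ellipsoid from its three full axis lengths, in cm^3.

    V = (4/3) * pi * a * b * c with semi-axes a = L/2, b = W/2, c = D/2,
    which simplifies to (pi/6) * L * W * D (roughly 0.52 * L * W * D).
    """
    return math.pi / 6 * length_cm * width_cm * depth_cm

def empirical_volume(length_cm, width_cm, depth_cm, k=0.71):
    """Empirical testicular volume estimate (assumed coefficient k = 0.71)."""
    return k * length_cm * width_cm * depth_cm

# Hypothetical ultrasound measurements: 4.5 cm x 3.0 cm x 2.5 cm.
v_ellipsoid = ellipsoid_volume(4.5, 3.0, 2.5)   # ~17.7 cm^3
v_empirical = empirical_volume(4.5, 3.0, 2.5)   # ~24.0 cm^3

# Both fall within the quoted normal range of roughly 12-30 cm^3 per testis.
print(f"ellipsoid: {v_ellipsoid:.1f} cm^3, empirical: {v_empirical:.1f} cm^3")
```

Note how the ellipsoid model (coefficient π/6 ≈ 0.52) gives a markedly lower figure than the 0.71 empirical coefficient for the same measurements, which is why the two formulas are distinguished above.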
As such, men with larger testicles produce on average more sperm cells in each ejaculate, as testicular volume is positively correlated with semen profiles. Internal structure Duct system The testes are covered by a tough fibrous shell called the tunica albuginea. Under the tunica albuginea, the testes contain very fine coiled tubes called seminiferous tubules. The tubules are lined with a layer of cells (germ cells) that develop from puberty through old age into sperm cells (also known as spermatozoa or male gametes). The developing sperm travel through the seminiferous tubules to the rete testis located in the mediastinum testis, to the efferent ducts, and then to the epididymis where newly created sperm cells mature (spermatogenesis). The sperm move into the vas deferens, and are eventually expelled through the urethra and out of the urethral orifice through muscular contractions. Primary cell types Within the seminiferous tubules, the germ cells develop into spermatogonia, spermatocytes, spermatids and spermatozoa through the process of spermatogenesis. The gametes contain DNA for fertilization of an ovum. Sertoli cells, the true epithelium of the seminiferous epithelium, are critical for the support of germ cell development into spermatozoa. Sertoli cells secrete inhibin. Peritubular myoid cells surround the seminiferous tubules. Between the tubules are interstitial cells, including Leydig cells, which produce and secrete testosterone and other androgens important for puberty (including secondary sexual characteristics like facial hair), sexual behavior, and libido. Sertoli cells support spermatogenesis. Testosterone controls testicular volume. Immature Leydig cells, interstitial macrophages and epithelial cells are also present. Blood supply and lymphatic drainage The testis has three sources of arterial blood supply: the testicular artery, the cremasteric artery, and the artery to the ductus deferens. 
Blood supply and lymphatic drainage of the testes and scrotum are distinct: The paired testicular arteries arise directly from the abdominal aorta and descend through the inguinal canal, while the scrotum and the rest of the external genitalia are supplied by the internal pudendal artery (a branch of the internal iliac artery). The testis has collateral blood supply from the cremasteric artery (a branch of the inferior epigastric artery, which is a branch of the external iliac artery), and the artery to the ductus deferens (a branch of the inferior vesical artery, which is a branch of the internal iliac artery). Therefore, if the testicular artery is ligated, e.g., during a Fowler-Stevens orchiopexy for a high undescended testis, the testis will usually survive on these other blood supplies. Lymphatic drainage of the testes follows the testicular arteries back to the paraaortic lymph nodes, while lymph from the scrotum drains to the inguinal lymph nodes. Layers Many anatomical features of the adult testis reflect its developmental origin in the abdomen. The layers of tissue enclosing each testicle are derived from the layers of the anterior abdominal wall. The cremasteric muscle arises from the internal oblique muscle. The blood–testis barrier Large molecules cannot pass from the blood into the lumen of a seminiferous tubule due to the presence of tight junctions between adjacent Sertoli cells. The spermatogonia occupy the basal compartment (deep to the level of the tight junctions) and the more mature forms, such as primary and secondary spermatocytes and spermatids, occupy the adluminal compartment. The function of the blood–testis barrier may be to prevent an auto-immune reaction. Mature sperm (and their antigens) emerge significantly after immune tolerance is established in infancy. Since sperm are antigenically different from self-tissue, a male animal can react immunologically to his own sperm. The male can make antibodies against them. 
Injection of sperm antigens causes inflammation of the testis (auto-immune orchitis) and reduced fertility. The blood–testis barrier may reduce the likelihood that sperm proteins will induce an immune response. Temperature regulation and responses In 1926, Carl Richard Moore proposed that the testicles are external because spermatogenesis is enhanced at temperatures slightly below core body temperature. Spermatogenesis is less efficient at temperatures below or above 33 °C. Because the testes are located outside the body, the smooth tissue of the scrotum can move them closer to or further away from the body. The temperature of the testes is maintained at 34.4 °C, a little below body temperature, as temperatures above 36.7 °C impede spermatogenesis. There are a number of mechanisms to maintain the testes at the optimum temperature. The cremasteric muscle covers the testicles and the spermatic cord. When this muscle contracts, the cord shortens and the testicles move up closer toward the body, which provides slightly more warmth to maintain optimal testicular temperature. When cooling is required, the cremasteric muscle relaxes and the testicles lower away from the warm body and are able to cool. Contraction also occurs in response to physical stress, such as blunt trauma; the testicles withdraw and the scrotum shrinks very close to the body in an effort to protect them. The cremasteric reflex will reflexively raise the testicles. The testicles can also be lifted voluntarily using the pubococcygeus muscle, which partially activates related muscles. Gene and protein expression The human genome includes approximately 20,000 protein coding genes: 80% of these genes are expressed in adult testes. The testes have the highest fraction of tissue type-specific genes compared to other organs and tissues. About 1,000 of them are highly specific for the testes, and about 2,200 show an elevated pattern of expression. 
A majority of these genes encode proteins that are expressed in the seminiferous tubules and have functions related to spermatogenesis. Sperm cells express proteins that result in the development of flagella; these same proteins are expressed in the female in cells lining the fallopian tube, where they cause the development of cilia. Sperm cell flagella and fallopian tube cilia are homologous structures. The testis-specific proteins that show the highest level of expression are protamines. Development There are two phases in which the testes grow substantially: the embryonic and pubertal phases. During mammalian development, the gonads are at first capable of becoming either ovaries or testes. In humans, starting at about week 4, the gonadal rudiments are present within the intermediate mesoderm adjacent to the developing kidneys. At about week 6, sex cords develop within the forming testes. These are made up of early Sertoli cells that surround and nurture the germ cells that migrate into the gonads shortly before sex determination begins. In males, the sex-specific gene SRY, found on the Y chromosome, initiates sex determination by downstream regulation of sex-determining factors (such as GATA4, SOX9 and AMH), which lead to development of the male phenotype, including directing development of the early bipotential gonad toward the male path of development. Testes follow the path of descent from high in the posterior fetal abdomen to the inguinal ring, and beyond through the inguinal canal and into the scrotum. In most cases (97% full-term, 70% preterm), both testes have descended by birth. In most of the remaining cases, only one testis fails to descend. This is called cryptorchidism. In most cases of cryptorchidism, the issue resolves itself within the first half year of life. However, if the testes do not descend far enough into the scrotum, surgical anchoring in the scrotum is required due to the risks of infertility and testicular cancer. 
The testes grow in response to the start of spermatogenesis. Size depends on lytic function, sperm production (amount of spermatogenesis present in the testis), interstitial fluid, and Sertoli cell fluid production. The testicles are fully descended before the male reaches puberty. Clinical significance Protection and injury The testicles are very sensitive to impact and injury. The pain involved travels up from each testicle into the abdominal cavity, via the spermatic plexus, which is the primary nerve of each testicle. This will cause pain in the hip and the back. The pain usually fades within a few minutes. Testicular torsion is a medical emergency, because the longer the testicle remains ischemic before medical intervention, the higher the chance that it will be lost. There is a 90% chance of saving the testicle if de-torsion surgery is performed within six hours of the onset of testicular torsion. Testicular rupture is severe trauma affecting the tunica albuginea. Penetrating injuries to the scrotum may cause castration, or physical separation or destruction of the testes, possibly along with part or all of the penis, which results in total sterility if the testicles are not reattached quickly. In an effort to avoid severe infection, ample application of saline and bacitracin helps remove debris and foreign objects from the wound. Jockstraps support and protect the testicles. Diseases and conditions To improve the chances of catching cases of testicular cancer, other neoplasms, or other health issues early, regular testicular self-examination is recommended. Varicocele, swollen vein(s) from the testes, usually affecting the left side, the testis usually being normal. Hydrocele testis is swelling around the testes caused by accumulation of clear liquid within a membranous sac, the testis usually being normal. It is the most common cause of scrotal swelling. 
Spermatocele is a retention cyst of a tubule of the rete testis or the head of the epididymis, distended with watery fluid that contains spermatozoa. Endocrine disorders can also affect the size and function of the testis. Certain inherited conditions involving mutations in key developmental genes also impair testicular descent, resulting in abdominal or inguinal testes, which remain nonfunctional and may become cancerous. Other genetic conditions can result in the loss of the Wolffian ducts and allow for the persistence of Müllerian ducts. Both excess and deficient levels of estrogens can disrupt spermatogenesis and cause infertility. Bell-clapper deformity is a deformity in which the testicle is not attached to the scrotal walls, and can rotate freely on the spermatic cord within the tunica vaginalis. Those with bell-clapper deformity are at a higher risk of testicular torsion. Orchitis is inflammation of the testicles. Epididymitis is a painful inflammation of the epididymis or epididymides, frequently caused by bacterial infection but sometimes of unknown origin. Anorchia is the absence of one or both testicles. Cryptorchidism, or "undescended testicles", is when the testicle does not descend into the scrotum of an infant boy. Testicular enlargement is an unspecific sign of various testicular diseases, and can be defined as a testicular size of more than 5 cm (long axis) × 3 cm (short axis). Blue balls is a condition of temporary fluid congestion in the testicles and prostate region, caused by prolonged sexual arousal. Testicular prostheses are available to mimic the appearance and feel of one or both testicles when absent, as from injury, or as treatment in association with gender dysphoria. There have also been some instances of their implantation in dogs. Scientists are working on developing lab-grown testicles that might help infertile men in the future. Effects of exogenous hormones To some extent, it is possible to change testicular size. 
Short of direct injury or subjecting them to adverse conditions, e.g., higher temperature than they are normally accustomed to, they can be shrunk by competing against their intrinsic hormonal function through the use of externally administered steroidal hormones. Steroids taken for muscle enhancement (especially anabolic steroids) often have the undesired side effect of testicular shrinkage. Stimulation of testicular functions via gonadotropic-like hormones may enlarge their size. Testes may shrink or atrophy during hormone replacement therapy or through chemical castration. In all cases, the loss in testes volume corresponds with a loss of spermatogenesis. Society and culture The testicles of calves, lambs, roosters, turkeys, and other animals are eaten in many parts of the world, often under euphemistic culinary names. Testicles are a by-product of the castration of young animals raised for meat, so they might have been a late-spring seasonal specialty. In modern times, they are generally frozen and available year-round. In the Middle Ages, men who wanted a boy sometimes had their left testicle removed. This was because people believed that the right testicle made "boy" sperm and the left made "girl" sperm. As early as 330 BC, Aristotle prescribed the ligation (tying off) of the left testicle in men wishing to have boys. Etymology and slang One theory about the etymology of the word testis is based on Roman law. The original Latin word , "witness", was used in the firmly established legal principle "" (one witness [equals] no witness), meaning that testimony by any one person in court was to be disregarded unless corroborated by the testimony of at least another. This led to the common practice of producing two witnesses, bribed to testify the same way in cases of lawsuits with ulterior motives. Since such witnesses always came in pairs, the meaning was accordingly extended, often in the diminutive (testiculus, testiculi). 
Another theory says that testis is influenced by a loan translation from a Greek word meaning "defender (in law), supporter", that is, "two glands side by side". There are multiple slang terms for the testes. They may be referred to as "balls". Frequently, "nuts" (sometimes intentionally misspelled as "nutz") is also a slang term for the testes due to the geometric resemblance. One variant of the term, "Deez Nuts", was used for a satirical political candidate in 2016. In Spanish, the term huevos, Spanish for "eggs", is used. Other animals External appearance In seasonal breeders, the weight of the testes often increases during the breeding season. In the dromedary camel, the right testicle is often smaller than the left. In sharks, the testicle on the right side is usually larger. In many bird and mammal species, the left may be larger. Fish usually have two testes of a similar size. The primitive jawless fish have only a single testis, located in the midline of the body, although this forms from the fusion of paired structures in the embryo. Location Internal The basal condition for mammals is to have internal testes. The testes of monotremes, xenarthrans, and afrotherians remain within the abdomen (testicondy). There are also some marsupials with external testes and boreoeutherian mammals with internal testes, such as the rhinoceros. Cetaceans such as whales and dolphins also have internal testes. As external testes would increase drag in the water, they have internal testes, which are kept cool by special circulatory systems that cool the arterial blood going to the testes by placing the arteries near veins bringing cooled venous blood from the skin. In odobenids and phocids, the location of the testes is para-abdominal, though otariids have scrotal testes. External Boreoeutherian land mammals, the large group of mammals that includes humans, have externalized testes. 
Their testes function best at temperatures lower than their core body temperature. Their testes are located outside of the body and are suspended by the spermatic cord within the scrotum. There are several hypotheses as to why most boreotherian mammals have external testes that operate best at a temperature that is slightly less than the core body temperature. One view is that sperm production is stuck with enzymes that evolved at a colder temperature because the external testes originally evolved for different reasons. Another view is that the lower temperature of the testes simply is more efficient for sperm production. The classic hypothesis is that the cooler temperature of the testes allows for more efficient, fertile spermatogenesis. There are no possible enzymes operating at normal core body temperature that are as efficient as the ones that evolved at the lower temperature. Early mammals had lower body temperatures and thus their testes worked efficiently within their body. However, boreotherian mammals may have higher body temperatures than the other mammals and had to develop external testes to keep them cool. One argument is that mammals with internal testes, such as the monotremes, armadillos, sloths, elephants, and rhinoceroses, have lower core body temperatures than those mammals with external testes. Researchers have wondered why birds, despite having very high core body temperatures, have internal testes and did not evolve external testes. It was once theorized that birds used their air sacs to cool the testes internally, but later studies revealed that birds' testes are able to function at core body temperature. Some mammals with seasonal breeding cycles keep their testes internal until the breeding season. After that, their testes descend and increase in size and become external. The ancestor of the boreoeutherian mammals may have been a small mammal that required very large testes for sperm competition and thus had to place its testes outside the body. 
This might have led to enzymes involved in spermatogenesis, spermatogenic DNA polymerase beta and recombinase activities evolving a unique temperature optimum that is slightly less than core body temperature. When the boreoeutherian mammals diversified into forms that were larger or did not require intense sperm competition, they still produced enzymes that operated best at cooler temperatures and had to keep their testes outside the body. This position is made less parsimonious because the kangaroo, a non-boreoeutherian mammal, has external testicles. Separately from boreotherian mammals, the ancestors of kangaroos might have also been subject to heavy sperm competition and thus developed external testes; however, kangaroo external testes are suggestive of a possible adaptive function for external testes in large animals. One argument for the evolution of external testes is that it protects the testes from abdominal cavity pressure changes caused by jumping and galloping. Mild, transient scrotal heat stress causes DNA damage, reduced fertility and abnormal embryonic development in mice. DNA strand breaks were found in spermatocytes recovered from testicles subjected to 40 °C or 42 °C for 30 minutes. These findings suggest that the external location of the testicles provides the adaptive benefit of protecting spermatogenic cells from heat-induced DNA damage that could otherwise lead to infertility and germline mutation. Size The relative size of the testes is often influenced by mating systems. Testicular size as a proportion of body weight varies widely. In the mammalian kingdom, there is a tendency for testicular size to correspond with multiple mates (e.g., harems, polygamy). Production of testicular output sperm and spermatic fluid is also larger in polygamous animals, possibly a spermatogenic competition for survival. The testes of the right whale are likely to be the largest of any animal, each weighing around 500 kg (1,100 lb). 
Among the Hominidae, gorillas have little female promiscuity and sperm competition and the testes are small compared to body weight (0.03%). Chimpanzees have high promiscuity and large testes compared to body weight (0.3%). Human testicular size falls between these extremes (0.08%). Testis weight also varies in seasonal breeders like red foxes, golden jackals, and coyotes. Internal structure Amphibians and most fish do not possess seminiferous tubules. Instead, the sperm are produced in spherical structures called sperm ampullae. These are seasonal structures, releasing their contents during the breeding season, and then being reabsorbed by the body. Before the next breeding season, new sperm ampullae begin to form and ripen. The ampullae are otherwise essentially identical to the seminiferous tubules in higher vertebrates, including the same range of cell types. Gallery
Biology and health sciences
Reproductive system
null
67211
https://en.wikipedia.org/wiki/Electron%20configuration
Electron configuration
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is 1s2 2s2 2p6, meaning that the 1s, 2s, and 2p subshells are occupied by two, two, and six electrons, respectively. Electronic configurations describe each electron as moving independently in an orbital, in an average field created by the nuclei and all the other electrons. Mathematically, configurations are described by Slater determinants or configuration state functions. According to the laws of quantum mechanics, a level of energy is associated with each electron configuration. Under certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon. Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements, for describing the chemical bonds that hold atoms together, and in understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors. Shells and subshells Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons. An electron shell is the set of allowed states that share the same principal quantum number, n, that electrons may occupy. In each term of an electron configuration, n is the positive integer that precedes each orbital letter (helium's electron configuration is 1s2, therefore n = 1, and the orbital contains two electrons). An atom's nth electron shell can accommodate 2n2 electrons. 
For example, the first shell can accommodate two electrons, the second shell eight electrons, the third shell eighteen, and so on. The factor of two arises because the number of allowed states doubles with each successive shell due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin of +1/2 (usually denoted by an up-arrow) and one with a spin of −1/2 (with a down-arrow). A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The value of ℓ is in the range from 0 to n − 1. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. For example, the 3d subshell has n = 3 and ℓ = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers. Exhaustive technical details about the complete quantum mechanical theory of atomic spectra and structure can be found in the classic book by Robert D. Cowan. Notation Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic subshell labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each subshell placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). 
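These counting rules can be checked with a few lines of Python (a throwaway sketch; the function names are ours, not a standard API):

```python
# Capacity of a subshell with azimuthal quantum number l is 2(2l + 1):
# two spin states for each of the 2l + 1 orbitals.
def subshell_capacity(l):
    return 2 * (2 * l + 1)

# Shell n contains subshells l = 0 .. n - 1, so its capacity sums to 2n^2.
def shell_capacity(n):
    return sum(subshell_capacity(l) for l in range(n))

print([subshell_capacity(l) for l in range(4)])   # [2, 6, 10, 14] for s, p, d, f
print([shell_capacity(n) for n in range(1, 5)])   # [2, 8, 18, 32]
```

The sum over subshells reproduces the 2n2 shell capacity exactly, which is the point of the factor-of-two argument above.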
The configuration of phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3. For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used. The electron configuration can be visualized as the core electrons, equivalent to the noble gas of the preceding period, and the valence electrons: each element in a period differs only by the last few subshells. Phosphorus, for instance, is in the third period. It differs from the second-period neon, whose configuration is 1s2 2s2 2p6, only by the presence of a third shell. The portion of its configuration that is equivalent to neon is abbreviated as [Ne], allowing the configuration of phosphorus to be written as [Ne] 3s2 3p3 rather than writing out the details of the configuration of neon explicitly. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element. For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies that is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti. The superscript 1 for a singly occupied subshell is not compulsory; for example aluminium may be written as either [Ne] 3s2 3p1 or [Ne] 3s2 3p. 
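As a sketch of how such configuration strings can be generated, the following Python fills subshells in the Madelung order (increasing n + ℓ, ties broken by n, discussed later in this article) and abbreviates the core with the preceding noble gas. This is an illustration under the assumption that the Madelung rule holds exactly, so it does not reproduce exceptions such as chromium or copper; all function names are ours.

```python
# Illustrative aufbau filling by the Madelung rule (n + l, then n).
# Known exceptions (Cr, Cu, Nb, Pd, ...) are deliberately not handled.
LETTERS = "spdfghik"  # labels for l = 0, 1, 2, ... (j is skipped)
NOBLE = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def filled_subshells(z, max_n=8):
    order = sorted(((n, l) for n in range(1, max_n + 1) for l in range(n)),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts = []
    for n, l in order:
        if z == 0:
            break
        e = min(z, 2 * (2 * l + 1))   # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{LETTERS[l]}{e}")
        z -= e
    return parts

def configuration(z):
    return " ".join(filled_subshells(z))

def abbreviated(z):
    # Replace the core with the largest noble gas lighter than z.
    core = max((g for g in NOBLE if g < z), default=None)
    if core is None:
        return configuration(z)
    tail = filled_subshells(z)[len(filled_subshells(core)):]
    return f"[{NOBLE[core]}] " + " ".join(tail)

print(configuration(15))   # 1s2 2s2 2p6 3s2 3p3  (phosphorus)
print(abbreviated(15))     # [Ne] 3s2 3p3
```

For titanium (Z = 22) this yields [Ar] 4s2 3d2, the Madelung-ordered form described above; producing the regrouped [Ar] 3d2 4s2 form would require an extra sort by n.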
In atoms where a subshell is unoccupied despite higher subshells being occupied (as is the case in some ions, as well as certain neutral atoms shown to deviate from the Madelung rule), the empty subshell is either denoted with a superscript 0 or left out altogether. For example, neutral palladium may be written as either [Kr] 4d10 5s0 or simply [Kr] 4d10, and the lanthanum(III) ion may be written as either [Xe] 4f0 or simply [Xe]. It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, ℓ, of 0, 1, 2 or 3 respectively. After f, the sequence continues alphabetically g, h, i... (ℓ = 4, 5, 6...), skipping j, although orbitals of these types are rarely required. The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below). Energy of ground state and excited states The energy associated with an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state. As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p subshell, to obtain the 1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. 
In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm. Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to X-ray photons. This would be the case for example to excite a 2p electron of sodium to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration. The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule. History Irving Langmuir was the first to propose such an arrangement, in his 1919 article "The Arrangement of Electrons in Atoms and Molecules", in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his "concentric theory of atomic structure". Langmuir had developed his work on electron atomic structure from that of other chemists, as is shown in the development of the history of the periodic table and the octet rule. Niels Bohr (1923) incorporated Langmuir's idea that the periodicity in the properties of the elements might be explained by the electronic structure of the atom. His proposals were based on the then-current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6). Bohr used 4 and 6 following Alfred Werner's 1893 paper. In fact, chemists accepted the concept of atoms long before the physicists. 
Langmuir began his paper referenced above by saying, «…The problem of the structure of atoms has been attacked mainly by physicists who have given little consideration to the chemical properties which must ultimately be explained by a theory of atomic structure. The vast store of knowledge of chemical properties and relationships, such as is summarized by the Periodic Table, should serve as a better foundation for a theory of atomic structure than the relatively meager experimental data along purely physical lines... These electrons arrange themselves in a series of concentric shells, the first shell containing two electrons, while all other shells tend to hold eight.…» The valence electrons in the atom were described by Richard Abegg in 1904. In 1924, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6. However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect). Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli in 1923 to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli hypothesized successfully that the Zeeman effect can be explained as depending only on the response of the outermost (i.e., valence) electrons of the atom. 
Pauli was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925): "It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [ℓ], j [mℓ] and m [ms]." The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936), see below) for the order in which atomic orbitals are filled with electrons. Atoms: Aufbau principle and Madelung rule The aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as: a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy subshells are filled before electrons are placed in higher-energy orbitals. The principle works very well (for the ground states of the atoms) for the known 118 elements, although it is sometimes slightly wrong. The modern form of the aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936, and later given a theoretical justification by V. M. Klechkowski: Subshells are filled in the order of increasing n + ℓ. Where two subshells have the same value of n + ℓ, they are filled in order of increasing n. 
This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s) In this list the subshells in parentheses are not occupied in the ground state of the heaviest atom now known (Og, Z = 118). The aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry. Periodic table The form of the periodic table is closely related to the atomic electron configuration for each element. For example, all the elements of group 2 (the table's second column) have an electron configuration of [E] ns2 (where [E] is a noble gas configuration), and have notable similarities in their chemical properties. The periodicity of the periodic table in terms of periodic table blocks is due to the number of electrons (2, 6, 10, and 14) needed to fill s, p, d, and f subshells. These blocks appear as the rectangular sections of the periodic table. The single exception is helium, which despite being an s-block atom is conventionally placed with the other noble gases in the p-block due to its chemical inertness, a consequence of its full outer shell (though there is discussion in the contemporary literature on whether this exception should be retained). The electrons in the valence (outermost) shell largely determine each element's chemical properties. The similarities in the chemical properties were remarked on more than a century before the idea of electron configuration. Shortcomings of the aufbau principle The aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. 
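This ordering can be reproduced by sorting subshells on the key (n + ℓ, n); a brief Python check (an illustrative sketch, not a library API):

```python
# Sort subshells (n, l) by increasing n + l, breaking ties by increasing n.
labels = "spdf"
order = sorted(((n, l) for n in range(1, 8) for l in range(n)),
               key=lambda nl: (nl[0] + nl[1], nl[0]))
# The first 19 entries run from 1s through 7p, matching the list above.
print(", ".join(f"{n}{labels[l]}" for n, l in order[:19]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
```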
However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly (although there are mathematical approximations available, such as the Hartree–Fock method). The fact that the aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift.) Ionization of the transition metals The naïve application of the aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n + ℓ = 4 (n = 4, ℓ = 0) while the 3d-orbital has n + ℓ = 5 (n = 3, ℓ = 2). After calcium, most neutral atoms in the first series of transition metals (scandium through zinc) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. 
In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". However, this is not supported by the facts, as tungsten (W) has a Madelung-following d4s2 configuration and not d5s1, and niobium (Nb) has an anomalous d4s1 configuration that does not give it a half-filled or completely filled subshell. The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals. The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ... This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly does not. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree–Fock method of atomic structure calculation. More recently Scerri has argued that contrary to what is stated in the vast majority of sources including the title of his previous article on the subject, 3d orbitals rather than 4s are in fact preferentially occupied. 
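The ionization order described above (electrons leave the subshell of highest n first, so 4s empties before 3d) can be sketched in a few lines of Python; the function name and data layout are ours, and only the valence subshells are tracked:

```python
# Sketch: remove electrons from the subshell with the highest n first
# (ties broken by highest l), as the ionization sequence above describes.
def ionize(config, charge):
    # config: list of (n, l, electrons) tuples for the occupied valence subshells
    shells = sorted(config, key=lambda t: (t[0], t[1]), reverse=True)
    remaining = []
    for n, l, e in shells:
        removed = min(e, charge)
        charge -= removed
        if e - removed:
            remaining.append((n, l, e - removed))
    return sorted(remaining)

# Neutral iron valence: [Ar] 3d6 4s2, i.e. (n=3, l=2, 6 electrons) and (n=4, l=0, 2).
fe = [(3, 2, 6), (4, 0, 2)]
print(ionize(fe, 2))  # Fe2+: [(3, 2, 6)], i.e. [Ar] 3d6 -- the 4s empties first
print(ionize(fe, 3))  # Fe3+: [(3, 2, 5)], i.e. [Ar] 3d5
```

This reproduces the observed Fe2+ ([Ar] 3d6) and Fe3+ ([Ar] 3d5) configurations, whereas removing electrons in reverse Madelung filling order would wrongly empty 3d first.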
In chemical environments, configurations can change even more: Th3+ as a bare ion has a configuration of [Rn] 5f1, yet in most ThIII compounds the thorium atom has a 6d1 configuration instead. Mostly, what is present is rather a superposition of various configurations. For instance, copper metal is poorly described by either an [Ar] 3d10 4s1 or an [Ar] 3d9 4s2 configuration, but is rather well described as a 90% contribution of the first and a 10% contribution of the second. Indeed, visible light is already enough to excite electrons in most transition metals, and they often continuously "flow" through different configurations when that happens (copper and its group are an exception). Similar ion-like 3dx 4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6 with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom. Other exceptions to Madelung's rule There are several more exceptions to Madelung's rule among the heavier elements, and as atomic number increases it becomes more and more difficult to find simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, which are an approximate method for taking account of the effect of the other electrons on orbital energies. 
Qualitatively, for example, the 4d elements have the greatest concentration of Madelung anomalies, because the 4d–5s gap is larger than the 3d–4s and 5d–6s gaps. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. This is the reason why the 6d elements are predicted to have no Madelung anomalies apart from lawrencium (for which relativistic effects stabilise the 7p1/2 orbital as well and cause its occupancy in the ground state), as relativity intervenes to make the 7s orbitals lower in energy than the 6d ones. The table below shows the configurations of the f-block (green) and d-block (blue) atoms. It shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page). However this also depends on the charge: a calcium atom has 4s lower in energy than 3d, but a Ca2+ cation has 3d lower in energy than 4s. In practice the configurations predicted by the Madelung rule are at least close to the ground state even in these anomalous cases. The empty f orbitals in lanthanum, actinium, and thorium contribute to chemical bonding, as do the empty p orbitals in transition metals. Vacant s, d, and f orbitals have been shown explicitly, as is occasionally done, to emphasise the filling order and to clarify that even orbitals unoccupied in the ground state (e.g. 
lanthanum 4f or palladium 5s) may be occupied and bonding in chemical compounds. (The same is also true for the p-orbitals, which are not explicitly shown because they are only actually occupied for lawrencium in gas-phase ground states.) The various anomalies describe the free atoms and do not necessarily predict chemical behavior. Thus for example neodymium typically forms the +3 oxidation state, despite its [Xe] 4f4 6s2 configuration, which if interpreted naïvely would suggest a more stable +2 oxidation state corresponding to losing only the 6s electrons. Contrariwise, uranium as [Rn] 5f3 6d1 7s2 is not very stable in the +3 oxidation state either, preferring +4 and +6. The electron-shell configuration of elements beyond hassium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120. Element 121 should have the anomalous configuration [Og] 8s2 8p1, having a p rather than a g electron. Electron configurations beyond this are tentative and predictions differ between models, but Madelung's rule is expected to break down due to the closeness in energy of the 5g, 6f, 7d, and 8p1/2 orbitals. That said, the filling sequence 8s, 5g, 6f, 7d, 8p is predicted to hold approximately, with perturbations due to the huge spin-orbit splitting of the 8p and 9p shells, and the huge relativistic stabilisation of the 9s shell. Open and closed shells In the context of atomic orbitals, an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction. Conversely a closed shell is obtained with a completely filled valence shell. This configuration is very stable. For molecules, "open shell" signifies that there are unpaired electrons. In molecular orbital theory, this leads to molecular orbitals that are singly occupied. 
In computational chemistry implementations of molecular orbital theory, open-shell molecules have to be handled by either the restricted open-shell Hartree–Fock method or the unrestricted Hartree–Fock method. Conversely a closed-shell configuration corresponds to a state where all molecular orbitals are either doubly occupied or empty (a singlet state). Open shell molecules are more difficult to study computationally. Noble gas configuration Noble gas configuration is the electron configuration of noble gases. The basis of all chemical reactions is the tendency of chemical elements to acquire stability. Main-group atoms generally obey the octet rule, while transition metals generally obey the 18-electron rule. The noble gases (He, Ne, Ar, Kr, Xe, Rn) are less reactive than other elements because they already have a noble gas configuration. Oganesson is predicted to be more reactive due to relativistic effects for heavy atoms.

Period  Element  Configuration
1       He       1s2
2       Ne       1s2 2s2 2p6
3       Ar       1s2 2s2 2p6 3s2 3p6
4       Kr       1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6
5       Xe       1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6
6       Rn       1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6
7       Og       1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6

Every system has the tendency to acquire the state of stability or a state of minimum energy, and so chemical elements take part in chemical reactions to acquire a stable electronic configuration similar to that of its nearest noble gas. An example of this tendency is two hydrogen (H) atoms reacting with one oxygen (O) atom to form water (H2O). 
Neutral atomic hydrogen has one electron in its valence shell, and on formation of water it acquires a share of a second electron coming from oxygen, so that its configuration is similar to that of its nearest noble gas helium (He) with two electrons in its valence shell. Similarly, neutral atomic oxygen has six electrons in its valence shell, and acquires a share of two electrons from the two hydrogen atoms, so that its configuration is similar to that of its nearest noble gas neon with eight electrons in its valence shell. Electron configuration in molecules Electron configuration in molecules is more complex than the electron configuration of atoms, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry, rather than the atomic orbital labels used for atoms and monatomic ions; hence, the electron configuration of the dioxygen molecule, O2, is written 1σg2 1σu2 2σg2 2σu2 3σg2 1πu4 1πg2, or equivalently 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2. The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory. The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings. Electron configuration in solids In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory. Applications The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. 
In effect, electron configurations, along with some simplified forms of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form. This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever-larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method that discards the model. For atoms or molecules with more than one electron, the motions of the electrons are correlated, and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations, and therefore the notion of electronic configuration remains essential for multi-electron systems. A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.
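The aufbau filling described above can be sketched in a few lines of code. This is an illustrative approximation using the Madelung (n + l) ordering rule, which is enough to reproduce the noble-gas configurations tabulated earlier; the function names are my own, and the sketch deliberately ignores the well-known d- and f-block exceptions (e.g. Cr, Cu).

```python
# Sketch: fill subshells in Madelung (aufbau) order until the atom's
# electron count is exhausted. Reproduces the noble-gas table above;
# does NOT handle aufbau anomalies such as Cr (3d5 4s1) or Cu (3d10 4s1).

MAX_ELECTRONS = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau_order(max_n=7):
    """Subshells sorted by (n + l, n) -- the Madelung rule."""
    subshells = []
    for n in range(1, max_n + 1):
        for l, letter in enumerate("spdf"):
            if l < n:  # only l = 0 .. n-1 exist for principal number n
                subshells.append((n + l, n, letter))
    subshells.sort()
    return [(n, letter) for _, n, letter in subshells]

def configuration(z):
    """Ground-state configuration string for atomic number z (idealized)."""
    parts, remaining = [], z
    for n, letter in aufbau_order():
        if remaining == 0:
            break
        e = min(remaining, MAX_ELECTRONS[letter])
        parts.append(f"{n}{letter}{e}")
        remaining -= e
    return " ".join(parts)

for name, z in [("He", 2), ("Ne", 10), ("Ar", 18), ("Kr", 36)]:
    print(name, configuration(z))
# Kr prints: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6
```

Note how the (n + l, n) sort automatically places 4s before 3d and 5s before 4d, matching the Kr and Xe rows of the table.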
Physical sciences
Atomic physics
null
67231
https://en.wikipedia.org/wiki/Metre%20per%20second
Metre per second
The metre per second is the unit of both speed (a scalar quantity) and velocity (a vector quantity, which has direction and magnitude) in the International System of Units (SI), equal to the speed of a body covering a distance of one metre in a time of one second. According to the definition of the metre, 1 m/s is exactly 1/299 792 458 of the speed of light. The SI unit symbols are m/s, m·s−1, and m s−1. Conversions 1 m/s is equivalent to: 3.6 km/h (exactly); approximately 3.2808 feet per second; approximately 2.2369 miles per hour; approximately 1.9438 knots. Conversely, 1 foot per second = 0.3048 m/s (exactly), 1 mile per hour = 0.44704 m/s (exactly), and 1 km/h = 5/18 m/s ≈ 0.2778 m/s (exactly). Relation to other measures The benz, named in honour of Karl Benz, has been proposed as a name for one metre per second. Although it has seen some support as a practical unit, primarily from German sources, it was rejected as the SI unit of velocity and has not seen widespread use or acceptance. Unicode character The "metre per second" symbol, ㎧, is encoded by Unicode at code point U+33A7.
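The conversion factors above follow from the exact definitions of the foot (0.3048 m), the mile (1609.344 m), and the nautical mile (1852 m). A short sketch (function names are my own):

```python
# Exact base definitions; every conversion factor below is therefore exact.
M_PER_FT = 0.3048      # international foot
M_PER_MI = 1609.344    # statute mile = 5280 ft
M_PER_NMI = 1852.0     # nautical mile (basis of the knot)

def mps_to_kmh(v):   return v * 3.6
def mps_to_fps(v):   return v / M_PER_FT
def mps_to_mph(v):   return v * 3600 / M_PER_MI
def mps_to_knots(v): return v * 3600 / M_PER_NMI

print(mps_to_kmh(1))              # 3.6
print(round(mps_to_fps(1), 4))    # 3.2808
print(round(mps_to_mph(1), 4))    # 2.2369
print(round(mps_to_knots(1), 4))  # 1.9438
```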
Physical sciences
Speed
Basics and measurement
67244
https://en.wikipedia.org/wiki/Ecological%20niche
Ecological niche
In ecology, a niche is the match of a species to a specific environmental condition. It describes how an organism or population responds to the distribution of resources and competitors (for example, by growing when resources are abundant, and when predators, parasites and pathogens are scarce) and how it in turn alters those same factors (for example, limiting access to resources by other organisms, acting as a food source for predators and a consumer of prey). "The type and number of variables comprising the dimensions of an environmental niche vary from one species to another [and] the relative importance of particular environmental variables for a species may vary according to the geographic and biotic contexts". A Grinnellian niche is determined by the habitat in which a species lives and its accompanying behavioral adaptations. An Eltonian niche emphasizes that a species not only grows in and responds to an environment, it may also change the environment and its behavior as it grows. The Hutchinsonian niche uses mathematics and statistics to try to explain how species coexist within a given community. The concept of ecological niche is central to ecological biogeography, which focuses on spatial patterns of ecological communities. "Species distributions and their dynamics over time result from properties of the species, environmental variation..., and interactions between the two—in particular the abilities of some species, especially our own, to modify their environments and alter the range dynamics of many other species." Alteration of an ecological niche by its inhabitants is the topic of niche construction. The majority of species exist in a standard ecological niche, sharing behaviors, adaptations, and functional traits similar to the other closely related species within the same broad taxonomic class, but there are exceptions. 
A premier example of a non-standard niche filling species is the flightless, ground-dwelling kiwi bird of New Zealand, which feeds on worms and other ground creatures, and lives its life in a mammal-like niche. Island biogeography can help explain island species and associated unfilled niches. Grinnellian niche The ecological meaning of niche comes from the meaning of niche as a recess in a wall for a statue, which itself is probably derived from the Middle French word nicher, meaning to nest. The term was coined by the naturalist Roswell Hill Johnson but Joseph Grinnell was probably the first to use it in a research program in 1917, in his paper "The niche relationships of the California Thrasher". The Grinnellian niche concept embodies the idea that the niche of a species is determined by the habitat in which it lives and its accompanying behavioral adaptations. In other words, the niche is the sum of the habitat requirements and behaviors that allow a species to persist and produce offspring. For example, the behavior of the California thrasher is consistent with the chaparral habitat it lives in—it breeds and feeds in the underbrush and escapes from its predators by shuffling from underbrush to underbrush. Its 'niche' is defined by the felicitous complementing of the thrasher's behavior and physical traits (camouflaging color, short wings, strong legs) with this habitat. Grinnellian niches can be defined by non-interactive (abiotic) variables and environmental conditions on broad scales. Variables of interest in this niche class include average temperature, precipitation, solar radiation, and terrain aspect which have become increasingly accessible across spatial scales. Most literature has focused on Grinnellian niche constructs, often from a climatic perspective, to explain distribution and abundance. Current predictions on species responses to climate change strongly rely on projecting altered environmental conditions on species distributions. 
However, it is increasingly acknowledged that climate change also influences species interactions, and an Eltonian perspective may be advantageous in explaining these processes. This perspective of niche allows for the existence of both ecological equivalents and empty niches. An ecological equivalent to an organism is an organism from a different taxonomic group exhibiting similar adaptations in a similar habitat, an example being the different succulents found in American and African deserts, cactus and euphorbia, respectively. As another example, the anole lizards of the Greater Antilles are a rare example of convergent evolution, adaptive radiation, and the existence of ecological equivalents: the anole lizards evolved in similar microhabitats independently of each other and resulted in the same ecomorphs across all four islands. Eltonian niche In 1927, Charles Sutherland Elton, a British ecologist, defined a niche as follows: "The 'niche' of an animal means its place in the biotic environment, its relations to food and enemies." Elton classified niches according to foraging activities ("food habits"). Conceptually, the Eltonian niche introduces the idea of a species' response to and effect on the environment. Unlike other niche concepts, it emphasizes that a species not only grows in and responds to an environment based on available resources, predators, and climatic conditions, but also changes the availability and behavior of those factors as it grows. In an extreme example, beavers require certain resources in order to survive and reproduce, but also construct dams that alter water flow in the river where the beaver lives. Thus, the beaver affects the biotic and abiotic conditions of other species that live in and near the watershed. In a more subtle case, competitors that consume resources at different rates can lead to cycles in resource density that differ between species.
Not only do species grow differently with respect to resource density, but their own population growth can affect resource density over time. Eltonian niches focus on biotic interactions and consumer–resource dynamics (biotic variables) on local scales. Because of the narrow extent of focus, data sets characterizing Eltonian niches typically are in the form of detailed field studies of specific individual phenomena, as the dynamics of this class of niche are difficult to measure at a broad geographic scale. However, the Eltonian niche may be useful in the explanation of a species' endurance of global change. Because adjustments in biotic interactions inevitably change abiotic factors, Eltonian niches can be useful in describing the overall response of a species to new environments. Hutchinsonian niche The Hutchinsonian niche is an "n-dimensional hypervolume", where the dimensions are environmental conditions and resources, that define the requirements of an individual or a species to practice its way of life, more particularly, for its population to persist. The "hypervolume" defines the multi-dimensional space of resources (e.g., light, nutrients, structure, etc.) available to (and specifically used by) organisms, and "all species other than those under consideration are regarded as part of the coordinate system." The niche concept was popularized by the zoologist G. Evelyn Hutchinson in 1957. Hutchinson inquired into the question of why there are so many types of organisms in any one habitat. His work inspired many others to develop models to explain how many and how similar coexisting species could be within a given community, and led to the concepts of 'niche breadth' (the variety of resources or habitats used by a given species), 'niche partitioning' (resource differentiation by coexisting species), and 'niche overlap' (overlap of resource use by different species). 
Statistics were introduced into the Hutchinson niche by Robert MacArthur and Richard Levins using the 'resource-utilization' niche employing histograms to describe the 'frequency of occurrence' as a function of a Hutchinson coordinate. So, for instance, a Gaussian might describe the frequency with which a species ate prey of a certain size, giving a more detailed niche description than simply specifying some median or average prey size. For such a bell-shaped distribution, the position, width and form of the niche correspond to the mean, standard deviation and the actual distribution itself. One advantage in using statistics is illustrated in the figure, where it is clear that for the narrower distributions (top) there is no competition for prey between the extreme left and extreme right species, while for the broader distribution (bottom), niche overlap indicates competition can occur between all species. The resource-utilization approach postulates that not only can competition occur, but that it does occur, and that overlap in resource utilization directly enables the estimation of the competition coefficients. This postulate, however, can be misguided, as it ignores the impacts that the resources of each category have on the organism and the impacts that the organism has on the resources of each category. For instance, the resource in the overlap region can be non-limiting, in which case there is no competition for this resource despite niche overlap. An organism free of interference from other species could use the full range of conditions (biotic and abiotic) and resources in which it could survive and reproduce, which is called its fundamental niche. However, as a result of pressure from, and interactions with, other organisms (i.e. inter-specific competition) species are usually forced to occupy a niche that is narrower than this, and to which they are most highly adapted; this is termed the realized niche.
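The resource-utilization description of MacArthur and Levins can be made concrete with a small numerical sketch. Assuming Gaussian utilization curves of equal width σ, the overlap-based competition coefficient αij = ∫pi pj dx / ∫pi² dx has the closed form exp(−d²/4σ²) for means separated by d, so overlap (and hence inferred competition) falls off rapidly as niches narrow or prey sizes diverge, as in the figure described above. The function names and parameter values here are illustrative only:

```python
# Sketch: MacArthur-Levins competition coefficient from Gaussian
# resource-utilization curves, alpha_ij = integral(p_i * p_j) / integral(p_i^2).
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def alpha(mu_i, mu_j, sigma, lo=-50.0, hi=50.0, n=20000):
    """Numerical estimate of the overlap-based competition coefficient."""
    dx = (hi - lo) / n
    xs = [lo + k * dx for k in range(n + 1)]
    num = sum(gaussian(x, mu_i, sigma) * gaussian(x, mu_j, sigma) for x in xs) * dx
    den = sum(gaussian(x, mu_i, sigma) ** 2 for x in xs) * dx
    return num / den

# Narrow niches (sigma = 1): species 4 prey-size units apart barely compete.
print(round(alpha(0, 4, 1), 4))   # ~ exp(-16/4)  = 0.0183
# Broad niches (sigma = 4): the same separation gives substantial overlap.
print(round(alpha(0, 4, 4), 4))   # ~ exp(-16/64) = 0.7788
```

As the caveat in the text notes, this number measures overlap, not competition itself: a high α is only meaningful if the shared resource is actually limiting.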
Hutchinson used the idea of competition for resources as the primary mechanism driving ecology, but overemphasis upon this focus has proved to be a handicap for the niche concept. In particular, overemphasis upon a species' dependence upon resources has led to too little emphasis upon the effects of organisms on their environment, for instance, colonization and invasions. The term "adaptive zone" was coined by the paleontologist George Gaylord Simpson to explain how a population could jump from one niche to another that suited it: an 'adaptive zone' made available by virtue of some modification, or possibly a change in the food chain, which the group could enter without a discontinuity in its way of life because it was 'pre-adapted' to the new ecological opportunity. Hutchinson's "niche" (a description of the ecological space occupied by a species) is subtly different from the "niche" as defined by Grinnell (an ecological role, that may or may not be actually filled by a species—see vacant niches). A niche is a very specific segment of ecospace occupied by a single species. On the presumption that no two species are identical in all respects (called Hardin's 'axiom of inequality') and the competitive exclusion principle, some resource or adaptive dimension will provide a niche specific to each species. Species can however share a 'mode of life' or 'autecological strategy' which are broader definitions of ecospace. For example, Australian grasslands species, though different from those of the Great Plains grasslands, exhibit similar modes of life. Once a niche is left vacant, other organisms can fill that position. For example, the niche that was left vacant by the extinction of the tarpan has been filled by other animals (in particular a small horse breed, the konik).
Also, when plants and animals are introduced into a new environment, they have the potential to occupy or invade the niche or niches of native organisms, often outcompeting the indigenous species. Introduction of non-indigenous species to non-native habitats by humans often results in biological pollution by the exotic or invasive species. The mathematical representation of a species' fundamental niche in ecological space, and its subsequent projection back into geographic space, is the domain of niche modelling. Contemporary niche theory Contemporary niche theory (also called "classic niche theory" in some contexts) is a framework that was originally designed to reconcile different definitions of niches (see Grinnellian, Eltonian, and Hutchinsonian definitions above), and to help explain the underlying processes that affect Lotka-Volterra relationships within an ecosystem. The framework centers around "consumer-resource models" which largely split a given ecosystem into resources (e.g. sunlight or available water in soil) and consumers (e.g. any living thing, including plants and animals), and attempts to define the scope of possible relationships that could exist between the two groups. In contemporary niche theory, the "impact niche" is defined as the combination of effects that a given consumer has on both (a) the resources that it uses, and (b) the other consumers in the ecosystem. Therefore, the impact niche is equivalent to the Eltonian niche since both concepts are defined by the impact of a given species on its environment. The range of environmental conditions where a species can successfully survive and reproduce (i.e. the Hutchinsonian definition of a realized niche) is also encompassed under contemporary niche theory, termed the "requirement niche". The requirement niche is bounded by both the availability of resources as well as the effects of coexisting consumers (e.g. competitors and predators).
Coexistence under contemporary niche theory Contemporary niche theory provides three requirements that must be met in order for two species (consumers) to coexist. First, the requirement niches of both consumers must overlap. Second, each consumer must outcompete the other for the resource that it needs most. For example, if two plants (P1 and P2) are competing for nitrogen and phosphorus in a given ecosystem, they will only coexist if they are limited by different resources (P1 is limited by nitrogen and P2 is limited by phosphorus, perhaps) and each species outcompetes the other for the resource that limits it (P1 needs to be better at obtaining nitrogen and P2 needs to be better at obtaining phosphorus). Intuitively, this makes sense from an inverse perspective: if both consumers are limited by the same resource, one of the species will ultimately be the better competitor, and only that species will survive. Furthermore, if P1 were outcompeted for nitrogen (the resource it needs most) it would not survive; likewise, if P2 were outcompeted for phosphorus, it would not survive. Third, the availability of the limiting resources (nitrogen and phosphorus in the above example) in the environment must be equivalent. These requirements are interesting and controversial because they require any two coexisting species to share a certain environment (have overlapping requirement niches) but to differ fundamentally in the ways that they use (or "impact") that environment. These requirements have repeatedly been violated by nonnative (i.e. introduced and invasive) species, which often coexist with new species in their nonnative ranges but do not appear to be constrained by these requirements. In other words, contemporary niche theory predicts that species will be unable to invade new environments outside of their requirement (i.e. realized) niche, yet many examples of this are well-documented.
Additionally, contemporary niche theory predicts that species will be unable to establish in environments where other species already consume resources in the same ways as the incoming species; however, examples of this are also numerous. Niche differentiation In ecology, niche differentiation (also known as niche segregation, niche separation and niche partitioning) refers to the process by which competing species use the environment differently in a way that helps them to coexist. The competitive exclusion principle states that if two species with identical niches (ecological roles) compete, then one will inevitably drive the other to extinction. This rule also states that two species cannot occupy the same exact niche in a habitat and coexist together, at least in a stable manner. When two species differentiate their niches, they tend to compete less strongly, and are thus more likely to coexist. Species can differentiate their niches in many ways, such as by consuming different foods, or using different areas of the environment. As an example of niche partitioning, several anole lizards in the Caribbean islands share common diets—mainly insects. They avoid competition by occupying different physical locations. Although these lizards might occupy different locations, some species can be found inhabiting the same range, with up to 15 in certain areas. For example, some live on the ground while others are arboreal. Species that live in different areas compete less for food and other resources, which minimizes competition between species. However, species that live in similar areas typically compete with each other. Detection and quantification The Lotka–Volterra equation states that two competing species can coexist when intra-specific (within species) competition is greater than inter-specific (between species) competition.
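This Lotka–Volterra condition can be sketched numerically. The parameter values below are illustrative choices of mine, and the integrator is a simple Euler scheme rather than any particular published model; the point is only to show the two regimes (interspecific coefficients a < 1 versus a > 1):

```python
# Sketch: symmetric two-species Lotka-Volterra competition.
# With a12, a21 < 1 (intraspecific competition stronger than interspecific)
# both species persist; with a12, a21 > 1 one species excludes the other.

def simulate(a12, a21, n1=0.1, n2=0.9, r=1.0, k=1.0, dt=0.01, steps=20000):
    """Euler-integrate dNi/dt = r*Ni*(K - Ni - aij*Nj)/K for 200 time units."""
    for _ in range(steps):
        dn1 = r * n1 * (k - n1 - a12 * n2) / k
        dn2 = r * n2 * (k - n2 - a21 * n1) / k
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

n1, n2 = simulate(a12=0.5, a21=0.5)   # differentiated niches: a < 1
print(round(n1, 3), round(n2, 3))     # both settle near K/(1+a) = 0.667

n1, n2 = simulate(a12=1.5, a21=1.5)   # effectively identical niches: a > 1
print(round(n1, 3), round(n2, 3))     # the initially commoner species wins
```

Note that in the a > 1 case the outcome depends on the starting densities, which is why competitive exclusion, not coexistence, is the stable result there.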
Since niche differentiation concentrates competition within species, due to a decrease in between-species competition, the Lotka-Volterra model predicts that niche differentiation of any degree will result in coexistence. In reality, this still leaves the question of how much differentiation is needed for coexistence. A vague answer to this question is that the more similar two species are, the more finely balanced the suitability of their environment must be in order to allow coexistence. There are limits to the amount of niche differentiation required for coexistence, and this can vary with the type of resource, the nature of the environment, and the amount of variation both within and between the species. To answer questions about niche differentiation, it is necessary for ecologists to be able to detect, measure, and quantify the niches of different coexisting and competing species. This is often done through a combination of detailed ecological studies, controlled experiments (to determine the strength of competition), and mathematical models. To understand the mechanisms of niche differentiation and competition, much data must be gathered on how the two species interact, how they use their resources, and the type of ecosystem in which they exist, among other factors. In addition, several mathematical models exist to quantify niche breadth, competition, and coexistence (Bastolla et al. 2005). However, regardless of methods used, niches and competition can be distinctly difficult to measure quantitatively, and this makes detection and demonstration of niche differentiation difficult and complex. Development Over time, two competing species can either coexist, through niche differentiation or other means, or compete until one species becomes locally extinct. Several theories exist for how niche differentiation arises or evolves given these two possible outcomes.
Current competition (The Ghost of Competition Present) Niche differentiation can arise from current competition. For instance, species X has a fundamental niche of the entire slope of a hillside, but its realized niche is only the top portion of the slope because species Y, which is a better competitor but cannot survive on the top portion of the slope, has excluded it from the lower portion of the slope. With this scenario, competition will continue indefinitely in the middle of the slope between these two species. Because of this, detection of the presence of niche differentiation (through competition) will be relatively easy. Importantly, there is no evolutionary change of the individual species in this case; rather this is an ecological effect of species Y out-competing species X within the bounds of species Y's fundamental niche. Via past extinctions (The Ghost of Competition Past) Another way by which niche differentiation can arise is via the previous elimination of species without realized niches. This asserts that at some point in the past, several species inhabited an area, and all of these species had overlapping fundamental niches. However, through competitive exclusion, the less competitive species were eliminated, leaving only the species that were able to coexist (i.e. the most competitive species whose realized niches did not overlap). Again, this process does not include any evolutionary change of individual species, but it is merely the product of the competitive exclusion principle. Also, because no species is out-competing any other species in the final community, the presence of niche differentiation will be difficult or impossible to detect. Evolving differences Finally, niche differentiation can arise as an evolutionary effect of competition. In this case, two competing species will evolve different patterns of resource use so as to avoid competition. 
Here too, current competition is absent or low, and therefore detection of niche differentiation is difficult or impossible. Types Below is a list of ways that species can partition their niche. This list is not exhaustive, but illustrates several classic examples. Resource partitioning Resource partitioning is the phenomenon where two or more species divide up resources such as food, space, and resting sites in order to coexist. For example, some lizard species appear to coexist because they consume insects of differing sizes. Alternatively, species can coexist on the same resources if each species is limited by different resources, or differently able to capture resources. Different types of phytoplankton can coexist when different species are differently limited by nitrogen, phosphorus, silicon, and light. In the Galapagos Islands, finches with small beaks are more able to consume small seeds, and finches with large beaks are more able to consume large seeds. If a species' density declines, then the food it most depends on will become more abundant (since there are so few individuals to consume it). As a result, the remaining individuals will experience less competition for food. Although "resource" generally refers to food, species can partition other non-consumable objects, such as parts of the habitat. For example, warblers are thought to coexist because they nest in different parts of trees. Species can also partition habitat in a way that gives them access to different types of resources. As stated in the introduction, anole lizards appear to coexist because each uses different parts of the forests as perch locations. This likely gives them access to different species of insects. Research has determined that plants can recognize each other's root systems and differentiate between a clone, a plant grown from the same mother plant's seeds, and other species. Based on the root secretions, also called exudates, plants can make this determination.
The communication between plants starts with the secretions from plant roots into the rhizosphere. If another plant that is kin enters this area, the plant will take up exudates. The exudate, being several different compounds, will enter the plant's root cells and attach to a receptor for that chemical, halting growth of the root meristem in that direction if the interaction is kin. Simonsen discusses how plants accomplish root communication with the addition of beneficial rhizobia and fungal networks, and how different genotypes of the kin plants, such as the legume M. lupulina, and specific strains of nitrogen-fixing bacteria and rhizomes can alter relationships between kin and non-kin competition. This means there could be specific subsets of genotypes in kin plants that select well with specific strains that could outcompete other kin. What might seem like an instance of kin competition could just be different genotypes of organisms at play in the soil that increase the symbiotic efficiency. Predator partitioning Predator partitioning occurs when species are attacked differently by different predators (or natural enemies more generally). For example, trees could differentiate their niche if they are consumed by different species of specialist herbivores, such as herbivorous insects. If a species' density declines, so too will the density of its natural enemies, giving it an advantage. Thus, if each species is constrained by different natural enemies, they will be able to coexist. Early work focused on specialist predators; however, more recent studies have shown that predators do not need to be pure specialists, they simply need to affect each prey species differently. The Janzen–Connell hypothesis represents a form of predator partitioning. Conditional differentiation Conditional differentiation (sometimes called temporal niche partitioning) occurs when species differ in their competitive abilities based on varying environmental conditions.
For example, in the Sonoran Desert, some annual plants are more successful during wet years, while others are more successful during dry years. As a result, each species will have an advantage in some years, but not others. When environmental conditions are most favorable, individuals will tend to compete most strongly with members of the same species. For example, in a dry year, dry-adapted plants will tend to be most limited by other dry-adapted plants. This can help them to coexist through a storage effect. Competition-predation trade-off Species can differentiate their niche via a competition-predation trade-off if one species is a better competitor when predators are absent, and the other is better when predators are present. Defenses against predators, such as toxic compounds or hard shells, are often metabolically costly. As a result, species that produce such defenses are often poor competitors when predators are absent. Species can coexist through a competition-predation trade-off if predators are more abundant when the less defended species is common, and less abundant if the well-defended species is common. This effect has been criticized as being weak, because theoretical models suggest that only two species within a community can coexist because of this mechanism. Segregation versus restriction Two ecological paradigms deal with the problem. The first paradigm predominates in what may be called "classical" ecology. It assumes that niche space is largely saturated with individuals and species, leading to strong competition. Niches are restricted because "neighbouring" species, i.e., species with similar ecological characteristics such as similar habitats or food preferences, prevent expansion into other niches or even narrow niches down. This continual struggle for existence is an important assumption of natural selection introduced by Darwin as an explanation for evolution.
The other paradigm assumes that niche space is to a large degree vacant, i.e., that there are many vacant niches. It is based on many empirical studies and theoretical investigations, especially of Kauffman 1993. Causes of vacant niches may be evolutionary contingencies or brief or long-lasting environmental disturbances. Both paradigms agree that species are never "universal" in the sense that they occupy all possible niches; they are always specialized, although the degree of specialization varies. For example, there is no universal parasite which infects all host species and microhabitats within or on them. However, the degree of host specificity varies strongly. Thus, Toxoplasma (Protista) infects numerous vertebrates including humans, whereas Enterobius vermicularis infects only humans. The following mechanisms for niche restriction and segregation have been proposed: Niche restriction: Species must be specialized in order to survive. They may survive for a while in less optimal habitats under favourable conditions, but they will be extinguished when conditions become less favourable, for example due to changed weather conditions (this aspect was especially emphasized by Price 1983). Niches remain narrow or become narrower as the result of natural selection in order to enhance the chances of mating. This "mating theory of niche restriction" is supported by the observation that niches of asexual stages are often wider than those of sexually mature stages; that niches become narrower at the time of mating; and that microhabitats of sessile species and of species with small population sizes often are narrower than those of non-sessile species and of species with large population sizes. Niche segregation: The random selection of niches in largely empty niche space will often automatically lead to segregation (this mechanism is of particular importance in the second paradigm).
Niches are segregated due to interspecific competition (this mechanism is of particular importance in the first paradigm). Niches of similar species are segregated (as the result of natural selection) in order to prevent interspecific hybridisation, because hybrids are less fit. (Many cases of niche segregation explained by interspecific competition are better explained by this mechanism, i.e., "reinforcement of reproductive barriers") (e.g., Rohde 2005b). Relative significance of the mechanisms Both paradigms acknowledge a role for all mechanisms (except possibly for that of random selection of niches in the first paradigm), but emphasis on the various mechanisms varies. The first paradigm stresses the paramount importance of interspecific competition, whereas the second paradigm tries to explain many cases which are thought to be due to competition in the first paradigm, by reinforcement of reproductive barriers and/or random selection of niches. – Many authors believe in the overriding importance of interspecific competition. Intuitively, one would expect that interspecific competition is of particular importance in all those cases in which sympatric species (i.e., species occurring together in the same area) with large population densities use the same resources and largely exhaust them. However, Andrewartha and Birch (1954,1984) and others have pointed out that most natural populations usually don't even approach exhaustion of resources, and too much emphasis on interspecific competition is therefore wrong. Concerning the possibility that competition has led to segregation in the evolutionary past, Wiens (1974, 1984) concluded that such assumptions cannot be proven, and Connell (1980) found that interspecific competition as a mechanism of niche segregation has been proven only for some pest insects. 
Barker (1983), in his review of competition in Drosophila and related genera, which are among the best known animal groups, concluded that the idea of niche segregation by interspecific competition is attractive, but that no study has yet been able to demonstrate a mechanism responsible for segregation. Without specific evidence, the possibility of random segregation can never be excluded, and the assumption of such randomness can indeed serve as a null model. – Many physiological and morphological differences between species can prevent hybridization. Evidence for niche segregation as the result of reinforcement of reproductive barriers is especially convincing in those cases in which such differences are not found in allopatric but only in sympatric locations. For example, Kawano (2002) has shown this for giant rhinoceros beetles in Southeast Asia. Two closely related species occur in 12 allopatric (i.e., in different areas) and 7 sympatric (i.e., in the same area) locations. In the former, body length and length of genitalia are practically identical; in the latter, they are significantly different, and much more so for the genitalia than for the body, convincing evidence that reinforcement is an important factor (and possibly the only one) responsible for niche segregation. – The very detailed studies of communities of Monogenea parasitic on the gills of marine and freshwater fishes by several authors have shown the same. Species use strictly defined microhabitats and have very complex copulatory organs. This, and the fact that fish replicates are available in almost unlimited numbers, make them ideal ecological models. Many congeners (species belonging to the same genus) and non-congeners were found on single host species. The maximum number of congeners was nine species. The only limiting factor is space for attachment, since food (blood, mucus, fast regenerating epithelial cells) is in unlimited supply as long as the fish is alive. 
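The null model mentioned above (segregation arising purely from random selection of niches in a largely empty niche space) can be sketched with a small Monte Carlo simulation. This is an illustrative sketch only; the number of species, niche width, and one-dimensional niche space are hypothetical values, not figures from the studies cited here.

```python
import random

def random_niche_overlap(n_species=5, niche_width=0.01, trials=10000, seed=1):
    """Estimate how often randomly placed niches overlap.

    Niche centres are drawn uniformly in a one-dimensional niche space
    [0, 1]; two niches 'overlap' when their centres lie closer together
    than one niche width. Returns the fraction of random communities
    containing at least one overlapping pair.
    """
    rng = random.Random(seed)
    overlapping = 0
    for _ in range(trials):
        centres = sorted(rng.uniform(0.0, 1.0) for _ in range(n_species))
        # Check each adjacent pair of sorted niche centres for overlap.
        if any(b - a < niche_width for a, b in zip(centres, centres[1:])):
            overlapping += 1
    return overlapping / trials

print(random_niche_overlap())
```

With few, narrow niches in a largely empty space, most random communities are already segregated, so observed segregation must exceed this random expectation before competition (or any other mechanism) needs to be invoked.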
Various authors, using a variety of statistical methods, have consistently found that species with different copulatory organs may co-occur in the same microhabitat, whereas congeners with identical or very similar copulatory organs are spatially segregated, convincing evidence that reinforcement and not competition is responsible for niche segregation. Competition and the reinforcement of reproductive barriers are discussed in greater detail elsewhere (e.g., Rohde 2005b). Coexistence without niche differentiation: exceptions to the rule Some competing species have been shown to coexist on the same resource with no observable evidence of niche differentiation and in "violation" of the competitive exclusion principle. One instance is in a group of hispine beetle species. These beetle species, which eat the same food and occupy the same habitat, coexist without any evidence of segregation or exclusion. The beetles show no aggression either intra- or inter-specifically. Coexistence may be possible through a combination of non-limiting food and habitat resources and high rates of predation and parasitism, though this has not been demonstrated. This example illustrates that the evidence for niche differentiation is by no means universal. Niche differentiation is also not the only means by which coexistence is possible between two competing species. However, niche differentiation is a critically important ecological idea which explains species coexistence, thus promoting the high biodiversity often seen in many of the world's biomes. Research using mathematical modelling has demonstrated that predation can stabilize lumps of very similar species. The willow warbler, the chiffchaff, and other very similar warblers can serve as an example. The idea is that it can be a good strategy either to be very similar to a successful species or to be sufficiently dissimilar from it. Trees in the rain forest can serve as another example, with all high-canopy species essentially following the same strategy. 
Other examples of nearly identical species clusters occupying the same niche include water beetles, prairie birds, and algae. The basic idea is that there can be clusters of very similar species all applying the same successful strategy, with open spaces between them. Here the species cluster takes the place of a single species in the classical ecological models. Niche and Geographic Range The geographic range of a species can be viewed as a spatial reflection of its niche, along with characteristics of the geographic template and the species that influence its potential to colonize. The fundamental geographic range of a species is the area it occupies in which environmental conditions are favorable, without restriction from barriers to dispersal or colonization. A species will be confined to its realized geographic range when confronting biotic interactions or abiotic barriers that limit dispersal, a narrower subset of its larger fundamental geographic range. An early study on ecological niches conducted by Joseph H. Connell analyzed the environmental factors that limit the range of a barnacle (Chthamalus stellatus) on Scotland's Isle of Cumbrae. In his experiments, Connell described the dominant features of C. stellatus niches and provided an explanation for their distribution in the intertidal zone of the rocky coast of the Isle. Connell found that the upper portion of C. stellatus's range is limited by the barnacle's ability to resist dehydration during periods of low tide. The lower portion of the range was limited by interspecific interactions, namely competition with a cohabiting barnacle species and predation by a snail. By removing the competing barnacle Balanus balanoides, Connell showed that C. stellatus was able to extend the lower edge of its realized niche in the absence of competitive exclusion. These experiments demonstrate how biotic and abiotic factors limit the distribution of an organism. 
Parameters The different dimensions, or plot axes, of a niche represent different biotic and abiotic variables. These factors may include descriptions of the organism's life history, habitat, trophic position (place in the food chain), and geographic range. According to the competitive exclusion principle, no two species can occupy the same niche in the same environment for a long time. The parameters of a realized niche are described by the realized niche width of that species. Some plants and animals, called specialists, need specific habitats and surroundings to survive, such as the spotted owl, which lives specifically in old growth forests. Other plants and animals, called generalists, are not as particular and can survive in a range of conditions, for example the dandelion.
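The competitive exclusion principle invoked above is often formalized with the two-species Lotka–Volterra competition model: when interspecific competition is stronger than intraspecific competition (competition coefficients above 1), one species excludes the other, whereas niche differentiation (coefficients below 1) allows stable coexistence. A minimal numerical sketch, with purely illustrative parameter values not taken from the article:

```python
def lotka_volterra(n1, n2, r=1.0, k1=100.0, k2=100.0, a12=1.0, a21=1.0,
                   dt=0.01, steps=200_000):
    """Euler integration of two-species Lotka-Volterra competition.

    a12 and a21 are the competition coefficients: the per-capita effect
    of each species on the other, relative to its effect on itself.
    """
    for _ in range(steps):
        d1 = r * n1 * (1.0 - (n1 + a12 * n2) / k1)
        d2 = r * n2 * (1.0 - (n2 + a21 * n1) / k2)
        n1 += d1 * dt
        n2 += d2 * dt
    return n1, n2

# Strong niche overlap (coefficients > 1): the initially commoner
# species excludes the other.
print(lotka_volterra(60.0, 50.0, a12=1.2, a21=1.2))
# Niche differentiation (coefficients < 1): stable coexistence.
print(lotka_volterra(60.0, 50.0, a12=0.5, a21=0.5))
```

With symmetric coefficients below 1, both species settle at K/(1 + a), here about 66.7, illustrating coexistence through niche differentiation rather than exclusion.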
Biology and health sciences
Ecology
null
67252
https://en.wikipedia.org/wiki/Birch
Birch
A birch is a thin-leaved deciduous hardwood tree of the genus Betula (), in the family Betulaceae, which also includes alders, hazels, and hornbeams. It is closely related to the beech-oak family Fagaceae. The genus Betula contains 30 to 60 known taxa of which 11 are on the IUCN 2011 Red List of Threatened Species. They are typically short-lived pioneer species and are widespread in the Northern Hemisphere, particularly in northern areas of temperate climates and in boreal climates. Birch wood is used for a wide range of purposes. Description Birch species are generally small to medium-sized trees or shrubs, mostly of northern temperate and boreal climates. The simple leaves are alternate, singly or doubly serrate, feather-veined, petiolate and stipulate. They often appear in pairs, but these pairs are really borne on spur-like, two-leaved, lateral branchlets. The fruit is a small samara, although the wings may be obscure in some species. They differ from the alders (Alnus, another genus in the family) in that the female catkins are not woody and disintegrate at maturity, falling apart to release the seeds, unlike the woody, cone-like female alder catkins. The bark of all birches is characteristically marked with long, horizontal lenticels, and often separates into thin, papery plates, especially upon the paper birch. Distinctive colors give the common names gray, white, black, silver and yellow birch to different species. The buds, forming early and full-grown by midsummer, are all lateral, without a terminal bud forming; the branch is prolonged by the upper lateral bud. The wood of all the species is close-grained with a satiny texture and capable of taking a fine polish; its fuel value is fair. Flower and fruit The flowers are monoecious, and open with or before the leaves. Once fully grown, these leaves are usually long on three-flowered clusters in the axils of the scales of drooping or erect catkins or aments. 
Staminate catkins are pendulous, clustered, or solitary in the axils of the last leaves of the branch of the year or near the ends of the short lateral branchlets of the year. They form in early autumn and remain rigid during the winter. The scales of the mature staminate catkins are broadly ovate, rounded, yellow or orange colour below the middle and dark chestnut brown at apex. Each scale bears two bractlets and three sterile flowers, each flower consisting of a sessile, membranous, usually two-lobed, calyx. Each calyx bears four short filaments with one-celled anthers or strictly, two filaments divided into two branches, each bearing a half-anther. Anther cells open longitudinally. The pistillate segments are erect or pendulous, and solitary, terminal on the two-leaved lateral spur-like branchlets of the year. The pistillate scales are oblong-ovate, three-lobed, pale yellow-green often tinged with red, becoming brown at maturity. These scales bear two or three fertile flowers, each flower consisting of a naked ovary. The ovary is compressed, two-celled, and crowned with two slender styles; the ovule is solitary. Each scale bears a single small, winged nut that is oval, with two persistent stigmas at the apex. Taxonomy Subdivision Betula species are organised into five subgenera. 
Birches native to Eurasia include Betula albosinensis – Chinese red birch (northern + central China) Betula alnoides – alder-leaf birch (China, Himalayas, northern Indochina) Betula ashburneri – (Bhutan, Tibet, Sichuan, Yunnan Provinces in China) Betula baschkirica – (eastern European Russia) Betula bomiensis – (Tibet) Betula browicziana – (Turkey and Georgia) Betula buggsii – (China) Betula calcicola – (Sichuan + Yunnan Provinces in China) Betula celtiberica – (Spain) Betula chichibuensis – (Chichibu region of Japan) Betula chinensis – Chinese dwarf birch (China, Korea) Betula coriaceifolia – (Uzbekistan) Betula corylifolia – (Honshu Island in Japan) Betula costata – (northeastern China, Korea, Primorye region of Russia) Betula cylindrostachya – (Himalayas, southern China, Myanmar) Betula dahurica – (eastern Siberia, Russian Far East, northeastern China, Mongolia, Korea, Japan) Betula delavayi – (Tibet, southern China) Betula ermanii – Erman's birch (eastern Siberia, Russian Far East, northeastern China, Korea, Japan) Betula falcata – (Tajikistan) Betula fargesii – (Chongqing + Hubei Provinces in China) Betula fruticosa – (eastern Siberia, Russian Far East, northeastern China, Mongolia, Korea, Japan) Betula globispica – (Honshu Island in Japan) Betula gmelinii – (Siberia, Mongolia, northeastern China, Korea, Hokkaido Island in Japan) Betula grossa – Japanese cherry birch (Japan) Betula gynoterminalis – (Yunnan Province in China) Betula honanensis – (Henan Province in China) Betula humilis or Betula kamtschatica – Kamchatka birch platyphylla (northern + central Europe, Siberia, Kazakhstan, Xinjiang, Mongolia, Korea) Betula insignis – (southern China) Betula karagandensis – (Kazakhstan) Betula klokovii – (Ukraine) Betula kotulae – (Ukraine) Betula luminifera – (China) Betula maximowicziana – monarch birch (Japan, Kuril Islands) Betula medwediewii – Caucasian birch (Turkey, Iran, Caucasus) Betula megrelica – (Republic of Georgia) Betula microphylla – (Siberia, 
Mongolia, Xinjiang, Kazakhstan, Kyrgyzstan, Uzbekistan) Betula nana – dwarf birch (northern + central Europe, Russia, Siberia, Greenland, Northwest Territories of Canada) Betula pendula – silver birch (widespread in Europe and northern Asia; Morocco; naturalized in New Zealand and scattered locations in US + Canada) Betula platyphylla – (Betula pendula var. platyphylla)—Siberian silver birch (Siberia, Russian Far East, Manchuria, Korea, Japan, Alaska, western Canada) Betula potamophila – (Tajikistan) Betula potaninii – (southern China) Betula psammophila – (Kazakhstan) Betula pubescens – downy birch, also known as white, European white or hairy birch (Europe, Siberia, Greenland, Newfoundland; naturalized in scattered locations in US) Betula raddeana – (Caucasus) Betula saksarensis – (Khakassiya region of Siberia) Betula saviczii – (Kazakhstan) Betula schmidtii – (northeastern China, Korea, Japan, Primorye region of Russia) Betula sunanensis – (Gansu Province of China) Betula szechuanica – (Betula pendula var. szechuanica)—Sichuan birch (Tibet, southern China) Betula tianshanica – (Kazakhstan, Kyrgyzstan, Tajikistan, Uzbekistan, Xinjiang, Mongolia) Betula utilis – Himalayan birch (Afghanistan, Central Asia, China, Tibet, Himalayas) Betula wuyiensis – (Fujian Province of China) Betula zinserlingii – (Kyrgyzstan) Note: many American texts have B. pendula and B. pubescens confused, though they are distinct species with different chromosome numbers. Birches native to North America include Betula alleghaniensis – yellow birch (B. 
lutea) (eastern Canada, Great Lakes, upper eastern US, Appalachians) Betula caerulea – blue birch (northeast of North America) Betula cordifolia – mountain paper birch (eastern Canada, Great Lakes, New England US) Betula glandulosa – American dwarf birch (Siberia, Mongolia, Russian Far East, Alaska, Canada, Greenland, mountains of western US and New England, Adirondacks) Betula kenaica – Kenai birch ( Alaska, northwestern North America) Betula lenta – sweet birch, cherry birch, or black birch (Quebec, Ontario, eastern US) Betula michauxii – Newfoundland dwarf birch (Newfoundland, Labrador, Quebec, Nova Scotia) Betula minor – dwarf white birch (eastern Canada, mountains of northern New England and Adirondacks) Betula murrayana – Murray's birch (Great Lakes endemic) Betula nana – dwarf birch or bog birch (also in northern Europe and Asia) Betula neoalaskana – Alaska paper birch also known as Alaska birch or Resin birch (Alaska and northern Canada) Betula nigra – river birch or black birch (eastern US) Betula occidentalis – water birch or red birch (B. fontinalis) (Alaska, Yukon, Northwest Territories, western Canada, western US) Betula papyrifera – paper birch, canoe birch or American white birch (Alaska, most of Canada, northern US) Betula populifolia – gray birch (eastern Canada, northeastern US) Betula pumila – swamp birch (Alaska, Canada, northern US) Betula uber – Virginia round-leaf birch (southwestern Virginia) Etymology The common name birch comes from Old English birce, bierce, from Proto-Germanic *berk-jōn (cf. German Birke, West Frisian bjirk), an adjectival formation from *berkōn (cf. Dutch berk, Low German Bark, Danish birk, Norwegian bjørk), itself from the Proto-Indo-European root *bʰerHǵ- ~ bʰrHǵ-, which also gave Lithuanian béržas, Latvian Bērzs, Russian берёза (berëza), Ukrainian береза (beréza), Albanian bredh 'fir', Ossetian bærz(æ), Sanskrit bhurja, Polish brzoza, Latin fraxinus 'ash (tree)'. 
This root is presumably derived from *bʰreh₁ǵ- 'to shine, whiten', in reference to the birch's white bark. The Proto-Germanic rune berkanan is named after the birch. The generic name Betula is from Latin, which is a diminutive borrowed from Gaulish betua (cf. Old Irish bethe, Welsh bedw). Evolutionary history Within Betulaceae, birches are most closely related to alder. The oldest known birch fossils are those of Betula leopoldae from the Klondike Mountain Formation in Washington State, US, which date to the early Eocene (Ypresian) around 49 million years ago. Ecology Birches often form even-aged stands on light, well-drained, particularly acidic soils. They are regarded as pioneer species, rapidly colonizing open ground especially in secondary successional sequences following a disturbance or fire. Birches are early tree species to become established in primary successions, and can become a threat to heathland if the seedlings and saplings are not suppressed by grazing or periodic burning. Birches are generally lowland species, but some species, such as Betula nana, have a montane distribution. In the British Isles, there is some difference between the environments of Betula pendula and Betula pubescens, and some hybridization, though both are "opportunists in steady-state woodland systems". Mycorrhizal fungi, including sheathing (ecto)mycorrhizas, are found in some cases to be beneficial to tree growth. A large number of lepidopteran insects feed on birch foliage. Uses Because of the hardness of birch, it is easier to shape it with power tools; it is quite difficult to work it with hand tools. Birch wood is fine-grained and pale in colour, often with an attractive satin-like sheen. Ripple figuring may occur, increasing the value of the timber for veneer and furniture-making. The highly decorative Masur (or Karelian) birch, from Betula verrucosa var. carelica, has ripple textures combined with attractive dark streaks and lines. 
Birch plywood is made from laminations of birch veneer. It is light but strong, and has many other good properties. It is among the strongest and dimensionally most stable plywoods, although it is unsuitable for exterior use. Birch plywood is used to make longboards (skateboard), giving it a strong yet flexible ride. It is also used (often in very thin grades with many laminations) for making model aircraft. Extracts of birch are used for flavoring or leather oil, and in cosmetics such as soap or shampoo. In the past, commercial oil of wintergreen (methyl salicylate) was made from the sweet birch (Betula lenta). Birch-tar or Russian oil extracted from birch bark is thermoplastic and waterproof; it was used as a glue on, for example, arrows, and also for medicinal purposes. Fragrant twigs of wintergreen group birches are used in saunas. Birch is also associated with the feast of Pentecost in Central and Eastern Europe and Siberia, where its branches are used as decoration for churches and homes on this day. Ground birch bark, fermented in sea water, is used for seasoning the woolen, hemp or linen sails and hemp rope of traditional Norwegian boats. Birch twigs bound in a bundle, also called birch, were used for birching, a form of corporal punishment. Many Native Americans in the United States and Indigenous peoples in Canada prize the birch for its bark, which because of its light weight, flexibility, and the ease with which it can be stripped from fallen trees, is often used for the construction of strong, waterproof but lightweight canoes, bowls, and wigwams. The Hughes H-4 Hercules was made mostly of birch wood, despite its better-known moniker, "The Spruce Goose". Birch plywood was specified by the BBC as the only wood that can be used in making the cabinets of the long-lived LS3/5A loudspeaker. Birch is used as firewood because of its high calorific value per unit weight and unit volume. It burns well, without popping, even when frozen and freshly hewn. 
The bark will burn very well even when wet because of the oils it contains. With care, it can be split into very thin sheets that will ignite from even the smallest of sparks. Birch wood can be used to smoke foods. Birch seeds are used as leaf litter in miniature terrain models. Birch oil is used in the manufacture of Russia leather, a water-resistant leather. As food The inner bark is considered edible as an emergency food, even when raw. It can be dried and ground into flour, as was done by Native Americans and early settlers. It can also be cut into strips and cooked like noodles. The sap can be drunk or used to make syrup and birch beer. Tea can be made from the red inner bark of black birches. Cultivation White-barked birches in particular are cultivated as ornamental trees, largely for their appearance in winter. The Himalayan birch, Betula utilis, especially the variety or subspecies jacquemontii, is among the most widely planted for this purpose. It has been cultivated since the 1870s, and many cultivars are available, including 'Doorenbos', 'Grayswood Ghost' and 'Silver Shadow'; 'Knightshayes' has a slightly weeping habit. Other species with ornamental white bark include Betula ermanii, Betula papyrifera, Betula pendula and Betula raddeana. Medical Approved topical medicine In the European Union, a prescription gel containing birch bark extract (commercial name Episalvan, betulae cortex dry extract (5–10 : 1); extraction solvent: n-heptane 95% (w/w)) was approved in 2016 for the topical treatment of minor skin wounds in adults. Although its mechanism of action in helping to heal injured skin is not fully understood, birch bark extract appears to stimulate the growth of keratinocytes which then fill the wound. Research and traditional medicine Preliminary research indicates that the phytochemicals, betulin and possibly other triterpenes, are active in Episalvan gel and wound healing properties of birch bark. 
Over centuries, birch bark was used in traditional medicine practices by North American indigenous people for treating superficial wounds by applying bark directly to the skin. Splints made with birch bark were used as casts for broken limbs in the 16th century. Paper Wood pulp made from birch gives relatively long and slender fibres for a hardwood. The thin walls cause the fibre to collapse upon drying, giving a paper with low bulk and low opacity. The birch fibres are, however, easily fibrillated and give about 75% of the tensile strength of softwood. The low opacity makes it suitable for making glassine. In India, the birch (Sanskrit: भुर्ज, bhurja) holds great historical significance in the culture of North India, where the thin bark coming off in winter was extensively used as writing paper. Birch paper (Sanskrit: भुर्ज पत्र, bhurja patra) is exceptionally durable and was the material used for many ancient Indian texts. The Roman period Vindolanda tablets also use birch as a material on which to write and birch bark was used widely in ancient Russia as notepaper (beresta) and for decorative purposes and even making footwear (lapti) and baskets. Use in musical instruments Birch wood is sometimes used as a tonewood for semiacoustic and acoustic guitar bodies, and occasionally for solid-body guitar bodies. It is also a common material used in mallets for keyboard percussion. Drum manufacturers, such as Gretsch and Yamaha, have been known to use birch wood in the construction of drum shells, owing to its strength and colour which takes stain in an appealing way, and which can also amber over very well, while also giving the drums an appealing tone which changes depending on the type of birch used. Culture Birches have spiritual importance in several religions, both modern and historical. 
In Celtic cultures, the birch symbolises growth, renewal, stability, initiation, and adaptability because it is highly adaptive and able to sustain harsh conditions with casual indifference. Proof of this adaptability is seen in its easy and eager ability to repopulate areas damaged by forest fires or clearings. Birches are also associated with Tír na nÓg, the land of the dead and the Sidhe, in Gaelic folklore, and as such frequently appear in Scottish, Irish, and English folksongs and ballads in association with death, or fairies, or returning from the grave. The leaves of the silver birch tree are used in the festival of St George, held in Novosej and other villages in Albania. The birch is New Hampshire's state tree and the national tree of Finland and Russia. The yellow birch is the official tree of the province of Quebec (Canada). The birch is a very important element in Russian culture and represents the grace, strength, tenderness and natural beauty of Russian women as well as the closeness to nature of the Russians. It is associated with marriage and love. There are numerous folkloric Russian songs in which the birch tree occurs. The Ornäs birch is the national tree of Sweden. The Czech word for the month of March, Březen, is derived from the Czech word bříza meaning birch, as birch trees flower in March under local conditions. The silver birch tree is of special importance to the Swedish city of Umeå. In 1888, the Umeå city fire spread all over the city and nearly burnt it down to the ground, but some birches, supposedly, halted the spread of the fire. To protect the city against future fires, wide avenues were created, and these were lined with silver birch trees all over the city. Umeå later adopted the unofficial name of "City of the Birches (Björkarnas stad)". Also, the ice hockey team of Umeå is called Björklöven, translated into English as "The Birch Leaves". "Swinging" birch trees was a common game for American children in the nineteenth century. 
American poet Lucy Larcom's "Swinging on a Birch Tree" celebrates the game. The poem inspired Robert Frost, who pays homage to the act of climbing birch trees in his more famous poem, "Birches". Frost once said that "it was almost sacrilegious climbing a birch tree till it bent, till it gave and swooped to the ground, but that's what boys did in those days".
Biology and health sciences
Fagales
null
67401
https://en.wikipedia.org/wiki/Grapefruit
Grapefruit
The grapefruit (Citrus × paradisi) is a subtropical citrus tree known for its relatively large, sour to semi-sweet, somewhat bitter fruit. The flesh of the fruit is segmented and varies in color from pale yellow to dark red. Grapefruits originated in Barbados in the 18th century. They are a citrus hybrid that was created through an accidental cross between the sweet orange (C. × sinensis) and the pomelo (C. maxima), both of which were introduced to the Caribbean from Asia in the 17th century. It has also been called the 'forbidden fruit'. In the past it was called the pomelo, but that term is now mostly used as the common name for Citrus maxima. Grapefruit–drug interactions are common, as the juice contains furanocoumarins that interfere with the metabolism of many drugs. This can prolong and intensify the effects of those drugs, leading to multiple side-effects such as abnormal heart rhythms, bleeding inside the stomach, low blood pressure, difficulty breathing, and dizziness. Description The evergreen grapefruit trees usually grow to around tall, although they may reach . The leaves are up to long, thin, glossy, and dark green. They produce white flowers with four or five petals. The fruit is yellow-orange skinned and generally an oblate spheroid in shape; it ranges in diameter from . Its flesh is segmented and acidic, varying in color depending on the cultivars, which include white, pink, and red pulps of varying sweetness (generally, the redder varieties are the sweetest). Varieties White grapefruit varieties include Camulos, Cecily, Duncan, Frost Marsh, Genetic Dwarf Marsh, Hall, Jochimsen, Marsh seedy, Nicholson navel, Perlis, Reed Marsh, Tetraploid, Warren Marsh, and Whitney Marsh. Red or pink grapefruit varieties include Flame, Foster Pink, Henderson Ruby, Hudson Foster, Marsh Pink, Ray Ruby, Redblush, Rio Red, Shambar, and Star Ruby. 
The 1929 'Ruby Red' (or 'Redblush') patent was associated with real commercial success, which came after the discovery of a red grapefruit growing on a pink variety. The Texas Legislature designated this variety the official "State Fruit of Texas" in 1993. Using radiation to trigger mutations, new varieties were developed to retain the red tones that typically faded to pink. The 'Rio Red' variety is a 1984 registered Texas grapefruit with registered trademarks Rio Star and Ruby-Sweet, also sometimes promoted as Reddest and Texas Choice. The 'Rio Red' is a mutation-bred variety that was developed by treatment of bud sticks with thermal neutrons. The improved attributes of this mutant variety are deeper red fruit and juice color and wider adaptation. The 'Star Ruby' is the darkest of the red varieties. Developed from an irradiated 'Hudson' grapefruit ('Hudson' being a limb sport of 'Foster', itself a limb sport of the 'Walters'), it has found limited commercial success because it is more difficult to grow than other varieties. As food Nutrition Raw white grapefruit is 90% water, 8% carbohydrates, 1% protein, and contains negligible fat (table). In a reference amount of , raw grapefruit provides of food energy and is a rich source of vitamin C (37% of the Daily Value), with no other micronutrients in significant amounts (table). Culinary Like other citrus fruits, grapefruits are sour because of their citric acid content; grapefruit juice contains about half the citric acid content of lemon juice, and nearly 50% more than orange juice. In Costa Rica, especially in Atenas, grapefruit are often cooked with sugar to balance their sourness, rendering them as sweets; or they are stuffed with dulce de leche as a dessert. In Haiti, grapefruit is used primarily for its juice (jus de Chadèque), but also is used to make jam (confiture de Chadèque). Grapefruit varieties are differentiated by the flesh color of fruit they produce. Common varieties have yellow or pink pulp. 
Flavors range from highly acidic and somewhat sour to sweet and tart, resulting from the composition of sugars (mainly sucrose), organic acids (mainly citric acid), and monoterpenes and sesquiterpenes providing aromas. Grapefruit mercaptan, a sulfur-containing terpene, is one of the aroma compounds influencing the taste and odor of grapefruit, compared with other citrus fruits. Drug interactions Grapefruit and grapefruit juice interact with many drugs, resulting in numerous adverse effects including bone marrow suppression, nephrotoxicity, abnormal heart rhythm, rhabdomyolysis, hypotension, gastrointestinal bleeding, dizziness, and respiratory depression, depending on the drug involved. One interaction occurs from grapefruit furanocoumarins, such as bergamottin and 6',7'-dihydroxybergamottin, which occur in both flesh and peel. Furanocoumarins inhibit the CYP3A4 enzyme (among others from the cytochrome P450 enzyme family responsible for metabolizing 90% of drugs), which metabolizes many medications. If a drug's breakdown for removal is lessened, then the level of that drug in the blood may become and remain high, leading to adverse effects. On the other hand, some drugs must be metabolized to become active, and inhibiting CYP3A4 may lead to reduced drug effects. Another effect is that grapefruit compounds may inhibit the absorption of drugs in the intestine. If a drug is not absorbed, then not enough of it is in the blood to have a therapeutic effect. Each affected drug shows either a specific increase or decrease of effect. One whole grapefruit or a glass of grapefruit juice is enough to cause drug overdose toxicity. Typically, drugs that are incompatible with grapefruit are marked as such on the container or package insert. Production In 2022, world production of grapefruits (combined with pomelos) was 9.8 million tonnes, led by China with 53% of the world total, with Vietnam as a secondary producer (table). 
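The mechanism described in the drug-interaction section (less breakdown, so blood levels "become and remain high") can be sketched with a one-compartment pharmacokinetic model, in which total exposure (AUC) equals dose divided by clearance, so halving metabolic clearance doubles exposure. The dose, volume of distribution, and clearance figures below are hypothetical, chosen only to illustrate the arithmetic; they describe no particular drug.

```python
import math

def auc_iv_bolus(dose_mg, clearance_l_per_h):
    """Total drug exposure (AUC, mg*h/L) for an IV bolus: dose / clearance."""
    return dose_mg / clearance_l_per_h

def concentration(dose_mg, v_d_l, clearance_l_per_h, t_h):
    """Plasma concentration (mg/L) under first-order elimination."""
    k = clearance_l_per_h / v_d_l  # elimination rate constant (1/h)
    return (dose_mg / v_d_l) * math.exp(-k * t_h)

# Hypothetical drug: 10 mg dose, 50 L volume of distribution, 5 L/h clearance.
# If CYP3A4 inhibition halves clearance to 2.5 L/h, exposure doubles:
print(auc_iv_bolus(10, 5.0), auc_iv_bolus(10, 2.5))  # 2.0 4.0
# ...and more drug remains in the blood 12 hours after the dose:
print(round(concentration(10, 50, 5.0, 12), 4))   # 0.0602
print(round(concentration(10, 50, 2.5, 12), 4))   # 0.1098
```

The same arithmetic explains the opposite case in the text: a prodrug that needs CYP3A4 to become active loses, rather than gains, effect when the enzyme is inhibited.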
Pests and diseases Grapefruits are hosts for fruit flies (family Tephritidae) such as A. suspensa, which lay their eggs in overripe or spoiled grapefruits, sometimes causing serious damage in plantations in the Americas. In sub-Saharan Africa, the Citrus swallowtail, Papilio demodocus, is a minor pest of Citrus plantations. Grapefruits are subject to several diseases of Citrus trees, including citrus tristeza virus, citrus canker (caused by a bacterium, Xanthomonas), and the vector-transmitted citrus greening disease, where the vector is a psyllid bug, and the pathogen is a bacterium, Liberibacter. History Grapefruit originated as a natural hybrid. One ancestor of the grapefruit was the Jamaican sweet orange (Citrus sinensis), itself an ancient hybrid of Asian origin; the other was the Indonesian pomelo (C. maxima). The pomelo was the female ancestor; the sweet orange, itself a hybrid, was the male. Both C. sinensis and C. maxima were present in the West Indies by 1692. One story of the fruit's origin is that a 17th-century trader named 'Captain Shaddock' brought pomelo seeds to Jamaica and bred the first fruit, which were then called shaddocks. The grapefruit then probably originated as a naturally occurring hybrid between the two plants some time after they had been introduced there. A hybrid fruit, called forbidden fruit, was first documented in 1750 (along with 14 other citrus fruits including the guiney orange) by a Welshman, the Rev. Griffith Hughes, in his The Natural History of Barbados. However, Hughes's forbidden fruit may have been a plant distinct from grapefruit although still closely related to it. In 1814, the British naturalist and plantation owner John Lunan published the term grapefruit to describe a similar Jamaican citrus plant. Lunan reported that the name was due to its similarity in taste to the grape (Vitis vinifera). 
An alternative explanation is that this name may allude to clusters of the fruit on the tree, which often appear similar to bunches of grapes. In 1830, the Jamaican version of the plant was given the botanical name Citrus paradisi by the Scottish physician and botanist James Macfadyen. Macfadyen identified two varieties – one called forbidden fruit, the other Barbadoes Grape Fruit. Macfadyen distinguished between the two plants by fruit shape, with the Barbados grapefruit being piriform (pear-shaped) while the forbidden fruit was "maliformis". Macfadyen's and Hughes's descriptions differ, so it is not clear that the two reports describe the same plant. It has been suggested that Hughes's golden orange may actually have been a grapefruit, while his forbidden fruit was a different variety that may since have been lost. A citrus called forbidden fruit or shaddette has been discovered in Saint Lucia; it may be the plant described by Hughes and Macfadyen. The name grape-fruit was used during the 19th century to refer to pomelos. The grapefruit itself was brought to Florida by the French businessman Count Odet Philippe in 1823, in what is now known as Safety Harbor. Further crosses have produced the tangelo (1905), the Minneola tangelo (1931), and the oroblanco (1984). The grapefruit's true origins were not determined until the 1940s, at which point its official name was altered to Citrus × paradisi, the × identifying it as a hybrid. An early pioneer in the American citrus industry was Kimball C. Atwood, a wealthy entrepreneur who founded the Atwood Grapefruit Company in the late 19th century. The Atwood Grove became the largest grapefruit grove in the world, with a yearly output of 80,000 boxes of fruit. There, pink grapefruit was discovered in 1906.
Great Red Spot
The Great Red Spot is a persistent high-pressure region in the atmosphere of Jupiter, producing an anticyclonic storm that is the largest in the Solar System. It is the most recognizable feature on Jupiter, owing to its red-orange color, whose origin is still unknown. Located 22 degrees south of Jupiter's equator, it produces wind speeds of up to 432 km/h (268 mph). It was first observed in September 1831, with 60 recorded observations between then and 1878, when continuous observations began. A similar spot was observed from 1665 to 1713; if this is the same storm, it has existed for more than three centuries, but a study from 2024 suggests this is not the case. Observation history First observations The Great Red Spot may have existed before 1665, but it could be that the present spot was first seen only in 1830, and was well studied only after a prominent appearance in 1879. The storm that was seen in the 17th century may have been different from the storm that exists today. A long gap separates its period of current study after 1830 from its 17th-century discovery. Whether the original spot dissipated and reformed, whether it faded, or whether the observational record was simply poor is unknown. The first sighting of the Great Red Spot is often credited to Robert Hooke, who described a spot on the planet in May 1664. However, it is likely that Hooke's spot was not only in another belt altogether (the North Equatorial Belt, as opposed to the current Great Red Spot's location in the South Equatorial Belt), but also that it was in the shadow of a transiting moon, most likely Callisto. Far more convincing is Giovanni Cassini's description of a "permanent spot" the following year. With fluctuations in visibility, Cassini's spot was observed from 1665 to 1713, but the 118-year observational gap between 1713 and 1831 makes the identity of the two spots inconclusive. The older spot's shorter observational history and slower motion than the modern spot make it difficult to conclude that they are the same.
A minor mystery concerns a Jovian spot depicted in a 1711 canvas by Donato Creti, which is exhibited in the Vatican. Part of a series of panels in which different (magnified) heavenly bodies serve as backdrops for various Italian scenes, all overseen by the astronomer Eustachio Manfredi for accuracy, Creti's painting is the first known depiction of the Great Red Spot as red (albeit raised to the Jovian northern hemisphere due to an optical inversion inherent to the era's telescopes). No Jovian feature was explicitly described in writing as red before the late 19th century. The Great Red Spot has been observed since 5 September 1831. By 1879, over 60 observations had been recorded. Since it came into prominence in 1879, it has been under continuous observation. A 2024 study of historical observations suggests that the "permanent spot" observed from 1665 to 1713 may not be the same as the modern Great Red Spot observed since 1831. It is suggested that the original spot disappeared, and later another spot formed, which is the one seen today. Late 20th and 21st centuries On 25 February 1979, as the Voyager 1 spacecraft flew past Jupiter, it transmitted the first detailed image of the Great Red Spot, in which fine cloud details were visible. The colorful, wavy cloud pattern seen to the left (west) of the Red Spot is a region of extraordinarily complex and variable wave motion. In the 21st century, the major diameter of the Great Red Spot has been observed to be shrinking. At the start of 2004, its length was about half that of a century earlier, when it was about three times the diameter of Earth. At the present rate of reduction, it will become circular by 2040. It is not known how long the spot will last, or whether the change is a result of normal fluctuations. In 2019, the Great Red Spot began "flaking" at its edge, with fragments of the storm breaking off and dissipating.
The shrinking and "flaking" fueled speculation from some astronomers that the Great Red Spot could dissipate within 20 years. However, other astronomers believe that the apparent size of the Great Red Spot reflects its cloud coverage rather than the size of the actual, underlying vortex, and that the flaking events can be explained by interactions with other cyclones or anticyclones, including incomplete absorptions of smaller systems; if this is the case, the Great Red Spot is not in danger of dissipating. A smaller spot, designated Oval BA, which formed in March 2000 from the merging of three white ovals, has turned reddish in color. Astronomers have named it the Little Red Spot or Red Jr. As of 5 June 2006, the Great Red Spot and Oval BA appeared to be approaching convergence. The storms pass each other about every two years, but the passings of 2002 and 2004 were of little significance. Amy Simon-Miller, of the Goddard Space Flight Center, predicted the storms would have their closest passing on 4 July 2006. She worked with Imke de Pater and Phil Marcus of UC Berkeley as well as a team of professional astronomers beginning in April 2006 to study the storms using the Hubble Space Telescope; on 20 July 2006, the two storms were photographed passing each other by the Gemini Observatory without converging. In May 2008, a third storm turned red. The Juno spacecraft, which entered into a polar orbit around Jupiter in 2016, flew over the Great Red Spot upon its close approach to Jupiter on 11 July 2017, taking several images of the storm from close range above the cloud tops. Over the duration of the Juno mission, the spacecraft continued to study the composition and evolution of Jupiter's atmosphere, especially its Great Red Spot. The Great Red Spot should not be confused with the Great Dark Spot, a feature observed near the northern pole of Jupiter in 2000 with the Cassini–Huygens spacecraft.
There is also a feature in the atmosphere of Neptune called the Great Dark Spot. The latter feature was imaged by Voyager 2 in 1989 and may have been an atmospheric hole rather than a storm. It was no longer present as of 1994, although a similar spot had appeared farther to the north. Mechanical dynamics Jupiter's Great Red Spot rotates counterclockwise, with a period of about 4.5 Earth days, or 11 Jovian days, as of 2008. As of 3 April 2017, the Great Red Spot was 1.3 times the diameter of Earth in width. The cloud-tops of this storm rise above the surrounding cloud-tops. The storm has continued to exist for centuries because there is no planetary surface (only a mantle of hydrogen) to provide friction; circulating gas eddies persist for a very long time in the atmosphere because there is nothing to oppose their angular momentum. Infrared data has long indicated that the Great Red Spot is colder (and thus higher in altitude) than most of the other clouds on the planet. The upper atmosphere above the storm, however, has substantially higher temperatures than the rest of the planet. Acoustic (sound) waves rising from the turbulence of the storm below have been proposed as an explanation for the heating of this region. The acoustic waves travel vertically to a height well above the storm, where they break in the upper atmosphere, converting wave energy into heat. This creates a region of upper atmosphere that is several hundred kelvins warmer than the rest of the planet at this altitude. The effect is described as like "crashing [...] ocean waves on a beach". Careful tracking of atmospheric features revealed the Great Red Spot's counterclockwise circulation as far back as 1966, observations dramatically confirmed by the first time-lapse movies from the Voyager fly-bys. The spot is confined by a modest eastward jet stream to its south and a very strong westward one to its north.
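The figures above allow a back-of-envelope consistency check between the spot's size, rotation period, and wind speed. Treating the spot as a circle 1.3 Earth diameters across (using Earth's mean diameter of about 12,742 km) whose edge clouds complete one circuit per ~4.5-day rotation period gives:

```python
import math

# Crude model, not a measurement: a circular vortex whose edge travels
# one circumference per rotation period.
width_km = 1.3 * 12742          # ~16,565 km, from 1.3 Earth diameters
period_h = 4.5 * 24             # rotation period in hours

edge_speed_kmh = math.pi * width_km / period_h
print(round(edge_speed_kmh))    # ~482 km/h, the same order as the measured 432 km/h
```

Landing within roughly 10% of the measured 432 km/h peak winds is about as well as such a crude circular model can do, since the spot is actually elliptical and its fastest winds circulate in a ring inside the outer edge.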
Though winds around the edge of the spot peak at about 432 km/h (268 mph), currents inside it seem stagnant, with little inflow or outflow. The rotation period of the spot has decreased with time, perhaps as a direct result of its steady reduction in size. The Great Red Spot's latitude has been stable for the duration of good observational records, typically varying by about a degree. Its longitude, however, is subject to constant variation, including a 90-day longitudinal oscillation with an amplitude of ~1°. Because Jupiter does not rotate uniformly at all latitudes, astronomers have defined three different systems for defining longitude. System II is used for latitudes of more than 10 degrees and was originally based on the average rotational period of the Great Red Spot of 9h 55m 42s. Despite this, however, the spot has "lapped" the planet in System II at least 10 times since the early 19th century. Its drift rate has changed dramatically over the years and has been linked to the brightness of the South Equatorial Belt and the presence or absence of a South Tropical Disturbance. Internal depth and structure Jupiter's Great Red Spot (GRS) is an elliptically shaped anticyclone at 22 degrees south of the equator, in Jupiter's southern hemisphere. Although it is the largest anticyclonic storm in the Solar System (~16,000 km across), little is known about its internal depth and structure. Visible imaging and cloud-tracking from in-situ observation determined the velocity and vorticity of the GRS, whose peak winds are concentrated in a thin anticyclonic ring at 70–85% of the radius, along Jupiter's fastest westward-moving jet stream. During NASA's Juno mission, which arrived at Jupiter in 2016, gravity-signature and thermal infrared data were obtained that offered insight into the structural dynamics and depth of the GRS. During July 2017, the Juno spacecraft conducted a second pass of the GRS to collect Microwave Radiometer (MWR) scans of the GRS to determine how far the GRS extended toward the condensed H2O layer.
These MWR scans suggested that the GRS extends to a vertical depth of about 240 km below cloud level, where atmospheric pressure reaches an estimated 100 bar. Two methods of analysis used to constrain the data were the mascon approach, which found a depth of ~290 km, and the Slepian approach, which showed winds extending to ~310 km. These methods, along with gravity-signature MWR data, suggest that the GRS zonal winds persist at depth at roughly 50% of their velocity at the visible cloud level before wind decay begins at lower levels. This rate of wind decay and the gravity data suggest the depth of the GRS is between 200 and 500 km. Galileo and Cassini thermal infrared imaging and spectroscopy of the GRS were conducted during 1995–2008, in order to find evidence of thermal inhomogeneities within the internal vortex structure of the GRS. Previous thermal infrared temperature maps from the Voyager, Galileo, and Cassini missions suggested that the GRS is an anticyclonic vortex with a cold core within an upwelling, warmer annulus; these data show a gradient in the temperature of the GRS. Better understanding of Jupiter's atmospheric temperature, aerosol particle opacity, and ammonia gas composition was provided by thermal-IR imaging, which allowed the behavior of the visible cloud layers, thermal gradients, and compositional maps to be correlated directly with observational data collected over decades. During December 2000, high-spatial-resolution images from Galileo of an atmospheric turbulent area to the northwest of the GRS showed a thermal contrast between the warmest region of the anticyclone and regions to the east and west of the GRS. The vertical temperature structure of the GRS is constrained to the 100–600 mbar range, with the GRS core at approximately 400 mbar of pressure being 1.0–1.5 K warmer than regions of the GRS to the east and west, and 3.0–3.5 K warmer than regions to the north and south of the structure's edge.
This structure is consistent with data collected by VISIR (the VLT Mid-Infrared Imager Spectrometer on the ESO Very Large Telescope) imaging obtained in 2006; this revealed that the GRS was physically present at a wide range of altitudes within the atmospheric pressure range of 80–600 mbar, and confirmed the thermal infrared mapping result. To develop a model of the internal structure of the GRS, the Cassini instrument Composite Infrared Spectrometer (CIRS) and ground-based spatial imaging mapped the composition of the phosphine and ammonia aerosols (PH3, NH3) and para-hydrogen within the anticyclonic circulation of the GRS. The images collected from CIRS and ground-based imaging trace vertical motion in the Jovian atmosphere via the PH3 and NH3 spectra. The highest concentrations of PH3 and NH3 were found to the north of the GRS peripheral rotation. They aided in determining the southward jet movement and showed evidence of an increase in the altitude of the aerosol column at pressures ranging from 200–500 mbar. However, the NH3 composition data show a major depletion of NH3 below the visible cloud layer at the southern peripheral ring of the GRS; this lower opacity corresponds to a narrow band of atmospheric subsidence. The low mid-IR aerosol opacity, along with the temperature gradients, the altitude difference, and the vertical movement of the zonal winds, is involved in the development and sustainability of the vortex. The stronger atmospheric subsidence and compositional asymmetries of the GRS suggest that the structure is tilted from its northern edge down to its southern edge. The GRS depth and internal structure have been constantly changing over decades; however, there is still no accepted explanation of why it is 200–500 km deep while the jet streams that supply the force powering the GRS vortex lie well below the base of the structure.
Color and composition It is not known what causes the Great Red Spot's reddish color. Hypotheses supported by laboratory experiments suppose that it may be caused by chemical products created from the solar ultraviolet irradiation of ammonium hydrosulfide and the organic compound acetylene, which produces a reddish material—likely complex organic compounds called tholins. The high altitude of the compounds may also contribute to the coloring. The Great Red Spot varies greatly in hue, from almost brick-red to pale salmon or even white. The spot occasionally disappears, becoming evident only through the Red Spot Hollow, which is its location in the South Equatorial Belt (SEB). Its visibility is apparently coupled to the SEB; when the belt is bright white, the spot tends to be dark, and when it is dark, the spot is usually light. These periods when the spot is dark or light occur at irregular intervals; from 1947 to 1997, the spot was darkest in the periods 1961–1966, 1968–1975, 1989–1990, and 1992–1993.
Boeing 707
The Boeing 707 is an early American long-range narrow-body airliner, the first jetliner developed and produced by Boeing Commercial Airplanes. Developed from the Boeing 367-80 prototype first flown in 1954, the initial 707-120 first flew on December 20, 1957. Pan Am began regular 707 service on October 26, 1958. With versions produced until 1979, the 707 is a swept-wing quadjet with podded engines. Its larger fuselage cross-section allowed six-abreast economy seating, retained in the later 720, 727, 737, and 757 models. Although it was not the first commercial jetliner in service, the 707 was the first to be widespread, and is often credited with beginning the Jet Age. It dominated passenger air transport in the 1960s, and remained common through the 1970s, on domestic, transcontinental, and transatlantic flights, as well as in cargo and military applications. It established Boeing as a dominant airliner manufacturer with its 7x7 series. The initial 707-120 was powered by Pratt & Whitney JT3C turbojet engines. The shortened, long-range 707-138 and the more powerful 707-220 entered service in 1959. The longer-range, heavier 707-300/400 series has larger wings and a slightly stretched fuselage. Powered by Pratt & Whitney JT4A turbojets, the 707-320 entered service in 1959, and the 707-420 with Rolls-Royce Conway turbofans in 1960. The 720, a lighter short-range variant, was also introduced in 1960. Powered by Pratt & Whitney JT3D turbofans, the 707-120B debuted in 1961 and the 707-320B in 1962. The 707-120B typically flew 137 passengers in two classes, and could accommodate 174 in one class. With 141 passengers in two classes, the 707-320/420 could fly long intercontinental routes, and the 707-320B farther still. The 707-320C convertible passenger-freighter model entered service in 1963, and passenger 707s have been converted to freighter configurations. Military derivatives include the E-3 Sentry airborne early warning and control (AWACS) aircraft and the C-137 Stratoliner VIP transport.
In total, 865 Boeing 707s were produced and delivered, not including 154 Boeing 720s. Development Model 367-80 origins During and after World War II, Boeing was known for its military aircraft. The company had produced innovative and important bombers, from the B-17 Flying Fortress and B-29 Superfortress to the jet-powered B-47 Stratojet and B-52 Stratofortress, but its commercial aircraft were not as successful as those from Douglas Aircraft and other competitors. As Douglas and Lockheed dominated the postwar air transport boom, the demand for Boeing's offering, the 377 Stratocruiser, quickly faded with only 56 examples sold and no new orders as the 1940s drew to a close. That venture had netted the company a $15 million loss. During 1949 and 1950, Boeing embarked on studies for a new jet transport and saw advantages with a design aimed at both military and civilian markets. Aerial refueling was becoming a standard technique for military aircraft, with over 800 KC-97 Stratofreighters on order. The KC-97 was not ideally suited for operations with the USAF's new fleets of jet-powered fighters and bombers; this was where Boeing's new design would win military orders. As the first of a new generation of American passenger jets, Boeing wanted the aircraft's model number to emphasize the difference from its previous propeller-driven aircraft, which bore 300-series numbers. The 400-, 500- and 600-series were already used by their missiles and other products, so Boeing decided that the jets would bear 700-series numbers, and the first would be the 707. The marketing personnel at Boeing chose 707 because they thought it was more appealing than 700. The project was enabled by the Pratt & Whitney JT3C turbojet engine, the civilian version of the J57 that yielded much more power than the previous generation of jet engines and was proving itself with the B-52. 
Freed from the design constraints imposed by limitations of late-1940s jet engines, developing a robust, safe, and high-capacity jet aircraft was within reach for Boeing. Boeing studied numerous wing and engine layouts for its new transport/tanker, some of which were based on the B-47 and C-97, before settling on the 367-80 "Dash 80" quadjet prototype aircraft. Less than two years elapsed from project launch in 1952 to rollout on May 14, 1954, with the first Dash 80 flying on July 15, 1954. The prototype was a proof-of-concept aircraft for both military and civilian use. The United States Air Force was the first customer, using it as the basis for the KC-135 Stratotanker aerial refueling and cargo aircraft. Whether the passenger 707 would be profitable was far from certain. At the time, nearly all of Boeing's revenue came from military contracts. In a demonstration flight over Lake Washington outside Seattle, on August 7, 1955, test pilot Tex Johnston performed a barrel roll in the 367-80 prototype. Although he justified his unauthorized action to Bill Allen, then president of Boeing, as selling the airplane with a 1 'g' maneuver, he was told not to do it again. The wide fuselage of the Dash 80 was large enough for four-abreast (two-plus-two) seating like the Stratocruiser. Answering customers' demands and under Douglas competition, Boeing soon realized this would not provide a viable payload, so it widened the fuselage to allow five-abreast seating and use of the KC-135's tooling. Douglas Aircraft had launched its DC-8 with a wider fuselage. The airlines liked the extra space and six-abreast seating, so Boeing increased the 707's width again to compete. Production and testing The first flight of the first-production 707-120 took place on December 20, 1957, and FAA certification followed on September 18, 1958. Both test pilots Joseph John "Tym" Tymczyszyn and James R. Gannett were awarded the first Iven C.
Kincheloe Award for the test flights that led to certification. A number of changes were incorporated into the production models from the prototype. A Krueger flap was installed along the leading edge between the inner and outer engines on early 707-120 and -320 models. This was in response to de Havilland Comet overrun accidents, which occurred after over-rotating on take-off. Wing stall would also occur on the 707 with over-rotation, so the leading-edge flaps were added to prevent stalling even with the tail dragging on the runway. Further developments The initial standard model was the 707-120 with JT3C turbojet engines. Qantas ordered a shorter-bodied version called the 707-138, which was a -120 with six fuselage frames removed, three in front of the wings and three aft, shortening the fuselage. With the maximum takeoff weight the same as that of the -120, the -138 was able to fly the longer routes that Qantas needed. Braniff International Airways ordered the higher-thrust version with Pratt & Whitney JT4A engines, the 707-220. The final major derivative was the 707-320, which featured an extended-span wing and JT4A engines, while the 707-420 was the same as the -320, but with Conway turbofan engines. Though initially fitted with turbojet engines, the dominant engine for the Boeing 707 family was the Pratt & Whitney JT3D, a turbofan variant of the JT3C with lower fuel consumption and higher thrust. JT3D-engined 707s and 720s were denoted with a "B" suffix. While many 707-120Bs and -720Bs were conversions of existing JT3C-powered machines, 707-320Bs were available only as newly built aircraft, as they had a stronger structure to support an increased maximum takeoff weight, along with modifications to the wing. The 707-320B series enabled nonstop westbound flights from Europe to the West Coast of the United States and from the US to Japan.
The final 707 variant was the 707-320C, (C for "Convertible"), which had a large fuselage door for cargo. It had a revised wing with three-sectioned leading-edge flaps, improving takeoff and landing performance and allowing the ventral fin to be removed (although the taller fin was retained). The 707-320Bs built after 1963 used the same wing as the -320C and were known as 707-320B Advanced aircraft. In total, 1,010 707s were built for civilian use between 1958 and 1978, though many of these found their way to military service. The 707 production line remained open for purpose-built military variants until 1991, with the last new-build 707 airframes built as E-3 and E-6 aircraft. Traces of the 707 are still found in the 737, which uses a modified version of the 707's fuselage, as well as the same external nose and cockpit configurations as those of the 707. These were also used on the previous 727, while the 757 also used the 707 fuselage cross-section. Design Wings The 707's wings are swept back at 35°, and like all swept-wing aircraft, display an undesirable "Dutch roll" flying characteristic that manifests itself as an alternating combined yawing and rolling motion. Boeing already had considerable experience with this on the B-47 and B-52, and had developed the yaw damper system on the B-47 that would be applied to later swept-wing configurations like the 707. However, many pilots new to the 707 had no experience with this instability as they were mostly accustomed to flying straight-wing propeller-driven aircraft such as the Douglas DC-7 and Lockheed Constellation. On one customer-acceptance flight, where the yaw damper was turned off to familiarize the new pilots with flying techniques, a trainee pilot's actions violently exacerbated the Dutch roll motion and caused three of the four engines to be torn from the wings. 
The plane, a brand-new 707-227, N7071, destined for Braniff, crash-landed on a river bed north of Seattle at Arlington, Washington, killing four of the eight occupants. In his autobiography, test pilot Tex Johnston describes a Dutch roll incident he experienced as a passenger on an early commercial 707 flight. As the aircraft's movements did not cease and most of the passengers became ill, he suspected a misrigging of the directional autopilot (yaw damper). He went to the cockpit and found the crew unable to understand and resolve the situation. He introduced himself and relieved the ashen-faced captain, who immediately left the cockpit feeling ill. Johnston disconnected the faulty autopilot and manually stabilized the plane "with two slight control movements". Johnston recommended that Boeing increase the height of the tail fin and add a boosted rudder and a ventral fin. These modifications were aimed at mitigating Dutch roll by providing more directional stability in yaw. Engines The initial 707-120 was powered by Pratt & Whitney JT3C turbojet engines. The later JT3D-3B engines are readily identifiable by the large gray secondary-air inlet doors in the nose cowl. These doors are fully open (sucked in at the rear) during takeoff to provide additional air, and automatically close with increasing airspeed. The 707 was the first commercial jet aircraft to be fitted with clamshell-type thrust reversers. Turbocompressors The 707 uses engine-driven turbocompressors to supply compressed air for cabin pressurization. On many commercial 707s, the outer port (number 1) engine mount is distinctly different from the other three, as this engine is not fitted with a turbocompressor. Later-model 707s typically had this configuration, although American Airlines had turbocompressors on engines 2 and 3 only. Early 707 models often had turbocompressor fairings on all four engines, but with only two or three compressors installed.
Upgraded engines Pratt & Whitney, in a joint venture with Seven Q Seven (SQS) and Omega Air, selected the JT8D-219 low-bypass turbofan as a replacement powerplant for Boeing 707-based aircraft, calling their modified configuration a 707RE. Northrop Grumman selected the -219 to re-engine the US Air Force's fleet of 19 E-8 Joint STARS aircraft, which would allow the J-STARS more time on station due to the engine's greater fuel efficiency. NATO also planned to re-engine its fleet of E-3 Sentry AWACS aircraft. The -219 is publicized as being half the cost of the competing powerplant, the CFM International CFM56, and is 40 dB quieter than the original JT3D engines. Operational history The first commercial orders for the 707 came on October 13, 1955, when leading global carrier Pan Am committed to 20 Boeing 707s and 25 Douglas DC-8s, dramatically increasing its passenger capacity (in available revenue passenger seat-miles per hour and per day) over its existing fleet of propeller aircraft. The competition between the 707 and DC-8 was fierce. Pan Am timed these orders so that it would operate the "first-off" aircraft from each type's production line. Once the initial batch of the aircraft had been delivered and put into operation, Pan Am would have the distinction of being not only the launch customer for both transcontinental American jets, but the exclusive operator of American intercontinental jet transports for at least a year. The only rival in intercontinental jet aircraft production at the time was the British de Havilland Comet. However, the Comet series had been the subject of fatal accidents (due to design flaws) early in its introduction and was withdrawn from service; virtually redesigned from scratch, it was still smaller and slower than the 707 when reintroduced as the Comet 4.
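The seat-mile productivity jump behind Pan Am's order can be illustrated with rough arithmetic. The 137-seat two-class figure for the 707-120B appears earlier in this article; the cruise speeds and the 100-seat piston cabin below are assumed round numbers for illustration, not sourced values:

```python
def seat_miles_per_hour(seats, cruise_mph):
    """Available seat-miles generated per flying hour."""
    return seats * cruise_mph

jet = seat_miles_per_hour(137, 550)     # 707-120B: 137 two-class seats (from the text); ~550 mph assumed
piston = seat_miles_per_hour(100, 350)  # representative piston airliner: both figures assumed

print(round(jet / piston, 1))  # 2.2 — roughly double the hourly output per airframe
```

Since jets could also be scheduled for more flying hours per day than piston aircraft, the per-day advantage was larger still, which is why the text quotes capacity both per hour and per day.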
In addition, airlines and their passengers at the time preferred the more established Douglas Aircraft as a maker of passenger aircraft, and several major carriers committed only to the Douglas DC-8, which was delayed by Douglas's decision to wait for the larger and more fuel-efficient Pratt & Whitney JT4A turbojet, around which to design a larger and longer-range aircraft. Anticipating this advantage, Boeing made a late and costly decision to redesign and enlarge the initial 707's wing to help increase range and payload, giving birth to the 707-320. Pan Am inaugurated 707 service with a christening at National Airport on October 17, 1958, attended by President Eisenhower, followed by a transatlantic flight for VIPs (personal guests of founder Juan Trippe) from Baltimore's Friendship International Airport to Paris. The aircraft's first commercial flight was from Idlewild Airport, New York, to Le Bourget, Paris, on October 26, 1958, with a fuel stop in Gander, Newfoundland. In December, National Airlines operated the first US domestic jet airline flights between New York/Idlewild and Miami, using 707s leased from Pan Am. In February 1956, rival global giant Trans World Airlines' then-President Howard Hughes had ordered eight new Boeing 707-120s, dubbing the new jet service StarStream. American Airlines was the first domestic airline to fly its own jets, on January 25, 1959. TWA started its domestic 707-131 flights in March, between New York-Idlewild International Airport and San Francisco International Airport, and Continental Airlines started 707-124 flights in June; airlines that had ordered only the DC-8, such as United, Delta, and Eastern, were left without jets until September and lost market share on transcontinental flights. Qantas was the first non-US airline to use the 707s, starting in 1959. The 707 quickly became the most popular jetliner of its time.
Its success led to rapid developments in airport terminals, runways, airline catering, baggage handling, reservations systems, and other air transport infrastructure. The advent of the 707 also led to the upgrading of air traffic control systems to prevent interference with military jet operations. As the 1960s drew to a close, the exponential growth in air travel made the 707 a victim of its own success. The 707 had become too small to handle the increased numbers of passengers on the routes for which it had been designed. Stretching the fuselage again was not a viable option because the installation of larger, more powerful engines would need a larger undercarriage, which was not feasible given the design's limited ground clearance at takeoff. Boeing's answer to the problem was the first wide-body airliner—the Boeing 747. The 707's first-generation engine technology was also rapidly becoming obsolete in the areas of noise and fuel economy, especially after the 1973 oil crisis. Operations of the 707 were threatened by the enactment of international noise regulations in 1985. Shannon Engineering of Seattle developed a hush kit with funding from Tracor, Inc., of Austin, Texas. By the late 1980s, 172 Boeing 707s had been equipped with the Quiet 707 package. Boeing acknowledged that more 707s were in service at that point than before the hush kit became available. Trans World Airlines flew the last scheduled 707 flight for passengers by a US carrier on October 30, 1983, although 707s remained in scheduled service by airlines from other nations for much longer. Middle East Airlines of Lebanon flew 707s and 720s in front-line passenger service until the end of the 1990s. Since LADE of Argentina removed its 707-320Bs from regular service in 2007, Saha Airlines of Iran was the last commercial operator of the Boeing 707. After suspending its scheduled passenger service in April 2013, Saha continued to operate a small fleet of 707s on behalf of the Iranian Air Force.
As of 2019, only a handful of 707s remain in operation, acting as military aircraft for aerial refueling, transport, and AWACS missions. Variants Although certified as Series 100s, 200s, 300s, etc., the different 707 variants are more commonly known as Series 120s, 220s, 320s, and so on, where the "20" part of the designation is Boeing's "customer number" for its development aircraft. 707-020 Announced in July 1957 as a derivative for shorter flights from shorter runways, the 707-020, soon marketed as the Boeing 720, first flew on November 23, 1959. Its type certificate was issued on June 30, 1960, and it entered service with United Airlines on July 5, 1960. As a derivative, the 720 had low development costs, allowing profitability despite few sales. Compared to the 707-120, it has a length reduced by 9 feet (2.7 m), a modified wing, and a lightened airframe for a lower maximum takeoff weight. Powered by four Pratt & Whitney JT3C turbojets, the initial 720 could cover a range with 131 passengers in two classes. Powered by JT3D turbofans, the 720B first flew on October 6, 1960, and entered service in March 1961. It could seat 156 passengers in one class over a range. A total of 154 Boeing 720s and 720Bs were built until 1967. Some 720s were later converted to the 720B specification. The 720 was succeeded by the Boeing 727 trijet. 707-120 The 707-120 was the first production 707 variant, with a longer, wider fuselage and greater wingspan than the Dash 80. The cabin had a full set of rectangular windows and could seat up to 189 passengers. It was designed for transcontinental routes, and often required a refueling stop when flying across the North Atlantic. It had four Pratt & Whitney JT3C-6 turbojets, civilian versions of the military J57, initially producing with water injection. Maximum takeoff weight was and first flight was on December 20, 1957. Major orders were the launch order for 20 707-121 aircraft by Pan Am and an American Airlines order for 30 707-123 aircraft.
The first revenue flight was on October 26, 1958; 56 were built, plus seven short-bodied -138s; the last -120 was delivered to Western in May 1960. The 707-138 was a -120 with a fuselage shorter than the others, with (three frames) removed ahead and behind the wing, giving increased range. Maximum takeoff weight was the same as the standard version. It was a variant for Qantas, thus had its customer number 38. To allow for full-load takeoffs at the midflight refueling stop in Fiji, the wing's leading-edge slats were modified for increased lift, and the allowable temperature range for use of full takeoff power was increased by 10°F (5.5°C). Seven -138s were delivered to Qantas between June and September 1959, and they first carried passengers in July of that year. The 707-120B had Pratt & Whitney JT3D-1 turbofan engines, which were quieter, more powerful, and more fuel-efficient, rated at , with the later JT3D-3 version giving . (This thrust did not require water injection, eliminating both the system and 5000–6000 lb of water.) The -120B had the wing modifications introduced on the 720 and a longer tailplane; a total of 72 were built, 31 for American and 41 for TWA, plus six short-bodied -138Bs for Qantas. American had its 23 surviving -123s converted to -123Bs, but TWA did not convert its 15 -131s. The only other conversions were Pan Am's five surviving -121s and one surviving -139, the three aircraft delivered to the USAF as -153s and the seven short-bodied Qantas -138s (making 13 total 707s delivered to Qantas between 1959 and 1964). The first flight of the -120B was on June 22, 1960, and American carried the first passengers in March 1961; the last delivery was to American in April 1969. Maximum weight was for both the long- and short-bodied versions. 707-220 The 707-220 was designed for hot and high operations with more powerful Pratt & Whitney JT4A-3 turbojets. 
Five of these were produced, but only four were ultimately delivered, with one being lost during a test flight. All were for Braniff International Airways and carried the model number 707-227; the first entered service in December 1959. This version was made obsolete by the arrival of the turbofan-powered 707-120B. 707-320 The 707-320 Intercontinental is a stretched version of the turbojet-powered 707-120, initially powered by JT4A-3 or JT4A-5 turbojets producing each (most eventually got JT4A-11s). The interior allowed up to 189 passengers, the same as the -120 and -220 series, but improved two-class capacity due to an 80-in fuselage stretch ahead of the wing (from to ), with extensions to the fin and horizontal stabilizer extending the aircraft's length further. The longer wing carried more fuel, increasing range by and allowing the aircraft to operate as a true transoceanic aircraft. The wing modifications included outboard and inboard inserts, as well as a kink in the trailing edge to add area inboard. Takeoff weight was increased to initially and to with the higher-rated JT4As and center section tanks. Its first flight was on January 11, 1959; 69 turbojet 707-320s were delivered through January 1963, the first passengers being carried (by Pan Am) in August 1959. 707-420 The 707-420 was identical to the -320, but fitted with Rolls-Royce Conway 508 (RCo.12) turbofans (or by-pass turbojets, as Rolls-Royce called them) of thrust each. The first announced customer was Lufthansa. BOAC's controversial order was announced six months later, but the British carrier got the first service-ready aircraft off the production line. The British Air Registration Board refused to give the aircraft a certificate of airworthiness, citing insufficient yaw control, excessive rudder forces, and the ability to over-rotate on takeoff, stalling the wing on the ground (a fault of the de Havilland Comet 1).
Boeing responded by adding height to the vertical stabilizer, applying full instead of partial rudder boost, and fitting an underfin to prevent over-rotation. These modifications, except for the underfin, became standard on all 707 variants and were retrofitted to all earlier 707s. The 37 -420s were delivered to BOAC, Lufthansa, Air-India, El Al, and Varig through November 1963; Lufthansa was the first to carry passengers, in March 1960. 707-320B The 707-320B applied the JT3D turbofan to the Intercontinental, together with aerodynamic refinements. The wing was modified from the -320 by adding a second inboard kink, a dog-toothed leading edge, and curved low-drag wingtips instead of the earlier blunt ones. These wingtips increased overall wingspan by 3.0 ft (0.9 m). Takeoff gross weight was increased to . The 175 707-320B aircraft were all new-build; no original -320 models were converted to fan engines in civilian use. First service was June 1962, with Pan Am. The 707-320B Advanced is an improved version of the -320B, adding the three-section leading-edge flaps already seen on the -320C. These reduced takeoff and landing speeds and altered the lift distribution of the wing, allowing the ventral fin found on earlier 707s to be removed. From 1965, -320Bs had the uprated -320C undercarriage allowing the same MTOW. These were often identified as 707-320BA-H. 707-320C The 707-320C has a convertible passenger–freight configuration, which became the most widely produced variant of the 707. The 707-320C added a strengthened floor and a new cargo door to the -320B model. The wing was fitted with three-section leading-edge flaps which allowed the removal of the underfin. A total of 335 of this variant were built, including some with JT3D-7 engines ( takeoff thrust) and a takeoff weight of . Most -320Cs were delivered as passenger aircraft, with airlines hoping the cargo door would increase second-hand values.
Two additional emergency exits, one on each side aft of the wing, raised the maximum passenger limit to 219. Only a few aircraft were delivered as pure freighters. One of the final orders was by the Iranian Government for 14 707-3J9C aircraft capable of VIP transportation, communication, and in-flight refueling tasks. 707-700 The 707-700 was a test aircraft used to study the feasibility of using CFM International CFM56 engines on a 707 airframe and possibly retrofitting existing aircraft with the engine. After testing in 1979, N707QT, the last commercial 707 airframe, was restored to 707-320C configuration and delivered to the Moroccan Air Force as a tanker aircraft via a "civilian" order. Boeing abandoned the retrofit program, since they felt it would be a threat to the 757 and 767 programs. The information gathered from testing led to the eventual retrofitting of CFM56 engines to the USAF C-135/KC-135R models, and some military versions of the 707 also used the CFM56. The Douglas DC-8 "Super 70" series with CFM56 engines was developed and extended the DC-8's life in a stricter noise regulatory environment. As a result, significantly more DC-8s remained in service into the 21st century than 707s. Undeveloped variants The 707-620 was a proposed domestic range-stretched variant of the 707-320B. The 707-620 was to carry around 200 passengers while retaining several aspects of the 707-320B. It would have been delivered around 1968 and would have also been Boeing's answer to the stretched Douglas DC-8 Series 60. Had the 707-620 been built, it would have cost around US$8,000,000. However, engineers discovered that a longer fuselage and wing meant a painstaking redesign of the wing and landing-gear structures. Rather than spend money upgrading the 707, as engineer Joe Sutter put it, the company "decided spending money on the 707 wasn't worth it". The project was cancelled in 1966 in favor of the newer Boeing 747.
The 707-820 was a proposed intercontinental stretched variant of the 707-320B. This variant was to be powered by four Pratt & Whitney JT3D-15 turbofan engines, and it would have had a nearly extension in wingspan, to . Two variations were proposed, the 707-820(505) model and the 707-820(506) model. The 505 model would have had a fuselage longer than the 707-320B, for a total length of . This model would have carried 209 passengers in mixed-class configuration and 260 passengers in all-economy configuration. The 506 model would have had a fuselage longer than the 707-320B, to in length. This second model would have carried 225 passengers in mixed-class configuration and 279 passengers in all-economy configuration. Like the 707-620, the 707-820 was also set to compete with the stretched DC-8-60 Super Series models. The design was being pitched to American, TWA, BOAC, and Pan Am at the time of its proposal in early 1965. The 707-820 would have cost US$10,000,000. Like the 707-620, the 707-820 would have required a massive structural redesign to the wing and gear structures. The 707-820 was also cancelled in 1966 in favor of the 747. Military versions The militaries of the US and other countries have used the civilian 707 aircraft in a variety of roles, and under different designations. (The 707 and US Air Force's KC-135 were developed in parallel from the Boeing 367–80 prototype.) The Boeing E-3 Sentry is a US military airborne warning and control system (AWACS) aircraft based on the Boeing 707 that provides all-weather surveillance, command, control, and communications. The Northrop Grumman E-8 Joint STARS is an aircraft modified from the Boeing 707-300 series commercial airliner. The E-8 carries specialized radar, communications, operations and control subsystems. 
The most prominent external feature is the 40 ft (12 m) canoe-shaped radome under the forward fuselage that houses the 24 ft (7.3 m) APY-7 active electronically scanned array side looking airborne radar antenna. The VC-137 variant of the Stratoliner was a special-purpose design meant to serve as Air Force One, the secure transport for the President of the United States. These models were in operational use from 1962 to 1990. The first presidential jet aircraft, a VC-137B designated SAM 970, is on display at the Museum of Flight in Seattle. Two VC-137C aircraft are on display with SAM 26000 at the National Museum of the United States Air Force near Dayton, Ohio and SAM 27000 at the Ronald Reagan Presidential Library in Simi Valley, California. The Canadian Forces also operated the Boeing 707 with the designation CC-137 Husky (707-347C) from 1971 to 1997. Boeing 717 was the company designation for the C-135 Stratolifter and KC-135 Stratotanker derivatives of the 367-80. (The 717 designation was later reused in renaming the McDonnell Douglas MD-95 to Boeing 717 after the company merged with Boeing.) Operators Boeing's customer codes, used to identify the specific options and livery specified by customers, started with the 707 and have been maintained through all of Boeing's models. In essence the same system as used on the earlier Boeing 377, the code consisted of two digits affixed to the model number to identify the specific aircraft version. For example, Pan Am was assigned code "21". Thus, a 707-320B sold to Pan Am had the model number 707-321B. The number remained constant as further aircraft were purchased; thus, when Pan Am purchased the 747-100, it had the model number 747-121. In the 1980s, the USAF acquired around 250 used 707s to provide replacement turbofan engines for the KC-135E Stratotanker program. The 707 is no longer operated by commercial airlines. American actor John Travolta owned an ex-Qantas 707-138B, with the registration N707JT.
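The customer-code scheme described above amounts to replacing the last two digits of the series number (before any suffix letter) with the customer's two-digit code. A minimal Python sketch of that substitution (the helper name is invented for this example; it is not any official Boeing tooling, and it assumes a three-digit series number):

```python
def apply_customer_code(model: str, code: str) -> str:
    """Apply a Boeing two-digit customer code to a base model number.

    The last two digits of the three-digit series number are replaced
    by the customer code, keeping any suffix letter (e.g. "B").
    """
    base, series = model.split("-")
    # Split the series into its digits and any trailing suffix letters,
    # e.g. "320B" -> digits "320", suffix "B".
    digits = "".join(ch for ch in series if ch.isdigit())
    suffix = series[len(digits):]
    return f"{base}-{digits[0]}{code}{suffix}"

print(apply_customer_code("707-320B", "21"))  # 707-321B (Pan Am)
print(apply_customer_code("747-100", "21"))   # 747-121 (Pan Am)
```

Both examples reproduce the designations given in the text for Pan Am (customer code 21).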
In May 2017, he donated the plane to the Historical Aircraft Restoration Society near Wollongong, Australia. The plane is planned to be flown to Shellharbour Airport, where HARS is based, once repairs to ensure safe flying condition have been completed. The repairs have been delayed several times since the 2017 announcement. Orders and deliveries Deliveries 707 Model summary Accidents and incidents As of January 2019, the 707 has been in 261 aviation occurrences and 174 hull-loss accidents with a total of 3,039 fatalities. The deadliest incident involving the 707 was the Agadir air disaster, which took place on August 3, 1975, with 188 fatalities. On January 14, 2019, a Saha Airlines cargo flight crashed, killing 15 people and seriously injuring one more person. It was the last civil 707 in operation. Aircraft on display VH-XBA model 707-138B (number 29) is one of the first 707s exported, and the first civilian jet registered in Australia (to Qantas in 1959); it is on display at the Qantas Founders Outback Museum in Longreach, Queensland, Australia. 4X-BYD model 707-131(F) (number 34), an ex-Israeli Air Force and TWA aircraft, is on display at the Israeli Air Force Museum near Hatzerim, Israel. OO-SJA model 707-329 (number 78), ex-Sabena, is the first jetliner registered in Belgium; its forward fuselage, salvaged following an uncontained engine failure and emergency landing, is on display at the Royal Military Museum Brussels. 4X-JYW model 707-328 (msn. 173617, number 110) is a former Air France (F-BHSE) aircraft sold to the Israeli Air Force; it is on display at the Israeli Air Force Museum, Beersheba – Hatzerim (LLHB). G-APFJ model 707-436 (msn. 17711, number 163) is a forward fuselage on display at the National Museum of Flight, East Fortune, in BOAC livery. N7515A model 707-123 (msn. 17642, number 41), posing as D-ABOF, a 707-430 formerly operated by Lufthansa, has its nose section preserved at the Deutsches Museum in Munich. 4X-ATA model 707-458 (msn.
18070, number 205) is a former El Al aircraft, the nose of which is preserved at the Cradle of Aviation Museum in Garden City, New York. CC-CCG model 707-330B (msn. 18642, number 233), an ex-Lufthansa and LAN Chile craft, is undergoing restoration at Santiago – Los Cerillos, Chile (ULC/SCTI) and will be repainted in the Chilean airline's 1960s scheme. F-BLCD model 707-328B (number 471) is on display at the Musée de l'Air et de l'Espace, Paris, France. EP-IRJ model 707-321B (msn. 18958, number 475), a former Iran Air aircraft, was originally delivered to Pan Am as N416PA, and is currently the Air Restaurant at Mehrabad Airport, Tehran. A20-627 model 707-338C (msn. 19627, number 707) flew with the RAAF. Originally delivered to Qantas as VH-EAG, its forward fuselage is preserved at the Historical Aircraft Restoration Society, Albion Park Rail, New South Wales, Australia. 1419 model 707-328C (msn. 19917, number 763), an ex-SAAF aircraft, is on display at the South African Air Force Museum – Swartkop Air Force Base, Pretoria. N893PA model 707-321B (msn. 20030, number 791), a former CAAC aircraft originally delivered to Pan Am, is preserved at Tianjin, China. HZ-HM2 Model 707-386C (msn. 21081, number 903) is a Saudi Air Force VIP aircraft painted in the current Saudia color scheme; delivered in 1975, it is registered as HZ-HM1 and preserved at the Saudi Air Force Museum, Riyadh. Specifications
Nymphaeaceae
Nymphaeaceae () is a family of flowering plants, commonly called water lilies. They live as rhizomatous aquatic herbs in temperate and tropical climates around the world. The family contains five genera with about 70 known species. Water lilies are rooted in soil in bodies of water, with leaves and flowers floating on or rising from the surface. Leaves are oval and heart-shaped in Barclaya. Leaves are round, with a radial notch in Nymphaea and Nuphar, but fully circular in Victoria and Euryale. Water lilies are a well-studied family of plants because their large flowers with multiple unspecialized parts were initially considered to represent the floral pattern of the earliest flowering plants. Later genetic studies confirmed their evolutionary position as basal angiosperms. Analyses of floral morphology and molecular characteristics and comparisons with a sister taxon, the family Cabombaceae, indicate, however, that the flowers of extant water lilies with the most floral parts are more derived than the genera with fewer floral parts. Genera with more floral parts, Nuphar, Nymphaea, Victoria, have a beetle pollination syndrome, while genera with fewer parts are pollinated by flies or bees, or are self- or wind-pollinated. Thus, the large number of relatively unspecialized floral organs in the Nymphaeaceae is not an ancestral condition for the clade. Description Vegetative characteristics The Nymphaeaceae are annual or perennial, aquatic, rhizomatous herbs. The family is further characterized by scattered vascular bundles in the stems, and frequent presence of latex, usually with distinct, stellate-branched sclereids projecting into the air canals. Hairs are simple, usually producing mucilage (slime). Leaves are alternate and spiral, opposite or occasionally whorled, simple, peltate or nearly so, entire to toothed or dissected, short to long petiolate, with blade submerged, floating or emergent, with palmate to pinnate venation. Stipules are either present or absent. 
Generative characteristics Flowers are solitary, bisexual, radial, with a long pedicel and usually floating or raised above the surface of the water, with girdling vascular bundles in the receptacle. Some species are protogynous and primarily cross-pollinated, but because male and female stages overlap during the second day of flowering, and because many species are self-compatible, self-fertilization is possible. Female and male parts of the flower are usually active at different times, to facilitate cross-pollination, although this is just one of several reproductive strategies used by these plants. There are 4–12 sepals, which are distinct to connate, imbricate, and often petal-like. Petals lacking or 8 to numerous, inconspicuous to showy, often intergrading with stamens. Stamens are 3 to numerous, the innermost sometimes represented by staminodes. Filaments are distinct, free or adnate to petaloid staminodes, slender and well differentiated from anthers to laminar and poorly differentiated from anthers; pollen grains usually monosulcate or lacking apertures. Carpels are 3 to numerous, distinct or connate. The fruit is an aggregate of nuts, a berry, or an irregularly dehiscent fleshy spongy capsule. Seeds are often arillate, more or less lacking endosperm. Taxonomy Nymphaeaceae has been investigated systematically for decades because botanists considered their floral morphology to represent one of the earliest groups of angiosperms. Modern genetic analyses by the Angiosperm Phylogeny Group researchers have confirmed its basal position among flowering plants. In addition, the Nymphaeaceae are more genetically diverse and geographically dispersed than other basal angiosperms. Nymphaeaceae is placed in the order Nymphaeales, which is the second diverging group of angiosperms after Amborella in the most widely accepted flowering plant classification system, the APG IV system.
Nymphaeaceae is a small family of three to six genera: Barclaya, Euryale, Nuphar, Nymphaea, Ondinea, and Victoria. The genus Barclaya is sometimes given rank as its own family, Barclayaceae, on the basis of an extended perianth tube (combined sepals and petals) arising from the top of the ovary and of stamens that are joined at the base. However, molecular phylogenetic work includes it in Nymphaeaceae. The genus Ondinea has recently been shown to be a morphologically aberrant species of Nymphaea, and is now included in this genus. The genera Euryale, of far eastern Asia, and Victoria, from South America, are closely related despite their geographic distance, but their relationship to Nymphaea needs further study. The sacred lotus was once thought to be a water lily, but is now recognized to be a highly modified eudicot in its own family Nelumbonaceae of the order Proteales. Fossils Several fossil species are known, including Cretaceous representatives of Nymphaea, as well as fossil genera such as Jaguariba from the Cretaceous of Brazil, Allenbya from the Ypresian of British Columbia, Notonuphar from the Eocene of Antarctica, Nuphaea from the Eocene of Germany, Susiea from the Late Paleocene Almont Flora of North Dakota, USA, and Barclayopsis from the Maastrichtian of Eisleben, Germany. Invasiveness The beauty of water lilies has led to their widespread use as ornamental plants. The Mexican waterlily, native to the Gulf Coast of North America, is planted throughout the continent. It has escaped from cultivation and become invasive in some areas, such as California's San Joaquin Valley. It can infest slow-moving bodies of water and is difficult to eradicate. Populations can be controlled by cutting top growth. Herbicides such as glyphosate and fluridone can also be used to control populations. Culture The water lily is the national flower of Iran, Bangladesh and Sri Lanka. The Emblem of Bangladesh contains a lily floating on water.
It is also the birth flower for the month of July. The Nymphaeaceae, called Nilufar Abi in Persian, can be seen in many reliefs of the Achaemenid period (552 BC), such as the statue of Anahita at Persepolis. The lotus flower was included in Kaveh the blacksmith's banner and later in the flag of the Sasanian Empire, the Derafsh Kaviani. Today, it is known as a symbol of the Iranian Solar Hijri calendar. Lily pads, also known as Seeblätter, are a charge in Northern European heraldry, often coloured red (gules), and appear on the flag of Friesland and the coat of arms of Denmark (in the latter case often replaced by red hearts). The water lily has a special place in Sangam literature and Tamil poetics, where it is considered symbolic of the grief of separation; it is considered to evoke imagery of the sunset, the seashore, and the shark. Heraldry In visual arts Water lilies were depicted by the French artist Claude Monet (1840–1926) in a series of paintings. The Maya The main job of Maya rulers in pre-Columbian Mesoamerica was to obtain clean, drinkable water for their citizens during both the wet and dry seasons. Their success in accomplishing this is what allowed them to grow their polity by attracting dry-season laborers. They did this by constructing water systems such as reservoirs, wetland reclamation, and dams and channels to capture and store rainwater. With their knowledge of the wetland biosphere, they transformed artificial reservoirs into wetland biospheres. One way that they tested whether the water systems were working properly was whether the Nymphaeaceae were thriving. Water lilies became a visual sign of water cleanliness, so the Maya elite began to associate themselves with the flowers. The Maya began to use water lily iconography depicted on stelae, monumental architecture, murals, and in hieroglyphic writing.
Even in Maya settlements like Palenque, where the main water supplies were springs and flowing streams (places where water lilies cannot grow), the flowers were prevalent in their iconographic records. Aristocrats and religious figures wore masks and/or headdresses bearing water lilies and/or water lily symbols during celebratory events, to appear like gods. There is also evidence that water lilies were used as a cultural entheogen. Some ritual scenes drawn by the Maya have been interpreted as showing blood being extracted from perforated body parts. However, closer examination shows that this is instead a liquid flowing directly from water lily flowers on the heads of certain gods. It is likely that the Maya ingested these plants to induce a non-ordinary state of consciousness, consistent with the presence of a class of opiate alkaloids in Nymphaeaceae. Overall, these examples show just how important this specific form of water symbolism was throughout the Maya region. Gallery
Unbinilium
Unbinilium, also known as eka-radium or element 120, is a hypothetical chemical element; it has symbol Ubn and atomic number 120. Unbinilium and Ubn are the temporary systematic IUPAC name and symbol, which are used until the element is discovered, confirmed, and a permanent name is decided upon. In the periodic table of the elements, it is expected to be an s-block element, an alkaline earth metal, and the second element in the eighth period. It has attracted attention because of some predictions that it may be in the island of stability. Unbinilium has not yet been synthesized, despite multiple attempts from German and Russian teams. Experimental evidence from these attempts shows that the period 8 elements would likely be far more difficult to synthesise than the previous known elements. New attempts by American, Russian, and Chinese teams to synthesize unbinilium are planned to begin in the mid-2020s. Unbinilium's position as the seventh alkaline earth metal suggests that it would have similar properties to its lighter congeners; however, relativistic effects may cause some of its properties to differ from those expected from a straight application of periodic trends. For example, unbinilium is expected to be less reactive than barium and radium, be closer in behavior to strontium, and while it should show the characteristic +2 oxidation state of the alkaline earth metals, it is also predicted to show the +4 and +6 oxidation states, which are unknown in any other alkaline earth metal. Introduction History Elements 114 to 118 (flerovium through oganesson) were discovered in "hot fusion" reactions bombarding the actinides plutonium through californium with calcium-48, a quasi-stable neutron-rich isotope which could be used as a projectile to produce more neutron-rich isotopes of superheavy elements. This cannot easily be continued to elements 119 and 120, because it would require a target of the next actinides einsteinium and fermium. 
Tens of milligrams of these would be needed to create such targets, but only micrograms of einsteinium and picograms of fermium have so far been produced. More practical production of further superheavy elements would require bombarding actinides with projectiles heavier than 48Ca, but this is expected to be more difficult. Attempts to synthesize elements 119 and 120 push the limits of current technology, due to the decreasing cross sections of the production reactions and their probably short half-lives, expected to be on the order of microseconds. Synthesis attempts Past Following their success in obtaining oganesson by the reaction between 249Cf and 48Ca in 2006, the team at the Joint Institute for Nuclear Research (JINR) in Dubna started experiments in March–April 2007 to attempt to create unbinilium with a 58Fe beam and a 244Pu target. The attempt was unsuccessful, and the Russian team planned to upgrade their facilities before attempting the reaction again. 244Pu + 58Fe → 302Ubn* → no atoms In April 2007, the team at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany attempted to create unbinilium using a 238U target and a 64Ni beam: 238U + 64Ni → 302Ubn* → no atoms No atoms were detected. The GSI repeated the experiment with higher sensitivity in three separate runs in April–May 2007, January–March 2008, and September–October 2008, all with negative results, reaching a cross section limit of 90 fb. In 2011, after upgrading their equipment to allow the use of more radioactive targets, scientists at the GSI attempted the rather asymmetrical fusion reaction of a 248Cm target and a 54Cr beam: 248Cm + 54Cr → 302Ubn* → no atoms It was expected that the change in reaction would quintuple the probability of synthesizing unbinilium, as the yield of such reactions is strongly dependent on their asymmetry. Although this reaction is less asymmetric than the 249Cf+50Ti reaction, it also creates more neutron-rich unbinilium isotopes that should receive increased stability from their proximity to the shell closure at N = 184.
Three signals were observed in May 2011; a possible assignment to 299Ubn and its daughters was considered, but could not be confirmed, and a different analysis suggested that what was observed was simply a random sequence of events. In August–October 2011, a different team at the GSI using the TASCA facility tried a new, even more asymmetrical reaction: 249Cf + 50Ti → 299Ubn* → no atoms Because of its asymmetry, the reaction between 249Cf and 50Ti was predicted to be the most favorable practical reaction for synthesizing unbinilium, though it produces a less neutron-rich isotope of unbinilium than any other reaction studied. No unbinilium atoms were identified. This reaction was investigated again in April to September 2012 at the GSI. This experiment used a 249Bk target and a 50Ti beam to produce element 119, but since 249Bk decays to 249Cf with a half-life of about 327 days, both elements 119 and 120 could be searched for simultaneously: 249Bk + 50Ti → 299Uue* → no atoms 249Cf + 50Ti → 299Ubn* → no atoms Neither element 119 nor element 120 was observed. Planned The JINR's plans to investigate the 249Cf+50Ti reaction in their new facility were disrupted by the 2022 Russian invasion of Ukraine, after which collaboration between the JINR and other institutes completely ceased due to sanctions. Thus, 249Cf could no longer be used as a target, as it would have to be produced at the Oak Ridge National Laboratory (ORNL) in the United States. Instead, the 248Cm+54Cr reaction will be used. In 2023, the director of the JINR, Grigory Trubnikov, stated that he hoped that the experiments to synthesise element 120 would begin in 2025. In preparation for this, the JINR reported success in the 238U+54Cr reaction in late 2023, making a new isotope of livermorium, 288Lv. This was an unexpectedly good result; the aim had been to experimentally determine the cross-section of a reaction with 54Cr projectiles and prepare for the synthesis of element 120.
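For any of the fusion reactions discussed above, the compound nucleus follows from simple bookkeeping: its atomic number Z is the sum of the reactants' atomic numbers, and its mass number A is the sum of their mass numbers (before any neutron evaporation). A minimal Python sketch of that check (the helper name and data layout are invented for this example):

```python
# Standard atomic numbers for the targets and projectiles mentioned in the text.
ELEMENTS = {"U": 92, "Pu": 94, "Cm": 96, "Bk": 97, "Cf": 98,
            "Ti": 22, "Cr": 24, "Fe": 26, "Ni": 28}

def compound_nucleus(target, projectile):
    """Return (Z, A) of the compound nucleus from (A, symbol) pairs."""
    (a1, s1), (a2, s2) = target, projectile
    return ELEMENTS[s1] + ELEMENTS[s2], a1 + a2

# 244Pu + 58Fe and 238U + 64Ni both give the compound nucleus 302Ubn* (Z = 120).
print(compound_nucleus((244, "Pu"), (58, "Fe")))  # (120, 302)
print(compound_nucleus((238, "U"), (64, "Ni")))   # (120, 302)
# 249Cf + 50Ti gives the less neutron-rich 299Ubn*,
# while 249Bk + 50Ti gives the element-119 compound nucleus 299Uue*.
print(compound_nucleus((249, "Cf"), (50, "Ti")))  # (120, 299)
print(compound_nucleus((249, "Bk"), (50, "Ti")))  # (119, 299)
```

The same arithmetic confirms why the 249Cf+50Ti route, though the most asymmetric practical option, yields the least neutron-rich compound nucleus (A = 299 rather than 302).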
It is the first successful reaction producing a superheavy element using an actinide target and a projectile heavier than 48Ca. The team at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, United States plans to use the 88-inch cyclotron to make new elements using 50Ti projectiles. First, the 244Pu+50Ti reaction was tested, successfully creating two atoms of 290Lv in 2024. Since this was successful, an attempt to make element 120 in the 249Cf+50Ti reaction is planned to begin in 2025. The Lawrence Livermore National Laboratory (LLNL), which previously collaborated with the JINR, will collaborate with the LBNL on this project. The team at the Heavy Ion Research Facility in Lanzhou, which is operated by the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences, also plans to synthesise elements 119 and 120. The reactions used will involve actinide targets (e.g. 243Am, 248Cm) and first-row transition metal projectiles (e.g. 50Ti, 51V, 54Cr, 55Mn). Naming Mendeleev's nomenclature for unnamed and undiscovered elements would call unbinilium eka-radium. The 1979 IUPAC recommendations temporarily call it unbinilium (symbol Ubn) until it is discovered, the discovery is confirmed and a permanent name chosen. Although the IUPAC systematic names are widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, scientists who work theoretically or experimentally on superheavy elements typically call it "element 120", with the symbol E120, (120) or 120. Predicted properties Nuclear stability and isotopes The stability of nuclei decreases greatly with the increase in atomic number after curium, element 96, whose half-life is four orders of magnitude longer than that of any currently known higher-numbered element. All isotopes with an atomic number above 101 undergo radioactive decay with half-lives of less than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. 
Nevertheless, for reasons not yet well understood, there is a slight increase of nuclear stability around atomic numbers 110–114, which leads to the appearance of what is known in nuclear physics as the "island of stability". This concept, proposed by University of California professor Glenn Seaborg, explains why superheavy elements last longer than predicted. Isotopes of unbinilium are predicted to have alpha decay half-lives of the order of microseconds. In a quantum tunneling model with mass estimates from a macroscopic-microscopic model, the alpha-decay half-lives of several unbinilium isotopes (292–304Ubn) have been predicted to be around 1–20 microseconds. Some heavier isotopes may be more stable; Fricke and Waber predicted 320Ubn to be the most stable unbinilium isotope in 1971. Since unbinilium is expected to decay via a cascade of alpha decays leading to spontaneous fission around copernicium, the total half-lives of unbinilium isotopes are also predicted to be measured in microseconds. This has consequences for the synthesis of unbinilium, as isotopes with half-lives below one microsecond would decay before reaching the detector. Nevertheless, new theoretical models show that the expected gap in energy between the proton orbitals 2f7/2 (filled at element 114) and 2f5/2 (filled at element 120) is smaller than expected, so that element 114 no longer appears to be a stable spherical closed nuclear shell, and this energy gap may increase the stability of elements 119 and 120. The next doubly magic nucleus is now expected to be around the spherical 306Ubb (element 122), but the expected low half-life and low production cross section of this nuclide make its synthesis challenging. Given that element 120 fills the 2f5/2 proton orbital, much attention has been given to the compound nucleus 302Ubn* and its properties. 
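The microsecond-scale predictions quoted above come from detailed tunneling calculations, but their order of magnitude can be reproduced with the semi-empirical Viola–Seaborg relation, log10(T1/2) = (aZ + b)/√Qα + cZ + d. The sketch below uses one commonly quoted parameter set (Sobiczewski et al.) and an assumed Qα value for an unbinilium isotope; it illustrates the scaling only, and is not the model cited in the text:

```python
import math

# Viola-Seaborg systematics for even-even alpha emitters:
#   log10(T1/2 / s) = (a*Z + b)/sqrt(Q_alpha) + c*Z + d
# with the Sobiczewski parameter set (Q_alpha in MeV, Z of the parent).
A_VS, B_VS, C_VS, D_VS = 1.66175, -8.5166, -0.20228, -33.9069

def alpha_half_life(z, q_alpha_mev):
    """Rough alpha-decay half-life estimate in seconds."""
    log_t = (A_VS * z + B_VS) / math.sqrt(q_alpha_mev) + C_VS * z + D_VS
    return 10.0 ** log_t

# Sanity check against a measured case: 294Og (Z = 118, Q_alpha ~ 11.8 MeV)
# has a sub-millisecond half-life, and the formula agrees to within an
# order of magnitude.
print(alpha_half_life(118, 11.8))   # ~7e-4 s

# For a hypothetical unbinilium isotope with Q_alpha ~ 13 MeV (assumed),
# the estimate lands in the microsecond range quoted in the text.
print(alpha_half_life(120, 13.0))   # ~7e-6 s
```

The strong Q-value dependence (√Qα in the denominator of the leading term) is why small shell-driven changes in decay energy shift predicted half-lives by orders of magnitude.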
Several experiments have been performed between 2000 and 2008 at the Flerov Laboratory of Nuclear Reactions in Dubna studying the fission characteristics of the compound nucleus 302Ubn*. Two nuclear reactions have been used, namely 244Pu+58Fe and 238U+64Ni. The results have revealed that such nuclei fission predominantly by expelling closed-shell nuclei such as 132Sn (Z = 50, N = 82). It was also found that the yield for the fusion-fission pathway was similar between 48Ca and 58Fe projectiles, suggesting a possible future use of 58Fe projectiles in superheavy element formation. In 2008, the team at GANIL, France, described the results from a new technique that attempts to measure the fission half-life of a compound nucleus at high excitation energy, since the yields are significantly higher than from neutron evaporation channels. It is also a useful method for probing the effects of shell closures on the survivability of compound nuclei in the super-heavy region, which can indicate the exact position of the next proton shell (Z = 114, 120, 124, or 126). The team studied the nuclear fusion reaction between uranium ions and a target of natural nickel: 238U + natNi → 296–302Ubn* → fission The results indicated that nuclei of unbinilium were produced at high (~70 MeV) excitation energy which underwent fission with measurable half-lives just over 10⁻¹⁸ s. Although very short (indeed insufficient for the element to be considered by IUPAC to exist, because a compound nucleus has no internal structure and its nucleons have not been arranged into shells until it has survived for 10⁻¹⁴ s, when it forms an electronic cloud), the ability to measure such a process indicates a strong shell effect at Z = 120. At lower excitation energy (see neutron evaporation), the effect of the shell will be enhanced and ground-state nuclei can be expected to have relatively long half-lives. This result could partially explain the relatively long half-life of 294Og measured in experiments at Dubna. 
Similar experiments have indicated a similar phenomenon at element 124 but not for flerovium, suggesting that the next proton shell does in fact lie beyond element 120. In September 2007, the team at RIKEN began a program utilizing 248Cm targets and have indicated future experiments to probe the possibility of 120 being the next proton magic number (and 184 being the next neutron magic number) using the aforementioned nuclear reactions to form 302Ubn*, as well as 248Cm+54Cr. They also planned to further chart the region by investigating the nearby compound nuclei 296Og*, 298Og*, 306Ubb*, and 308Ubb*. The most likely isotopes of unbinilium to be synthesised in the near future are 295Ubn through 299Ubn, because they can be produced in the 3n and 4n channels of the 249–251Cf+50Ti, 245Cm+54Cr, and 248Cm+54Cr reactions. Atomic and physical Being the second period 8 element, unbinilium is predicted to be an alkaline earth metal, below beryllium, magnesium, calcium, strontium, barium, and radium. Each of these elements has two valence electrons in the outermost s-orbital (valence electron configuration ns2), which is easily lost in chemical reactions to form the +2 oxidation state: thus the alkaline earth metals are rather reactive elements, with the exception of beryllium due to its small size. Unbinilium is predicted to continue the trend and have a valence electron configuration of 8s2. It is therefore expected to behave much like its lighter congeners; however, it is also predicted to differ from the lighter alkaline earth metals in some properties. The main reason for the predicted differences between unbinilium and the other alkaline earth metals is the spin–orbit (SO) interaction—the mutual interaction between the electrons' motion and spin. The SO interaction is especially strong for the superheavy elements because their electrons move faster—at velocities comparable to the speed of light—than those in lighter atoms. 
In unbinilium atoms, it lowers the 7p and 8s electron energy levels, stabilizing the corresponding electrons, but two of the 7p electron energy levels are more stabilized than the other four. The effect is called subshell splitting, as it splits the 7p subshell into more-stabilized and the less-stabilized parts. Computational chemists understand the split as a change of the second (azimuthal) quantum number l from 1 to 1/2 and 3/2 for the more-stabilized and less-stabilized parts of the 7p subshell, respectively. Thus, the outer 8s electrons of unbinilium are stabilized and become harder to remove than expected, while the 7p3/2 electrons are correspondingly destabilized, perhaps allowing them to participate in chemical reactions. This stabilization of the outermost s-orbital (already significant in radium) is the key factor affecting unbinilium's chemistry, and causes all the trends for atomic and molecular properties of alkaline earth metals to reverse direction after barium. Due to the stabilization of its outer 8s electrons, unbinilium's first ionization energy—the energy required to remove an electron from a neutral atom—is predicted to be 6.0 eV, comparable to that of calcium. The electron of the hydrogen-like unbinilium atom—oxidized so it has only one electron, Ubn119+—is predicted to move so quickly that its mass is 2.05 times that of a non-moving electron, a feature coming from the relativistic effects. For comparison, the figure for hydrogen-like radium is 1.30 and the figure for hydrogen-like barium is 1.095. According to simple extrapolations of relativity laws, that indirectly indicates the contraction of the atomic radius to around 200 pm, very close to that of strontium (215 pm); the ionic radius of the Ubn2+ ion is also correspondingly lowered to 160 pm. The trend in electron affinity is also expected to reverse direction similarly at radium and unbinilium. 
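The quoted mass factors for hydrogen-like ions follow from the fact that a 1s electron in a hydrogen-like atom moves at roughly v/c = Zα, giving a Lorentz factor of 1/√(1 − (Zα)²). This simple estimate reproduces the barium and radium figures in the text; for Z = 120 the article's quoted 2.05 presumably includes further corrections, so the estimate runs slightly high:

```python
import math

ALPHA = 0.0072973525693   # fine-structure constant

def mass_factor(z):
    """Lorentz factor of a hydrogen-like 1s electron, with v/c ~ Z*alpha."""
    v_over_c = z * ALPHA
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

print(round(mass_factor(56), 3))   # Ba55+:   1.096 (text: 1.095)
print(round(mass_factor(88), 2))   # Ra87+:   1.3   (text: 1.30)
print(round(mass_factor(120), 2))  # Ubn119+: 2.07  (text quotes 2.05)
```

The steep rise between radium and unbinilium shows why relativistic effects, modest through period 7, dominate the predicted atomic properties of period 8 elements.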
Unbinilium should be a solid at room temperature, with melting point 680 °C: this continues the downward trend in the group, being lower than the value 700 °C for radium. The boiling point of unbinilium is expected to be around 1700 °C, which is lower than that of all the previous elements in the group (in particular, radium boils at 1737 °C), following the downward periodic trend. The density of unbinilium has been predicted to be 7 g/cm3, continuing the trend of increasing density down the group: the value for radium is 5.5 g/cm3. Chemical The chemistry of unbinilium is predicted to be similar to that of the alkaline earth metals, but it would probably behave more like calcium or strontium than barium or radium. Like strontium, unbinilium should react vigorously with air to form an oxide (UbnO) and with water to form the hydroxide (Ubn(OH)2), which would be a strong base, releasing hydrogen gas in the process. It should also react with the halogens to form salts such as UbnCl2. While these reactions would be expected from periodic trends, their lowered intensity is somewhat unusual, as, ignoring relativistic effects, periodic trends would predict unbinilium to be even more reactive than barium or radium. This lowered reactivity is due to the relativistic stabilization of unbinilium's valence electron, increasing unbinilium's first ionization energy and decreasing the metallic and ionic radii; this effect is already seen for radium. On the other hand, the ionic radius of the Ubn2+ ion is predicted to be larger than that of Sr2+, because the 7p orbitals are destabilized and are thus larger than the p-orbitals of the lower shells. 
Unbinilium may also show the +4 oxidation state, which is not seen in any other alkaline earth metal, in addition to the +2 oxidation state characteristic of all the known alkaline earth metals: this is because of the destabilization and expansion of the 7p3/2 spinor, causing its outermost electrons to have a lower ionization energy than what would otherwise be expected. The +6 state involving all the 7p3/2 electrons has been suggested in a hexafluoride, UbnF6. The +1 state may also be isolable. Many unbinilium compounds are expected to have a large covalent character, due to the involvement of the 7p3/2 electrons in the bonding: this effect is also seen to a lesser extent in radium, which shows some 6s and 6p3/2 contribution to the bonding in radium fluoride (RaF2) and astatide (RaAt2), resulting in these compounds having more covalent character. The standard reduction potential of the Ubn2+/Ubn couple is predicted to be −2.9 V, almost exactly the same as that for the Sr2+/Sr couple of strontium (−2.899 V). In the gas phase, the alkaline earth metals do not usually form covalently bonded diatomic molecules like the alkali metals do, since such molecules would have the same number of electrons in the bonding and antibonding orbitals and would have very low dissociation energies. Thus, the M–M bonding in these molecules is predominantly through van der Waals forces. The metal–metal bond lengths in these M2 molecules increase down the group from Ca2 to Ubn2. On the other hand, their metal–metal bond-dissociation energies generally increase from Ca2 to Ba2 and then drop to Ubn2, which should be the most weakly bound of all the group 2 homodiatomic molecules. The cause of this trend is the increasing participation of the p3/2 and d electrons as well as the relativistically contracted s orbital. 
From these M2 dissociation energies, the enthalpy of sublimation (ΔHsub) of unbinilium is predicted to be 150 kJ/mol. The Ubn–Au bond should be the weakest of all bonds between gold and an alkaline earth metal, but should still be stable. This gives extrapolated medium-sized adsorption enthalpies (−ΔHads) of 172 kJ/mol on gold (the radium value should be 237 kJ/mol) and 50 kJ/mol on silver, the smallest of all the alkaline earth metals, demonstrating that it would be feasible to study the chromatographic adsorption of unbinilium onto surfaces made of noble metals. The ΔHsub and −ΔHads values are correlated for the alkaline earth metals.
https://en.wikipedia.org/wiki/Ununennium
Ununennium
Ununennium, also known as eka-francium or element 119, is a hypothetical chemical element; it has symbol Uue and atomic number 119. Ununennium and Uue are the temporary systematic IUPAC name and symbol respectively, which are used until the element has been discovered, confirmed, and a permanent name is decided upon. In the periodic table of the elements, it is expected to be an s-block element, an alkali metal, and the first element in the eighth period. It is the lightest element that has not yet been synthesized. An attempt to synthesize the element has been ongoing since 2018 at RIKEN in Japan. The Joint Institute for Nuclear Research in Dubna, Russia, plans to make an attempt at some point in the future, but a precise date has not been released to the public. The Heavy Ion Research Facility in Lanzhou, China (HIRFL) also plans to make an attempt. Theoretical and experimental evidence has shown that the synthesis of ununennium will likely be far more difficult than that of the previous elements. Ununennium's position as the seventh alkali metal suggests that it would have similar properties to its lighter congeners. However, relativistic effects may cause some of its properties to differ from those expected from a straight application of periodic trends. For example, ununennium is expected to be less reactive than caesium and francium and closer in behavior to potassium or rubidium, and while it should show the characteristic +1 oxidation state of the alkali metals, it is also predicted to show the +3 and +5 oxidation states, which are unknown in any other alkali metal. Introduction History Synthesis attempts Elements 114 to 118 (flerovium through oganesson) were discovered in "hot fusion" reactions at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. 
This involved bombarding the actinides plutonium through californium with calcium-48, a quasi-stable neutron-rich isotope which could be used as a projectile to produce more neutron-rich isotopes of superheavy elements. (The term "hot" refers to the high excitation energy of the resulting compound nucleus.) This cannot easily be continued to element 119, because it would require a target of the next actinide, einsteinium. Tens of milligrams of einsteinium would be needed for a reasonable chance of success, but only micrograms have so far been produced. An attempt to make element 119 from calcium-48 and less than a microgram of einsteinium was made in 1985 at the superHILAC accelerator at Berkeley, California, but did not succeed. 254Es + 48Ca → 302Uue* → no atoms More practical production of further superheavy elements requires projectiles heavier than 48Ca, but this makes the reaction more symmetric and gives it a smaller chance of success. Attempts to synthesize element 119 push the limits of current technology, due to the decreasing cross sections of the production reactions and the probably short half-lives of produced isotopes, expected to be on the order of microseconds. From April to September 2012, an attempt to synthesize the isotopes 295Uue and 296Uue was made by bombarding a target of berkelium-249 with titanium-50 at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. This reaction between 249Bk and 50Ti was predicted to be the most favorable practical reaction for formation of ununennium, as it is the most asymmetric reaction available. Moreover, as berkelium-249 decays to californium-249 (the next element) with a short half-life of 327 days, this allowed elements 119 and 120 to be searched for simultaneously. Due to the predicted short half-lives, the GSI team used new "fast" electronics capable of registering decay events within microseconds. 249Bk + 50Ti → 299Uue* → no atoms 249Cf + 50Ti → 299Ubn* → no atoms Neither element 119 nor element 120 was observed. 
The experiment was originally planned to continue to November 2012, but was stopped early to make use of the 249Bk target to confirm the synthesis of tennessine (thus changing the projectiles to 48Ca). The team at RIKEN in Wakō, Japan began bombarding curium-248 targets with a vanadium-51 beam in January 2018 to search for element 119. Curium was chosen as a target, rather than heavier berkelium or californium, as these heavier targets are difficult to prepare. The 248Cm targets were provided by Oak Ridge National Laboratory. RIKEN developed a high-intensity vanadium beam. The experiment began at a cyclotron while RIKEN upgraded its linear accelerators; the upgrade was completed in 2020. Bombardment may be continued with both machines until the first event is observed. The RIKEN team's efforts are being financed by the Emperor of Japan. 248Cm + 51V → 299Uue* → no atoms yet The produced isotopes of ununennium are expected to undergo two alpha decays to known isotopes of moscovium, 287Mc and 288Mc. This would anchor them to a known sequence of five or six further alpha decays, respectively, and corroborate their production. As of September 2023, the team at RIKEN had run the 248Cm+51V reaction for 462 days. A report by the RIKEN Nishina Center Advisory Committee noted that this reaction was chosen because of the availability of the target and projectile materials, despite predictions favoring the 249Bk+50Ti reaction, owing to the 50Ti projectile being closer to doubly magic 48Ca and having an even atomic number (22); reactions with even-Z projectiles have generally been shown to have greater cross-sections. The report recommended that if the 5 fb cross-section limit is reached without any events observed, then the team should "evaluate and eventually reconsider the experimental strategy before taking additional beam time." As of August 2024, the team at RIKEN was still running this reaction "24/7". 
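Femtobarn cross sections are why such searches run for years: the expected yield is cross section × beam intensity × target areal density × time. The sketch below makes this concrete; the beam intensity and target thickness are illustrative assumed values, not RIKEN's actual experimental parameters:

```python
# Rough event-rate estimate for a fusion-evaporation search.
# Beam and target numbers below are illustrative assumptions only.

FB_TO_CM2 = 1e-39        # 1 femtobarn in cm^2
AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def events_per_year(sigma_fb, beam_ions_per_s, target_mg_per_cm2, target_mass_u):
    sigma = sigma_fb * FB_TO_CM2                                   # cm^2
    atoms_per_cm2 = target_mg_per_cm2 * 1e-3 / target_mass_u * AVOGADRO
    rate = sigma * beam_ions_per_s * atoms_per_cm2                 # events/s
    return rate * SECONDS_PER_YEAR

# At the 5 fb limit mentioned above, with an assumed 6e12 ions/s beam
# on an assumed 0.3 mg/cm^2 248Cm target: under one event per year.
print(events_per_year(5, 6e12, 0.3, 248))   # ~0.7
```

At these rates, running a reaction "24/7" for hundreds of days is not excess caution but the minimum needed to distinguish a sub-femtobarn cross section from zero.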
The team at the JINR plans to attempt synthesis of element 119 in the future, but a precise timeframe has not been publicly released. In late 2023, the JINR reported the first successful synthesis of a superheavy element with a projectile heavier than 48Ca: 238U was bombarded with 54Cr to make a new isotope of livermorium (element 116), 288Lv. Successful synthesis of a superheavy nuclide in this experiment was an unexpectedly good result; the aim was to experimentally determine the cross-section of a reaction with 54Cr projectiles and prepare for the synthesis of element 120. The JINR has also alluded to a future attempt to synthesise element 119 with the same projectile, bombarding 243Am with 54Cr. The team at the Heavy Ion Research Facility in Lanzhou (HIRFL), which is operated by the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences, also plans to try the 243Am+54Cr reaction. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, ununennium should be known as eka-francium. Using the 1979 IUPAC recommendations, the element should be temporarily called ununennium (symbol Uue) until it is discovered, the discovery is confirmed, and a permanent name chosen. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations are mostly ignored among scientists who work theoretically or experimentally on superheavy elements, who call it "element 119", with the symbol E119, (119) or 119. Predicted properties Nuclear stability and isotopes The stability of nuclei decreases greatly with the increase in atomic number after curium, element 96, whose half-life is four orders of magnitude longer than that of any currently known higher-numbered element. All isotopes with an atomic number above 101 undergo radioactive decay with half-lives of less than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. 
Nevertheless, for reasons not yet well understood, there is a slight increase of nuclear stability around atomic numbers 110–114, which leads to the appearance of what is known in nuclear physics as the "island of stability". This concept, proposed by University of California professor Glenn Seaborg, explains why superheavy elements last longer than predicted. The alpha-decay half-lives predicted for 291–307Uue are on the order of microseconds. The longest alpha-decay half-life predicted is ~485 microseconds for the isotope 294Uue. When factoring in all decay modes, the predicted half-lives drop further to only tens of microseconds. Some heavier isotopes may be more stable; Fricke and Waber predicted 315Uue to be the most stable ununennium isotope in 1971. This has consequences for the synthesis of ununennium, as isotopes with half-lives below one microsecond would decay before reaching the detector, and the heavier isotopes cannot be synthesised by the collision of any known usable target and projectile nuclei. Nevertheless, new theoretical models show that the expected gap in energy between the proton orbitals 2f7/2 (filled at element 114) and 2f5/2 (filled at element 120) is smaller than expected, so that element 114 no longer appears to be a stable spherical closed nuclear shell, and this energy gap may increase the stability of elements 119 and 120. The next doubly magic nucleus is now expected to be around the spherical 306Ubb (element 122), but the expected low half-life and low production cross section of this nuclide make its synthesis challenging. The most likely isotopes of ununennium to be synthesised in the near future are 293Uue through 296Uue, because they are populated in the 3n and 4n channels of the 243Am+54Cr and 249Bk+50Ti reactions. Atomic and physical Being the first period 8 element, ununennium is predicted to be an alkali metal, taking its place in the periodic table below lithium, sodium, potassium, rubidium, caesium, and francium. 
Each of these elements has one valence electron in the outermost s-orbital (valence electron configuration ns1), which is easily lost in chemical reactions to form the +1 oxidation state: thus, the alkali metals are very reactive elements. Ununennium is predicted to continue the trend and have a valence electron configuration of 8s1. It is therefore expected to behave much like its lighter congeners; however, it is also predicted to differ from the lighter alkali metals in some properties. The main reason for the predicted differences between ununennium and the other alkali metals is the spin–orbit (SO) interaction—the mutual interaction between the electrons' motion and spin. The SO interaction is especially strong for the superheavy elements because their electrons move faster—at speeds comparable to the speed of light—than those in lighter atoms. In ununennium atoms, it lowers the 7p and 8s electron energy levels, stabilizing the corresponding electrons, but two of the 7p electron energy levels are more stabilized than the other four. The effect is called subshell splitting, as it splits the 7p subshell into more-stabilized and the less-stabilized parts. Computational chemists understand the split as a change of the second (azimuthal) quantum number ℓ from 1 to 1/2 and 3/2 for the more-stabilized and less-stabilized parts of the 7p subshell, respectively. Thus, the outer 8s electron of ununennium is stabilized and becomes harder to remove than expected, while the 7p3/2 electrons are correspondingly destabilized, perhaps allowing them to participate in chemical reactions. This stabilization of the outermost s-orbital (already significant in francium) is the key factor affecting ununennium's chemistry, and causes all the trends for atomic and molecular properties of alkali metals to reverse direction after caesium. 
Due to the stabilization of its outer 8s electron, ununennium's first ionization energy—the energy required to remove an electron from a neutral atom—is predicted to be 4.53 eV, higher than those of the known alkali metals from potassium onward. This effect is so large that unbiunium (element 121) is predicted to have a lower ionization energy of 4.45 eV, so that the alkali metal in period 8 would not have the lowest ionization energy in the period, as is true for all previous periods. Ununennium's electron affinity is expected to be far greater than that of caesium and francium; indeed, ununennium is expected to have an electron affinity higher than all the alkali metals lighter than it at about 0.662 eV, close to that of cobalt (0.662 eV) and chromium (0.676 eV). Relativistic effects also cause a very large drop in the polarizability of ununennium to 169.7 a.u. Indeed, the static dipole polarisability (αD) of ununennium, a quantity for which the impacts of relativity are proportional to the square of the element's atomic number, has been calculated to be small and similar to that of sodium. The electron of the hydrogen-like ununennium atom—oxidized so it has only one electron, Uue118+—is predicted to move so quickly that its mass is 1.99 times that of a non-moving electron, a consequence of relativistic effects. For comparison, the figure for hydrogen-like francium is 1.29 and the figure for hydrogen-like caesium is 1.091. According to simple extrapolations of relativity laws, that indirectly indicates the contraction of the atomic radius to around 240 pm, very close to that of rubidium (247 pm); the metallic radius is also correspondingly lowered to 260 pm. The ionic radius of Uue+ is expected to be 180 pm. Ununennium is predicted to have a melting point between 0 °C and 30 °C: thus it may be a liquid at room temperature. 
It is not known whether this continues the trend of decreasing melting points down the group, as caesium's melting point is 28.5 °C and francium's is estimated to be around 8.0 °C. The boiling point of ununennium is expected to be around 630 °C, similar to that of francium, estimated to be around 620 °C; this is lower than caesium's boiling point of 671 °C. The density of ununennium has been variously predicted to be between 3 and 4 g/cm3, continuing the trend of increasing density down the group: the density of francium is estimated at 2.48 g/cm3, and that of caesium is known to be 1.93 g/cm3. Chemical The chemistry of ununennium is predicted to be similar to that of the alkali metals, but it would probably behave more like potassium or rubidium than caesium or francium. This is due to relativistic effects, as in their absence periodic trends would predict ununennium to be even more reactive than caesium and francium. This lowered reactivity is due to the relativistic stabilization of ununennium's valence electron, increasing ununennium's first ionization energy and decreasing the metallic and ionic radii; this effect is already seen for francium. The chemistry of ununennium in the +1-oxidation state should be more similar to the chemistry of rubidium than to that of francium. On the other hand, the ionic radius of the Uue+ ion is predicted to be larger than that of Rb+, because the 7p orbitals are destabilized and are thus larger than the p-orbitals of the lower shells. Ununennium may also show the +3 oxidation state, which is not seen in any other alkali metal, in addition to the +1 oxidation state that is characteristic of the other alkali metals and is also the main oxidation state of all the known alkali metals: this is because of the destabilization and expansion of the 7p3/2 spinor, causing its outermost electrons to have a lower ionization energy than what would otherwise be expected. 
The 7p3/2 spinor's chemical activity has been suggested to make the +5 oxidation state possible in [UueF6]−, analogous to [SbF6]− or [BrF6]−. The analogous francium(V) compound, [FrF6]−, might also be achievable, but is not experimentally known. Many ununennium compounds are expected to have a large covalent character, due to the involvement of the 7p3/2 electrons in the bonding: this effect is also seen to a lesser extent in francium, which shows some 6p3/2 contribution to the bonding in francium superoxide (FrO2). Thus, instead of ununennium being the most electropositive element, as a simple extrapolation would seem to indicate, caesium instead retains this position, with ununennium's electronegativity most likely being close to sodium's (0.93 on the Pauling scale). The standard reduction potential of the Uue+/Uue couple is predicted to be −2.9 V, the same as that of the Fr+/Fr couple and just over that of the K+/K couple at −2.931 V. Bond lengths and bond-dissociation energies of MAu (M = an alkali metal); all data are predicted, except for the bond-dissociation energies of KAu, RbAu, and CsAu: KAu, bond length 2.856 Å, bond-dissociation energy 2.75 eV; RbAu, 2.967 Å, 2.48 eV; CsAu, 3.050 Å, 2.53 eV; FrAu, 3.097 Å, 2.75 eV; UueAu, 3.074 Å, 2.44 eV. In the gas phase, and at very low temperatures in the condensed phase, the alkali metals form covalently bonded diatomic molecules. The metal–metal bond lengths in these M2 molecules increase down the group from Li2 to Cs2, but then decrease after that to Uue2, due to the aforementioned relativistic effects that stabilize the 8s orbital. The opposite trend is shown for the metal–metal bond-dissociation energies. The Uue–Uue bond should be slightly stronger than the K–K bond. 
From these M2 dissociation energies, the enthalpy of sublimation (ΔHsub) of ununennium is predicted to be 94 kJ/mol (the value for francium should be around 77 kJ/mol). The UueF molecule is expected to have a significant covalent character owing to the high electron affinity of ununennium. The bonding in UueF is predominantly between a 7p orbital on ununennium and a 2p orbital on fluorine, with lesser contributions from the 2s orbital of fluorine and the 8s, 6dz2, and the two other 7p orbitals of ununennium. This is very different from the behaviour of s-block elements, as well as gold and mercury, in which the s-orbitals (sometimes mixed with d-orbitals) are the ones participating in the bonding. The Uue–F bond is relativistically expanded due to the splitting of the 7p orbital into 7p1/2 and 7p3/2 spinors, forcing the bonding electrons into the largest orbital measured by radial extent: a similar expansion in bond length is found in the hydrides AtH and TsH. The Uue–Au bond should be the weakest of all bonds between gold and an alkali metal, but should still be stable. This gives extrapolated medium-sized adsorption enthalpies (−ΔHads) of 106 kJ/mol on gold (the francium value should be 136 kJ/mol), 76 kJ/mol on platinum, and 63 kJ/mol on silver, the smallest of all the alkali metals, demonstrating that it would be feasible to study the chromatographic adsorption of ununennium onto surfaces made of noble metals. The enthalpy of adsorption of ununennium on a Teflon surface is predicted to be 17.6 kJ/mol, which would be the lowest among the alkali metals. The ΔHsub and −ΔHads values for the alkali metals change in opposite directions as atomic number increases.
Physical sciences
Periods
Chemistry
67513
https://en.wikipedia.org/wiki/Systematic%20element%20name
Systematic element name
A systematic element name is the temporary name assigned to an unknown or recently synthesized chemical element. A systematic symbol is also derived from this name. In chemistry, a transuranic element receives a permanent name and symbol only after its synthesis has been confirmed. In some cases, such as the Transfermium Wars, controversies over the formal name and symbol have been protracted and highly political. In order to discuss such elements without ambiguity, the International Union of Pure and Applied Chemistry (IUPAC) uses a set of rules, adopted in 1978, to assign a temporary systematic name and symbol to each such element. This approach to naming originated in the successful development of regular rules for the naming of organic compounds. IUPAC rules The temporary names derive systematically from the element's atomic number, and apply only to 101 ≤ Z ≤ 999. Each digit is translated into a "numerical root" according to the table. The roots are concatenated, and the name is completed by the suffix -ium. Some of the roots are Latin and others are Greek, to avoid two digits starting with the same letter (for example, the Greek-derived pent is used instead of the Latin-derived quint to avoid confusion with quad for 4). There are two elision rules designed to prevent odd-looking names. Traditionally the suffix -ium was used only for metals (or at least elements that were expected to be metallic), and other elements used different suffixes: halogens used -ine and noble gases used -on instead. However, the systematic names use -ium for all elements regardless of group. Thus, elements 117 and 118 were ununseptium and ununoctium, not ununseptine and ununocton. This does not apply to the trivial names these elements receive once confirmed; thus, elements 117 and 118 are now tennessine and oganesson, respectively. 
For these trivial names, all elements receive the suffix -ium except those in group 17, which receive -ine (like the halogens), and those in group 18, which receive -on (like the noble gases). (That being said, tennessine and oganesson are expected to behave quite differently from their lighter congeners.) The systematic symbol is formed by taking the first letter of each root, converting the first to uppercase. This results in three-letter symbols instead of the one- or two-letter symbols used for named elements. The rationale is that any scheme producing two-letter symbols will have to deviate from full systematicity to avoid collisions with the symbols of the permanently named elements. These rules are published as IUPAC's Recommendations for the Naming of Elements of Atomic Numbers Greater than 100. To date, all 118 discovered elements have received individual permanent names and symbols. Therefore, systematic names and symbols are now used only for the undiscovered elements beyond element 118, oganesson. When such an element is discovered, it will keep its systematic name and symbol until its discovery meets the criteria of, and is accepted by, the IUPAC/IUPAP Joint Working Party, upon which the discoverers are invited to propose a permanent name and symbol. Once this name and symbol are proposed, there is still a comment period before they become official and replace the systematic name and symbol. At the time the systematic names were recommended (1978), names had already been officially given to all elements up to atomic number 103, lawrencium. While systematic names were given for elements 101 (mendelevium), 102 (nobelium), and 103 (lawrencium), these were offered only as "minor alternatives to the trivial names already approved by IUPAC". The following elements for some time had only systematic names as approved names, until their final replacement with trivial names after their discoveries were accepted.
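The naming scheme described above is mechanical enough to express in a few lines. The sketch below uses the IUPAC digit roots (0 nil, 1 un, 2 bi, 3 tri, 4 quad, 5 pent, 6 hex, 7 sept, 8 oct, 9 enn, restating the root table that is referenced but not reproduced above) together with the two elision rules: "enn" before "nil" drops one n, and the final "i" of "bi" and "tri" is elided before "-ium".

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z):
    """Temporary IUPAC name for element with atomic number z (101-999)."""
    if not 101 <= z <= 999:
        raise ValueError("systematic names apply only to 101 <= Z <= 999")
    name = "".join(ROOTS[int(d)] for d in str(z)) + "ium"
    name = name.replace("nnn", "nn")    # elision: ...ennnil... -> ...ennil...
    name = name.replace("iium", "ium")  # elision: bi/tri + ium -> bium/trium
    return name

def systematic_symbol(z):
    """Three-letter symbol: first letter of each root, first uppercased."""
    return "".join(ROOTS[int(d)][0] for d in str(z)).capitalize()
```

For example, element 115 yields "ununpentium"/"Uup", element 103 yields "unniltrium" (exercising the bi/tri elision), and element 190 yields "unennilium" (exercising the enn + nil elision).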
Physical sciences
Nomenclature
Chemistry
67514
https://en.wikipedia.org/wiki/Moscovium
Moscovium
Moscovium is a synthetic chemical element; it has symbol Mc and atomic number 115. It was first synthesized in 2003 by a joint team of Russian and American scientists at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. In December 2015, it was recognized as one of four new elements by the Joint Working Party of the international scientific bodies IUPAC and IUPAP. On 28 November 2016, it was officially named after Moscow Oblast, where the JINR is situated. Moscovium is an extremely radioactive element: its most stable known isotope, moscovium-290, has a half-life of only 0.65 seconds. In the periodic table, it is a p-block transactinide element. It is a member of the 7th period and is placed in group 15 as the heaviest pnictogen. Moscovium is calculated to have some properties similar to its lighter homologues, nitrogen, phosphorus, arsenic, antimony, and bismuth, and to be a post-transition metal, although it should also show several major differences from them. In particular, moscovium should also have significant similarities to thallium, as both have one rather loosely bound electron outside a quasi-closed shell. Chemical experimentation on single atoms has confirmed theoretical expectations that moscovium is less reactive than its lighter homologue bismuth. Over a hundred atoms of moscovium have been observed to date, all of which have been shown to have mass numbers from 286 to 290. History Discovery The first successful synthesis of moscovium was by a joint team of Russian and American scientists in August 2003 at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. Headed by Russian nuclear physicist Yuri Oganessian, the team included American scientists of the Lawrence Livermore National Laboratory. On February 2, 2004, the researchers reported in Physical Review C that they had bombarded americium-243 with calcium-48 ions to produce four atoms of moscovium.
These atoms decayed by emission of alpha-particles to nihonium in about 100 milliseconds. The Dubna–Livermore collaboration strengthened their claim to the discoveries of moscovium and nihonium by conducting chemical experiments on the final decay product 268Db. None of the nuclides in this decay chain were previously known, so existing experimental data was not available to support their claim. In June 2004 and December 2005, the presence of a dubnium isotope was confirmed by extracting the final decay products, measuring spontaneous fission (SF) activities and using chemical identification techniques to confirm that they behave like a group 5 element (as dubnium is known to be in group 5 of the periodic table). Both the half-life and the decay mode were confirmed for the proposed 268Db, lending support to the assignment of the parent nucleus to moscovium. However, in 2011, the IUPAC/IUPAP Joint Working Party (JWP) did not recognize the two elements as having been discovered, because current theory could not distinguish the chemical properties of group 4 and group 5 elements with sufficient confidence. Furthermore, the decay properties of all the nuclei in the decay chain of moscovium had not been previously characterized before the Dubna experiments, a situation which the JWP generally considers "troublesome, but not necessarily exclusive". Road to confirmation Two heavier isotopes of moscovium, 289Mc and 290Mc, were discovered in 2009–2010 as daughters of the tennessine isotopes 293Ts and 294Ts; the isotope 289Mc was later also synthesized directly and confirmed to have the same properties as found in the tennessine experiments. In 2011, the Joint Working Party of international scientific bodies International Union of Pure and Applied Chemistry (IUPAC) and International Union of Pure and Applied Physics (IUPAP) evaluated the 2004 and 2007 Dubna experiments, and concluded that they did not meet the criteria for discovery. 
Another evaluation of more recent experiments took place within the next few years, and a claim to the discovery of moscovium was again put forward by Dubna. In August 2013, a team of researchers at Lund University and at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany announced they had repeated the 2004 experiment, confirming Dubna's findings. Simultaneously, the 2004 experiment had been repeated at Dubna, now also creating the isotope 289Mc that could serve as a cross-bombardment for confirming the discovery of the tennessine isotope 293Ts in 2010. Further confirmation was published by the team at the Lawrence Berkeley National Laboratory in 2015. In December 2015, the IUPAC/IUPAP Joint Working Party recognized the element's discovery and assigned the priority to the Dubna–Livermore collaboration of 2009–2010, giving them the right to suggest a permanent name for it. While they did not recognise the experiments synthesising 287Mc and 288Mc as persuasive due to the lack of a convincing identification of atomic number via cross-reactions, they recognised the 293Ts experiments as persuasive because its daughter 289Mc had been produced independently and found to exhibit the same properties. In May 2016, Lund University (Lund, Scania, Sweden) and GSI cast some doubt on the syntheses of moscovium and tennessine. The decay chains assigned to 289Mc, the isotope instrumental in the confirmation of the syntheses of moscovium and tennessine, were found, based on a new statistical method, to be too different to belong to the same nuclide with a reasonably high probability. The reported 293Ts decay chains approved as such by the JWP were found to require splitting into individual data sets assigned to different tennessine isotopes. It was also found that the claimed link between the decay chains reported as from 293Ts and 289Mc probably did not exist.
(On the other hand, the chains from the non-approved isotope 294Ts were found to be congruent.) The multiplicity of states found when nuclides that are not even–even undergo alpha decay is not unexpected and contributes to the lack of clarity in the cross-reactions. This study criticized the JWP report for overlooking subtleties associated with this issue, and considered it "problematic" that the only argument for the acceptance of the discoveries of moscovium and tennessine was a link they considered to be doubtful. On June 8, 2017, two members of the Dubna team published a journal article answering these criticisms, analysing their data on the nuclides 293Ts and 289Mc with widely accepted statistical methods. They noted that the 2016 studies indicating non-congruence produced problematic results when applied to radioactive decay: both average and extreme decay times were excluded from the 90% confidence interval, and the decay chains that would be excluded from the chosen 90% confidence interval were more likely to be observed than those that would be included. The 2017 reanalysis concluded that the observed decay chains of 293Ts and 289Mc were consistent with the assumption that only one nuclide was present at each step of the chain, although it would be desirable to be able to directly measure the mass number of the originating nucleus of each chain as well as the excitation function of the 243Am+48Ca reaction. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, moscovium is sometimes known as eka-bismuth. In 1979, IUPAC recommended that the placeholder systematic element name ununpentium (with the corresponding symbol of Uup) be used until the discovery of the element is confirmed and a permanent name is decided.
Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 115", with the symbol of E115, (115) or even simply 115. On 30 December 2015, discovery of the element was recognized by the International Union of Pure and Applied Chemistry (IUPAC). According to IUPAC recommendations, the discoverer(s) of a new element has the right to suggest a name. A suggested name was langevinium, after Paul Langevin. Later, the Dubna team mentioned the name moscovium several times as one among many possibilities, referring to the Moscow Oblast where Dubna is located. In June 2016, IUPAC endorsed the latter proposal to be formally accepted by the end of the year, which it was on 28 November 2016. The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow. Other routes of synthesis In 2024, the team at JINR reported the observation of one decay chain of 289Mc while studying the reaction between 242Pu and 50Ti, aimed at producing more neutron-deficient livermorium isotopes in preparation for synthesis attempts of elements 119 and 120. This was the first successful report of a charged-particle exit channel – the evaporation of a proton and two neutrons, rather than only neutrons, as the compound nucleus de-excites to the ground state – in a hot fusion reaction between an actinide target and a projectile with atomic number greater than or equal to 20. Such reactions have been proposed as a novel synthesis route for yet-undiscovered isotopes of superheavy elements with several neutrons more than the known ones, which may be closer to the theorized island of stability and have longer half-lives. In particular, the isotopes 291Mc–293Mc may be reachable in these types of reactions within current detection limits. 
Predicted properties Other than nuclear properties, no properties of moscovium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Properties of moscovium remain unknown and only predictions are available. Nuclear stability and isotopes Moscovium is expected to be within an island of stability centered on copernicium (element 112) and flerovium (element 114). Due to the expected high fission barriers, any nucleus within this island of stability decays mainly by alpha decay, with perhaps some electron capture and beta decay. Although the known isotopes of moscovium do not actually have enough neutrons to be on the island of stability, they can be seen to approach the island, as in general the heavier isotopes are the longer-lived ones. The hypothetical isotope 291Mc is an especially interesting case, as it has only one neutron more than the heaviest known moscovium isotope, 290Mc. It could plausibly be synthesized as the daughter of 295Ts, which in turn could be made from the reaction. Calculations show that it may have a significant electron capture or positron emission decay mode in addition to alpha decay and also have a relatively long half-life of several seconds. This would produce 291Fl, 291Nh, and finally 291Cn, which is expected to be in the middle of the island of stability and have a half-life of about 1200 years, affording the most likely hope of reaching the middle of the island using current technology. Possible drawbacks are that the cross section of the production reaction of 295Ts is expected to be low and that the decay properties of superheavy nuclei this close to the line of beta stability are largely unexplored. The heavy isotopes from 291Mc to 294Mc might also be produced using charged-particle evaporation, in the 245Cm(48Ca,pxn) and 248Cm(48Ca,pxn) reactions. The light isotopes 284Mc, 285Mc, and 286Mc could be made from the 241Am+48Ca reaction.
They would undergo a chain of alpha decays, ending at transactinide isotopes too light to be made by hot fusion and too heavy to be made by cold fusion. The isotope 286Mc was found in 2021 at Dubna, in the reaction: it decays into the already-known 282Nh and its daughters. The yet lighter 282Mc and 283Mc could be made from 243Am+44Ca, but the cross-section would likely be lower. Other possibilities to synthesize nuclei on the island of stability include quasifission (partial fusion followed by fission) of a massive nucleus. Such nuclei tend to fission, expelling doubly magic or nearly doubly magic fragments such as calcium-40, tin-132, lead-208, or bismuth-209. It has been shown that the multi-nucleon transfer reactions in collisions of actinide nuclei (such as uranium and curium) might be used to synthesize the neutron-rich superheavy nuclei located at the island of stability, although formation of the lighter elements nobelium or seaborgium is more favored. One last possibility to synthesize isotopes near the island is to use controlled nuclear explosions to create a neutron flux high enough to bypass the gaps of instability at 258–260Fm and at mass number 275 (atomic numbers 104 to 108), mimicking the r-process in which the actinides were first produced in nature and the gap of instability around radon bypassed. Some such isotopes (especially 291Cn and 293Cn) may even have been synthesized in nature, but would have decayed away far too quickly (with half-lives of only thousands of years) and be produced in far too small quantities (about 10−12 the abundance of lead) to be detectable as primordial nuclides today outside cosmic rays. Physical and atomic In the periodic table, moscovium is a member of group 15, the pnictogens. It appears below nitrogen, phosphorus, arsenic, antimony, and bismuth. Every previous pnictogen has five electrons in its valence shell, forming a valence electron configuration of ns2np3. 
In moscovium's case, the trend should be continued and the valence electron configuration is predicted to be 7s27p3; therefore, moscovium will behave similarly to its lighter congeners in many respects. However, notable differences are likely to arise; a largely contributing effect is the spin–orbit (SO) interaction—the mutual interaction between the electrons' motion and spin. It is especially strong for the superheavy elements, because their electrons move much faster than in lighter atoms, at velocities comparable to the speed of light. In moscovium atoms, it lowers the 7s and the 7p electron energy levels (stabilizing the corresponding electrons), but two of the 7p electron energy levels are stabilized more than the other four. The stabilization of the 7s electrons is called the inert-pair effect, and the effect "tearing" the 7p subshell into the more stabilized and the less stabilized parts is called subshell splitting. Computational chemists see the split as a change of the second (azimuthal) quantum number l from 1 to 1/2 and 3/2 for the more stabilized and less stabilized parts of the 7p subshell, respectively. For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s2(7p1/2)2(7p3/2)1. These effects cause moscovium's chemistry to be somewhat different from that of its lighter congeners. The valence electrons of moscovium fall into three subshells: 7s (two electrons), 7p1/2 (two electrons), and 7p3/2 (one electron). The first two of these are relativistically stabilized and hence behave as inert pairs, while the last is relativistically destabilized and can easily participate in chemistry. (The 6d electrons are not destabilized enough to participate chemically.) Thus, the +1 oxidation state should be favored, like Tl+, and consistent with this the first ionization potential of moscovium should be around 5.58 eV, continuing the trend towards lower ionization potentials down the pnictogens.
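The two-versus-four split of the 7p levels follows directly from standard angular-momentum coupling. For a p subshell (orbital quantum number l = 1), coupling with the electron spin s = 1/2 gives the two relativistic sublevels

```latex
j = l \pm \tfrac{1}{2} = \tfrac{1}{2},\ \tfrac{3}{2},
```

each holding 2j + 1 electrons:

```latex
2\cdot\tfrac{1}{2} + 1 = 2 \ \text{(the stabilized } p_{1/2} \text{ spinor)}, \qquad
2\cdot\tfrac{3}{2} + 1 = 4 \ \text{(the destabilized } p_{3/2} \text{ spinor)}.
```

This is why two of the six 7p levels are stabilized more strongly than the other four.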
Moscovium and nihonium both have one electron outside a quasi-closed shell configuration that can be delocalized in the metallic state: thus they should have similar melting and boiling points (both melting around 400 °C and boiling around 1100 °C) due to the strength of their metallic bonds being similar. Additionally, the predicted ionization potential, ionic radius (1.5 Å for Mc+; 1.0 Å for Mc3+), and polarizability of Mc+ are expected to be more similar to Tl+ than its true congener Bi3+. Moscovium should be a dense metal due to its high atomic weight, with a density around 13.5 g/cm3. The electron of the hydrogen-like moscovium atom (oxidized so that it only has one electron, Mc114+) is expected to move so fast that it has a mass 1.82 times that of a stationary electron, due to relativistic effects. For comparison, the figures for hydrogen-like bismuth and antimony are expected to be 1.25 and 1.077 respectively. Chemical Moscovium is predicted to be the third member of the 7p series of chemical elements and the heaviest member of group 15 in the periodic table, below bismuth. Unlike the two previous 7p elements, moscovium is expected to be a good homologue of its lighter congener, in this case bismuth. In this group, each member is known to portray the group oxidation state of +5 but with differing stability. For nitrogen, the +5 state is mostly a formal explanation of molecules like N2O5: it is very difficult to have five covalent bonds to nitrogen due to the inability of the small nitrogen atom to accommodate five ligands. The +5 state is well represented for the essentially non-relativistic typical pnictogens phosphorus, arsenic, and antimony. However, for bismuth it becomes rare due to the relativistic stabilization of the 6s orbitals known as the inert-pair effect, so that the 6s electrons are reluctant to bond chemically. 
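The quoted mass increases can be checked against the standard relativistic factor for the 1s electron of a hydrogen-like ion, γ = 1/√(1 − (Zα)²), where α ≈ 1/137.036 is the fine-structure constant. This is a rough point-nucleus estimate, so the value it gives for Z = 115 (about 1.84) slightly overshoots the quoted 1.82, while the bismuth and antimony figures are reproduced closely:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def gamma_1s(z):
    """Relativistic mass factor for the 1s electron of a
    hydrogen-like ion with atomic number z (point-nucleus estimate)."""
    za = z * ALPHA
    return 1 / math.sqrt(1 - za * za)

# Hydrogen-like antimony (Z=51), bismuth (Z=83), moscovium (Z=115):
for z in (51, 83, 115):
    print(z, round(gamma_1s(z), 3))  # 1.077, 1.257, 1.839
```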
It is expected that moscovium will have an inert-pair effect for both the 7s and the 7p1/2 electrons, as the binding energy of the lone 7p3/2 electron is noticeably lower than that of the 7p1/2 electrons. Nitrogen(I) and bismuth(I) are known but rare and moscovium(I) is likely to show some unique properties, probably behaving more like thallium(I) than bismuth(I). Because of spin-orbit coupling, flerovium may display closed-shell or noble gas-like properties; if this is the case, moscovium will likely be typically monovalent as a result, since the cation Mc+ will have the same electron configuration as flerovium, perhaps giving moscovium some alkali metal character. Calculations predict that moscovium(I) fluoride and chloride would be ionic compounds, with an ionic radius of about 109–114 pm for Mc+, although the 7p1/2 lone pair on the Mc+ ion should be highly polarisable. The Mc3+ cation should behave like its true lighter homolog Bi3+. The 7s electrons are too stabilized to be able to contribute chemically and hence the +5 state should be impossible and moscovium may be considered to have only three valence electrons. Moscovium would be quite a reactive metal, with a standard reduction potential of −1.5 V for the Mc+/Mc couple. The chemistry of moscovium in aqueous solution should essentially be that of the Mc+ and Mc3+ ions. The former should be easily hydrolyzed and not be easily complexed with halides, cyanide, and ammonia. Moscovium(I) hydroxide (McOH), carbonate (Mc2CO3), oxalate (Mc2C2O4), and fluoride (McF) should be soluble in water; the sulfide (Mc2S) should be insoluble; and the chloride (McCl), bromide (McBr), iodide (McI), and thiocyanate (McSCN) should be only slightly soluble, so that adding excess hydrochloric acid would not noticeably affect the solubility of moscovium(I) chloride. 
Mc3+ should be about as stable as Tl3+ and hence should also be an important part of moscovium chemistry, although its closest homolog among the elements should be its lighter congener Bi3+. Moscovium(III) fluoride (McF3) and thiozonide (McS3) should be insoluble in water, similar to the corresponding bismuth compounds, while moscovium(III) chloride (McCl3), bromide (McBr3), and iodide (McI3) should be readily soluble and easily hydrolyzed to form oxyhalides such as McOCl and McOBr, again analogous to bismuth. Both moscovium(I) and moscovium(III) should be common oxidation states and their relative stability should depend greatly on what they are complexed with and the likelihood of hydrolysis. Like its lighter homologues ammonia, phosphine, arsine, stibine, and bismuthine, moscovine (McH3) is expected to have a trigonal pyramidal molecular geometry, with an Mc–H bond length of 195.4 pm and a H–Mc–H bond angle of 91.8° (bismuthine has bond length 181.7 pm and bond angle 91.9°; stibine has bond length 172.3 pm and bond angle 92.0°). In the predicted aromatic pentagonal planar cluster, analogous to pentazolate (), the Mc–Mc bond length is expected to be expanded from the extrapolated value of 312–316 pm to 329 pm due to spin–orbit coupling effects. Experimental chemistry The isotopes 288Mc, 289Mc, and 290Mc have half-lives long enough for chemical investigation. A 2024 experiment at the GSI, producing 288Mc via the 243Am+48Ca reaction, studied the adsorption of nihonium and moscovium on SiO2 and gold surfaces. The adsorption enthalpy of moscovium on SiO2 was determined experimentally as (68% confidence interval). Moscovium was determined to be less reactive with the SiO2 surface than its lighter congener bismuth, but more reactive than closed-shell copernicium and flerovium. This arises because of the relativistic stabilisation of the 7p1/2 shell.
Physical sciences
Group 15
Chemistry
67554
https://en.wikipedia.org/wiki/Invasive%20species
Invasive species
An invasive species is an introduced species that harms its new environment. Invasive species adversely affect habitats and bioregions, causing ecological, environmental, and/or economic damage. The term can also be used for native species that become harmful to their native environment after human alterations to its food web. Since the 20th century, invasive species have become serious economic, social, and environmental threats worldwide. Invasion of long-established ecosystems by organisms is a natural phenomenon, but human-facilitated introductions have greatly increased the rate, scale, and geographic range of invasion. For millennia, humans have served as both accidental and deliberate dispersal agents, beginning with their earliest migrations, accelerating in the Age of Discovery, and accelerating again with international trade. Notable invasive plant species include the kudzu vine, giant hogweed, Japanese knotweed, and yellow starthistle. Notable invasive animals include European rabbits, domestic cats, and carp. Terminology Invasive species are the subset of established non-native (alien or naturalized) species that are a threat to native species and biodiversity. The term "invasive" is poorly defined and often very subjective. Invasive species may be plants, animals, fungi, and microbes; some definitions include native species that have invaded human habitats such as farms and landscapes. Some broaden the term to include indigenous or "native" species that have colonized natural areas. Some sources name Homo sapiens as an invasive species, but broad appreciation of humans' learning capacity, behavioral potential, and plasticity may argue against any such fixed categorization. The definition of "native" can be controversial. For example, the ancestors of Equus ferus (modern horses) evolved in North America and radiated to Eurasia before becoming extinct in North America.
When horses were reintroduced to North America in 1493 by Spanish conquistadors, it was debatable whether the resulting feral horses were native or exotic to the continent of their evolutionary ancestors. While invasive species can be studied within many subfields of biology, most research on invasive organisms has been in ecology and biogeography. Much of the work has been influenced by Charles Elton's 1958 book The Ecology of Invasions by Animals and Plants, which created a generalized picture of biological invasions. Studies remained sparse until the 1990s. This research, largely field observational studies, has disproportionately been concerned with terrestrial plants. The rapid growth of the field has driven a need to standardize the language used to describe invasive species and events. Despite this, little standard terminology exists; the field lacks any official designation but is commonly referred to as "invasion ecology" or more generally "invasion biology". This lack of standard terminology has arisen due to the interdisciplinary nature of the field, which borrows terms from disciplines such as agriculture, zoology, and pathology, as well as due to studies being performed in isolation. In an attempt to avoid the ambiguous, subjective, and pejorative vocabulary that so often accompanies discussion of invasive species even in scientific papers, Colautti and MacIsaac proposed a new nomenclature system based on biogeography rather than on taxa. By discarding taxonomy, human health, and economic factors, this model focused only on ecological factors. The model evaluated individual populations rather than entire species. It classified each population based on its success in that environment. This model applied equally to indigenous and to introduced species, and did not automatically categorize successful introductions as harmful. The USDA's National Invasive Species Information Center defines invasive species very narrowly.
According to Executive Order 13112, "'Invasive species' means an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health." Causes Typically, an introduced species must survive at low population densities before it becomes invasive in a new location. At low population densities, it can be difficult for the introduced species to reproduce and maintain itself in a new location, so a species might reach a location multiple times before it becomes established. Repeated patterns of human movement, such as ships sailing to and from ports or cars driving up and down highways, offer repeated opportunities for establishment (a high propagule pressure). Ecosystem-based mechanisms In ecosystems, the availability of resources determines the impact of additional species on the ecosystem. Stable ecosystems have a resource equilibrium, which can be changed fundamentally by the arrival of invasive species. When changes such as a forest fire occur, normal ecological succession favors native grasses and forbs. An introduced species that can spread faster than natives can outcompete native species for food, squeezing them out. Nitrogen and phosphorus are often the limiting factors in these situations. Every species occupies an ecological niche in its native ecosystem; some species fill large and varied roles, while others are highly specialized. Invading species may occupy unused niches, or create new ones. For example, edge effects describe what happens when part of an ecosystem is disturbed, as when land is cleared for agriculture. The boundary between the remaining undisturbed habitat and the newly cleared land itself forms a distinct habitat, creating new winners and losers and possibly hosting species that would not thrive outside the boundary habitat. In 1958, Charles S. Elton claimed that ecosystems with higher species diversity were less subject to invasive species because fewer niches remained unoccupied.
Other ecologists later pointed to highly diverse but heavily invaded ecosystems, arguing that ecosystems with high species diversity were more susceptible to invasion. This debate hinged on the spatial scale of invasion studies. Small-scale studies tended to show a negative relationship between diversity and invasion, while large-scale studies tended to show the reverse, perhaps a side-effect of invasives' ability to capitalize on increased resource availability and weaker species interactions that are more common when larger samples are considered. However, this pattern does not seem to hold true for invasive vertebrates. Island ecosystems may be more prone to invasion because their species face few strong competitors and predators, and because their distance from colonizing species populations makes them more likely to have "open" niches. For example, native bird populations on Guam have been decimated by the invasive brown tree snake. In New Zealand, the first invasive species were the dogs and rats brought by Polynesian settlers around 1300. These and other introductions devastated endemic New Zealand species. The colonization of Madagascar brought similar harm to its ecosystems. Logging has caused harm directly by destroying habitat, and has allowed non-native species such as prickly pear and silver wattle to invade. The water hyacinth forms dense mats on water surfaces, limiting light penetration and hence harming aquatic organisms, and causing substantial management costs. The shrub lantana (Lantana camara) is now considered invasive in over 60 countries, and has invaded large areas in several countries, prompting aggressive federal efforts to control it. Primary geomorphological effects of invasive plants are bioconstruction and bioprotection. For example, kudzu (Pueraria montana), a vine native to Asia, was widely introduced in the southeastern United States in the early 20th century to control soil erosion.
The primary geomorphological effects of invasive animals are bioturbation, bioerosion, and bioconstruction. For example, invasions of the Chinese mitten crab (Eriocheir sinensis) have resulted in higher bioturbation and bioerosion rates. A native species can become harmful and effectively invasive to its native environment after human alterations to its food web. This has been the case with the purple sea urchin (Strongylocentrotus purpuratus), which has decimated kelp forests along the northern California coast due to overharvesting of its natural predator, the California sea otter (Enhydra lutris).

Species-based mechanisms

Invasive species appear to have specific traits or specific combinations of traits that allow them to outcompete native species. In some cases, the competition is about rates of growth and reproduction. In other cases, species interact with each other more directly. One study found that 86% of invasive species could be identified from such traits alone. Another study found that invasive species often had only a few of the traits, and that noninvasive species had these also. Common invasive species traits include fast growth and rapid reproduction, such as vegetative reproduction in plants; association with humans; and prior successful invasions. Domestic cats are effective predators; they have become feral and invasive in places such as the Florida Keys. An introduced species might become invasive if it can outcompete native species for resources. If these species evolved under great competition or predation, then the new environment may host fewer able competitors, allowing the invader to proliferate. Ecosystems used to their fullest capacity by native species can be modeled as zero-sum systems, in which any gain for the invader is a loss for the native. However, such unilateral competitive superiority (and extinction of native species with increased populations of the invader) is not the rule.
An invasive species might be able to use resources previously unavailable to native species, such as deep water accessed by a long taproot, or to live on previously uninhabited soil types. For example, barbed goatgrass was introduced to California on serpentine soils, which have low water-retention, low nutrient levels, a high magnesium/calcium ratio, and possible heavy metal toxicity. Plant populations on these soils tend to show low density, but goatgrass can form dense stands on these soils and crowd out native species. Invasive species might alter their environment by releasing chemical compounds, modifying abiotic factors, or affecting the behaviour of herbivores, impacting on other species. Some, like Kalanchoe daigremontana, produce allelopathic compounds that inhibit competitors. Others like Stapelia gigantea facilitate the growth of seedlings of other species in arid environments by providing appropriate microclimates and preventing herbivores from eating seedlings. Changes in fire regimens are another form of facilitation. Bromus tectorum, originally from Eurasia, is highly fire-adapted. It spreads rapidly after burning, and increases the frequency and intensity of fires by providing large amounts of dry detritus during the fire season in western North America. Where it is widespread, it has altered the local fire regimen so much that native plants cannot survive the frequent fires, allowing it to become dominant in its introduced range. Ecological facilitation occurs where one species physically modifies a habitat in ways advantageous to other species. For example, zebra mussels increase habitat complexity on lake floors, providing crevices in which invertebrates live. This increase in complexity, together with the nutrition provided by the waste products of mussel filter-feeding, increases the density and diversity of benthic invertebrate communities. Introduced species may spread rapidly and unpredictably. 
When bottlenecks and founder effects cause a great decrease in population size and constrict genetic variation, individuals begin to show additive variance as opposed to epistatic variance. This conversion can lead to increased variance in the founding populations, which permits rapid evolution. Selection may then act on the capacity to disperse as well as on physiological tolerance to new stressors in the environment, such as changed temperature and different predators and prey. Rapid adaptive evolution through intraspecific phenotypic plasticity, pre-adaptation and post-introduction evolution leads to offspring that have higher fitness. Critically, plasticity permits changes to better suit the individual to its environment. Pre-adaptations and evolution after the introduction reinforce the success of the introduced species. The enemy release hypothesis states that evolution leads to ecological balance in every ecosystem. No single species can occupy a majority of an ecosystem due to the presence of competitors, predators, and diseases. Introduced species moved to a novel habitat can become invasive, with rapid population growth, when these controls do not exist in the new ecosystem.

Vectors

Non-native species have many vectors, but most are associated with human activity. Natural range extensions are common, but humans often carry specimens faster and over greater distances than natural forces. An early human vector occurred when prehistoric humans introduced the Pacific rat (Rattus exulans) to Polynesia. Vectors include plants or seeds imported for horticulture. The pet trade moves animals across borders, where they can escape and become invasive. Organisms stow away on transport vehicles. Incidental human-assisted transfer is the main cause of introductions other than for polar regions. Diseases may be vectored by invasive insects: the Asian citrus psyllid carries the bacterial disease citrus greening.
The arrival of invasive propagules to a new site is a function of the site's invasibility. Many invasive species, once they are dominant in the area, become essential to the ecosystem of that area, and their removal could be harmful. Economics plays a major role in exotic species introduction. High demand for the valuable Chinese mitten crab is one explanation for the possible intentional release of the species in foreign waters.

Within the aquatic environment

Maritime trade has rapidly affected the way marine organisms are transported within the ocean; new means of species transport include hull fouling and ballast water transport. In fact, Molnar et al. 2008 documented the pathways of hundreds of marine invasive species and found that shipping was the dominant mechanism for the transfer of invasive species. Many marine organisms can attach themselves to vessel hulls. Such organisms are easily transported from one body of water to another, and are a significant risk factor for a biological invasion event. Control of vessel hull fouling is voluntary and there are no regulations currently in place to manage hull fouling. However, the governments of California and New Zealand have announced more stringent controls for vessel hull fouling within their respective jurisdictions. Another vector of non-native aquatic species is ballast water taken up at sea and released in port by transoceanic vessels. Some 10,000 species are transported via ballast water each day. Many of these are harmful. For example, freshwater zebra mussels from Eurasia most likely reached the Great Lakes via ballast water. These outcompete native organisms for oxygen and food, and can be transported in the small puddle left in a supposedly empty ballast tank. Regulations attempt to mitigate such risks, not always successfully. Climate change is causing an increase in ocean temperature.
This in turn will cause range shifts in organisms, which could harm the environment as new species interactions occur. For example, organisms in a ballast tank of a ship traveling from the temperate zone through tropical waters may experience temperature fluctuations of as much as 20 °C. Heat challenges during transport may enhance the stress tolerance of species in their non-native range, by selecting for genotypes in the founder population that will survive a second applied heat stress, such as increased ocean temperature.

Effects of wildfire and firefighting

Invasive species often exploit disturbances to an ecosystem (wildfires, roads, foot trails) to colonize an area. Large wildfires can sterilize soils, while adding nutrients. Invasive plants that can regenerate from their roots then have an advantage over natives that rely on seeds for propagation.

Adverse effects

Invasive species can affect the invaded habitats and bioregions adversely, causing ecological, environmental, or economic damage.

Ecological

The European Union defines "Invasive Alien Species" as those that are outside their natural distribution area, and that threaten biological diversity. Biotic invasion is one of the five top drivers for global biodiversity loss, and is increasing because of tourism and globalization. This may be particularly true in inadequately regulated fresh water systems, though quarantines and ballast water rules have improved the situation. Invasive species may drive local native species to extinction via competitive exclusion, niche displacement, or hybridisation with related native species. Therefore, besides their economic ramifications, alien invasions may result in extensive changes in the structure, composition and global distribution of the biota at sites of introduction, leading ultimately to the homogenisation of the world's fauna and flora and the loss of biodiversity.
It is difficult to unequivocally attribute extinctions to a species invasion, though there is for example strong evidence that the extinction of about 90 amphibian species was caused by the chytrid fungus spread by international trade. Multiple successive introductions of different non-native species can worsen the total effect, as with the introductions of the amethyst gem clam and the European green crab. The gem clam was introduced into California's Bodega Harbor from the US East Coast a century ago. On its own, it never displaced native clams (Nutricola spp.). In the mid-1990s, the introduction of the European green crab resulted in an increase of the amethyst gem clam at the expense of the native clams. In India, multiple invasive plants have invaded 66% of natural areas, reducing the densities of native forage plants, reducing habitat use by wild herbivores, and threatening the long-term sustenance of dependent carnivores, including the tiger. Invasive species can change the functions of ecosystems. For example, invasive plants can alter the fire regime (cheatgrass, Bromus tectorum), nutrient cycling (smooth cordgrass Spartina alterniflora), and hydrology (Tamarix) in native ecosystems. Invasive species that are closely related to rare native species have the potential to hybridize with the native species. Harmful effects of hybridization have led to a decline and even extinction of native species. For example, hybridization with introduced cordgrass, Spartina alterniflora, threatens the existence of California cordgrass (Spartina foliosa) in San Francisco Bay. Competition from invasive species is one reason that 400 of the 958 species listed as endangered under the Endangered Species Act are considered at risk. The unintentional introduction of forest pest species and plant pathogens can change forest ecology and damage the timber industry. Overall, forest ecosystems in the U.S. are widely invaded by exotic pests, plants, and pathogens.
The Asian long-horned beetle (Anoplophora glabripennis) was first introduced into the U.S. in 1996, and was expected to infest and damage millions of acres of hardwood trees. As of 2005 thirty million dollars had been spent in attempts to eradicate this pest and protect millions of trees in the affected regions. The woolly adelgid has inflicted damage on old-growth spruce, fir and hemlock forests and damages the Christmas tree industry. Chestnut blight and Dutch elm disease are plant pathogens with serious impacts. Garlic mustard, Alliaria petiolata, is one of the most problematic invasive plant species in eastern North American forests, where it is highly invasive in the understory, reducing the growth rate of tree seedlings and threatening to modify the forest's tree composition. Native species can be threatened with extinction through the process of genetic pollution. Genetic pollution is unintentional hybridization and introgression, which leads to homogenization or replacement of local genotypes as a result of either a numerical or fitness advantage of the introduced species. Genetic pollution occurs either through introduction or through habitat modification, where previously isolated species are brought into contact with the new genotypes. Invading species have been shown to adapt to their new environments in a remarkably short amount of time. The population size of invading species may remain small for a number of years and then experience an explosion in population, a phenomenon known as "the lag effect". Hybrids resulting from invasive species interbreeding with native species can incorporate their genotypes into the gene pool over time through introgression. Similarly, in some instances a small invading population can threaten much larger native populations. For example, Spartina alterniflora was introduced in the San Francisco Bay and hybridized with native Spartina foliosa.
The higher pollen count and male fitness of the invading species resulted in introgression that threatened the native populations due to lower pollen counts and lower viability of the native species. Reduction in fitness is not always apparent from morphological observations alone. Some degree of gene flow is normal, and preserves constellations of genes and genotypes. An example of this is the interbreeding of migrating coyotes with the red wolf, in areas of eastern North Carolina where the red wolf was reintroduced, reducing red wolf numbers.

Environmental

In South Africa's Cape Town region, analysis demonstrated that the restoration of priority source water sub-catchments through the removal of thirsty alien plant invasions (such as Australian acacias, pines and eucalyptus, and Australian black wattle) would generate expected annual water gains of 50 billion liters within 5 years compared to the business-as-usual scenario (which is important as Cape Town experiences significant water scarcity). This is equivalent to one-sixth of the city's current supply needs. These annual gains will double within 30 years. The catchment restoration is significantly more cost-effective than other water augmentation solutions (1/10 the unit cost of alternative options). A water fund has been established, and these exotic species are being eradicated.

Human health

Invasive species can affect human health. With the alteration in ecosystem functionality (due to homogenization of biota communities), invasive species have had negative effects on human well-being, including reduced resource availability, the unrestrained spread of human diseases, and harm to recreational and educational activities and tourism. Alien species have caused diseases including human immunodeficiency virus (HIV), monkeypox, and severe acute respiratory syndrome (SARS). Invasive species and accompanying control efforts can have long-term public health implications.
For instance, pesticides applied to treat a particular pest species could pollute soil and surface water. Encroachment of humans into previously remote ecosystems has exposed the wider population to exotic diseases such as HIV. Introduced birds (e.g. pigeons), rodents and insects (e.g. mosquito, flea, louse and tsetse fly pests) can serve as vectors and reservoirs of human afflictions. Throughout recorded history, epidemics of human diseases, such as malaria, yellow fever, typhus, and bubonic plague, spread via these vectors. A recent example of an introduced disease is the spread of the West Nile virus, which killed humans, birds, mammals, and reptiles. The introduced Chinese mitten crabs are carriers of Asian lung fluke. Waterborne disease agents, such as cholera bacteria (Vibrio cholerae), and causative agents of harmful algal blooms are often transported via ballast water.

Economic

Globally, invasive species management and control are substantial economic burdens, with expenditures reaching approximately $1.4 trillion annually. The economic impact of invasive alien species alone was estimated to exceed $423 billion annually as of 2019. This cost has exhibited a significant increase, quadrupling every decade since 1970, underscoring the escalating financial implications of these biological invasions. Invasive species contribute to ecological degradation, altering ecosystem functionality and reducing the services ecosystems provide. This necessitates additional expenditures to control the spread of biological invasions, mitigate further impacts, and restore affected ecosystems. For example, the damage caused by 79 invasive species between 1906 and 1991 in the United States has been estimated at US$120 billion. Similarly, in China, invasive species have been reported to reduce the country's gross domestic product (GDP) by 1.36% per year. The management of biological invasions can be costly.
In Australia, for instance, the expense to monitor, control, manage, and research invasive weed species is approximately AU$116.4 million per year, with costs directed solely to central and local government. While in some cases invasive species may offer economic benefits, such as the potential for commercial forestry from invasive trees, these benefits are generally overshadowed by the substantial costs associated with biological invasions. In most cases, the economic returns from invasive species are far less than the costs they impose.

United States

In the Great Lakes region the sea lamprey is an invasive species. In its original habitat, it had co-evolved as a parasite that did not kill its host. However, in the Great Lakes region, it acts as a predator and can consume up to 40 pounds of fish in its 12–18 month feeding period. Sea lampreys prey on all types of large fish such as lake trout and salmon. The sea lampreys' destructive effects on large fish negatively affect the fishing industry and have helped cause the collapse of the population of some species. Economic costs from invasive species can be separated into direct costs through production loss in agriculture and forestry, and management costs. Estimated damage and control costs of invasive species in the U.S. amount to more than $138 billion annually. Economic losses can occur through loss of recreational and tourism revenues. When economic costs of invasions are calculated as production loss and management costs, they are low because they do not consider environmental damage; if monetary values were assigned to the extinction of species, loss in biodiversity, and loss of ecosystem services, costs from impacts of invasive species would drastically increase. It is often argued that the key to invasive species management is early detection and rapid response.
However, early response only helps when the invasive species is not frequently reintroduced into the managed area, and the cost of response is affordable. Weeds reduce yield in agriculture. Many weeds are accidental introductions that accompany imports of commercial seeds and plants. Introduced weeds in pastures compete with native forage plants, threaten young cattle (e.g., leafy spurge, Euphorbia virgata) or are unpalatable because of thorns and spines (e.g., yellow starthistle). Forage loss from invasive weeds on pastures amounts to nearly US$1 billion in the U.S. A decline in pollinator services and loss of fruit production has been caused by honey bees infected by the invasive varroa mite. Introduced rats (Rattus rattus and R. norvegicus) have become serious pests on farms, destroying stored grains. The introduction of leaf miner flies (Agromyzidae), including the American serpentine leaf miner (Liriomyza trifolii), to California has caused losses in California's floriculture industry, as the larvae of these invasive species feed on ornamental plants. Invasive plant pathogens and insect vectors for plant diseases can suppress agricultural yields and harm nursery stock. Citrus greening is a bacterial disease vectored by the invasive Asian citrus psyllid. As a result, citrus is under quarantine and highly regulated in areas where the psyllid has been found. Invasive species can impact outdoor recreation, such as fishing, hunting, hiking, wildlife viewing, and water-based activities. They can damage environmental services including water quality, plant and animal diversity, and species abundance, though the extent of this is under-researched. Eurasian watermilfoil (Myriophyllum spicatum) in parts of the US, fills lakes with plants, complicating fishing and boating. The loud call of the introduced common coqui depresses real estate values in affected neighborhoods of Hawaii. 
The large webs of the orb-weaving spider Zygiella x-notata, invasive in California, disrupt garden work.

Europe

The overall economic cost of invasive alien species in Europe between 1960 and 2020 has been estimated at around US$140 billion (including potential costs that may or may not have actually materialised) or US$78 billion (only including observed costs known to have materialised). These estimates are very conservative. Models based on these data suggest a true annual cost of around US$140 billion in 2020.

Italy

Italy is one of the most invaded countries in Europe, with an estimated more than 3,000 alien species. The impacts of invasive alien species on the economy have been wide-ranging, from management costs, to loss of crops, to infrastructure damage. The overall economic cost of invasions to Italy between 1990 and 2020 was estimated at US$819.76 million (EUR€704.78 million). However, only 15 recorded species have more reliably estimated costs, hence the actual cost may be much larger than the aforementioned sum.

France

France has an estimated minimum of 2,750 introduced and invasive alien species. Renault et al. (2021) obtained 1,583 cost records for 98 invasive alien species and found that they caused a conservative total cost between US$1.2 billion and 11.5 billion over the period 1993–2018. This study extrapolated costs for species invading France but for which costs were reported only in other countries, which yielded an additional cost ranging from US$151 million to $3.03 billion. Damage costs were nearly eight times higher than management expenditure. Insects, and in particular the Asian tiger mosquito Aedes albopictus and the yellow fever mosquito Ae. aegypti, totalled very high economic costs, followed by non-graminoid terrestrial flowering and aquatic plants (Ambrosia artemisiifolia, Ludwigia sp. and Lagarosiphon major).
Over 90% of alien species currently recorded in France had no costs reported in the literature, resulting in high biases in taxonomic, regional and activity sector coverages. However, a lack of reports does not mean that there are no negative consequences and thus no costs.

Favorable effects

The entomologist Chris D. Thomas argues that most introduced species are neutral or beneficial with respect to other species, but this is a minority opinion: the scientific community overwhelmingly considers their effects on biodiversity to be negative. Some invasive species can provide a suitable habitat or food source for other organisms. In areas where a native has become extinct or reached a point that it cannot be restored, non-native species can fill their role. For instance, in the US, the endangered southwestern willow flycatcher mainly nests in the non-native tamarisk. The introduced mesquite is an aggressive invasive species in India, but is the preferred nesting site of native waterbirds in small cities like Udaipur in Rajasthan. Similarly, Ridgway's rail has adapted to the invasive hybrid of Spartina alterniflora and Spartina foliosa, which offers better cover and nesting habitat. In Australia, saltwater crocodiles, which had become endangered, have recovered by feeding on introduced feral pigs. Non-native species can act as catalysts for restoration, increasing the heterogeneity and biodiversity in an ecosystem. This can create microclimates in sparse and eroded ecosystems, promoting the growth and reestablishment of native species. For example, in Kenya, guava trees in farmland are attractive to many fruit-eating birds, which drop seeds from distant rainforest trees beneath the guavas, encouraging forest regeneration. Non-native species can provide ecosystem services, functioning as biocontrol agents to limit the effects of invasive agricultural pests. Asian oysters, for example, filter water pollutants better than native oysters in Chesapeake Bay.
Some species have invaded an area so long ago that they are considered to have naturalised there. For example, the bee Lasioglossum leucozonium, shown by population genetic analysis to be an invasive species in North America, has become an important pollinator of caneberry (Rubus spp.) as well as cucurbits, apple trees, and blueberry bushes. In the US, the endangered Taylor's checkerspot butterfly has come to rely on invasive ribwort plantain as the food plant for its caterpillars. Some invasions offer potential commercial benefits. For instance, silver carp and common carp can be harvested for human food and exported to markets already familiar with the product, or processed into pet foods, or mink feed. Water hyacinth can be turned into fuel by methane digesters, and other invasive plants can be harvested and utilized as a source of bioenergy.

Control, eradication, and study

Humans are versatile enough to remediate the adverse effects of species invasions. The public is motivated by invasive species that impact their local area. The control of alien species populations is important in the conservation of biodiversity in natural ecosystems. One of the most promising methods for controlling alien species is genetic control.

Cargo inspection and quarantine

The original motivation was to protect against agricultural pests while still allowing the export of agricultural products. In 1994 the first set of global standards was agreed to, including the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement). These are overseen by the World Trade Organization. The International Maritime Organization oversees the International Convention for the Control and Management of Ships' Ballast Water and Sediments (the Ballast Water Management Convention). Although primarily targeted at other, more general environmental concerns, the Convention on Biological Diversity does specify some steps that its members should take to control invasive species.
The CBD is the most significant international agreement on the environmental consequences of invasive species; most such measures are voluntary and unspecific.

Slowing spread

Firefighters are becoming responsible for decontamination of their own equipment, public water equipment, and private water equipment, due to the risk of aquatic invasive species transfer. In the United States this is especially a concern for wildland firefighters because quagga and zebra mussel invasion and wildfires co-occur in the American West.

Reestablishing species

Island restoration deals with the eradication of invasive species. A 2019 study suggests that if eradications of invasive animals were conducted on just 169 islands, the survival prospects of 9.4% of the Earth's most highly threatened terrestrial insular vertebrates would be improved. Invasive vertebrate eradication on islands aligns with United Nations Sustainable Development Goal 15 and associated targets. Rodents were carried to South Georgia, an island in the southern Atlantic Ocean with no permanent inhabitants, in the 18th century by sealing and whaling ships. They soon wrought havoc on the island's bird population, eating eggs and attacking chicks. In 2018, South Georgia was declared free of invasive rodents after a multi-year extermination effort. Bird populations have rebounded, including the South Georgia pipit and South Georgia pintail, both endemic to the island.

Taxon substitution

Non-native species can be introduced to fill an ecological engineering role that previously was performed by a native species now extinct. The procedure is known as taxon substitution. On many islands, tortoise extinction has resulted in dysfunctional ecosystems with respect to seed dispersal and herbivory. On the offshore islets of Mauritius, tortoises now extinct had served as the keystone herbivores.
Introduction of the non-native Aldabra giant tortoises on two islets in 2000 and 2007 has begun to restore ecological equilibrium. The introduced tortoises are dispersing seeds of several native plants and are selectively grazing invasive plant species. Grazing and browsing are expected to replace ongoing intensive manual weeding, and the introduced tortoises are already breeding.

By using them as food

The practice of eating invasive species to reduce their populations has been explored. In 2005 Chef Bun Lai of Miya's Sushi in New Haven, Connecticut created the first menu dedicated to invasive species. At that time, half the items on the menu were conceptual because those invasive species were not yet commercially available. By 2013, Miya's offered invasive aquatic species such as Chesapeake blue catfish, Florida lionfish, Kentucky silver carp, Georgia cannonball jellyfish, and invasive plants such as Japanese knotweed and autumn olive. Joe Roman, a Harvard and University of Vermont conservation biologist and recipient of the Rachel Carson Environmental Award, runs a website named "Eat The Invaders". In the 21st century, organizations including the Reef Environmental Educational Foundation and the Institute for Applied Ecology have published cookbooks and recipes using invasive species as ingredients. Invasive plant species have been explored as a sustainable source of beneficial phytochemicals and edible protein. Proponents of eating invasive organisms argue that humans can eat away any species they have an appetite for, pointing to the many animals which humans have hunted to extinction, such as the Caribbean monk seal and the passenger pigeon. They further point to the success that Jamaica has had in significantly decreasing the population of lionfish by encouraging the consumption of the fish.
Skeptics point out that once a foreign species has entrenched itself in a new place – such as the Indo-Pacific lionfish that has now virtually taken over the waters of the Western Atlantic, Caribbean and Gulf of Mexico – eradication is almost impossible. Critics argue that encouraging consumption might have the unintended effect of spreading harmful species even more widely.

Pesticides and herbicides

Pesticides are commonly used to control invasives. Herbicides used against invasive plants include fungal bioherbicides. Although the effective population size of an introduced population is bottlenecked, some genetic variation has been known to provide invasive plants with resistance against these fungal bioherbicides. Invasive populations of Bromus tectorum exist with resistance to Ustilago bullata used as a biocontrol, and a similar problem has been reported in Microstegium vimineum subject to Bipolaris microstegii and B. drechsleri. This is not solely a character of invasive plant genetics but is normal for wild plants such as the weed Linum marginale and its fungal pathogen Melampsora lini. Crops have another disadvantage relative to any uncontrolled plant – wild native or invasive – namely their greater uptake of nutrients, as they are deliberately bred to increase nutrient intake to enable increased product output.

Gene drive

A gene drive could be used to eliminate invasive species and has, for example, been proposed as a way to eliminate invasive mammal species in New Zealand. Briefly put, an individual of a species may have two versions of a gene, one with a desired coding outcome and one not, with offspring having a 50:50 chance of inheriting one or the other. Genetic engineering can be used to inhibit inheritance of the non-desired gene, resulting in faster propagation of the desired gene in subsequent generations.
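The inheritance arithmetic just described can be sketched with a toy one-locus model. This is an illustrative simplification (assuming random mating, no fitness cost, and a drive efficiency parameter d that I introduce here for illustration), not a model from any cited study: a heterozygote normally transmits the engineered allele with probability 0.5, and a homing drive biases that probability toward 1, so the allele frequency ratchets upward each generation.

```python
# Toy one-locus model of gene-drive spread. Assumptions (for
# illustration only): random mating, no fitness cost, and a drive
# that biases transmission from heterozygotes above the Mendelian 0.5.

def next_frequency(p: float, d: float) -> float:
    """Drive-allele frequency after one generation: homozygous
    carriers (p^2) always transmit it; heterozygotes (2p(1-p))
    transmit it with probability d."""
    return p * p + 2 * p * (1 - p) * d

def generations_to(p0: float, d: float, target: float) -> int:
    """Generations until the drive allele reaches the target frequency."""
    p, gens = p0, 0
    while p < target:
        p = next_frequency(p, d)
        gens += 1
    return gens

# With Mendelian transmission (d = 0.5) the frequency stays put;
# a 95%-efficient drive sweeps from 1% to 99% within a handful of
# generations rather than drifting.
print(next_frequency(0.01, 0.5))        # ~0.01: no spread without the drive
print(generations_to(0.01, 0.95, 0.99))
```

With d = 0.5 the recursion reduces to p' = p, the ordinary 50:50 case described above; raising d toward 1 is what lets the engineered allele spread far faster than Mendelian inheritance would allow.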
Gene drives for biodiversity conservation purposes are being explored as part of The Genetic Biocontrol of Invasive Rodents program because they offer the potential for reduced risk to non-target species and reduced costs when compared to traditional invasive species removal techniques. A wider outreach network exists to raise awareness of the value of gene drive research for the public good. Some scientists are concerned that the technique could wipe out species in their original native habitats. The gene could mutate, causing unforeseen problems, or hybridize with native species.
Predicting invasive plants
Accurately predicting the impacts of non-native plants can be an especially effective management option because most introductions of non-native plant species are intentional. Weed risk assessments attempt to predict the chance that a specific plant will have negative effects in a new environment, often using a standardized questionnaire. The resulting total score is associated with a management action such as "prevent introduction". Assessments commonly use information about the physiology, life history, native range, and phylogenetic relationships of the species evaluated. The effectiveness of the approach is debated.
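The questionnaire-to-action mapping described above can be sketched as follows. The questions, weights, and score cut-offs here are entirely hypothetical, invented for illustration; real protocols use dozens of calibrated questions.

```python
# Hypothetical weed risk assessment: each yes/no answer contributes a
# weight to a total score, and the total selects a management action.
# Questions, weights, and cut-offs are illustrative only.

QUESTIONS = {
    "naturalized elsewhere": 2,
    "fast growth rate": 1,
    "wind-dispersed seed": 1,
    "climate match with region": 2,
    "history of crop damage": 3,
}

def risk_score(answers):
    """answers: dict mapping question text -> bool."""
    return sum(w for q, w in QUESTIONS.items() if answers.get(q, False))

def management_action(score):
    # Thresholds are invented for this sketch.
    if score >= 6:
        return "prevent introduction"
    if score >= 3:
        return "further evaluation required"
    return "allow introduction"

answers = {"naturalized elsewhere": True,
           "climate match with region": True,
           "history of crop damage": True}
action = management_action(risk_score(answers))  # score 7 -> "prevent introduction"
```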
Biology and health sciences
Ecology
https://en.wikipedia.org/wiki/Tennessine
Tennessine
Tennessine is a synthetic chemical element; it has symbol Ts and atomic number 117. It has the second-highest atomic number and joint-highest atomic mass of all known elements and is the penultimate element of the 7th period of the periodic table. It is named after the U.S. state of Tennessee, where key research institutions involved in its discovery are located (however, the IUPAC says that the element is named after the "region of Tennessee"). The discovery of tennessine was officially announced in Dubna, Russia, by a Russian–American collaboration in April 2010, which makes it the most recently discovered element. One of its daughter isotopes was created directly in 2011, partially confirming the results of the experiment. The experiment itself was repeated successfully by the same collaboration in 2012 and by a joint German–American team in May 2014. In December 2015, the Joint Working Party of the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP), which evaluates claims of discovery of new elements, recognized the element and assigned the priority to the Russian–American team. In June 2016, the IUPAC published a declaration stating that the discoverers had suggested the name tennessine, a name which was officially adopted in November 2016. Tennessine may be located in the "island of stability", a concept that explains why some superheavy elements are more stable despite an overall trend of decreasing stability for elements beyond bismuth on the periodic table. The synthesized tennessine atoms have lasted tens to hundreds of milliseconds. In the periodic table, tennessine is expected to be a member of group 17, the halogens. Some of its properties may differ significantly from those of the lighter halogens due to relativistic effects. As a result, tennessine is expected to be a volatile metal that neither forms anions nor achieves high oxidation states.
A few key properties, such as its melting and boiling points and its first ionization energy, are nevertheless expected to follow the periodic trends of the halogens.
Introduction
History
Pre-discovery
In December 2004, the Joint Institute for Nuclear Research (JINR) team in Dubna, Moscow Oblast, Russia, proposed a joint experiment with the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, United States, to synthesize element 117 — so called for the 117 protons in its nucleus. Their proposal involved fusing a berkelium (element 97) target and a calcium (element 20) beam, conducted via bombardment of the berkelium target with calcium nuclei: this would complete a set of experiments done at the JINR on the fusion of actinide targets with a calcium-48 beam, which had thus far produced the new elements 113–116 and 118. ORNL—then the world's only producer of berkelium—could not then provide the element, as they had temporarily ceased production, and re-initiating it would be too costly. Plans to synthesize element 117 were suspended in favor of the confirmation of element 118, which had been produced earlier in 2002 by bombarding a californium target with calcium. The required berkelium-249 is a by-product in californium-252 production, and obtaining the required amount of berkelium was an even more difficult task than obtaining that of californium, as well as costly: it would cost around 3.5 million dollars, and the parties agreed to wait for a commercial order of californium production, from which berkelium could be extracted. The JINR team sought to use berkelium because calcium-48, the isotope of calcium used in the beam, has 20 protons and 28 neutrons, making a neutron–proton ratio of 1.4; it is the lightest stable or near-stable nucleus with such a large neutron excess. Thanks to the neutron excess, the resulting nuclei were expected to be heavier and closer to the sought-after island of stability.
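The nucleon bookkeeping behind this choice of beam and target can be checked directly; the figures below are hard-coded from the passage (calcium-48: 20 protons, 28 neutrons; berkelium: 97 protons, mass number 249).

```python
# Verify the neutron-proton arithmetic behind the beam/target choice.
CA48 = {"Z": 20, "A": 48}    # beam nucleus: calcium-48
BK249 = {"Z": 97, "A": 249}  # target nucleus: berkelium-249

ca_neutrons = CA48["A"] - CA48["Z"]   # 28 neutrons
np_ratio = ca_neutrons / CA48["Z"]    # 1.4 -> unusually neutron-rich

compound_Z = CA48["Z"] + BK249["Z"]   # 117: element 117
compound_A = CA48["A"] + BK249["A"]   # 297: the excited compound nucleus

# Evaporating 3 or 4 neutrons yields the two isotopes later reported.
isotope_masses = [compound_A - n for n in (3, 4)]  # [294, 293]
```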
Of the 117 protons aimed for, calcium would supply 20, so the target had to be berkelium, with 97 protons in its nucleus. In February 2005, the leader of the JINR team — Yuri Oganessian — presented a colloquium at ORNL. Also in attendance were representatives of Lawrence Livermore National Laboratory, who had previously worked with JINR on the discovery of elements 113–116 and 118, and Joseph Hamilton of Vanderbilt University, a collaborator of Oganessian. Hamilton checked whether the ORNL high-flux reactor was producing californium for a commercial order, in which case the required berkelium could be obtained as a by-product; he learned that it was not, and that no such order was expected in the immediate future. Hamilton kept monitoring the situation, making the checks once in a while. (Later, Oganessian referred to Hamilton as "the father of 117" for doing this work.)
Discovery
ORNL resumed californium production in spring 2008. Hamilton noted the restart during the summer and made a deal on subsequent extraction of berkelium (the price was about $600,000). During a September 2008 symposium at Vanderbilt University in Nashville, Tennessee, celebrating his 50th year on the physics faculty, Hamilton introduced Oganessian to James Roberto (then the deputy director for science and technology at ORNL). They established a collaboration among JINR, ORNL, and Vanderbilt. Clarice Phelps was part of ORNL's team that collaborated with JINR; on this basis the IUPAC recognizes her as the first African-American woman to be involved in the discovery of a chemical element. The eventual collaborating institutions also included the University of Tennessee (Knoxville), Lawrence Livermore National Laboratory, the Research Institute of Atomic Reactors (Russia), and the University of Nevada (Las Vegas). In November 2008, the U.S. Department of Energy, which had oversight over the reactor in Oak Ridge, allowed the scientific use of the extracted berkelium.
The production lasted 250 days and ended in late December 2008, resulting in 22 milligrams of berkelium, enough to perform the experiment. In January 2009, the berkelium was removed from ORNL's High Flux Isotope Reactor; it was subsequently cooled for 90 days and then processed at ORNL's Radiochemical Engineering and Development Center to separate and purify the berkelium material, which took another 90 days. Berkelium-249's half-life is only 330 days: after that time, half of the berkelium produced would have decayed. Because of this, the berkelium target had to be transported to Russia quickly; for the experiment to be viable, it had to be completed within six months of the berkelium's departure from the United States. The target was packed into five lead containers to be flown from New York to Moscow. Russian customs officials twice refused to let the target enter the country because of missing or incomplete paperwork. Over the span of a few days, the target traveled over the Atlantic Ocean five times. On its arrival in Russia in June 2009, the berkelium was immediately transferred to the Research Institute of Atomic Reactors (RIAR) in Dimitrovgrad, Ulyanovsk Oblast, where it was deposited as a 300-nanometer-thin layer on a titanium film. In July 2009, it was transported to Dubna, where it was installed in the particle accelerator at the JINR. The calcium-48 beam was generated by chemically extracting the small quantities of calcium-48 present in naturally occurring calcium, enriching it 500 times. This work was done in the closed town of Lesnoy, Sverdlovsk Oblast, Russia. The experiment began in late July 2009. In January 2010, scientists at the Flerov Laboratory of Nuclear Reactions announced internally that they had detected the decay of a new element with atomic number 117 via two decay chains: one of an odd–odd isotope undergoing 6 alpha decays before spontaneous fission, and one of an odd–even isotope undergoing 3 alpha decays before fission.
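The time pressure on the target follows from the exponential-decay law N(t)/N0 = 2^(−t/T½). A quick check using the 330-day half-life quoted above (the resulting ~68% figure is my own arithmetic, not from the source):

```python
import math  # not strictly needed for 2**x, kept for clarity of intent

def remaining_fraction(days, half_life=330.0):
    """Fraction of berkelium-249 left after `days`, given a 330-day half-life."""
    return 2.0 ** (-days / half_life)

# After the six-month experimental window (~180 days), roughly 68-69% of
# the original target material would still be berkelium-249:
frac = remaining_fraction(180)   # about 0.685
mg_left = 22 * frac              # about 15 mg of the original 22 mg
```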
The obtained data from the experiment was sent to the LLNL for further analysis. On 9 April 2010, an official report was released in the journal Physical Review Letters identifying the isotopes as 294117 and 293117, which were shown to have half-lives on the order of tens or hundreds of milliseconds. The work was signed by all parties involved in the experiment to some extent: JINR, ORNL, LLNL, RIAR, Vanderbilt, the University of Tennessee (Knoxville, Tennessee, U.S.), and the University of Nevada (Las Vegas, Nevada, U.S.), which provided data analysis support. The isotopes were formed as follows:
249Bk + 48Ca → 297117* → 294117 + 3 n (1 event)
249Bk + 48Ca → 297117* → 293117 + 4 n (5 events)
Confirmation
All daughter isotopes (decay products) of element 117 were previously unknown; therefore, their properties could not be used to confirm the claim of discovery. In 2011, when one of the decay products (element 115) was synthesized directly, its properties matched those measured in the claimed indirect synthesis from the decay of element 117. The discoverers did not submit a claim for their findings in 2007–2011, when the Joint Working Party was reviewing claims of discoveries of new elements. The Dubna team repeated the experiment in 2012, creating seven atoms of element 117 and confirming their earlier synthesis of element 118 (produced after some time, once a significant quantity of the berkelium-249 target had beta decayed to californium-249). The results of the experiment matched the previous outcome; the scientists then filed an application to register the element. In May 2014, a joint German–American collaboration of scientists from the ORNL and the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Hessen, Germany, claimed to have confirmed discovery of the element. The team repeated the Dubna experiment using the Darmstadt accelerator, creating two atoms of element 117.
In December 2015, the JWP officially recognized the discovery of 293117 on account of the confirmation of the properties of its daughter, element 115, and thus the listed discoverers — JINR, LLNL, and ORNL — were given the right to suggest an official name for the element. (Vanderbilt was left off the initial list of discoverers in an error that was later corrected.) In May 2016, Lund University (Lund, Scania, Sweden) and GSI cast some doubt on the syntheses of elements 115 and 117. The decay chains assigned to the element 115 isotope instrumental in the confirmation of the syntheses of elements 115 and 117 were found, based on a new statistical method, to be too different to belong to the same nuclide with a reasonably high probability. The reported 293117 decay chains approved as such by the JWP were found to require splitting into individual data sets assigned to different isotopes of element 117. It was also found that the claimed link between the decay chains reported for element 117 and element 115 probably did not exist. (On the other hand, the chains from the non-approved isotope 294117 were found to be congruent.) The multiplicity of states found when nuclides that are not even–even undergo alpha decay is not unexpected and contributes to the lack of clarity in the cross-reactions. This study criticized the JWP report for overlooking subtleties associated with this issue, and considered it "problematic" that the only argument for the acceptance of the discoveries of elements 115 and 117 was a link they considered to be doubtful.
On 8 June 2017, two members of the Dubna team published a journal article answering these criticisms, analysing their data on the element 117 and element 115 nuclides with widely accepted statistical methods. They noted that the 2016 studies indicating non-congruence produced problematic results when applied to radioactive decay: they excluded from the 90% confidence interval both average and extreme decay times, and the decay chains that would be excluded from the 90% confidence interval they chose were more probable to be observed than those that would be included. The 2017 reanalysis concluded that the observed decay chains of element 117 and element 115 were consistent with the assumption that only one nuclide was present at each step of the chain, although it would be desirable to be able to directly measure the mass number of the originating nucleus of each chain as well as the excitation function of the reaction.
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, element 117 should be known as eka-astatine. Using the 1979 recommendations by the International Union of Pure and Applied Chemistry (IUPAC), the element was temporarily called ununseptium (symbol Uus), formed from Latin roots "one", "one", and "seven", a reference to the element's atomic number 117. Many scientists in the field called it "element 117", with the symbol E117, (117), or 117. According to IUPAC guidelines valid at the moment of the discovery approval, the permanent names of new elements should have ended in "-ium"; this included element 117, even though the element was a halogen, which traditionally have names ending in "-ine". However, the new recommendations published in 2016 recommended using the "-ine" ending for all new group 17 elements. After the original synthesis in 2010, Dawn Shaughnessy of LLNL and Oganessian declared that naming was a sensitive question, and it was avoided as far as possible.
However, Hamilton, who teaches at Vanderbilt University in Nashville, Tennessee, declared that year, "I was crucial in getting the group together and in getting the 249Bk target essential for the discovery. As a result of that, I'm going to get to name the element. I can't tell you the name, but it will bring distinction to the region." In a 2015 interview, Oganessian, after telling the story of the experiment, said, "and the Americans named this a tour de force, they had demonstrated they could do [this] with no margin for error. Well, soon they will name the 117th element." In March 2016, the discovery team agreed, on a conference call involving representatives from the parties involved, on the name "tennessine" for element 117. In June 2016, IUPAC published a declaration stating that the discoverers had submitted their suggestions for naming the new elements 115, 117, and 118 to the IUPAC; the suggestion for element 117 was tennessine, with the symbol Ts, after "the region of Tennessee". The suggested names were recommended for acceptance by the IUPAC Inorganic Chemistry Division; formal acceptance was set to occur after the expiry of a five-month term following publication of the declaration. In November 2016, the names, including tennessine, were formally accepted. Concerns that the proposed symbol Ts might clash with the notation for the tosyl group used in organic chemistry were rejected, given existing symbols bearing such dual meanings: Ac (actinium and acetyl) and Pr (praseodymium and propyl). The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow; a separate ceremony for tennessine alone had been held at ORNL in January 2017.
Predicted properties
Other than nuclear properties, no properties of tennessine or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly.
Properties of tennessine remain unknown and only predictions are available.
Nuclear stability and isotopes
The stability of nuclei quickly decreases with the increase in atomic number after curium, element 96, whose half-life is four orders of magnitude longer than that of any subsequent element. All isotopes with an atomic number above 101 undergo radioactive decay with half-lives of less than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. This is because of the ever-increasing Coulomb repulsion of protons, so that the strong nuclear force cannot hold the nucleus together against spontaneous fission for long. Calculations suggest that in the absence of other stabilizing factors, elements with more than 104 protons should not exist. However, researchers in the 1960s suggested that the closed nuclear shells around 114 protons and 184 neutrons should counteract this instability, creating an "island of stability" where nuclides could have half-lives reaching thousands or millions of years. While scientists have still not reached the island, the mere existence of the superheavy elements (including tennessine) confirms that this stabilizing effect is real, and in general the known superheavy nuclides become exponentially longer-lived as they approach the predicted location of the island. Tennessine is the second-heaviest element created so far, and all its known isotopes have half-lives of less than one second. Nevertheless, this is longer than the values predicted prior to their discovery: the predicted lifetimes for 293Ts and 294Ts used in the discovery paper were 10 ms and 45 ms respectively, while the observed lifetimes were 21 ms and 112 ms respectively. The Dubna team believes that the synthesis of the element is direct experimental proof of the existence of the island of stability.
It has been calculated that the isotope 295Ts would have a half-life of about 18 milliseconds, and it may be possible to produce this isotope via the same berkelium–calcium reaction used in the discoveries of the known isotopes, 293Ts and 294Ts. The chance of this reaction producing 295Ts is estimated to be, at most, one-seventh the chance of producing 294Ts. This isotope could also be produced in a pxn channel of the 249Cf+48Ca reaction that successfully produced oganesson, evaporating a proton alongside some neutrons; the heavier tennessine isotopes 296Ts and 297Ts could similarly be produced in the 251Cf+48Ca reaction. Calculations using a quantum tunneling model predict the existence of several isotopes of tennessine up to 303Ts. The most stable of these is expected to be 296Ts, with an alpha-decay half-life of 40 milliseconds. A liquid drop model study on the element's isotopes shows similar results; it suggests a general trend of increasing stability for isotopes heavier than 301Ts, with partial half-lives exceeding the age of the universe for the heaviest isotopes like 335Ts when beta decay is not considered. Lighter isotopes of tennessine may be produced in the 243Am+50Ti reaction, which was considered as a contingency plan by the Dubna team in 2008 if 249Bk proved unavailable; the isotopes 289Ts through 292Ts could also be produced as daughters of element 119 isotopes obtainable in the 243Am+54Cr and 249Bk+50Ti reactions.
Atomic and physical
Tennessine is expected to be a member of group 17 in the periodic table, below the five halogens: fluorine, chlorine, bromine, iodine, and astatine, each of which has seven valence electrons with a configuration of ns2np5. For tennessine, being in the seventh period (row) of the periodic table, continuing the trend would predict a valence electron configuration of 7s27p5, and it would therefore be expected to behave similarly to the halogens in many respects that relate to this electronic state.
However, going down group 17, the metallicity of the elements increases; for example, iodine already exhibits a metallic luster in the solid state, and astatine is expected to be a metal. As such, an extrapolation based on periodic trends would predict tennessine to be a rather volatile metal. Calculations have confirmed the accuracy of this simple extrapolation, although experimental verification of this is currently impossible, as the half-lives of the known tennessine isotopes are too short. Significant differences between tennessine and the previous halogens are likely to arise, largely due to spin–orbit interaction—the mutual interaction between the motion and spin of electrons. The spin–orbit interaction is especially strong for the superheavy elements because their electrons move faster—at velocities comparable to the speed of light—than those in lighter atoms. In tennessine atoms, this lowers the 7s and the 7p electron energy levels, stabilizing the corresponding electrons, although two of the 7p electron energy levels are more stabilized than the other four. The stabilization of the 7s electrons is called the inert pair effect; the effect that separates the 7p subshell into the more-stabilized and the less-stabilized parts is called subshell splitting. Computational chemists understand the split as a change from the azimuthal quantum number l = 1 to the total angular momentum quantum numbers j = 1/2 and j = 3/2 for the more-stabilized and less-stabilized parts of the 7p subshell, respectively. For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s2 (7p1/2)2 (7p3/2)3. Differences for other electron levels also exist. For example, the 6d electron levels (also split in two, with four being 6d3/2 and six being 6d5/2) are both raised, so they are close in energy to the 7s ones, although no 6d electron chemistry has ever been predicted for tennessine. The difference between the 7p1/2 and 7p3/2 levels is abnormally high: 9.8 eV.
Astatine's 6p subshell split is only 3.8 eV, and its 6p1/2 chemistry has already been called "limited". These effects cause tennessine's chemistry to differ from those of its upper neighbors (see below). Tennessine's first ionization energy—the energy required to remove an electron from a neutral atom—is predicted to be 7.7 eV, lower than those of the halogens, again following the trend. Like its neighbors in the periodic table, tennessine is expected to have the lowest electron affinity—the energy released when an electron is added to the atom—in its group: 2.6 or 1.8 eV. The electron of the hypothetical hydrogen-like tennessine atom—oxidized so it has only one electron, Ts116+—is predicted to move so quickly that its mass is 1.90 times that of a non-moving electron, a feature attributable to relativistic effects. For comparison, the figure for hydrogen-like astatine is 1.27 and the figure for hydrogen-like iodine is 1.08. Simple extrapolations of relativity laws indicate a contraction of the atomic radius. Advanced calculations show that the radius of a tennessine atom that has formed one covalent bond would be 165 pm, while that of astatine would be 147 pm. With the seven outermost electrons removed, tennessine is finally smaller: 57 pm for tennessine and 61 pm for astatine. The melting and boiling points of tennessine are not known; earlier papers predicted about 350–500 °C and 550 °C, respectively, or 350–550 °C and 610 °C, respectively. These values exceed those of astatine and the lighter halogens, following periodic trends. A later paper predicts the boiling point of tennessine to be 345 °C (that of astatine is estimated as 309 °C, 337 °C, or 370 °C, although experimental values of 230 °C and 411 °C have been reported). The density of tennessine is expected to be between 7.1 and 7.3 g/cm3.
Chemical
The known isotopes of tennessine, 293Ts and 294Ts, are too short-lived to allow for chemical experimentation at present.
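The hydrogen-like mass figures quoted above can be roughly reproduced with the textbook Dirac point-nucleus estimate for a 1s electron, γ = 1/√(1 − (Zα)²). This is a back-of-the-envelope check, not the method behind the published values; it gives ≈1.92 rather than 1.90 for Z = 117 because it ignores finite-nuclear-size and many-body corrections.

```python
import math

ALPHA = 0.0072973525693  # fine-structure constant (CODATA value)

def mass_factor(Z):
    """Relativistic mass increase of a 1s electron in a hydrogen-like ion
    of atomic number Z (Dirac point-nucleus estimate)."""
    return 1.0 / math.sqrt(1.0 - (Z * ALPHA) ** 2)

iodine = round(mass_factor(53), 2)       # 1.08, matching the quoted figure
astatine = round(mass_factor(85), 2)     # 1.27, matching the quoted figure
tennessine = round(mass_factor(117), 2)  # 1.92 (article quotes 1.90)
```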
Nevertheless, many chemical properties of tennessine have been calculated. Unlike the lighter group 17 elements, tennessine may not exhibit the chemical behavior common to the halogens. For example, fluorine, chlorine, bromine, and iodine routinely accept an electron to achieve the more stable electronic configuration of a noble gas, obtaining eight electrons (an octet) in their valence shells instead of seven. This ability weakens as atomic weight increases going down the group; tennessine would be the least willing group 17 element to accept an electron. Of the oxidation states it is predicted to form, −1 is expected to be the least common. The standard reduction potential of the Ts/Ts− couple is predicted to be −0.25 V; this value is negative, unlike for all the lighter halogens. There is another opportunity for tennessine to complete its octet—by forming a covalent bond. Like the halogens, when two tennessine atoms meet they are expected to form a Ts–Ts bond to give a diatomic molecule. Such molecules are commonly bound via single sigma bonds between the atoms; these are different from pi bonds, which are divided into two parts, each shifted in a direction perpendicular to the line between the atoms, and opposite one another rather than being located directly between the atoms they bind. Sigma bonding has been calculated to show a great antibonding character in the At2 molecule and is not as favorable energetically. Tennessine is predicted to continue the trend: a strong pi character should be seen in the bonding of Ts2. The molecule tennessine chloride (TsCl) is predicted to go further, being bonded with a single pi bond. Aside from the unstable −1 state, three more oxidation states are predicted: +5, +3, and +1. The +1 state should be especially stable because of the destabilization of the three outermost 7p3/2 electrons, forming a stable, half-filled subshell configuration; astatine shows similar effects.
The +3 state should be important, again due to the destabilized 7p3/2 electrons. The +5 state is predicted to be uncommon because the 7p1/2 electrons are oppositely stabilized. The +7 state has not been shown—even computationally—to be achievable. Because the 7s electrons are greatly stabilized, it has been hypothesized that tennessine effectively has only five valence electrons. The simplest possible tennessine compound would be the monohydride, TsH. The bonding is expected to be provided by a 7p3/2 electron of tennessine and the 1s electron of hydrogen. The 7p1/2 spinor is non-bonding because tennessine is expected not to form purely sigma or pi bonds; therefore, the destabilized (thus expanded) 7p3/2 spinor is responsible for bonding. This effect lengthens the TsH molecule by 17 picometers compared with the overall length of 195 pm. Since the tennessine p electron bonds are two-thirds sigma, the bond is only two-thirds as strong as it would be if tennessine featured no spin–orbit interactions. The molecule thus follows the trend for halogen hydrides, showing an increase in bond length and a decrease in dissociation energy compared to AtH. The molecules TlTs and NhTs may be viewed analogously, taking into account an opposite effect shown by the fact that the element's p1/2 electrons are stabilized. These two characteristics result in a relatively small dipole moment (the product of the difference between the electric charges of the atoms and their displacement) for TlTs: only 1.67 D, the positive value implying that the negative charge is on the tennessine atom. For NhTs, the strength of the effects is predicted to cause a transfer of the electron from the tennessine atom to the nihonium atom, with the dipole moment value being −1.80 D.
The spin–orbit interaction increases the dissociation energy of the TsF molecule because it lowers the electronegativity of tennessine, causing the bond with the extremely electronegative fluorine atom to have a more ionic character. Tennessine monofluoride should feature the strongest bonding of all group 17 monofluorides. VSEPR theory predicts a bent-T-shaped molecular geometry for the group 17 trifluorides. All known halogen trifluorides have this molecular geometry and have a structure of AX3E2—a central atom, denoted A, surrounded by three ligands, X, and two unshared electron pairs, E. If relativistic effects are ignored, TsF3 should follow its lighter congeners in having a bent-T-shaped molecular geometry. More sophisticated predictions show that this molecular geometry would not be energetically favored for TsF3, predicting instead a trigonal planar molecular geometry (AX3E0). This shows that VSEPR theory may not be consistent for the superheavy elements. The TsF3 molecule is predicted to be significantly stabilized by spin–orbit interactions; a possible rationale may be the large difference in electronegativity between tennessine and fluorine, giving the bond a partially ionic character.
Physical sciences
Group 17
Chemistry
https://en.wikipedia.org/wiki/Elm
Elm
Elms are deciduous and semi-deciduous trees comprising the genus Ulmus in the family Ulmaceae. They are distributed over most of the Northern Hemisphere, inhabiting the temperate and tropical-montane regions of North America and Eurasia, presently ranging southward in the Middle East to Lebanon and Israel, and across the Equator in the Far East into Indonesia. Elms are components of many kinds of natural forests. Moreover, during the 19th and early 20th centuries, many species and cultivars were also planted as ornamental street, garden, and park trees in Europe, North America, and parts of the Southern Hemisphere, notably Australasia. Some individual elms reached great size and age. However, in recent decades, most mature elms of European or North American origin have died from Dutch elm disease, caused by a microfungus dispersed by bark beetles. In response, disease-resistant cultivars have been developed, capable of restoring the elm to forestry and landscaping.
Description
The genus is hermaphroditic, having apetalous perfect flowers which are wind-pollinated. Elm leaves are alternate, with simple, single- or, most commonly, doubly serrate margins, usually asymmetric at the base and acuminate at the apex. The fruit is a round wind-dispersed samara flushed with chlorophyll, facilitating photosynthesis before the leaves emerge. The samarae are very light, those of British elms numbering around 50,000 to the pound (454 g). (Very rarely, anomalous samarae occur with more than two wings.) All species are tolerant of a wide range of soils and pH levels but, with few exceptions, demand good drainage. The elm tree can grow to great height, the American elm in excess of 30 m (100 ft), often with a forked trunk creating a vase profile.
Taxonomy
There are about 30 to 40 species of Ulmus (elm); the ambiguity in number results from difficulty in delineating species, owing to the ease of hybridization between them and the development of local seed-sterile vegetatively propagated microspecies in some areas, mainly in the field elm (Ulmus minor) group. Oliver Rackham describes Ulmus as the most critical genus in the entire British flora, adding that 'species and varieties are a distinction in the human mind rather than a measured degree of genetic variation'. Eight species are endemic to North America and three to Europe, but the greatest diversity is in Asia, with approximately two dozen species. The oldest fossils of Ulmus are leaves dating to the Paleocene, found across the Northern Hemisphere. The classification adopted in the List of elm species is largely based on that established by Brummitt. A large number of synonyms have accumulated over the last three centuries; their currently accepted names can be found in the list of Elm synonyms and accepted names. Botanists who study elms and argue over elm identification and classification are called "pteleologists", from the Greek πτελέα (elm). As part of the suborder urticalean rosids, they are distantly related to cannabis, mulberries, figs, hops, and nettles.
Ecology
Propagation
Elm propagation methods vary according to elm type and location, and the plantsman's needs. Native species may be propagated by seed. In their natural setting, native species, such as wych elm and European white elm in central and northern Europe and field elm in southern Europe, set viable seed in "favourable" seasons. Optimal conditions occur after a late warm spring. After pollination, seeds of spring-flowering elms ripen and fall at the start of summer (June); they remain viable for only a few days. They are planted in sandy potting soil at a depth of 1 cm, and germinate in three weeks. Slow-germinating American elm will remain dormant until the second season.
Seeds from autumn-flowering elms ripen in the fall and germinate in the spring. Since elms may hybridize within and between species, seed propagation entails a hybridisation risk. In unfavourable seasons, elm seeds are usually sterile. Elms outside their natural range, such as English elm U. minor 'Atinia', and elms unable to pollinate because pollen sources are genetically identical, are sterile and are propagated by vegetative reproduction. Vegetative reproduction is also used to produce genetically identical elms (clones). Methods include the winter transplanting of root suckers; taking hardwood cuttings from vigorous one-year-old shoots in late winter; taking root cuttings in early spring; taking softwood cuttings in early summer; grafting; ground and air layering; and micropropagation. A bottom heat of 18 °C and humid conditions are maintained for hard- and softwood cuttings. The transplanting of root suckers remains the easiest and most common propagation method for European field elm and its hybrids. For specimen urban elms, grafting to wych-elm rootstock may be used to eliminate suckering or to ensure stronger root growth. The mutant-elm cultivars are usually grafted, the "weeping" elms 'Camperdown' and 'Horizontalis' at , the dwarf cultivars 'Nana' and 'Jacqueline Hillier' at ground level. Since the Siberian elm is drought tolerant, in dry countries, new varieties of elm are often root-grafted onto this species. Associated organisms Pests and diseases Dutch elm disease Dutch elm disease (DED) devastated elms throughout Europe and much of North America in the second half of the 20th century. It derives its name "Dutch" from the first description of the disease and its cause in the 1920s by the Dutch botanists Bea Schwarz and Christina Johanna Buisman. Owing to its geographical isolation and effective quarantine enforcement, Australia has so far remained unaffected by DED, as have the provinces of Alberta and British Columbia in western Canada. 
DED is caused by a microfungus transmitted by two species of Scolytus elm-bark beetles, which act as vectors. The disease affects all species of elms native to North America and Europe, but many Asiatic species have evolved antifungal genes and are resistant. Fungal spores, introduced into wounds in the tree caused by the beetles, invade the xylem or vascular system. The tree responds by producing tyloses, effectively blocking the flow from roots to leaves. Woodland trees in North America are not quite as susceptible to the disease because they usually lack the root grafting of the urban elms and are somewhat more isolated from each other. In France, inoculation with the fungus of over 300 clones of the European species failed to find a single variety that possessed any significant resistance. The first, less aggressive strain of the disease fungus, Ophiostoma ulmi, arrived in Europe from Asia in 1910, and was accidentally introduced to North America in 1928. It was steadily weakened by viruses in Europe and had all but disappeared by the 1940s. However, the disease had a much greater and longer-lasting impact in North America, owing to the greater susceptibility of the American elm, Ulmus americana, which masked the emergence of the second, far more virulent strain of the disease, Ophiostoma novo-ulmi. It appeared in the United States sometime in the 1940s, and was originally believed to be a mutation of O. ulmi. Limited gene flow from O. ulmi to O. novo-ulmi was probably responsible for the creation of the North American subspecies O. novo-ulmi subsp. americana. It was first recognized in Britain in the early 1970s, believed to have been introduced via a cargo of Canadian rock elm destined for the boatbuilding industry, and rapidly eradicated most of the mature elms from western Europe. A second subspecies, O. novo-ulmi subsp. novo-ulmi, caused similar devastation in Eastern Europe and Central Asia. This subspecies, which was introduced to North America, like O. 
ulmi, is thought to have originated in Asia. The two subspecies have now hybridized in Europe where their ranges have overlapped. The hypothesis that O. novo-ulmi arose from a hybrid of the original O. ulmi and another strain endemic to the Himalayas, Ophiostoma himal-ulmi, is now discredited. No sign indicates the current pandemic is waning, and no evidence has been found of a susceptibility of the fungus to a disease of its own caused by d-factors: naturally occurring virus-like agents that severely debilitated the original O. ulmi and reduced its sporulation. Elm phloem necrosis Elm phloem necrosis (elm yellows) is a disease of elm trees that is spread by leafhoppers or by root grafts. This very aggressive disease, with no known cure, occurs in the Eastern United States, southern Ontario in Canada, and Europe. It is caused by phytoplasmas that infect the phloem (inner bark) of the tree. Infection and death of the phloem effectively girdles the tree and stops the flow of water and nutrients. The disease affects both wild-growing and cultivated trees. Occasionally, cutting the infected tree before the disease completely establishes itself, combined with cleanup and prompt disposal of infected matter, has resulted in the plant's survival via stump sprouts. Insects The most serious of the elm pests is the elm leaf beetle Xanthogaleruca luteola, which can decimate foliage, although rarely with fatal results. The beetle was accidentally introduced to North America from Europe. Another unwelcome immigrant to North America is the Japanese beetle Popillia japonica. In both instances, the beetles cause far more damage in North America owing to the absence of the predators present in their native lands. In Australia, introduced elm trees are sometimes used as food plants by the larvae of hepialid moths of the genus Aenetus. These burrow horizontally into the trunk, then vertically down. 
Circa 2000, the Asian Zig-zag sawfly Aproceros leucopoda appeared in Europe and North America, although in England, its impact has been minimal and it is no longer monitored. Birds Sapsucker woodpeckers have a great love of young elm trees. Cultivation One of the earliest of ornamental elms was the ball-headed graft narvan elm, Ulmus minor 'Umbraculifera', cultivated from time immemorial in Persia as a shade tree and widely planted in cities through much of south-west and central Asia. From the 18th century to the early 20th century, elms, whether species, hybrids, or cultivars, were among the most widely planted ornamental trees in both Europe and North America. They were particularly popular as a street tree in avenue plantings in towns and cities, creating high-tunnelled effects. Their quick growth and variety of foliage and forms, their tolerance of air-pollution, and the comparatively rapid decomposition of their leaf litter in the fall were further advantages. In North America, the species most commonly planted was the American elm (U. americana), which had unique properties that made it ideal for such use - rapid growth, adaptation to a broad range of climates and soils, strong wood, resistance to wind damage, and vase-like growth habit requiring minimal pruning. In Europe, the wych elm (U. glabra) and the field elm (U. minor) were the most widely planted in the countryside, the former in northern areas including Scandinavia and northern Britain, the latter further south. The hybrid between these two, Dutch elm (U. × hollandica), occurs naturally and was also commonly planted. In much of England, the English elm later came to dominate the horticultural landscape. Most commonly planted in hedgerows, it sometimes occurred in densities over 1000/km2. 
In south-eastern Australia and New Zealand, large numbers of English and Dutch elms, as well as other species and cultivars, were planted as ornamentals following their introduction in the 19th century, while in northern Japan Japanese elm (U. davidiana var. japonica) was widely planted as a street tree. From about 1850 to 1920, the most prized small ornamental elm in parks and gardens was the 'Camperdown' elm (U. glabra 'Camperdownii'), a contorted, weeping cultivar of the wych elm grafted on to a nonweeping elm trunk to give a wide, spreading, and weeping fountain shape in large garden spaces. In northern Europe, elms were among the few trees tolerant of saline deposits from sea spray, which can cause "salt-burning" and die-back. This tolerance made elms reliable both as shelterbelt trees exposed to sea wind, in particular along the coastlines of southern and western Britain and in the Low Countries, and as trees for coastal towns and cities. This belle époque lasted until the First World War, when the elm began its slide into horticultural decline. The impact of the hostilities on Germany, the origin of at least 40 cultivars, coincided with an outbreak of the early strain of DED, Ophiostoma ulmi. The devastation caused by the Second World War, and the demise in 1944 of the huge Späth nursery in Berlin, only accelerated the process. The outbreak of the new, three times more virulent, strain of DED Ophiostoma novo-ulmi in the late 1960s, brought the tree to its nadir. Since around 1990, the elm has enjoyed a renaissance through the successful development in North America and Europe of cultivars highly resistant to DED. Consequently, the total number of named cultivars, ancient and modern, now exceeds 300, although many of the older clones, possibly over 120, have been lost to cultivation. Some of the latter, however, were by today's standards inadequately described or illustrated before the pandemic, and a number may survive, or have regenerated, unrecognised. 
Enthusiasm for the newer clones often remains low owing to the poor performance of earlier, supposedly disease-resistant Dutch trees released in the 1960s and 1970s. In the Netherlands, sales of elm cultivars slumped from over 56,000 in 1989 to just 6,800 in 2004, whilst in the UK, only four of the new American and European releases were commercially available in 2008. Efforts to develop DED-resistant cultivars began in the Netherlands in 1928 and continued, uninterrupted by World War II, until 1992. Similar programmes were initiated in North America (1937), Italy (1978) and Spain (1986). Research has followed two paths: Species and species cultivars In North America, careful selection has produced a number of trees resistant not only to DED, but also to the droughts and cold winters that occur on the continent. Research in the United States has concentrated on the American elm (U. americana), resulting in the release of DED-resistant clones, notably the cultivars 'Valley Forge' and 'Jefferson'. Much work has also been done into the selection of disease-resistant Asiatic species and cultivars. In 1993, Mariam B. Sticklen and Mark G. Bolyard reported the results of experiments funded by the US National Park Service and conducted at Michigan State University in East Lansing that were designed to apply genetic engineering techniques to the development of DED-resistant strains of American elm trees. In 2007, A. E. Newhouse and F. Schrodt of the State University of New York College of Environmental Science and Forestry in Syracuse reported that young transgenic American elm trees had shown reduced DED symptoms and normal mycorrhizal colonization. In Europe, the European white elm (U. laevis) has received much attention. While this elm has little innate resistance to DED, it is not favoured by the vector bark beetles. Thus it becomes colonized and infected only when no other elms are available, a rare situation in western Europe. 
Research in Spain has suggested that it may be the presence of a triterpene, alnulin, which makes the tree bark unattractive to the beetle species that spread the disease. This possibility, though, has not been conclusively proven. More recently, field elms Ulmus minor highly resistant to DED have been discovered in Spain, and form the basis of a major breeding programme. Hybrid cultivars Owing to their innate resistance to DED, Asiatic species have been crossed with European species, or with other Asiatic elms, to produce trees that are both highly resistant to disease and tolerant of native climates. After a number of false dawns in the 1970s, this approach has produced a range of reliable hybrid cultivars now commercially available in North America and Europe. Disease resistance is invariably carried by the female parent. Some of these cultivars, notably those with the Siberian elm (Ulmus pumila) in their ancestry, lack the forms for which the iconic American elm and English elm were prized. Moreover, several exported to northwestern Europe have proven unsuited to the maritime climate conditions there, notably because of their intolerance of anoxic conditions resulting from ponding on poorly drained soils in winter. Dutch hybridizations invariably included the Himalayan elm (Ulmus wallichiana) as a source of antifungal genes and have proven more tolerant of wet ground; they should also ultimately reach a greater size. However, the susceptibility of the cultivar 'Lobel', used as a control in Italian trials, to elm yellows has now (2014) raised a question mark over all the Dutch clones. Several highly resistant Ulmus cultivars have been released since 2000 by the Institute of Plant Protection in Florence, most commonly featuring crosses of the Dutch cultivar 'Plantijn' with the Siberian elm to produce resistant trees better adapted to the Mediterranean climate. 
Cautions regarding novel cultivars Elms take many decades to grow to maturity, and as the introduction of these disease-resistant cultivars is relatively recent, their long-term performance and ultimate size and form cannot be predicted with certainty. The National Elm Trial in North America, begun in 2005, is a nationwide trial to assess strengths and weaknesses of the 19 leading cultivars raised in the US over a 10-year period; European cultivars have been excluded. Meanwhile, in Europe, American and European cultivars are being assessed in field trials started in 2000 by the UK charity Butterfly Conservation. Landscaped parks Central Park The oldest American elm trees in New York City's Central Park were planted in the 1860s by Frederick Law Olmsted, making them among the oldest stands of American elms in the world. Along the Mall and Literary Walk, four lines of American elms stretch over the walkway, forming a cathedral-like covering. A part of New York City's urban ecology, the elms improve air and water quality, reduce erosion and flooding, and decrease air temperatures during warm days. While the stand is still vulnerable to DED, in the 1980s the Central Park Conservancy undertook aggressive countermeasures such as heavy pruning and removal of extensively diseased trees. These efforts have largely been successful in saving the majority of the trees, although several are still lost each year. Younger American elms that have been planted in Central Park since the outbreak are of the DED-resistant 'Princeton' and 'Valley Forge' cultivars. National Mall Several rows of American elm trees that the National Park Service (NPS) first planted during the 1930s line much of the 1.9-mile (3 km) length of the National Mall in Washington, DC. DED first appeared on the trees during the 1950s and reached a peak in the 1970s. 
The NPS used a number of methods to control the epidemic, including sanitation, pruning, injecting trees with fungicide, and replanting with DED-resistant cultivars. The NPS combated the disease's local insect vector, the smaller European elm bark beetle (Scolytus multistriatus), by trapping and by spraying with insecticides. As a result, the population of American elms planted on the Mall and its surrounding areas has remained intact for more than 80 years. Uses Wood Elm wood is valued for its interlocking grain, and consequent resistance to splitting, with significant uses in wagon-wheel hubs, chair seats, and coffins. The bodies of Japanese Taiko drums are often cut from the wood of old elm trees, as the wood's resistance to splitting is highly desired for nailing the skins to them, and a set of three or more is often cut from the same tree. The elm's wood bends well and distorts easily. The often long, straight trunks were favoured as a source of timber for keels in ship construction. Elm is also prized by bowyers; of the ancient bows found in Europe, a large portion are elm. During the Middle Ages, elm was also used to make longbows if yew was unavailable. The first written references to elm occur in the Linear B lists of military equipment at Knossos in the Mycenaean period. Several of the chariots are of elm ("πτε-ρε-ϝα", pte-re-wa), and the lists twice mention wheels of elmwood. Hesiod says that ploughs in Ancient Greece were also made partly of elm. The density of elm wood varies between species, but averages around 560 kg/m3. Elm wood is also resistant to decay when permanently wet, and hollowed trunks were widely used as water pipes during the medieval period in Europe. Elm was also used for the piers in the construction of the original London Bridge, but this resistance to decay in water does not extend to ground contact. Viticulture The Romans, and more recently the Italians, planted elms in vineyards as supports for vines. 
Lopped at 3 m, the elms' quick growth, twiggy lateral branches, light shade, and root suckering made them ideal trees for this purpose. The lopped branches were used for fodder and firewood. Ovid in his Amores characterizes the elm as "loving the vine": ulmus amat vitem, vitis non deserit ulmum (the elm loves the vine, the vine does not desert the elm), and the ancients spoke of the "marriage" between elm and vine. Medicinal products The mucilaginous inner bark of the slippery elm (Ulmus rubra) has long been used as a demulcent, and is still produced commercially for this purpose in the US with approval for sale as a nutritional supplement by the Food and Drug Administration. Fodder Elms also have a long history of cultivation for fodder, with the leafy branches cut to feed livestock. The practice continues today in the Himalaya, where it contributes to serious deforestation. Biomass As fossil fuel resources diminish, increasing attention is being paid to trees as sources of energy. In Italy, the Istituto per la Protezione delle Piante is (2012) in the process of releasing to commerce very fast-growing elm cultivars, able to increase in height by more than 2 m (6 ft) per year. Food Elm bark, cut into strips and boiled, sustained much of the rural population of Norway during the great famine of 1812. The seeds are particularly nutritious, containing 45% crude protein, and less than 7% fibre by dry mass. Alternative medicine Elm has been listed as one of the 38 substances that are used to prepare Bach flower remedies, a kind of alternative medicine. Bonsai Chinese elm (Ulmus parvifolia) is a popular choice for bonsai owing to its tolerance of severe pruning. Genetic resource conservation In 1997, a European Union elm project was initiated, its aim to coordinate the conservation of all the elm genetic resources of the member states and, among other things, to assess their resistance to Dutch elm disease. 
Accordingly, over 300 clones were selected and propagated for testing. Culture Notable elm trees Many elm trees of various kinds have attained great size or otherwise become particularly noteworthy. In art Many artists have admired elms for the ease and grace of their branching and foliage, and have painted them with sensitivity. Elms are a recurring element in the landscapes and studies of, for example, John Constable, Ferdinand Georg Waldmüller, Frederick Childe Hassam, Karel Klinkenberg, and George Inness. In mythology and literature In Greek mythology, the nymph Ptelea (Πτελέα, Elm) was one of the eight hamadryads, nymphs of the forest and daughters of Oxylos and Hamadryas. In his Hymn to Artemis, the poet Callimachus (third century BC) tells how, at the age of three, the infant goddess Artemis practised her newly acquired silver bow and arrows, made for her by Hephaestus and the Cyclopes, by shooting first at an elm, then at an oak, before turning her aim on a wild animal: πρῶτον ἐπὶ πτελέην, τὸ δὲ δεύτερον ἧκας ἐπὶ δρῦν, τὸ τρίτον αὖτ᾽ ἐπὶ θῆρα. The first reference in literature to elms occurs in the Iliad. When Eetion, father of Andromache, is killed by Achilles during the Trojan War, the mountain nymphs plant elms on his tomb ("περί δὲ πτελέας ἐφύτευσαν νύμφαι ὀρεστιάδες, κoῦραι Διὸς αἰγιόχoιo"). Also in the Iliad, when the River Scamander, indignant at the sight of so many corpses in his water, overflows and threatens to drown Achilles, the latter grasps a branch of a great elm in an attempt to save himself ("ὁ δὲ πτελέην ἕλε χερσὶν εὐφυέα μεγάλην"). The nymphs also planted elms on the tomb in the Thracian Chersonese of "great-hearted Protesilaus" ("μεγάθυμου Πρωτεσιλάου"), the first Greek to fall in the Trojan War. 
These elms grew to be the tallest in the known world, but when their topmost branches saw far off the ruins of Troy, they immediately withered, so great still was the bitterness of the hero buried below, who had been loved by Laodamia and slain by Hector. The story is the subject of a poem by Antiphilus of Byzantium (first century AD) in the Palatine Anthology: Θεσσαλὲ Πρωτεσίλαε, σὲ μὲν πολὺς ᾄσεται αἰών, Tρoίᾳ ὀφειλoμένoυ πτώματος ἀρξάμενoν• σᾶμα δὲ τοι πτελέῃσι συνηρεφὲς ἀμφικoμεῦση Nύμφαι, ἀπεχθoμένης Ἰλίoυ ἀντιπέρας. Δένδρα δὲ δυσμήνιτα, καὶ ἤν ποτε τεῖχoς ἴδωσι Tρώϊον, αὐαλέην φυλλοχoεῦντι κόμην. ὅσσoς ἐν ἡρώεσσι τότ᾽ ἦν χόλoς, oὗ μέρoς ἀκμὴν ἐχθρὸν ἐν ἀψύχoις σώζεται ἀκρέμoσιν. [:Thessalian Protesilaos, a long age shall sing your praises, Of the destined dead at Troy the first; Your tomb with thick-foliaged elms they covered, The nymphs, across the water from hated Ilion. Trees full of anger; and whenever that wall they see, Of Troy, the leaves in their upper crown wither and fall. So great in the heroes was the bitterness then, some of which still Remembers, hostile, in the soulless upper branches.] Protesilaus had been king of Pteleos () in Thessaly, which took its name from the abundant elms () in the region. Elms occur often in pastoral poetry, where they symbolise the idyllic life, their shade being mentioned as a place of special coolness and peace. In the first Idyll of Theocritus (third century BC), for example, the goatherd invites the shepherd to sit "here beneath the elm" ("δεῦρ' ὑπὸ τὰν πτελέαν") and sing. Beside elms, Theocritus places "the sacred water" ("") of the Springs of the Nymphs and the shrines to the nymphs. Aside from references literal and metaphorical to the elm and vine theme, the tree occurs in Latin literature in the Elm of Dreams in the Aeneid. 
When the Sibyl of Cumae leads Aeneas down to the Underworld, one of the sights is the Stygian Elm: In medio ramos annosaque bracchia pandit ulmus opaca, ingens, quam sedem somnia vulgo uana tenere ferunt, foliisque sub omnibus haerent. [:Spreads in the midst her boughs and agéd arms an elm, huge, shadowy, where vain dreams, 'tis said, are wont to roost them, under every leaf close-clinging.] Virgil refers to a Roman superstition (vulgo) that elms were trees of ill-omen because their fruit seemed to be of no value. It has been noted that two elm-motifs have arisen from classical literature: (1) the 'Paradisal Elm' motif, arising from pastoral idylls and the elm-and-vine theme, and (2) the 'Elm and Death' motif, perhaps arising from Homer's commemorative elms and Virgil's Stygian Elm. Many references to elm in European literature from the Renaissance onwards fit into one or other of these categories. There are two examples of pteleogenesis (:birth from elms) in world myths. In Germanic and Scandinavian mythology the first woman, Embla, was fashioned from an elm, while in Japanese mythology Kamuy Fuchi, the chief goddess of the Ainu people, "was born from an elm impregnated by the Possessor of the Heavens". The elm occurs frequently in English literature, one of the best known instances being in Shakespeare's A Midsummer Night's Dream, where Titania, Queen of the Fairies, addresses her beloved Nick Bottom using an elm-simile. Here, as often in the elm-and-vine motif, the elm is a masculine symbol: Sleep thou, and I will wind thee in my arms. ... the female Ivy so Enrings the barky fingers of the Elm. O, how I love thee! how I dote on thee! Another of the most famous kisses in English literature, that of Paul and Helen at the start of Forster's Howards End, is stolen beneath a great wych elm. The elm tree is also referenced in children's literature. 
An Elm Tree and Three Sisters by Norma Sommerdorf is a children's book about three young sisters who plant a small elm tree in their backyard. In politics The cutting of the elm was a diplomatic altercation between the kings of France and England in 1188, during which an elm tree near Gisors in Normandy was felled. In politics, the elm is associated with revolutions. In England after the Glorious Revolution of 1688, the final victory of parliamentarians over monarchists, and the arrival from Holland, with William III and Mary II, of the Dutch elm hybrid, planting of this cultivar became a fashion among enthusiasts of the new political order. In the American Revolution, the Liberty Tree was an American white elm in Boston, Massachusetts, in front of which, from 1765, the first resistance meetings were held against British attempts to tax the American colonists without democratic representation. When the British, knowing that the tree was a symbol of rebellion, felled it in 1775, the Americans took to widespread Liberty Elm planting, and sewed elm symbols on to their revolutionary flags. Elm planting by American Presidents later became something of a tradition. In the French Revolution, too, Les arbres de la liberté (Liberty Trees), often elms, were planted as symbols of revolutionary hopes, the first in Vienne, Isère, in 1790, by a priest inspired by the Boston elm. L'Orme de La Madeleine (:the Elm of La Madeleine), Faycelles, Département de Lot, planted around 1790 and surviving to this day, was a case in point. By contrast, a famous Parisian elm associated with the Ancien Régime, L'Orme de Saint-Gervais in the Place St-Gervais, was felled by the revolutionaries; church authorities planted a new elm in its place in 1846, and an early 20th-century elm stands on the site today. 
Premier Lionel Jospin, obliged by tradition to plant a tree in the garden of the Hôtel Matignon, the official residence and workplace of Prime Ministers of France, insisted on planting an elm, so-called 'tree of the Left', choosing the new disease-resistant hybrid 'Clone 762' (Ulmus 'Wanoux' = ). In the French Republican Calendar, in use from 1792 to 1806, the 12th day of the month Ventôse (= 2 March) was officially named "jour de l'Orme", Day of the Elm. Liberty Elms were also planted in other countries in Europe to celebrate their revolutions, an example being L'Olmo di Montepaone, L'Albero della Libertà (:the Elm of Montepaone, Liberty Tree) in Montepaone, Calabria, planted in 1799 to commemorate the founding of the democratic Parthenopean Republic, and surviving until it was brought down by a recent storm (it has since been cloned and 'replanted'). After the Greek Revolution of 1821–32, a thousand young elms were brought to Athens from Missolonghi, "Sacred City of the Struggle" against the Turks and scene of Lord Byron's death, and planted in 1839–40 in the National Garden. In an ironic development, feral elms have spread and invaded the grounds of the abandoned Greek royal summer palace at Tatoi in Attica. In a chance event linking elms and revolution, on the morning of his execution (30 January 1649), walking to the scaffold at the Palace of Whitehall, King Charles I turned to his guards and pointed out, with evident emotion, an elm near the entrance to Spring Gardens that had been planted by his brother in happier days. The tree was said to be still standing in the 1860s. In local history and place names The name of what is now the London neighbourhood of Seven Sisters is derived from seven elms which stood there at the time when it was a rural area, planted in a circle with a walnut tree at their centre, and traceable on maps back to 1619.
Total derivative
In mathematics, the total derivative of a function f at a point is the best linear approximation near this point of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function. The total derivative as a linear map Let U ⊆ R^n be an open subset. Then a function f : U → R^m is said to be (totally) differentiable at a point a ∈ U if there exists a linear transformation df_a : R^n → R^m such that lim_{x → a} ‖f(x) − f(a) − df_a(x − a)‖ / ‖x − a‖ = 0. The linear map df_a is called the (total) derivative or (total) differential of f at a. Other notations for the total derivative include Df(a) and f′(a). A function is (totally) differentiable if its total derivative exists at every point in its domain. Conceptually, the definition of the total derivative expresses the idea that df_a is the best linear approximation to f at the point a. This can be made precise by quantifying the error in the linear approximation determined by df_a. To do so, write f(a + h) = f(a) + df_a(h) + ε(h), where ε(h) equals the error in the approximation. To say that the derivative of f at a is df_a is equivalent to the statement ε(h) = o(‖h‖), where o is little-o notation and indicates that ε(h) is much smaller than ‖h‖ as h → 0. The total derivative df_a is the unique linear transformation for which the error term is this small, and this is the sense in which it is the best linear approximation to f. The function f is differentiable if and only if each of its components f_i : U → R is differentiable, so when studying total derivatives, it is often possible to work one coordinate at a time in the codomain. However, the same is not true of the coordinates in the domain. It is true that if f is differentiable at a, then each partial derivative ∂f/∂x_i exists at a. 
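The little-o condition in the definition can be checked numerically. The sketch below uses a hypothetical map f : R² → R² (chosen purely for illustration; it is not from the article) together with the linear map given by its Jacobian, and shows that the approximation error divided by ‖h‖ shrinks as h → 0:

```python
import math

# Hypothetical example map f : R^2 -> R^2 used to illustrate the definition:
# the total derivative at a is the linear map df_a with
# f(a + h) = f(a) + df_a(h) + o(|h|).

def f(x, y):
    return (x * x * y, math.sin(x) + y)

def df(x, y, hx, hy):
    # Linear map given by the Jacobian matrix of f at (x, y):
    # [ 2xy     x^2 ]
    # [ cos(x)  1   ]
    return (2 * x * y * hx + x * x * hy, math.cos(x) * hx + hy)

def approx_error(x, y, hx, hy):
    # |f(a+h) - f(a) - df_a(h)| / |h|, which should tend to 0 as h -> 0.
    fx, fy = f(x, y)
    gx, gy = f(x + hx, y + hy)
    lx, ly = df(x, y, hx, hy)
    return math.hypot(gx - fx - lx, gy - fy - ly) / math.hypot(hx, hy)

# The scaled error shrinks roughly linearly in |h| (little-o behaviour):
for t in (1e-1, 1e-2, 1e-3):
    print(approx_error(1.0, 2.0, t, t))
```

Each printed ratio is about a tenth of the previous one, which is exactly the behaviour ε(h) = o(‖h‖) demands of the unique best linear approximation.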
The converse does not hold: it can happen that all of the partial derivatives of f at a exist, but f is not differentiable at a. This means that the function is very "rough" at a, to such an extreme that its behavior cannot be adequately described by its behavior in the coordinate directions. When f is not so rough, this cannot happen. More precisely, if all the partial derivatives of f at a exist and are continuous in a neighborhood of a, then f is differentiable at a. When this happens, then in addition, the total derivative of f is the linear transformation corresponding to the Jacobian matrix of partial derivatives at that point. The total derivative as a differential form When the function under consideration is real-valued, the total derivative can be recast using differential forms. For example, suppose that f : R^n → R is a differentiable function of the variables x_1, …, x_n. The total derivative of f at a may be written in terms of its Jacobian matrix, which in this instance is a row matrix: df_a = [∂f/∂x_1(a) ⋯ ∂f/∂x_n(a)]. The linear approximation property of the total derivative implies that if Δx = (Δx_1, …, Δx_n)^T is a small vector (where the T denotes transpose, so that this vector is a column vector), then f(a + Δx) − f(a) ≈ df_a ⋅ Δx. Heuristically, this suggests that if dx_1, …, dx_n are infinitesimal increments in the coordinate directions, then df_a = ∂f/∂x_1(a) dx_1 + ⋯ + ∂f/∂x_n(a) dx_n. In fact, the notion of the infinitesimal, which is merely symbolic here, can be equipped with extensive mathematical structure. Techniques, such as the theory of differential forms, effectively give analytical and algebraic descriptions of objects like infinitesimal increments, dx_i. For instance, dx_i may be interpreted as a linear functional on the vector space R^n. Evaluating dx_i at a vector h in R^n measures how much h "points" in the i-th coordinate direction. The total derivative df_a is a linear combination of linear functionals and hence is itself a linear functional. The evaluation df_a(h) measures how much h points in the direction determined by f at a, and this direction is the gradient. This point of view makes the total derivative an instance of the exterior derivative. 
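The heuristic above amounts to the standard expansion of the total differential as a sum of partial derivatives times coordinate increments; restated in LaTeX for clarity (the symbols match the surrounding text):

```latex
df_a \;=\; \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(a)\, dx_i ,
\qquad dx_i(h) = h_i ,
\qquad df_a(h) = \nabla f(a) \cdot h .
```

Each dx_i picks out the i-th component of a displacement vector h, so df_a pairs h with the gradient of f at a.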
Suppose now that f is a vector-valued function, that is, f : ℝ^n → ℝ^m. In this case, the components f_i of f are real-valued functions, so they have associated differential forms df_i. The total derivative df amalgamates these forms into a single object and is therefore an instance of a vector-valued differential form. The chain rule for total derivatives The chain rule has a particularly elegant statement in terms of total derivatives. It says that, for two functions f and g, the total derivative of the composite function g ∘ f at a satisfies d(g ∘ f)_a = dg_{f(a)} ∘ df_a. If the total derivatives of f and g are identified with their Jacobian matrices, then the composite on the right-hand side is simply matrix multiplication. This is enormously useful in applications, as it makes it possible to account for essentially arbitrary dependencies among the arguments of a composite function. Example: Differentiation with direct dependencies Suppose that f is a function of two variables, x and y. If these two variables are independent, so that the domain of f is ℝ², then the behavior of f may be understood in terms of its partial derivatives in the x and y directions. However, in some situations, x and y may be dependent. For example, it might happen that f is constrained to a curve y = y(x). In this case, we are actually interested in the behavior of the composite function f(x, y(x)). The partial derivative of f with respect to x does not give the true rate of change of f with respect to changing x because changing x necessarily changes y. However, the chain rule for the total derivative takes such dependencies into account. Write γ(x) = (x, y(x)). Then, the chain rule says d(f ∘ γ)_x = df_{γ(x)} ∘ dγ_x. By expressing the total derivative using Jacobian matrices, this becomes: df(x, y(x))/dx = ∂f/∂x(x, y(x)) ⋅ 1 + ∂f/∂y(x, y(x)) ⋅ dy/dx. Suppressing the evaluation at (x, y(x)) for legibility, we may also write this as df/dx = ∂f/∂x + (∂f/∂y)(dy/dx). This gives a straightforward formula for the derivative of f(x, y(x)) in terms of the partial derivatives of f and the derivative of y(x).
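The identification of the chain rule with matrix multiplication of Jacobians can be checked numerically. The functions below, f(t) = (t², t³) and g(x, y) = xy, are assumed for illustration; their composite is t⁵, whose derivative is 5t⁴.

```python
import numpy as np

# Assumed example: f : R -> R^2, g : R^2 -> R, so
# D(g∘f)(t) = Dg(f(t)) @ Df(t) is a 1x1 matrix.
def f(t):
    return np.array([t**2, t**3])

def Df(t):
    # 2x1 Jacobian of f(t) = (t^2, t^3).
    return np.array([[2 * t], [3 * t**2]])

def Dg(x, y):
    # 1x2 Jacobian of g(x, y) = x*y.
    return np.array([[y, x]])

t = 2.0
composite_derivative = (Dg(*f(t)) @ Df(t))[0, 0]
print(composite_derivative)  # (g∘f)(t) = t^5, so this is 5 * t**4 = 80.0
```

The matrix product collapses the two Jacobians into the single number 5t⁴, exactly as the scalar chain rule predicts.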
For example, suppose f(x, y) = xy. The rate of change of f with respect to x is usually the partial derivative of f with respect to x; in this case, ∂f/∂x = y. However, if y depends on x, the partial derivative does not give the true rate of change of f as x changes because the partial derivative assumes that y is fixed. Suppose we are constrained to the line y = x. Then f(x, y) = f(x, x) = x², and the total derivative of f with respect to x is df/dx = 2x, which we see is not equal to the partial derivative ∂f/∂x = y. Instead of immediately substituting for y in terms of x, however, we can also use the chain rule as above: df/dx = ∂f/∂x + (∂f/∂y)(dy/dx) = y + x ⋅ 1 = x + x = 2x. Example: Differentiation with indirect dependencies While one can often perform substitutions to eliminate indirect dependencies, the chain rule provides for a more efficient and general technique. Suppose L(t, x_1, …, x_n) is a function of time t and n variables x_i, which themselves depend on time. Then, the time derivative of L is dL/dt = (d/dt) L(t, x_1(t), …, x_n(t)). The chain rule expresses this derivative in terms of the partial derivatives of L and the time derivatives of the functions x_i: dL/dt = ∂L/∂t + Σ_i (∂L/∂x_i)(dx_i/dt) = (∂/∂t + Σ_i (dx_i/dt) ∂/∂x_i)(L). This expression is often used in physics for a gauge transformation of the Lagrangian, as two Lagrangians that differ only by the total time derivative of a function of time and the generalized coordinates lead to the same equations of motion. An interesting example concerns the resolution of causality concerning the Wheeler–Feynman time-symmetric theory. The operator in brackets (in the final expression above) is also called the total derivative operator (with respect to t). For example, the total derivative of f(x(t), y(t)) is df/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt). Here there is no ∂f/∂t term since f itself does not depend on the independent variable t directly. Total differential equation A total differential equation is a differential equation expressed in terms of total derivatives. Since the exterior derivative is coordinate-free, in a sense that can be given a technical meaning, such equations are intrinsic and geometric. Application to equation systems In economics, it is common for the total derivative to arise in the context of a system of equations.
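The total-derivative operator for time dependencies described above can also be checked numerically. A minimal sketch with an assumed example L(t, x, y) = t + xy, where x(t) = t² and y(t) = t³ (so that substituting first gives L = t + t⁵ and dL/dt = 1 + 5t⁴):

```python
# Assumed example of the total time derivative
#   dL/dt = dL/dt(partial) + (dL/dx)(dx/dt) + (dL/dy)(dy/dt)
# for L(t, x, y) = t + x*y with x(t) = t^2, y(t) = t^3.
def total_dL_dt(t):
    x, y = t**2, t**3
    dx_dt, dy_dt = 2 * t, 3 * t**2
    dL_dt_partial = 1.0      # partial of L with respect to t
    dL_dx, dL_dy = y, x      # partials of L with respect to x and y
    return dL_dt_partial + dL_dx * dx_dt + dL_dy * dy_dt

# Substituting first: L(t, t^2, t^3) = t + t^5, so dL/dt = 1 + 5*t^4.
t = 2.0
print(total_dL_dt(t))        # 81.0, matching 1 + 5 * 2**4
```

Both routes, substitution first or the operator ∂/∂t + ẋ ∂/∂x + ẏ ∂/∂y, give the same answer, which is the point of the technique.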
For example, a simple supply-demand system might specify the quantity q of a product demanded as a function D of its price p and consumers' income I, the latter being an exogenous variable, and might specify the quantity supplied by producers as a function S of its price and two exogenous resource cost variables r and w. The resulting system of equations q = D(p, I), q = S(p, r, w) determines the market equilibrium values of the variables p and q. The total derivative dp/dr of p with respect to r, for example, gives the sign and magnitude of the reaction of the market price to the exogenous variable r. In the indicated system, there are a total of six possible total derivatives, also known in this context as comparative static derivatives: dp/dr, dp/dw, dp/dI, dq/dr, dq/dw, and dq/dI. The total derivatives are found by totally differentiating the system of equations, dividing through by, say dr, treating dp/dr and dq/dr as the unknowns, setting dI = dw = 0, and solving the two totally differentiated equations simultaneously, typically by using Cramer's rule.
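This procedure can be sketched concretely. The linear demand and supply forms below are assumed purely for illustration; totally differentiating them and setting dI = dw = 0 yields a 2×2 linear system in dp/dr and dq/dr, which Cramer's rule solves.

```python
import numpy as np

# Assumed illustrative linear forms (not from the text):
#   demand:  q = D(p, I) = 10 - 2p + 0.5I   ->  D_p = -2
#   supply:  q = S(p, r, w) = 1 + 3p - 0.8r - 0.4w  ->  S_p = 3, S_r = -0.8
# Totally differentiating, dividing by dr, and setting dI = dw = 0:
#   [D_p  -1] [dp/dr]   [ 0   ]
#   [S_p  -1] [dq/dr] = [-S_r ]
D_p = -2.0
S_p, S_r = 3.0, -0.8

A = np.array([[D_p, -1.0],
              [S_p, -1.0]])
b = np.array([0.0, -S_r])

# Cramer's rule: replace one column of A by b and take determinant ratios.
det = np.linalg.det(A)
dp_dr = np.linalg.det(np.column_stack([b, A[:, 1]])) / det
dq_dr = np.linalg.det(np.column_stack([A[:, 0], b])) / det
print(dp_dr, dq_dr)  # 0.16 and -0.32 for these assumed coefficients
```

For these numbers a higher resource cost r raises the equilibrium price (dp/dr > 0) and lowers the equilibrium quantity (dq/dr < 0), matching the usual comparative-statics intuition.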
Mathematics
Multivariable and vector calculus
https://en.wikipedia.org/wiki/Papaver%20somniferum
Papaver somniferum
Papaver somniferum, commonly known as the opium poppy or breadseed poppy, is a species of flowering plant in the family Papaveraceae. It is the species of plant from which both opium and poppy seeds are derived and is also a valuable ornamental plant grown in gardens. Its native range was east of the Mediterranean Sea, but it has since been obscured and vastly expanded by introduction and cultivation from ancient times to the present day, and the species is now naturalized across much of Europe and Asia. This poppy is grown as an agricultural crop on a large scale, for one of three primary purposes: to produce poppy seeds, to produce opium (for use mainly by the pharmaceutical industry), and to produce other alkaloids (mainly thebaine and oripavine) that are processed by pharmaceutical companies into drugs such as hydrocodone and oxycodone. Each of these purposes has dedicated breeds targeted at that business, and breeding efforts (including biotechnological ones) are continually underway. A comparatively small amount of P. somniferum is also produced commercially for ornamental purposes. Today many varieties have been bred that do not produce a significant quantity of opium. The cultivar 'Sujata' produces no latex at all. Breadseed poppy is more accurate as a common name today because all varieties of P. somniferum produce edible seeds. This differentiation has strong implications for legal policy surrounding the growing of this plant. Description Papaver somniferum is an annual herb growing to about tall. The plant is strongly glaucous, giving a greyish-green appearance, and the stem and leaves bear a sparse distribution of coarse hairs. The large leaves are lobed, the upper stem leaves clasping the stem, the lowest leaves with a short petiole. The flowers are up to diameter, normally with four white, mauve or red petals, sometimes with dark markings at the base. The fruit is a hairless, rounded capsule topped with 12–18 radiating stigmatic rays, or fluted cap.
All parts of the plant exude white latex when wounded. Metabolism The alkaloids are organic nitrogenous compounds, derivatives of secondary metabolism, synthesized through the benzylisoquinoline metabolic pathway. First, the amino acid phenylalanine is transformed into tyrosine by the enzyme phenylalanine hydroxylase. Tyrosine can follow two different routes: by tyrosine hydroxylase it can form L-DOPA, or it can be converted to 4-hydroxyphenylacetaldehyde (4-HPAA). Subsequently, L-DOPA reacts with 4-HPAA and, through a series of reactions, forms (S)-norcoclaurine, which carries the benzylisoquinoline skeleton that gives this pathway its name. The conversion of (S)-norcoclaurine to (S)-reticuline is one of the key points, since from (S)-reticuline morphine can be formed through the morphinan route, noscapine through the noscapine route, or berberine through the berberine route. Genome The poppy genome contains 51,213 protein-coding genes, with 81.6% distributed across the 11 chromosomes and the remaining 18.4% in unplaced scaffolds. In addition, 70.9% of the genome is made up of repetitive elements, of which the most represented are the long terminal repeat retrotransposons. This enrichment of genes is related to the maintenance of homeostasis and a positive regulation of transcription. Synteny analysis of the opium poppy reveals traces of segmental duplications 110 million years ago (MYA), before the divergence between Papaveraceae and Ranunculaceae, and a whole-genome duplication event about 7.8 MYA. The genes are possibly grouped as follows: The genes responsible for the conversion of (S)-reticuline to noscapine are found on chromosome 11. The genes responsible for the conversion of (S)-reticuline to thebaine are found on chromosome 11. The genes responsible for the conversion of thebaine are found on chromosome 1, chromosome 2, chromosome 7, and perhaps others.
Taxonomy Papaver somniferum was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753 on page 508. Varieties and cultivars P. somniferum has had a very long tradition of use, starting in the Neolithic. This long period of time allowed the development of a broad range of different forms. In total there are 52 botanical varieties. Breeding of P. somniferum faces a challenge caused by the contradictory breeding goals for this species. On the one hand, a very high alkaloid content is required for medical uses. The global demand for the alkaloids and their pharmaceutical derivatives has increased in recent years. Therefore, there is a need for the development of varieties with a high opium yield. On the other hand, the food industry demands as low alkaloid contents as possible. There is one accepted subspecies, P. somniferum subsp. setigerum (DC.) Arcang. There are also many varieties and cultivars. Colors of the flowers vary widely, as do other physical characteristics, such as number and shape of petals, number of flowers and fruits, number of seeds, color of seeds, and production of opium. Papaver somniferum var. paeoniflorum is a variety with flowers that are highly double, and are grown in many colors. P. somniferum var. laciniatum is a variety with flowers that are highly double and deeply lobed. The variety Sujata produces no latex, and so has no commercial utility for opioid production. Distribution and habitat The native range of opium poppy is probably the Eastern Mediterranean, but extensive cultivation and introduction of the species throughout Europe since ancient times have obscured its origin. It has escaped from cultivation, or has been introduced and become naturalized extensively in all regions of the British Isles, particularly in the south and east and in almost all other countries of the world with suitable, temperate climates. Ecology Diseases P.
somniferum is susceptible to several fungal, insect and virus infections, including seed-borne diseases such as downy mildew and root rot. The use of pesticides in combination with cultural methods has been the major control measure for various poppy diseases. The fungal pathogen Peronospora arborescens, the causal agent of downy mildew, occurs preferentially during wet and humid conditions. This oomycete penetrates the roots through oospores and infects the leaves as conidia in a secondary infection. The pathogen causes hypertrophy and curvature of the stem and flower stalks. The symptoms are chlorosis and curling of the affected tissues with necrotic spots. The leaf under-surface is covered with a downy mildew coating containing conidiospores that spread the infection further, leading to plant damage and death. Another downy mildew species, Peronospora somniferi, produces systemic infections leading to stunting and deformation of poppy plants. Downy mildew can be controlled preventively at the initial stage of seed development through several fungicide applications. Leaf blight caused by the fungus Helminthosporium papaveris is one of the most destructive poppy diseases worldwide. The seed-borne fungus causes root rot in young plants and stunted stems in plants at a higher development stage; leaf spots appear on the leaves, and the fungus is transmitted to capsules and seeds. Early sowing of seeds and deep plowing of poppy residues can reduce fungal inoculum during the plant growing season in the following year on neighboring poppy stocks. Mosaic diseases in P. somniferum are caused by rattle virus and the Carlavirus. In 2006, a novel virus tentatively called "opium poppy mosaic virus" (OPMV) from the genus Umbravirus was isolated in New Zealand from P. somniferum showing leaf mosaic and mottling symptoms. Pests There are only a few pests that can do harm to P. somniferum.
Flea beetles perforate the leaves of young plants and aphids suck on the sap of the flower buds. The poppy root weevil (Stenocarus ruficornis) is another significant pest. The insect lives in the soil and migrates in spring to the poppy fields after crop emergence. Adults damage the leaves of small plants by eating them. Females lay their eggs into the tissue of lower leaves. The larvae hatch, burrow into the soil, and complete their life cycle on the poppy roots. Cultivation Six stages can be distinguished in the growth development of P. somniferum. Development starts with the growth of the seedlings. In a second step the rosette-type leaves and stalks are formed. After that, budding (hook stage) takes place as a third step. The hook stage is followed by flowering. Subsequently, technical maturity is reached, which means that the plant is ready for cutting. The last step is biological maturity; dry seeds are ripened. The photoperiod seems to be the main determinant of flower development of P. somniferum. P. somniferum shows a very slow development at the beginning of its vegetation period. Because of this, competition from weeds is very high in the early stages. It is very important to control weeds effectively in the first 50 days after sowing. Additionally, Papaver somniferum is rather susceptible to herbicides. The pre-emergence application of the herbicide chlortoluron has been shown to be effective in reducing weed levels. However, in the last decade the weed management of Papaver somniferum has shifted from pre-emergence treatments to post-emergence treatments. In particular, the application of the two herbicides mesotrione and tembotrione has become very popular. The combined application of these two herbicides has been shown to provide effective weed management in Papaver somniferum. Sowing time (autumn or spring), preceding crop and soil texture are important variables influencing the weed species composition.
Papaver rhoeas has been shown to be a highly abundant weed species in Papaver somniferum fields. Papaver somniferum and Papaver rhoeas are congeners and belong to the same plant family, which impedes the chemical control of this weed species. Therefore, weed management represents a major challenge and requires technological knowledge from the farmer. To increase the efficiency of weed control, mechanical weed control should be applied in addition to chemical control. For P. somniferum, a growth density of 70 to 80 plants per square meter is recommended. Latex-to-biomass yield is greatest under conditions of slight water deficit. Ornamental Live plants and seeds of the opium poppy are widely sold by seed companies and nurseries in most of the western world, including the United States. Poppies are sought after by gardeners for the vivid coloration of the blooms, the hardiness and reliability of the poppy plants, the exotic chocolate-vegetal fragrance note of some cultivars, and the ease of growing the plants from purchased flats of seedlings or by direct sowing of the seed. Poppy seed pods are also sold for dried flower arrangements. Though "opium poppy and poppy straw" are listed in Schedule II of the United States' Controlled Substances Act, P. somniferum can be grown legally in the United States as a seed crop or ornamental flower. During the summer, opium poppies can be seen flowering in gardens throughout North America and Europe, and displays are found in many private plantings, as well as in public botanical and museum gardens such as United States Botanical Garden, Missouri Botanical Garden, and North Carolina Botanical Garden. Many countries grow the plants, and some rely heavily on the commercial production of the drug as a major source of income. As an additional source of profit, the seeds of the same plants are sold for use in foods, so the cultivation of the plant is a significant source of income. This international trade in seeds of P.
somniferum was addressed by a UN resolution "to fight the international trade in illicit opium poppy seeds" on 28 July 1998. Production Food In 2018, world production of poppy seeds for consumption was 76,240 tonnes, led by Turkey with 35% of the world total (table). Poppy seed production and trade are susceptible to fluctuations mainly due to unstable yields. The performance of most genotypes of Papaver somniferum is very susceptible to environmental changes. This behaviour led to a stagnation of the poppy seed market value between 2008 and 2009 as a consequence of high stock levels, bad weather and poor quality. The world's leading importer of poppy seed is India (16 000 tonnes), followed by Russia, Poland and Germany. Poppy seed oil remains a niche product due to the lower yield compared to conventional oil crops. Medicine Australia (Tasmania), Turkey and India are the major producers of poppy for medicinal purposes and poppy-based drugs, such as morphine or codeine. The New York Times reported, in 2014, that Tasmania was the largest producer of the poppy cultivars used for thebaine (85% of the world's supply) and oripavine (100% of the world's supply) production. Tasmania also had 25% of the world's opium and codeine production. Restrictions In most of Central and Eastern Europe, poppy seed is commonly used for traditional pastries and cakes, and it is legal to grow poppies throughout the region, although Germany requires a licence. Since January 1999 in the Czech Republic, under the 167/1998 Sb. Addictive Substances Act, growing poppies on fields larger than a specified area must be reported to the local Customs Office. Extraction of opium from the plants is prohibited by law (§ 15 letter d/ of the act). It is also prohibited to grow varieties with more than 0.8% of morphine in dry matter of their capsules, excluding research and experimental purposes (§24/1b/ of the act). The name Czech blue poppy refers to blue poppy seeds used for food.
The United Kingdom does not require a licence for opium poppy cultivation, but does for extracting opium for medicinal products. In the United States, opium poppies and poppy straw are prohibited. As the opium poppy is legal for culinary or esthetic reasons, poppies were once grown as a cash crop by farmers in California. The law of poppy cultivation in the United States is somewhat ambiguous. The reason for the ambiguity is that the Opium Poppy Control Act of 1942 (now repealed) stated that any opium poppies should be declared illegal, except if the farmers were issued a state permit. § 3 of the Opium Poppy Control Act stated: It shall be unlawful for any person who is not the holder of a license authorizing him to produce the opium poppy, duly issued to him by the Secretary of the Treasury in accordance with the provisions of this Act, to produce the opium poppy, or to permit the production of the opium poppy in or upon any place owned, occupied, used, or controlled by him. This led to the Poppy Rebellion, and to the Narcotics Bureau arresting anyone planting opium poppies and forcing the destruction of poppy fields of anyone who defied the prohibition of poppy cultivation. Though the press of those days favored the Federal Bureau of Narcotics, the state of California supported the farmers who grew opium poppies for their seeds for uses in foods such as poppy seed muffins. Today, this area of law has remained vague and remains somewhat controversial in the United States. The Opium Poppy Control Act of 1942 was repealed on 27 October 1970. Under the Federal Controlled Substances Act, opium poppy and poppy straw are listed as Schedule II drugs under ACSN 9630. Most (all?) states also use this classification under the uniform penal code. Possession of a Schedule II drug is a federal and state felony. 
Canada forbids possessing, seeking or obtaining the opium poppy (Papaver somniferum), its preparations, derivatives, alkaloids and salts, although an exception is made for poppy seed. In some parts of Australia, P. somniferum is illegal to cultivate, but in Tasmania, some 50% of the world supply is cultivated. In New Zealand, it is legal to cultivate the opium poppy as long as it is not used to produce controlled drugs. In the United Arab Emirates, the cultivation of the opium poppy is illegal, as is possession of poppy seed. At least one man has been imprisoned for possessing poppy seed obtained from a bread roll. Burma bans cultivation in certain provinces. In northern Burma, bans have ended a century-old tradition of growing the opium poppy. Between 20,000 and 30,000 former poppy farmers left the Kokang region as a result of the ban in 2002. People from the Wa region, where the ban was implemented in 2005, fled to areas where growing opium is still possible. In South Korea, the cultivation of the opium poppy is strictly prohibited. Uses History Use of the opium poppy predates written history. The making and use of opium was known to the ancient Minoans. Its sap was later named opion by the ancient Greeks. The English name is based on the Latin adaptation of the Greek form. Evidence of the early domestication of opium poppy has been discovered through small botanical remains found in regions of the Mediterranean and west of the Rhine River, dating to before circa 5000 BC. These samples, found in various Neolithic sites, show the very early cultivation and natural spread of the plant throughout western Europe. Opium was used for treating asthma, stomach illnesses, and bad eyesight. Opium became a major colonial commodity, moving legally and illegally through trade networks on the Indian subcontinent, Colonial America, Qing China and others. Members of the East India Company saw the opium trade as an investment opportunity beginning in 1683.
In 1773, the Governor of Bengal established a monopoly on the production of Bengal opium, on behalf of the East India Company administration. The cultivation and manufacture of Indian opium was further centralized and controlled through a series of acts issued between 1797 and 1949. East India Company merchants balanced an economic deficit from the importation of Chinese tea by selling Indian opium which was smuggled into China in defiance of Qing government bans. This trade led to the First and Second Opium Wars. Many modern writers, particularly in the 19th century, have written on the opium poppy and its effects, notably Thomas de Quincey in Confessions of an English Opium Eater. The French Romantic composer Hector Berlioz used opium for inspiration, subsequently producing his Symphonie Fantastique. In this work, a young artist overdoses on opium and experiences a series of visions of his unrequited love. In the US, the Drug Enforcement Administration raided Thomas Jefferson's Monticello estate in 1987. It removed the poppy plants that had been planted continually there since Jefferson was alive and using opium from them. Employees of the foundation also destroyed gift shop items like shirts depicting the poppy and packets of the heirloom seed. Poppy seeds and oil Poppy seeds from Papaver somniferum are an important food item and the source of poppy seed oil, an edible oil that has many uses. The seeds contain very low levels of opiates and the oil extracted from them contains even less. Both the oil and the seed residue also have commercial uses. The poppy press cake as a residue of the oil pressing can be used as fodder for different animals as e.g., poultry and fancy fowls. Especially in the time of the molt of the birds, the cake is nutritive and fits to their special needs. Next to the animal fodder, poppy offers other by-products. For example, the stem of the plant can be used for energy briquettes and pellets to heat. 
Poppy seeds are used as a food in many cultures. They may be used whole by bakers to decorate their products or milled and mixed with sugar as a sweet filling. They have a creamy and nut-like flavor, and when used with ground coconut, the seeds provide a unique and flavour-rich curry base. They can be dry roasted and ground to be used in wet curry (curry paste) or dry curry. When the European Union attempted to ban the cultivation of Papaver somniferum by private individuals on a small scale (such as personal gardens), citizens in EU countries where poppy seed is eaten heavily, such as countries in the Central-Eastern region, strongly resisted the plan, causing the EU to change course. Singapore, UAE, and Saudi Arabia are among nations that ban even having poppy seeds, not just growing the plants for them. The UAE has a long prison sentence for anyone possessing poppy seeds. Opiates The opium poppy is the principal source of opium, the dried latex produced by the seed pods. Opium contains a class of naturally occurring alkaloids known as opiates, that include morphine, codeine, thebaine, oripavine, papaverine and noscapine. The specific epithet somniferum means "sleep-bringing", referring to the sedative properties of some of these opiates. The opiate drugs are extracted from opium. The latex oozes from incisions made on the green seed pods and is collected once dry. Tincture of opium or laudanum, consisting of opium dissolved in alcohol or a mixture of alcohol and water, is one of many unapproved drugs regulated by the U.S. Food and Drug Administration (FDA). Its marketing and distribution persists because its historical use preceded the Federal Food, Drug & Cosmetic Act of 1938. Tincture of opium B.P., containing 1% w/v of anhydrous morphine, also remains in the British Pharmacopoeia, listed as a Class A substance under the Misuse of Drugs Act 1971. 
Morphine is the predominant alkaloid found in the cultivated varieties of opium poppy that are used for opium production. Other varieties produce minimal opium or none at all, such as the latex-free Sujata type. Non-opium cultivars that are planted for drug production feature a high level of thebaine or oripavine. Those are refined into drugs like oxycodone. Raw opium contains about 8–14% morphine by dry weight, or more in high-yield cultivars. It may be used directly or chemically modified to produce semi-synthetic opioids such as heroin. Culture Opium poppies (flower and fruit) appear on the coat of arms of the Royal College of Anaesthetists.
Biology and health sciences
Ranunculales
Plants
https://en.wikipedia.org/wiki/Unit%20operation
Unit operation
In chemical engineering and related fields, a unit operation is a basic step in a process. Unit operations involve a physical change or chemical transformation such as separation, crystallization, evaporation, filtration, polymerization, isomerization, and other reactions. For example, in milk processing, the following unit operations are involved: homogenization, pasteurization, and packaging. These unit operations are connected to create the overall process. A process may require many unit operations to obtain the desired product from the starting materials, or feedstocks. History Historically, the different chemical industries were regarded as separate industrial processes governed by different principles. Arthur Dehon Little developed the concept of "unit operations" to explain industrial chemistry processes in 1916. In 1923, William H. Walker, Warren K. Lewis and William H. McAdams wrote the book The Principles of Chemical Engineering and explained that the variety of chemical industries have processes which follow the same physical laws. They summed up these similar processes into unit operations. Each unit operation follows the same physical laws and may be used in all relevant chemical industries. For instance, the same engineering is required to design a mixer for either napalm or porridge, even if the use, market or manufacturers are very different. The unit operations form the fundamental principles of chemical engineering. Chemical engineering Chemical engineering unit operations consist of five classes: Fluid flow processes, including fluids transportation, filtration, and solids fluidization. Heat transfer processes, including evaporation and heat exchange. Mass transfer processes, including gas absorption, distillation, extraction, adsorption, and drying. Thermodynamic processes, including gas liquefaction, and refrigeration. Mechanical processes, including solids transportation, crushing and pulverization, and screening and sieving.
Chemical engineering unit operations also fall in the following categories which involve elements from more than one class: Combination (mixing) Separation (distillation, crystallization) Reaction (chemical reaction) Furthermore, there are some unit operations which combine even these categories, such as reactive distillation and stirred tank reactors. A "pure" unit operation is a physical transport process, while a mixed chemical/physical process requires modeling both the physical transport, such as diffusion, and the chemical reaction. This is usually necessary for designing catalytic reactions, and is considered a separate discipline, termed chemical reaction engineering. Chemical engineering unit operations and chemical engineering unit processing form the main principles of all kinds of chemical industries and are the foundation of designs of chemical plants, factories, and equipment used. In general, unit operations are designed by writing down the balances for the transported quantity for each elementary component (which may be infinitesimal) in the form of equations, and solving the equations for the design parameters, then selecting an optimal solution out of the several possible and then designing the physical equipment. For instance, distillation in a plate column is analyzed by writing down the mass balances for each plate, wherein the known vapor-liquid equilibrium and efficiency, drip out and drip in comprise the total mass flows, with a sub-flow for each component. Combining a stack of these gives the system of equations for the whole column. There is a range of solutions, because a higher reflux ratio enables fewer plates, and vice versa. The engineer must then find the optimal solution with respect to acceptable volume holdup, column height and cost of construction.
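The balance-writing approach described above can be sketched for the simplest unit operation, a steady-state mixer: total mass in equals total mass out, and the same holds per component. The stream data below are assumed purely for illustration.

```python
# Minimal sketch of steady-state mass balances for a mixing unit
# operation (illustrative stream data assumed, units kg/h).
def mix(streams):
    """streams: list of (mass_flow, {component: mass_fraction}).
    Returns outlet mass flow and outlet mass fractions."""
    total = sum(flow for flow, _ in streams)          # total mass balance
    out = {}
    for flow, fracs in streams:
        for comp, x in fracs.items():
            out[comp] = out.get(comp, 0.0) + flow * x  # component balances
    return total, {c: m / total for c, m in out.items()}

flow, fracs = mix([(100.0, {"water": 0.90, "salt": 0.10}),
                   (50.0,  {"water": 0.60, "salt": 0.40})])
print(flow)   # 150.0 kg/h out
print(fracs)  # {'water': 0.8, 'salt': 0.2}
```

More elaborate units, such as the plate column mentioned above, stack one such balance per plate and solve the resulting system of equations simultaneously rather than one stream at a time.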
Physical sciences
Chemical engineering
Chemistry
https://en.wikipedia.org/wiki/Habitable%20zone
Habitable zone
In astronomy and astrobiology, the habitable zone (HZ), or more precisely the circumstellar habitable zone (CHZ), is the range of orbits around a star within which a planetary surface can support liquid water given sufficient atmospheric pressure. The bounds of the HZ are based on Earth's position in the Solar System and the amount of radiant energy it receives from the Sun. Due to the importance of liquid water to Earth's biosphere, the nature of the HZ and the objects within it may be instrumental in determining the scope and distribution of planets capable of supporting Earth-like extraterrestrial life and intelligence. The habitable zone is also called the Goldilocks zone, a metaphor, allusion and antonomasia of the children's fairy tale of "Goldilocks and the Three Bears", in which a little girl chooses from sets of three items, rejecting the ones that are too extreme (large or small, hot or cold, etc.), and settling on the one in the middle, which is "just right". Since the concept was first presented in 1953, many stars have been confirmed to possess an HZ planet, including some systems that consist of multiple HZ planets. Most such planets, being either super-Earths or gas giants, are more massive than Earth, because massive planets are easier to detect. On November 4, 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs in the Milky Way. About 11 billion of these may be orbiting Sun-like stars. Proxima Centauri b, located about 4.2 light-years (1.3 parsecs) from Earth in the constellation of Centaurus, is the nearest known exoplanet, and is orbiting in the habitable zone of its star. The HZ is also of particular interest to the emerging field of habitability of natural satellites, because planetary-mass moons in the HZ might outnumber planets. 
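Because the HZ is defined by the radiant energy a planet receives, its distance scales with the square root of the star's luminosity: a planet receiving Earth's flux from a star of luminosity L sits at √(L/L☉) AU. The sketch below uses illustrative effective-flux limits roughly matching commonly cited conservative bounds (about 0.95–1.37 AU for the Sun); published limits vary by climate model, so treat the numbers as assumptions.

```python
import math

# Simplified habitable-zone scaling, not a definitive boundary model.
# inner_flux / outer_flux are effective stellar flux limits in Earth
# units (illustrative values; real limits depend on the climate model
# and on the star's spectral type).
def hz_bounds_au(luminosity_solar, inner_flux=1.1, outer_flux=0.53):
    inner = math.sqrt(luminosity_solar / inner_flux)
    outer = math.sqrt(luminosity_solar / outer_flux)
    return inner, outer

print(hz_bounds_au(1.0))     # roughly (0.95, 1.37) AU for a Sun-like star
print(hz_bounds_au(0.0017))  # a dim red dwarf: the whole HZ lies inside 0.1 AU
```

The second call illustrates why HZ planets of red dwarfs, such as Proxima Centauri b, orbit far closer to their star than Earth does to the Sun.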
In subsequent decades, the HZ concept began to be challenged as a primary criterion for life, so the concept is still evolving. Since the discovery of evidence for extraterrestrial liquid water, substantial quantities of it are now thought to occur outside the circumstellar habitable zone. The concept of deep biospheres, like Earth's, that exist independently of stellar energy, is now generally accepted in astrobiology given the large amount of liquid water known to exist in lithospheres and asthenospheres of the Solar System. Sustained by other energy sources, such as tidal heating or radioactive decay, or pressurized by non-atmospheric means, liquid water may be found even on rogue planets, or their moons. Liquid water can also exist at a wider range of temperatures and pressures as a solution, for example with sodium chlorides in seawater on Earth, chlorides and sulphates on equatorial Mars, or ammoniates, due to its different colligative properties. In addition, other circumstellar zones, where non-water solvents favorable to hypothetical life based on alternative biochemistries could exist in liquid form at the surface, have been proposed. History An estimate of the range of distances from the Sun allowing the existence of liquid water appears in Newton's Principia (Book III, Section 1, corol. 4). The philosopher Louis Claude de Saint-Martin speculated in his 1802 work Man: His True Nature and Ministry, "... we may presume, that, being susceptible of vegetation, it [the Earth] has been placed, in the series of planets, in the rank which was necessary, and at exactly the right distance from the sun, to accomplish its secondary object of vegetation; and from this we might infer that the other planets are either too near or too remote from the sun, to vegetate." The concept of a circumstellar habitable zone was first introduced in 1913, by Edward Maunder in his book "Are The Planets Inhabited?". 
The concept was later discussed in 1953 by Hubertus Strughold, who in his treatise The Green and the Red Planet: A Physiological Study of the Possibility of Life on Mars, coined the term "ecosphere" and referred to various "zones" in which life could emerge. In the same year, Harlow Shapley wrote "Liquid Water Belt", which described the same concept in further scientific detail. Both works stressed the importance of liquid water to life. Su-Shu Huang, an American astrophysicist, first introduced the term "habitable zone" in 1959 to refer to the area around a star where liquid water could exist on a sufficiently large body, and was the first to introduce it in the context of planetary habitability and extraterrestrial life. A major early contributor to the habitable zone concept, Huang argued in 1960 that circumstellar habitable zones, and by extension extraterrestrial life, would be uncommon in multiple star systems, given the gravitational instabilities of those systems. The concept of habitable zones was further developed in 1964 by Stephen H. Dole in his book Habitable Planets for Man, in which he discussed the concept of the circumstellar habitable zone as well as various other determinants of planetary habitability, eventually estimating the number of habitable planets in the Milky Way to be about 600 million. At the same time, science-fiction author Isaac Asimov introduced the concept of a circumstellar habitable zone to the general public through his various explorations of space colonization. The term "Goldilocks zone" emerged in the 1970s, referencing specifically a region around a star whose temperature is "just right" for water to be present in the liquid phase. In 1993, astronomer James Kasting introduced the term "circumstellar habitable zone" to refer more precisely to the region then (and still) known as the habitable zone. Kasting was the first to present a detailed model for the habitable zone for exoplanets. 
An update to the habitable zone concept came in 2000 when astronomers Peter Ward and Donald Brownlee introduced the idea of the "galactic habitable zone", which they later developed with Guillermo Gonzalez. The galactic habitable zone, defined as the region where life is most likely to emerge in a galaxy, encompasses those regions close enough to a galactic center that stars there are enriched with heavier elements, but not so close that star systems, planetary orbits, and the emergence of life would be frequently disrupted by the intense radiation and enormous gravitational forces commonly found at galactic centers. Subsequently, some astrobiologists proposed that the concept be extended to other solvents, including dihydrogen, sulfuric acid, dinitrogen, formamide, and methane, among others, which would support hypothetical life forms that use an alternative biochemistry. In 2013, further developments in habitable zone concepts were made with the proposal of a circumplanetary habitable zone, also known as the "habitable edge", to encompass the region around a planet where the orbits of natural satellites would not be disrupted, and at the same time tidal heating from the planet would not cause liquid water to boil away. It has been noted that the term 'circumstellar habitable zone' causes confusion, as the name suggests that planets within this region will possess a habitable environment. However, surface conditions are dependent on a host of different individual properties of that planet. This misunderstanding is reflected in excited reports of 'habitable planets'. Since it is completely unknown whether conditions on these distant HZ worlds could host life, different terminology is needed. Determination Whether a body is in the circumstellar habitable zone of its host star is dependent on the radius of the planet's orbit (for natural satellites, the host planet's orbit), the mass of the body itself, and the radiative flux of the host star. 
Given the large spread in the masses of planets within a circumstellar habitable zone, coupled with the discovery of super-Earth planets which can sustain thicker atmospheres and stronger magnetic fields than Earth, circumstellar habitable zones are now split into two separate regions—a "conservative habitable zone" in which lower-mass planets like Earth can remain habitable, complemented by a larger "extended habitable zone" in which a planet like Venus, with stronger greenhouse effects, can have the right temperature for liquid water to exist at the surface. Solar System estimates Estimates for the habitable zone within the Solar System range from 0.38 to 10.0 astronomical units, though arriving at these estimates has been challenging for a variety of reasons. Numerous planetary mass objects orbit within, or close to, this range and as such receive sufficient sunlight to raise temperatures above the freezing point of water. However, their atmospheric conditions vary substantially. The aphelion of Venus, for example, touches the inner edge of the zone in most estimates, and while atmospheric pressure at the surface is sufficient for liquid water, a strong greenhouse effect raises surface temperatures to a level at which water can only exist as vapor. The entire orbits of the Moon, Mars, and numerous asteroids also lie within various estimates of the habitable zone. Only at Mars' lowest elevations (less than 30% of the planet's surface) are atmospheric pressure and temperature sufficient for water to, if present, exist in liquid form for short periods. At Hellas Basin, for example, atmospheric pressures can reach 1,115 Pa and temperatures can climb above zero Celsius (about the temperature of the triple point of water) for 70 days in the Martian year. Despite indirect evidence in the form of seasonal flows on warm Martian slopes, no confirmation has been made of the presence of liquid water there. 
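The triple-point argument above can be sketched as a crude check on whether surface conditions permit liquid water at all; the 611.657 Pa triple-point pressure is a standard physical constant rather than a figure from this article, and the function and its simplified treatment of the boiling point are this sketch's own assumptions:

```python
TRIPLE_POINT_PA = 611.657  # pressure at water's triple point (standard value)

def liquid_water_possible(pressure_pa, temp_c):
    """Crude test for liquid water: surface pressure must exceed the
    triple-point pressure and the temperature must be above freezing.
    (Ignores the pressure-dependent boiling point, so this is only a sketch.)"""
    return pressure_pa > TRIPLE_POINT_PA and temp_c > 0.0

print(liquid_water_possible(1115, 5))   # Hellas Basin on a warm day: True
print(liquid_water_possible(600, 20))   # typical average Martian pressure: False
```

The Hellas Basin figures are the ones quoted above; the 600 Pa comparison value is an assumed typical Martian surface pressure.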
While other objects orbit partly within this zone, including comets, Ceres is the only one of planetary mass. A combination of low mass and an inability to mitigate evaporation and atmosphere loss against the solar wind make it impossible for these bodies to sustain liquid water on their surface. Despite this, studies strongly suggest that liquid water once existed on the surfaces of Venus, Mars, Vesta and Ceres, which would make it a more common phenomenon than previously thought. Since sustainable liquid water is thought to be essential to support complex life, most estimates, therefore, are inferred from the effect that a repositioned orbit would have on the habitability of Earth or Venus, as their surface gravity allows sufficient atmosphere to be retained for several billion years. According to the extended habitable zone concept, planetary-mass objects with atmospheres capable of inducing sufficient radiative forcing could possess liquid water farther out from the Sun. Such objects could include those whose atmospheres contain a high component of greenhouse gas and terrestrial planets much more massive than Earth (super-Earth class planets) that have retained atmospheres with surface pressures of up to 100 kbar. There are no examples of such objects in the Solar System to study; not enough is known about the nature of the atmospheres of these kinds of extrasolar objects, so their position in the habitable zone alone cannot determine the net temperature effect of such atmospheres, including induced albedo, anti-greenhouse effects or other possible heat sources. For reference, the average distance from the Sun of some major bodies within the various estimates of the habitable zone is: Mercury, 0.39 AU; Venus, 0.72 AU; Earth, 1.00 AU; Mars, 1.52 AU; Vesta, 2.36 AU; Ceres and Pallas, 2.77 AU; Jupiter, 5.20 AU; Saturn, 9.58 AU. 
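A short sketch can show how different choices of zone boundary sweep up the bodies just listed; the distances are the values quoted above, while the specific inner/outer bounds passed in below are illustrative, not any particular published estimate:

```python
# Mean orbital distances (AU) for the bodies listed above.
BODIES = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Vesta": 2.36, "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58,
}

def in_zone(inner_au, outer_au):
    """Return the bodies whose mean distance lies within [inner_au, outer_au]."""
    return [name for name, dist in BODIES.items() if inner_au <= dist <= outer_au]

print(in_zone(0.95, 1.37))  # a narrow, conservative choice of bounds
print(in_zone(0.38, 10.0))  # the full 0.38-10.0 AU range of estimates
```

With the narrow bounds only Earth qualifies, while the 0.38–10.0 AU range captures every body in the list, mirroring the spread of estimates discussed above.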
In the most conservative estimates, only Earth lies within the zone; in the most permissive estimates, even Saturn at perihelion, or Mercury at aphelion, might be included. Extrasolar extrapolation Astronomers use stellar flux and the inverse-square law to extrapolate circumstellar habitable zone models created for the Solar System to other stars. For example, according to Kopparapu's habitable zone estimate, although the Solar System has a circumstellar habitable zone centered at 1.34 AU from the Sun, a star with 0.25 times the luminosity of the Sun would have a habitable zone centered at √0.25, or 0.5, times that distance from the star, corresponding to a distance of 0.67 AU. Various complicating factors, though, including the individual characteristics of stars themselves, mean that extrasolar extrapolation of the HZ concept is more complex. Spectral types and star-system characteristics Some scientists argue that the concept of a circumstellar habitable zone is actually limited to stars in certain types of systems or of certain spectral types. Binary systems, for example, have circumstellar habitable zones that differ from those of single-star planetary systems, in addition to the orbital stability concerns inherent with a three-body configuration. If the Solar System were such a binary system, the outer limits of the resulting circumstellar habitable zone could extend as far as 2.4 AU. With regard to spectral types, Zoltán Balog proposes that O-type stars cannot form planets due to the photoevaporation caused by their strong ultraviolet emissions. Studying ultraviolet emissions, Andrea Buccino found that only 40% of stars studied (including the Sun) had overlapping liquid water and ultraviolet habitable zones. Stars smaller than the Sun, on the other hand, have distinct impediments to habitability. 
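The inverse-square extrapolation just described amounts to scaling the Sun's habitable-zone distance by the square root of the star's relative luminosity; the sketch below uses the 1.34 AU solar HZ center quoted above, and the function name is illustrative:

```python
import math

SOLAR_HZ_CENTER_AU = 1.34  # Kopparapu's HZ-center estimate for the Sun, quoted above

def hz_center(luminosity_solar):
    """Habitable-zone center for a star of the given luminosity (in solar
    units): equal stellar flux is received at d = d_sun * sqrt(L / L_sun)."""
    return SOLAR_HZ_CENTER_AU * math.sqrt(luminosity_solar)

print(round(hz_center(0.25), 2))  # 0.67 AU, the worked example in the text
print(round(hz_center(1.0), 2))   # 1.34 AU for a solar twin
```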
For such smaller stars, Michael Hart proposed that only main-sequence stars of spectral class K0 or brighter could offer habitable zones, an idea which has evolved in modern times into the concept of a tidal locking radius for red dwarfs. Within this radius, which coincides with the red-dwarf habitable zone, it has been suggested that the volcanism caused by tidal heating could produce a "tidal Venus" planet with high temperatures and no hospitable environment for life. Others maintain that circumstellar habitable zones are more common and that it is indeed possible for water to exist on planets orbiting cooler stars. Climate modeling from 2013 supports the idea that red dwarf stars can support planets with relatively constant temperatures over their surfaces in spite of tidal locking. Astronomy professor Eric Agol argues that even white dwarfs may support a relatively brief habitable zone through planetary migration. At the same time, others have written in similar support of semi-stable, temporary habitable zones around brown dwarfs. Also, a habitable zone in the outer parts of stellar systems may exist during the pre-main-sequence phase of stellar evolution, especially around M-dwarfs, potentially lasting for billion-year timescales. Stellar evolution Circumstellar habitable zones change over time with stellar evolution. For example, hot O-type stars, which may remain on the main sequence for fewer than 10 million years, would have rapidly changing habitable zones not conducive to the development of life. Red dwarf stars, on the other hand, which can live for hundreds of billions of years on the main sequence, would have planets with ample time for life to develop and evolve. 
Even while stars are on the main sequence, though, their energy output steadily increases, pushing their habitable zones farther out; our Sun, for example, was 75% as bright in the Archaean as it is now, and in the future, continued increases in energy output will put Earth outside the Sun's habitable zone, even before it reaches the red giant phase. In order to deal with this increase in luminosity, the concept of a continuously habitable zone has been introduced. As the name suggests, the continuously habitable zone is a region around a star in which planetary-mass bodies can sustain liquid water for a given period. Like the general circumstellar habitable zone, the continuously habitable zone of a star is divided into a conservative and an extended region. In red dwarf systems, gigantic stellar flares, which can double a star's brightness in minutes, and huge starspots, which can cover 20% of the star's surface area, have the potential to strip an otherwise habitable planet of its atmosphere and water. As with more massive stars, though, stellar evolution changes their nature and energy flux, so by about 1.2 billion years of age, red dwarfs generally become sufficiently constant to allow for the development of life. Once a star has evolved sufficiently to become a red giant, its circumstellar habitable zone will change dramatically from its main-sequence size. For example, the Sun is expected to engulf the previously habitable Earth as a red giant. However, once a red giant star reaches the horizontal branch, it achieves a new equilibrium and can sustain a new circumstellar habitable zone, which in the case of the Sun would range from 7 to 22 AU. At that stage, Saturn's moon Titan would likely be habitable in Earth's temperature sense. 
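The gradual main-sequence brightening described above can be quantified with a standard approximation for solar luminosity over time (Gough 1981); the approximation and the assumed 4.57 Gyr present solar age are outside assumptions of this sketch, not figures from the article:

```python
import math

T_SUN_GYR = 4.57  # assumed present age of the Sun

def solar_luminosity(age_gyr):
    """Gough's (1981) main-sequence approximation:
    L(t)/L_sun = 1 / (1 + (2/5) * (1 - t / t_sun))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / T_SUN_GYR))

# Luminosity in the mid-Archaean (~3.5 billion years ago) versus today,
# and where a habitable-zone boundary now at 1.0 AU sat back then
# (boundaries scale with the square root of luminosity):
past = solar_luminosity(T_SUN_GYR - 3.5)
print(round(past, 2))             # close to the ~75% quoted above
print(round(math.sqrt(past), 2))  # the boundary, in AU, was closer in
```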
Given that this horizontal-branch equilibrium lasts for about 1 Gyr, and because life on Earth emerged at the latest by 0.7 Gyr after the formation of the Solar System, life could conceivably develop on planetary-mass objects in the habitable zone of red giants. However, around such a helium-burning star, important life processes like photosynthesis could only happen around planets where the atmosphere has carbon dioxide, as by the time a solar-mass star becomes a red giant, planetary-mass bodies would have already absorbed much of their free carbon dioxide. Moreover, as Ramirez and Kaltenegger (2016) showed, intense stellar winds would completely remove the atmospheres of such smaller planetary bodies, rendering them uninhabitable anyway. Thus, Titan would not be habitable even after the Sun becomes a red giant. Nevertheless, life need not originate during this stage of stellar evolution for it to be detected. Once the star becomes a red giant, and the habitable zone extends outward, the icy surface would melt, forming a temporary atmosphere that can be searched for signs of life that may have been thriving before the start of the red giant stage. Desert planets A planet's atmospheric conditions influence its ability to retain heat, so the location of the habitable zone is also specific to each type of planet: desert planets (also known as dry planets), with very little water, will have less water vapor in the atmosphere than Earth and so have a reduced greenhouse effect, meaning that a desert planet could maintain oases of water closer to its star than Earth is to the Sun. The lack of water also means there is less ice to reflect heat into space, so the outer edge of desert-planet habitable zones is further out. Other considerations A planet cannot have a hydrosphere—a key ingredient for the formation of carbon-based life—unless there is a source for water within its stellar system. 
The origin of water on Earth is still not completely understood; possible sources include the result of impacts with icy bodies, outgassing, mineralization, leakage from hydrous minerals from the lithosphere, and photolysis. For an extrasolar system, an icy body from beyond the frost line could migrate into the habitable zone of its star, creating an ocean planet with seas hundreds of kilometers deep, such as GJ 1214 b or Kepler-22b may be. Maintenance of liquid surface water also requires a sufficiently thick atmosphere. Possible origins of terrestrial atmospheres are currently theorised to include outgassing, impact degassing and ingassing. Atmospheres are thought to be maintained through similar processes, along with biogeochemical cycles and the mitigation of atmospheric escape. In a 2013 study led by Italian astronomer Giovanni Vladilo, it was shown that the size of the circumstellar habitable zone increased with greater atmospheric pressure. Below an atmospheric pressure of about 15 millibars, it was found that habitability could not be maintained, because even a small shift in pressure or temperature could render water unable to form as a liquid. Although traditional definitions of the habitable zone assume that carbon dioxide and water vapor are the most important greenhouse gases (as they are on the Earth), a study led by Ramses Ramirez and co-author Lisa Kaltenegger has shown that the size of the habitable zone is greatly increased if prodigious volcanic outgassing of hydrogen is also included along with the carbon dioxide and water vapor. The outer edge in the Solar System would extend out as far as 2.4 AU in that case. Similar increases in the size of the habitable zone were computed for other stellar systems. 
An earlier study by Ray Pierrehumbert and Eric Gaidos had eliminated the CO2-H2O concept entirely, arguing that young planets could accrete many tens to hundreds of bars of hydrogen from the protoplanetary disc, providing enough of a greenhouse effect to extend the Solar System's outer edge to 10 AU. In this case, though, the hydrogen is not continuously replenished by volcanism and is lost within millions to tens of millions of years. In the case of planets orbiting in the HZs of red dwarf stars, the extremely close distances to the stars cause tidal locking, an important factor in habitability. For a tidally locked planet, the sidereal day is as long as the orbital period, causing one side to permanently face the host star and the other side to face away. In the past, such tidal locking was thought to cause extreme heat on the star-facing side and bitter cold on the opposite side, making many red dwarf planets uninhabitable; however, three-dimensional climate models in 2013 showed that the side of a red dwarf planet facing the host star could have extensive cloud cover, increasing its bond albedo and significantly reducing temperature differences between the two sides. Planetary-mass natural satellites have the potential to be habitable as well. However, these bodies need to fulfill additional parameters, in particular being located within the circumplanetary habitable zones of their host planets. More specifically, moons need to be far enough from their host giant planets that they are not transformed by tidal heating into volcanic worlds like Io, but must remain within the Hill radius of the planet so that they are not pulled out of the orbit of their host planet. Red dwarfs that have masses less than 20% of that of the Sun cannot have habitable moons around giant planets, as the small size of the circumstellar habitable zone would put a habitable moon so close to the star that it would be stripped from its host planet. 
In such a system, a moon close enough to its host planet to maintain its orbit would have tidal heating so intense as to eliminate any prospects of habitability. A planetary object that orbits a star with high orbital eccentricity may spend only some of its year in the HZ and experience a large variation in temperature and atmospheric pressure. This would result in dramatic seasonal phase shifts where liquid water may exist only intermittently. It is possible that subsurface habitats could be insulated from such changes and that extremophiles on or near the surface might survive through adaptations such as hibernation (cryptobiosis) and/or hyperthermostability. Tardigrades, for example, can survive in a dehydrated state at temperatures ranging from just above absolute zero to well above the boiling point of water. Life on a planetary object orbiting outside the HZ might hibernate as the planet approaches apastron, where it is coolest, and become active on approach to periastron, when it is sufficiently warm. Extrasolar discoveries A 2015 review concluded that the exoplanets Kepler-62f, Kepler-186f and Kepler-442b were likely the best candidates for being potentially habitable. These are at distances of 990, 490 and 1,120 light-years away, respectively. Of these, Kepler-186f is closest in size to Earth, with 1.2 times Earth's radius, and it is located towards the outer edge of the habitable zone around its red dwarf star. Among the nearest terrestrial exoplanet candidates, Tau Ceti e is 11.9 light-years away. It is at the inner edge of its planetary system's habitable zone, giving it a high estimated average surface temperature. Studies that have attempted to estimate the number of terrestrial planets within the circumstellar habitable zone tend to reflect the availability of scientific data. A 2013 study by Ravi Kumar Kopparapu put ηe, the fraction of stars with planets in the HZ, at 0.48, meaning that there may be roughly 95–180 billion habitable planets in the Milky Way. 
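The intermittent habitability of highly eccentric orbits described above can be sketched by comparing an orbit's perihelion and aphelion with the zone's boundaries; the habitable-zone bounds used below are illustrative values, not any particular published estimate:

```python
def orbit_extremes(a_au, e):
    """Perihelion and aphelion for semi-major axis a (AU) and eccentricity e."""
    return a_au * (1 - e), a_au * (1 + e)

def hz_overlap(a_au, e, inner_au, outer_au):
    """Classify how an orbit relates to a habitable zone [inner_au, outer_au]."""
    peri, apo = orbit_extremes(a_au, e)
    if inner_au <= peri and apo <= outer_au:
        return "always in HZ"
    if apo < inner_au or peri > outer_au:
        return "never in HZ"
    return "intermittently in HZ"

# A highly eccentric orbit dips inside the inner edge and swings past the outer one:
print(hz_overlap(1.0, 0.6, 0.95, 1.4))  # intermittently in HZ
```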
However, such figures are merely statistical predictions; only a small fraction of these possible planets have yet been discovered. Previous studies have been more conservative. In 2011, Seth Borenstein concluded that there are roughly 500 million habitable planets in the Milky Way. A 2011 study by NASA's Jet Propulsion Laboratory, based on observations from the Kepler mission, raised the number somewhat, estimating that about "1.4 to 2.7 percent" of all stars of spectral class F, G, and K are expected to have planets in their HZs. Early findings The first discoveries of extrasolar planets in the HZ occurred just a few years after the first extrasolar planets were discovered. However, these early detections were all gas giant-sized, and many were in eccentric orbits. Despite this, studies indicate the possibility of large, Earth-like moons around these planets supporting liquid water. One of the first discoveries was 70 Virginis b, a gas giant initially nicknamed "Goldilocks" due to it being neither "too hot" nor "too cold". Later study revealed temperatures analogous to Venus, ruling out any potential for liquid water. 16 Cygni Bb, also discovered in 1996, has an extremely eccentric orbit that spends only part of its time in the HZ; such an orbit causes extreme seasonal effects. In spite of this, simulations have suggested that a sufficiently large companion could support surface water year-round. Gliese 876 b, discovered in 1998, and Gliese 876 c, discovered in 2001, are both gas giants discovered in the habitable zone around Gliese 876 that may also have large moons. Another gas giant, Upsilon Andromedae d, was discovered in 1999 orbiting Upsilon Andromedae's habitable zone. Announced on April 4, 2001, HD 28185 b is a gas giant found to orbit entirely within its star's circumstellar habitable zone; it has a low orbital eccentricity, comparable to that of Mars in the Solar System. 
Tidal interactions suggest it could harbor habitable Earth-mass satellites in orbit around it for many billions of years, though it is unclear whether such satellites could form in the first place. HD 69830 d, a gas giant with 17 times the mass of Earth, was found in 2006 orbiting within the circumstellar habitable zone of HD 69830, 41 light years away from Earth. The following year, 55 Cancri f was discovered within the HZ of its host star 55 Cancri A. Hypothetical satellites with sufficient mass and a suitable composition are thought to be able to support liquid water at their surfaces. Though, in theory, such giant planets could possess moons, the technology did not exist to detect moons around them, and no extrasolar moons had been discovered. Planets within the zone with the potential for solid surfaces were therefore of much higher interest. Habitable super-Earths The 2007 discovery of Gliese 581c, the first super-Earth in the circumstellar habitable zone, created significant scientific interest in the system, although the planet was later found to have extreme surface conditions that may resemble Venus. Gliese 581 d, another planet in the same system and thought to be a better candidate for habitability, was also announced in 2007. Its existence was disconfirmed in 2014, but the disconfirmation held only briefly; as of 2015, no newer disconfirmations had been published. Gliese 581 g, yet another planet thought to have been discovered in the circumstellar habitable zone of the system, was considered to be more habitable than both Gliese 581 c and d. However, its existence was also disconfirmed in 2014, and astronomers are divided about its existence. Discovered in August 2011, HD 85512 b was initially speculated to be habitable, but the new circumstellar habitable zone criteria devised by Kopparapu et al. in 2013 place the planet outside the circumstellar habitable zone. 
Kepler-22b, discovered in December 2011 by the Kepler space probe, is the first known transiting exoplanet discovered in the habitable zone of a Sun-like star. With a radius 2.4 times that of Earth, Kepler-22b has been predicted by some to be an ocean planet. Gliese 667 Cc, discovered in 2011 but announced in 2012, is a super-Earth orbiting in the circumstellar habitable zone of Gliese 667 C. It is one of the most Earth-like planets known. Gliese 163 c, discovered in September 2012 in orbit around the red dwarf Gliese 163, is located 49 light years from Earth. The planet has 6.9 Earth masses and 1.8–2.4 Earth radii, and with its close orbit receives 40 percent more stellar radiation than Earth, leading to high surface temperatures. HD 40307 g, a candidate planet tentatively discovered in November 2012, is in the circumstellar habitable zone of HD 40307. In December 2012, Tau Ceti e and Tau Ceti f were found in the circumstellar habitable zone of Tau Ceti, a Sun-like star 12 light years away. Although more massive than Earth, they are among the least massive planets found to date orbiting in the habitable zone; however, Tau Ceti f, like HD 85512 b, did not fit the new circumstellar habitable zone criteria established by the 2013 Kopparapu study. It is now considered uninhabitable. Near Earth-sized planets and Solar analogs Recent discoveries have uncovered planets that are thought to be similar in size or mass to Earth. "Earth-sized" ranges are typically defined by mass. The lower bound used in many definitions of the super-Earth class is 1.9 Earth masses; likewise, sub-Earths range up to the size of Venus (~0.815 Earth masses). An upper limit of 1.5 Earth radii is also considered, given that above this radius the average planet density rapidly decreases with increasing radius, indicating that these planets have a significant fraction of volatiles by volume overlying a rocky core. 
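The mass thresholds just quoted lend themselves to a simple classifier; the class names and cutoffs below come straight from the text, while the function itself is only an illustrative sketch:

```python
def classify_by_mass(mass_earth):
    """Rough size class from the mass thresholds quoted above."""
    if mass_earth < 0.815:   # sub-Earths range up to roughly Venus's mass
        return "sub-Earth"
    if mass_earth < 1.9:     # common lower bound of the super-Earth class
        return "Earth-sized"
    return "super-Earth"

print(classify_by_mass(0.5), classify_by_mass(1.0), classify_by_mass(3.0))
```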
A genuinely Earth-like planet – an Earth analog or "Earth twin" – would need to meet many conditions beyond size and mass; such properties are not observable using current technology. A solar analog (or "solar twin") is a star that resembles the Sun. No solar twin exactly matching the Sun has been found. However, some stars are nearly identical to the Sun and are considered solar twins. An exact solar twin would be a G2V star with a surface temperature of 5,778 K, an age of 4.6 billion years, the correct metallicity, and a solar luminosity variation of only 0.1%. Stars with an age of 4.6 billion years are at their most stable state. Proper metallicity and size are also critical to low luminosity variation. Using data collected by NASA's Kepler space telescope and the W. M. Keck Observatory, scientists have estimated that 22% of solar-type stars in the Milky Way galaxy have Earth-sized planets in their habitable zone. On 7 January 2013, astronomers from the Kepler team announced the discovery of Kepler-69c (formerly KOI-172.02), an Earth-size exoplanet candidate (1.7 times the radius of Earth) orbiting Kepler-69, a star similar to the Sun, in the HZ and expected to offer habitable conditions. The discovery of two planets orbiting in the habitable zone of Kepler-62 by the Kepler team was announced on April 19, 2013. The planets, named Kepler-62e and Kepler-62f, are likely solid planets with sizes 1.6 and 1.4 times the radius of Earth, respectively. With a radius estimated at 1.1 times that of Earth, Kepler-186f, whose discovery was announced in April 2014, is the exoplanet closest in size to Earth yet confirmed by the transit method, though its mass remains unknown and its parent star is not a solar analog. Kapteyn b, discovered in June 2014, is a possible rocky world of about 4.8 Earth masses and about 1.5 Earth radii, found orbiting in the habitable zone of the red subdwarf Kapteyn's Star, 12.8 light-years away. 
On 6 January 2015, NASA announced the 1000th confirmed exoplanet discovered by the Kepler Space Telescope. Three of the newly confirmed exoplanets were found to orbit within the habitable zones of their related stars: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth. However, Kepler-438b has since been found to be subject to powerful flares, so it is now considered uninhabitable. On 16 January, K2-3d, a planet of 1.5 Earth radii, was found orbiting within the habitable zone of K2-3, receiving 1.4 times the intensity of visible light as Earth. Kepler-452b, announced on 23 July 2015, is 50% bigger than Earth, likely rocky, and takes approximately 385 Earth days to orbit within the habitable zone of its G-class (solar analog) star Kepler-452. The discovery of a system of three tidally locked planets orbiting the habitable zone of an ultracool dwarf star, TRAPPIST-1, was announced in May 2016. The discovery is considered significant because it dramatically increases the possibility of smaller, cooler, more numerous and closer stars possessing habitable planets. Two potentially habitable planets were discovered by the K2 mission in July 2016 orbiting the M dwarf K2-72, around 227 light years from the Sun: K2-72c and K2-72e are both of similar size to Earth and receive similar amounts of stellar radiation. Announced on 20 April 2017, LHS 1140b is a super-dense super-Earth 39 light years away, with 6.6 times Earth's mass and 1.4 times its radius; its star has 15% of the mass of the Sun but much less observable stellar flare activity than most M dwarfs. The planet is one of the few observable by both transit and radial velocity whose mass has been confirmed and whose atmosphere may be studied. Discovered by radial velocity in June 2017, with approximately three times the mass of Earth, Luyten b orbits within the habitable zone of Luyten's Star, just 12.2 light-years away. 
At 11 light-years away, the second-closest such planet, Ross 128 b, was announced in November 2017 following a decade's radial velocity study of the relatively "quiet" red dwarf star Ross 128. At 1.35 times Earth's mass, it is roughly Earth-sized and likely rocky in composition. Discovered in March 2018, K2-155d is about 1.64 times the radius of Earth, is likely rocky, and orbits in the habitable zone of its red dwarf star 203 light-years away. One of the earliest discoveries by the Transiting Exoplanet Survey Satellite (TESS), announced on 31 July 2019, is the super-Earth planet GJ 357 d, orbiting the outer edge of the habitable zone of a red dwarf 31 light-years away. K2-18b is an exoplanet 124 light-years away, orbiting in the habitable zone of K2-18, a red dwarf. This planet is significant for the water vapor found in its atmosphere, a discovery announced on 17 September 2019. In September 2020, astronomers identified 24 contenders for superhabitable planets (planets better suited for life than Earth) from among the more than 4,000 confirmed exoplanets, based on astrophysical parameters as well as the natural history of known life forms on the Earth. Habitability outside the HZ Liquid-water environments have been found to exist in the absence of atmospheric pressure and at temperatures outside the HZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans. Outside the HZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice. 
With some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. On Earth itself, living organisms may be found well below the surface. Another possibility is that outside the HZ organisms may use alternative biochemistries that do not require water at all. Astrobiologist Christopher McKay has suggested that methane (CH4) may be a solvent conducive to the development of "cryolife", with the Sun's "methane habitable zone" centered well beyond the water habitable zone, at a distance coincident with the location of Titan, whose lakes and rain of methane make it an ideal location to find McKay's proposed cryolife. In addition, testing of a number of organisms has found some are capable of surviving in extra-HZ conditions. Significance for complex and intelligent life The Rare Earth hypothesis argues that complex and intelligent life is uncommon and that the HZ is one of many critical factors. According to Ward & Brownlee (2004) and others, not only are a HZ orbit and surface water primary requirements to sustain life, but they are also requirements to support the secondary conditions needed for multicellular life to emerge and evolve. The secondary habitability factors are both geological (the role of surface water in sustaining necessary plate tectonics) and biochemical (the role of radiant energy in supporting photosynthesis for necessary atmospheric oxygenation). But others, such as Ian Stewart and Jack Cohen in their 2002 book Evolving the Alien, argue that complex intelligent life may arise outside the HZ. Intelligent life outside the HZ may have evolved in subsurface environments, from alternative biochemistries or even from nuclear reactions. On Earth, several complex multicellular life forms (eukaryotes) have been identified with the potential to survive conditions that might exist outside the conservative habitable zone. 
Geothermal energy sustains ancient hydrothermal vent ecosystems, supporting large complex life forms such as Riftia pachyptila. Similar environments may be found in oceans pressurised beneath solid crusts, such as those of Europa and Enceladus, outside of the habitable zone. Numerous microorganisms have been tested in simulated conditions and in low Earth orbit, including eukaryotes. An animal example is the tardigrade Milnesium tardigradum, which can withstand extreme temperatures well above the boiling point of water and the cold vacuum of outer space. In addition, the lichens Rhizocarpon geographicum and Xanthoria elegans have been found to survive in an environment where the atmospheric pressure is far too low for surface liquid water and where the radiant energy is also much lower than that which most plants require to photosynthesize. The fungi Cryomyces antarcticus and Cryomyces minteri are also able to survive and reproduce in Mars-like conditions. Species, including humans, known to possess animal cognition require large amounts of energy, and have adapted to specific conditions, including an abundance of atmospheric oxygen and the availability of large quantities of chemical energy synthesized from radiant energy. If humans are to colonize other planets, true Earth analogs in the HZ are most likely to provide the closest natural habitat; this concept was the basis of Stephen H. Dole's 1964 study. With suitable temperature, gravity, atmospheric pressure and the presence of water, the necessity of spacesuits or space habitat analogs on the surface may be eliminated, and complex Earth life can thrive. Planets in the HZ remain of paramount interest to researchers looking for intelligent life elsewhere in the universe. The Drake equation, sometimes used to estimate the number of intelligent civilizations in our galaxy, contains the factor or parameter n_e, which is the average number of planetary-mass objects orbiting within the HZ of each star. 
A low value lends support to the Rare Earth hypothesis, which posits that intelligent life is a rarity in the Universe, whereas a high value provides evidence for the Copernican mediocrity principle, the view that habitability—and therefore life—is common throughout the Universe. A 1971 NASA report by Drake and Bernard Oliver proposed the "water hole", based on the spectral absorption lines of the hydrogen and hydroxyl components of water, as a good, obvious band for communication with extraterrestrial intelligence that has since been widely adopted by astronomers involved in the search for extraterrestrial intelligence. According to Jill Tarter, Margaret Turnbull and many others, HZ candidates are the priority targets to narrow waterhole searches and the Allen Telescope Array now extends Project Phoenix to such candidates. Because the HZ is considered the most likely habitat for intelligent life, METI efforts have also been focused on systems likely to have planets there. The 2001 Teen Age Message and 2003 Cosmic Call 2, for example, were sent to the 47 Ursae Majoris system, known to contain three Jupiter-mass planets and possibly with a terrestrial planet in the HZ. The Teen Age Message was also directed to the 55 Cancri system, which has a gas giant in its HZ. A Message from Earth in 2008, and Hello From Earth in 2009, were directed to the Gliese 581 system, containing three planets in the HZ—Gliese 581 c, d, and the unconfirmed g.
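The role of this HZ factor in the Drake equation can be sketched numerically. The factor values below are illustrative placeholders only, not estimates drawn from this article or from the literature:

```python
# Sketch of the Drake equation, N = R* * fp * ne * fl * fi * fc * L,
# showing how ne (average number of planets in the HZ per star) scales
# the estimated number of communicating civilizations. All inputs here
# are hypothetical placeholder values for illustration.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Doubling n_e doubles the estimate, everything else held fixed:
low  = drake(R_star=1.0, f_p=0.5, n_e=0.2, f_l=0.1, f_i=0.01, f_c=0.1, L=1000)
high = drake(R_star=1.0, f_p=0.5, n_e=0.4, f_l=0.1, f_i=0.01, f_c=0.1, L=1000)
```

Because the equation is a simple product, the estimate is linear in n_e, which is why surveys that pin down the frequency of HZ planets (such as Kepler's) bear directly on the Rare Earth versus mediocrity debate described above.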
Physical sciences
Planetary science
Astronomy
1072889
https://en.wikipedia.org/wiki/Superacid
Superacid
In chemistry, a superacid (according to the original definition) is an acid with an acidity greater than that of 100% pure sulfuric acid (H2SO4), which has a Hammett acidity function (H0) of −12. According to the modern definition, a superacid is a medium in which the chemical potential of the proton is higher than in pure sulfuric acid. Commercially available superacids include trifluoromethanesulfonic acid (CF3SO3H), also known as triflic acid, and fluorosulfuric acid (HSO3F), both of which are about a thousand times stronger (i.e. have more negative H0 values) than sulfuric acid. Most strong superacids are prepared by the combination of a strong Lewis acid and a strong Brønsted acid. A strong superacid of this kind is fluoroantimonic acid. Another group of superacids, the carborane acid group, contains some of the strongest known acids. Finally, when treated with anhydrous acid, zeolites (microporous aluminosilicate minerals) will contain superacidic sites within their pores. These materials are used on a massive scale by the petrochemical industry in the upgrading of hydrocarbons to make fuels. History The term superacid was originally coined by James Bryant Conant in 1927 to describe acids that were stronger than conventional mineral acids. This definition was refined by Ronald Gillespie in 1971, as any acid with an H0 value lower than that of 100% sulfuric acid (−11.93). George A. Olah prepared the so-called "magic acid", so named for its ability to attack hydrocarbons, by mixing antimony pentafluoride (SbF5) and fluorosulfonic acid (FSO3H). The name was coined after a candle was placed in a sample of magic acid after a Christmas party. The candle dissolved, showing the ability of the acid to protonate alkanes, which under normal acidic conditions do not protonate to any extent. 
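For reference, the Hammett acidity function H0 cited throughout this article is defined through the protonation equilibrium of a weak indicator base B (this is the standard textbook definition, not a formula specific to this article's sources):

```latex
H_0 = \mathrm{p}K_{\mathrm{BH^+}} - \log\frac{[\mathrm{BH^+}]}{[\mathrm{B}]}
```

Because the scale is logarithmic, each unit of H0 represents a tenfold change in protonating ability, which is why fluorosulfuric and triflic acids, with H0 near −15, are described as about a thousand times stronger than sulfuric acid at −12.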
At 140 °C (284 °F), FSO3H–SbF5 protonates methane to give the tertiary-butyl carbocation, a reaction that begins with the protonation of methane:
CH4 + H+ → CH5+
CH5+ → CH3+ + H2
CH3+ + 3 CH4 → (CH3)3C+ + 3 H2
Common uses of superacids include providing an environment to create, maintain, and characterize carbocations. Carbocations are intermediates in numerous useful reactions such as those forming plastics and in the production of high-octane gasoline. Origin of extreme acid strength Traditionally, superacids are made by mixing a Brønsted acid with a Lewis acid. The function of the Lewis acid is to bind to and stabilize the anion that is formed upon dissociation of the Brønsted acid, thereby removing a proton acceptor from the solution and strengthening the proton-donating ability of the solution. For example, fluoroantimonic acid, nominally H[SbF6], can produce solutions with an H0 lower than −28, giving it a protonating ability over a billion times greater than that of 100% sulfuric acid. Fluoroantimonic acid is made by dissolving antimony pentafluoride (SbF5) in anhydrous hydrogen fluoride (HF). In this mixture, HF releases its proton (H+) concomitant with the binding of F− by the antimony pentafluoride. The resulting anion (SbF6−) delocalizes charge effectively and holds onto its electron pairs tightly, making it an extremely poor nucleophile and base. The mixture owes its extraordinary acidity to the weakness of the proton acceptors (and electron pair donors) (Brønsted or Lewis bases) in solution. Because of this, the protons in fluoroantimonic acid and other superacids are popularly described as "naked", being readily donated to substances not normally regarded as proton acceptors, like the C–H bonds of hydrocarbons. However, even for superacidic solutions, protons in the condensed phase are far from being unbound. For instance, in fluoroantimonic acid, they are bound to one or more molecules of hydrogen fluoride. 
Though hydrogen fluoride is normally regarded as an exceptionally weak proton acceptor (though a somewhat better one than the SbF6− anion), dissociation of its protonated form, the fluoronium ion H2F+, to HF and the truly naked H+ is still a highly endothermic process (ΔG° = +113 kcal/mol), and imagining the proton in the condensed phase as being "naked" or "unbound", like charged particles in a plasma, is highly inaccurate and misleading. More recently, carborane acids have been prepared as single-component superacids that owe their strength to the extraordinary stability of the carboranate anion, a family of anions stabilized by three-dimensional aromaticity, as well as by the electron-withdrawing groups typically attached thereto. In superacids, the proton is shuttled rapidly from proton acceptor to proton acceptor by tunneling through a hydrogen bond via the Grotthuss mechanism, just as in other hydrogen-bonded networks, like water or ammonia. Applications In petrochemistry, superacidic media are used as catalysts, especially for alkylations. Typical catalysts are sulfated oxides of titanium and zirconium or specially treated alumina or zeolites. The solid acids are used for alkylating benzene with ethene and propene as well as for difficult acylations, e.g. of chlorobenzene. In organic chemistry, superacids are used as a means of protonating alkanes to generate carbocations in situ during reactions. The resulting carbocations are of much use in the synthesis of numerous organic compounds; the high acidity of the superacids helps to stabilize these highly reactive and unstable intermediates for further reactions. Examples The following are examples of superacids. Each is listed with its Hammett acidity function, where a smaller (more negative) value of H0 indicates a stronger acid. 
Fluoroantimonic acid (HF:SbF5, H0 = −28)
Magic acid (HSO3F:SbF5, H0 = −23)
Carborane acids (H(HCB11X11), H0 ≤ −18, indirectly determined and dependent on substituents)
Fluoroboric acid (HF:BF3, H0 = −16.6)
Bistriflimidic acid (NH(CF3SO2)2, H0 = −15.8; estimated value calculated from pKa values in 1,2-dichloroethane in comparison to triflic acid)
Fluorosulfuric acid (FSO3H, H0 = −15.1)
Hydrogen fluoride (HF, H0 = −15.1)
Triflic acid (HOSO2CF3, H0 = −14.9)
Oleum (SO3:H2SO4, H0 = −14.5)
Perchloric acid (HClO4, H0 = −13)
Sulfuric acid (H2SO4, H0 = −11.9)
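Since H0 is a logarithmic scale, the relative strengths quoted in this article ("about a thousand times", "over a billion times") follow directly from differences in H0. A minimal sketch, using the H0 values listed above (the function name is illustrative):

```python
# Relative protonating power from Hammett acidity values (H0).
# H0 is logarithmic, so a difference of n units corresponds to a
# 10**n-fold change in effective proton-donating ability.
# The values below are the H0 figures quoted in the text.
acids = {
    "sulfuric acid": -11.9,
    "triflic acid": -14.9,
    "fluorosulfuric acid": -15.1,
    "magic acid": -23.0,
    "fluoroantimonic acid": -28.0,
}

def relative_strength(h0_acid, h0_ref):
    """Ratio of protonating ability of an acid relative to a reference acid."""
    return 10 ** (h0_ref - h0_acid)

# Fluoroantimonic acid vs. 100% sulfuric acid: an H0 gap of 16.1 units,
# i.e. a ~10**16-fold difference, comfortably the "over a billion times"
# stated in the text.
ratio = relative_strength(acids["fluoroantimonic acid"], acids["sulfuric acid"])
```

The same arithmetic recovers the "about a thousand times" claim for triflic acid, whose H0 sits three units below that of sulfuric acid.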
Physical sciences
Specific acids
Chemistry
1073054
https://en.wikipedia.org/wiki/Punji%20stick
Punji stick
The punji stick or punji stake is a type of booby-trapped stake. It is a simple spike, made out of wood or bamboo, which is sharpened, heated, and usually set in a hole. Punji sticks are usually deployed in substantial numbers. The Oxford English Dictionary (third edition, 2007) lists less frequent, earlier spellings for "punji stake (or stick)": panja, panjee, panjie, panji, and punge. Description Punji sticks would be placed in areas likely to be passed through by enemy troops. The presence of punji sticks might be camouflaged by natural undergrowth, crops, grass, brush or similar materials. They were often incorporated into various types of traps; for example, a camouflaged pit into which a soldier might fall (it would then be a trou de loup). Sometimes a pit would be dug with punji sticks in the sides pointing downward at an angle. A soldier stepping into the pit would find it impossible to remove their leg without doing severe damage, and injuries might be incurred by the simple act of falling forward while one's leg is in a narrow, vertical, stake-lined pit. Digging the soldier's leg out of such a pit required time and care, immobilizing the unit for longer than if the foot were simply pierced, in which case the victim could be evacuated by stretcher or fireman's carry if necessary. Additional measures included coating the sticks in plant poisons, animal venom, or even human feces, causing infection or poisoning in the victim after being pierced by the sticks, even if the injury itself was not life-threatening. Punji sticks were sometimes deployed in the preparation of an ambush. Soldiers lying in wait for the enemy to pass would deploy punji sticks in the areas where the surprised enemy might be expected to take cover, so that soldiers diving for cover might impale themselves. The point of penetration was usually in the foot or lower leg area. 
Punji sticks were not necessarily meant to kill the person who stepped on them; rather, they were sometimes designed specifically to wound the enemy and slow or halt their unit while the victim was evacuated to a medical facility. Vietnam War In the Vietnam War, this method was used to wound enemy soldiers, who then had to be transported by helicopter to a medical hospital for treatment. Punji sticks were also used in Vietnam to complement various defenses, such as barbed wire. Etymology The term first appeared in the English language in the 1870s, after the British Indian Army encountered the sticks in their border conflicts against the Kachins of northeast Burma (and it is from a Tibeto-Burman language that this word probably originated).
Technology
Military technology: General
null
1074025
https://en.wikipedia.org/wiki/Ethiopian%20wolf
Ethiopian wolf
The Ethiopian wolf (Canis simensis), also called the red jackal, the Simien jackal or Simien fox, is a canine native to the Ethiopian Highlands. In southeastern Ethiopia, it is also known as the horse jackal. It is similar to the coyote in size and build, and is distinguished by its long and narrow skull, and its red and white fur. Unlike most large canids, which are widespread, generalist feeders, the Ethiopian wolf is a highly specialised feeder of Afroalpine rodents with very specific habitat requirements. It is one of the world's rarest canids, and Africa's most endangered carnivore. The species's current range is limited to seven isolated mountain ranges at altitudes of 3,000–4,500 m, with the overall adult population estimated at 360–440 individuals in 2011, more than half of them in the Bale Mountains. The Ethiopian wolf is listed as endangered by the IUCN, on account of its small numbers and fragmented range. Threats include increasing pressure from expanding human populations, resulting in habitat degradation through overgrazing, and disease transference and interbreeding from free-ranging dogs. Its conservation is headed by Oxford University's Ethiopian Wolf Conservation Programme, which seeks to protect the wolves through vaccination and community outreach programs. Naming Alternative English names for the Ethiopian wolf include the red jackal, the Simenian fox, the Simien jackal, Ethiopian jackal, and Abyssinian wolf. Historical account The species was first scientifically described in 1835 by Eduard Rüppell, who provided a skull for the British Museum. European writers traveling in Ethiopia during the mid-19th century (then called Abyssinia by Europeans and Ze Etiyopia by its citizens) wrote that the animal's skin was never worn by natives, as it was popularly believed that the wearer would die should any wolf hairs enter an open wound, while Charles Darwin hypothesised that the species gave rise to greyhounds. 
Since then, it was scarcely heard of in Europe until the early 20th century, when several skins were shipped to England by Major Percy Powell-Cotton during his travels in Abyssinia. The Ethiopian wolf was recognised as requiring protection in 1938, and received it in 1974. The first in-depth studies on the species occurred in the 1980s with the onset of the American-sponsored Bale Mountains Research Project. Ethiopian wolf populations in the Bale Mountains National Park were negatively affected by the political unrest of the Ethiopian Civil War, though the critical state of the species was revealed during the early 1990s after a combination of shooting and a severe rabies epidemic decimated most packs studied in the Web Valley and Sanetti Plateau. In response, the IUCN reclassified the species from endangered to critically endangered in 1994. The IUCN/SSC Canid Specialist Group advocated a three-front strategy of education, wolf population monitoring, and rabies control in domestic dogs. The Ethiopian Wolf Conservation Programme was soon established in Bale, in 1995, by Oxford University in conjunction with the Ethiopian Wildlife Conservation Authority (EWCA). Soon after, a further wolf population was discovered in the Central Highlands. Elsewhere, information on Ethiopian wolves remained scarce; although the species was first described in 1835 as living in the Simien Mountains, the paucity of information from that area indicated that it was likely declining there, while reports from the Gojjam plateau were a century out of date. Wolves had been recorded in the Arsi Mountains since the early 20th century, and in the Bale Mountains in the late 1950s. The status of the Ethiopian wolf was reassessed in the late 1990s, following improvements in travel conditions into northern Ethiopia. The surveys taken revealed local extinctions in Mount Choqa, Gojjam, and in every northern Afroalpine region where agriculture is well developed and human pressure acute. 
This revelation stressed the importance of the Bale Mountains wolf populations for the species' long-term survival, as well as the need to protect other surviving populations. A decade after the rabies outbreak, the Bale populations had fully recovered to pre-epizootic levels, prompting the species' downlisting to endangered in 2004, though it still remains the world's rarest canid, and Africa's most endangered carnivore. Taxonomy and evolution The earliest known fossil of the Ethiopian wolf comes from the Melka Wakena paleoanthropological site-complex in the Southeastern Ethiopian Highlands. It is the right half of a mandible and is dated to between 1.6 and 1.4 million years ago. The authors of this study state that the ancestors of the Ethiopian wolf arrived in Africa from Eurasia at the same time as the ancestors of the African wild dog, approximately 1.8 million years ago. The Ethiopian wolf has survived numerous climatic changes in its Ethiopian highland habitat, with its range repeatedly expanding and contracting with glacial cycles. In 1994, a mitochondrial DNA analysis showed a closer relationship to the gray wolf and the coyote than to other African canids, and C. simensis may be an evolutionary relic of a past invasion of northern Africa from Eurasia by a gray wolf-like ancestor. See further: Canis evolution Due to the high density of rodents in their new Afroalpine habitat, the ancestors of the Ethiopian wolf gradually developed into specialised rodent hunters. This specialisation is reflected in the animal's skull morphology, with its very elongated head, long jaw, and widely spaced teeth. During this period, the species likely attained its highest abundance, and had a relatively continuous distribution. This changed about 15,000 years ago with the onset of the current interglacial, which caused the species' Afroalpine habitat to fragment, thus isolating Ethiopian wolf populations from each other. 
The Ethiopian wolf is one of five Canis species present in Africa, and is readily distinguishable from jackals by its larger size, relatively longer legs, distinct reddish coat, and white markings. John Edward Gray and Glover Morrill Allen originally classified the species under a separate genus, Simenia, and Oscar Neumann considered it to be "only an exaggerated fox". Juliet Clutton-Brock refuted the separate genus in favour of placing the species in the genus Canis, upon noting cranial similarities with the side-striped jackal. In 2015, a study of mitochondrial genome sequences and whole genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonised Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal (Canis aureus) diverged from the wolf/coyote lineage 1.9 million years ago, and with mitochondrial genome sequences indicating the Ethiopian wolf diverged from this lineage slightly prior to that. Further studies on RAD sequences found instances of Ethiopian wolves hybridizing with African golden wolves. Admixture with other Canis species In 2018, whole genome sequencing was used to compare members of the genus Canis. The study supports the African golden wolf being distinct from the golden jackal, and with the Ethiopian wolf being genetically basal to both. There are two genetically distinct African golden wolf populations that exist in northwestern and eastern Africa. This suggests that Ethiopian wolves – or an extinct close relative – once had a much larger range within Africa to admix with other canids. 
There is evidence of gene flow between the eastern population and the Ethiopian wolf, which has led to the eastern population being distinct from the northwestern population. The common ancestor of both African golden wolf populations was a genetically admixed canid of 72% grey wolf and 28% Ethiopian wolf ancestry. Subspecies Two subspecies are recognised by Mammal Species of the World Volume Three (MSW3). Description The Ethiopian wolf is similar in size and build to North America's coyote; it is larger than the black-backed and side-striped jackals as well as the African wolf, and has comparatively longer legs. Its skull is very flat, with a long facial region accounting for 58% of the skull's total length. The ears are broad, pointed, and directed forward. The teeth, particularly the premolars, are small and widely spaced. The canine teeth measure 14–22 mm in length, while the carnassials are relatively small. The Ethiopian wolf has eight mammae, of which only six are functional. The front paws have five toes, including a dewclaw, while the hind paws have four. As is typical in the genus Canis, males are larger than females, having 20% greater body mass. Adults measure in body length, and in height. Adult males weigh , while females weigh . The Ethiopian wolf has short guard hairs and thick underfur, which provides protection at temperatures as low as −15 °C. Its overall colour is ochre to rusty red, with dense whitish to pale ginger underfur. The fur of the throat, chest and underparts is white, with a distinct white band occurring around the sides of the neck. There is a sharp boundary between the red coat and white marks. The ears are thickly furred on the edges, though naked on the inside. The naked borders of the lips, the gums and palate are black. The lips, a small spot on the cheeks and an ascending crescent below the eyes are white. 
The thickly furred tail is white underneath, and has a black tip, though, unlike most other canids, there is no dark patch marking the supracaudal gland. It moults during the wet season (August–October), and there is no evident seasonal variation in coat colour, though the contrast between the red coat and white markings increases with age and social rank. Females tend to have paler coats than males. During the breeding season, the female's coat turns yellow, becomes woolier, and the tail turns brownish, losing much of its hair. Animals resulting from Ethiopian wolf-dog hybridisation tend to be more heavily built than pure wolves, and have shorter muzzles and different coat patterns. Behaviour Social and territorial behaviours The Ethiopian wolf is a social animal, living in family groups containing up to 20 adults (individuals older than one year), though packs of six wolves are more common. Packs are formed by dispersing males and a few females, which, with the exception of the breeding female, are reproductively suppressed. Each pack has a well-established hierarchy, with dominance and subordination displays being common. Upon the death of the breeding female, a resident daughter may replace her, though this increases the risk of inbreeding. Such a risk is sometimes circumvented by multiple paternity and extra-pack matings. The dispersal of wolves from their packs is largely restricted by the scarcity of unoccupied habitat. These packs live in communal territories, which encompass of land on average. In areas with little food, the species lives in pairs, sometimes accompanied by pups, and defends larger territories averaging . In the absence of disease, Ethiopian wolf territories are largely stable, but packs can expand whenever the opportunity arises, such as when another pack disappears. The size of each territory correlates with the abundance of rodents, the number of wolves in a pack, and the survival of pups. 
Ethiopian wolves rest together in the open at night, and congregate for greetings and border patrols at dawn, noon, and evening. They may shelter from rain under overhanging rocks and behind boulders. The species never sleeps in dens, and only uses them for nursing pups. When patrolling their territories, Ethiopian wolves regularly scent-mark, and interact aggressively and vocally with other packs. Such confrontations typically end with the retreat of the smaller group. Reproduction and development The mating season usually takes place between August and November. Courtship involves the breeding male following the female closely. The breeding female only accepts the advances of the breeding male, or males from other packs. The gestation period is 60–62 days, with pups being born between October and December. Pups are born toothless and with their eyes closed, and are covered in a charcoal-grey coat with a buff patch on the chest and abdomen. Litters consist of two to six pups, which emerge from their den after three weeks, when the dark coat is gradually replaced with the adult colouration. By the age of five weeks, the pups feed on a combination of milk and solid food, and become completely weaned off milk at the age of 10 weeks to six months. All members of the pack contribute to protecting and feeding the pups, with subordinate females sometimes assisting the dominant female by suckling them. Full growth and sexual maturity are attained at the age of two years. Cooperative breeding and pseudopregnancy have been observed in Ethiopian wolves. Most females disperse from their natal pack at about two years of age, and some become "floaters" that may successfully immigrate into existing packs. Breeding pairs are most often unrelated to each other, suggesting that female-biased dispersal reduces inbreeding. 
Inbreeding is ordinarily avoided because it leads to a reduction in progeny fitness (inbreeding depression), due largely to the homozygous expression of deleterious recessive alleles. Hunting behaviours Unlike most social carnivores, the Ethiopian wolf tends to forage and feed on small prey alone. It is most active during the day, the time when rodents are themselves most active, though wolves have been observed to hunt in groups when targeting mountain nyala calves. Major Percy-Cotton described the characteristic digging technique of Ethiopian wolves, which is commonly used in hunting big-headed African mole-rats, with the level of effort varying from scratching lightly at the hole to totally destroying a set of burrows, leaving metre-high earth mounds. Wolves in Bale have been observed to forage among cattle herds, a tactic thought to aid in ambushing rodents out of their holes by using the cattle to hide their presence. Ethiopian wolves have also been observed forming temporary associations with troops of grazing geladas. Solitary wolves hunt for rodents in the midst of the monkeys, ignoring juvenile monkeys, though these are similar in size to some of their prey. The monkeys, in turn, tolerate and largely ignore the wolves, although they take flight if they observe feral dogs, which sometimes prey on them. Within the troops, the wolves enjoy much higher success in capturing rodents than usual, perhaps because the monkeys' activities flush out the rodents, or because the presence of numerous larger animals makes it harder for rodents to spot a threat. Ecology Habitat The Ethiopian wolf is restricted to isolated pockets of Afroalpine grasslands and heathlands inhabited by Afroalpine rodents. Its ideal habitat extends from above the tree line at around 3,200 m up to 4,500 m, with some wolves inhabiting the Bale Mountains being present in montane grasslands at 3,000 m. 
Although specimens were collected in Gojjam and northwestern Shoa at 2,500 m in the early 20th century, no recent records exist of the species occurring below 3,000 m. In modern times, subsistence agriculture, which extends up to 3,700 m, has largely restricted the species to the highest peaks. The Ethiopian wolf uses all Afroalpine habitats, but has a preference for open areas containing short herbaceous and grassland communities inhabited by rodents, which are most abundant along flat or gently sloping areas with poor drainage and deep soils. Prime wolf habitat in the Bale Mountains consists of short Alchemilla herbs and grasses, with low vegetation cover. Other favourable habitats consist of tussock grasslands, high-altitude scrubs rich in Helichrysum, and short grasslands growing in shallow soils. In its northern range, the wolf's habitat is composed of plant communities characterised by a matrix of Festuca tussocks, Euryops bushes, and giant lobelias, all of which are favoured by the wolf's rodent prey. Although marginal in importance, the ericaceous moorlands at 3,200–3,600 m in Simien may provide a refuge for wolves in highly disturbed areas. Diet In the Bale Mountains, the Ethiopian wolf's primary prey are big-headed African mole-rats, though it also feeds on grass rats, black-clawed brush-furred rats, and highland hares. Other secondary prey species include vlei rats, yellow-spotted brush-furred rats, and occasionally goslings and eggs. Ethiopian wolves have twice been observed to feed on rock hyraxes, and mountain nyala calves. It will also prey on reedbuck calves. In areas where the big-headed African mole-rat is absent, the smaller Northeast African mole-rat is targeted. In the Simien Mountains, the Ethiopian wolf preys on Abyssinian grass rats. Undigested sedge leaves have occasionally been found in Ethiopian wolf stomachs. The sedge possibly is ingested for roughage or for parasite control. 
It has also been observed to consume nectar from the flowers of Kniphofia foliosa. The species may scavenge on carcasses, but is usually displaced by free-ranging dogs and African golden wolves. It typically poses no threat to livestock, with farmers often leaving herds in wolf-inhabited areas unattended. Range and populations Six current Ethiopian wolf populations are known. North of the Rift Valley, the species occurs in the Simien Mountains in Gondar, in the northern and southern Wollo highlands, and in Guassa Menz in north Shoa. It has recently become extinct in Gosh Meda in north Shoa and Mount Guna, and has not been reported in Mount Choqa for several decades. Southeast of the Rift Valley, it occurs in the Arsi and Bale Mountains. Threats The Ethiopian wolf has been considered rare since it was first recorded scientifically. The species likely has always been confined to Afroalpine habitats, so it was never widespread. In historical times, all of the Ethiopian wolf's threats have been directly or indirectly human-induced, as the wolf's highland habitat, with its high annual rainfall and rich fertile soils, is ideal for agricultural activities. Its proximate threats include habitat loss and fragmentation (subsistence agriculture, overgrazing, road construction, and livestock farming), diseases (primarily rabies and canine distemper), conflict with humans (poisoning, persecution, and road kills), and hybridisation with dogs. Disease Rabies outbreaks, stemming from infected dogs, have killed many Ethiopian wolves during the 1990s and 2000s. Two well-documented outbreaks in Bale, one in 1991 and another in 2008–2009, resulted in the die-off or disappearance of 75% of known animals. Both incidents prompted reactive vaccinations in 2003 and 2008–2009, respectively. 
Canine distemper is not necessarily fatal to wolves, though a recent increase in infection has occurred, with outbreaks of canine distemper having been detected in 2005–2006 in Bale and in 2010 across subpopulations. Habitat loss During the 1990s, wolf populations in Gosh Meda and Guguftu became extinct. In both cases, the extent of Afroalpine habitat above the limit of agriculture had been reduced to less than 20 km2. In 2011, the EWCP team confirmed the extinction of the Mt. Guna wolf population, whose numbers had been in single figures for several years. Habitat loss in the Ethiopian highlands is directly linked to agricultural expansion into Afroalpine areas. In the northern highlands, human density is among the highest in Africa, with 300 people per km2 in some localities, with almost all areas below 3,700 m having been converted into barley fields. Suitable areas of land below this limit are under some level of protection, such as Guassa-Menz and the Denkoro Reserve, or within the southern highlands, such as the Arsi and Bale Mountains. The wolf populations most vulnerable to habitat loss are those within relatively low-lying Afroalpine ranges, such as those in Aboi Gara and Delanta in North Wollo. Population fragmentation Some Ethiopian wolf populations, particularly those in North Wollo, show signs of high fragmentation, which is likely to increase with current rates of human expansion. The dangers posed by fragmentation include increased contact with humans, dogs, and livestock, and further risk of isolation and inbreeding in wolf populations. Although no evidence of inbreeding depression or reduced fitness exists, the extremely small wolf population sizes, particularly those north of the Rift Valley, raise concerns among conservationists. Elsewhere, the Bale populations are fairly continuous, while those in Simien can still interbreed through habitat corridors. 
Encroachment within protected areas In the Simien Mountains National Park, human and livestock populations are increasing by 2% annually, with further road construction allowing peasants easy access into wolf home ranges; 3,171 people in 582 households were found to be living in the park and 1,477 outside the park in October 2005. Although the area of the park has since been expanded, further settlement stopped, and grazing restricted, effective enforcement may take years. About 30,000 people live in 30 villages around the park and two within it, including 4,650 cereal farmers, herders, woodcutters, and many others. In Bale there are numerous villages in and around the area, comprising over 8,500 households with more than 12,500 dogs. In 2007, the estimate of households within wolf habitat numbered 1,756. Because of the high number of dogs, the risk of infection in local wolf populations is high. Furthermore, intentional and unintentional brush fires are frequent in the ericaceous moorlands wolves inhabit. Overgrazing Although wolves in Bale have learned to use cattle to conceal their presence when hunting for rodents, the level of grazing in the area can adversely affect the vegetation available for the wolves' prey. Although no declines in wolf populations related to overgrazing have occurred, high grazing intensities are known to lead to soil erosion and vegetation deterioration in Afroalpine areas such as Delanta and Simien. Human persecution and disturbance Direct killings of wolves were more frequent during the Ethiopian Civil War, when firearms were more available. The extinction of wolves in Mt. Choqa was likely due to persecution. Although people living close to wolves in modern times believe that wolf populations are recovering, negative attitudes towards the species persist due to livestock predation. Wolves were largely unmolested by humans in Bale, as they were not considered threats to sheep and goats. 
However, they are perceived as threats to livestock elsewhere, with cases of retaliatory killings occurring in the Arsi Mountains. The Ethiopian wolf has not been recorded to be exploited for its fur, though in one case, wolf hides were used as saddle pads. It was once hunted by sportsmen, though this is now illegal. Vehicle collisions have killed at least four wolves on the Sanetti Plateau since 1988, while two others were left with permanent limps. Similar accidents are a risk in areas where roads cut across wolf habitats, such as in Menz and Arsi. Hybridisation with dogs Instances of Ethiopian wolf-dog hybridization have been recorded in Bale's Web Valley; management plans involve the sterilization of known hybrids. At least four hybrids were identified and sterilized in the area. Although hybridization has not been detected elsewhere, scientists are concerned that it could pose a threat to the wolf population's genetic integrity, resulting in outbreeding depression or a reduction in fitness, though this does not appear to have taken place. Given females' strong preference for avoiding inbreeding, hybridization may occur when a female cannot find any unrelated males other than dogs. Competition with African golden wolves Encounters with African golden wolves (Canis lupaster) are usually agonistic, with Ethiopian wolves dominating African wolves if the latter enter their territories, and vice versa. Although African golden wolves are inefficient rodent hunters and thus not in direct competition with Ethiopian wolves, it is likely that heavy human persecution prevents the former from attaining numbers large enough to completely displace the latter. Conservation The Ethiopian wolf is not listed on the CITES appendices, though it is afforded full official protection under Ethiopia's Wildlife Conservation Regulations of 1974, Schedule VI, with the killing of a wolf carrying a two-year jail sentence. 
The species is present in several protected areas, including the Bale Mountains National Park, the Simien Mountains National Park, Borena Sayint Regional Park in South Wollo, the Guassa Community Conservation Area in north Shoa, and the Arsi Mountains National Park. Areas of suitable wolf habitat have recently increased to 87%, as a result of boundary extensions in Simien and the creation of the Arsi Mountains National Park. Steps taken to ensure the survival of the Ethiopian wolf include dog vaccination campaigns in Bale, Menz, and Simien, sterilization programs for wolf-dog hybrids in Bale, rabies vaccination of wolves in parts of Bale, community and school education programs in Bale and Wollo, contributing to the running of national parks, and population monitoring and surveying. A 10-year national action plan was formed in February 2011. The species' critical situation was first publicised by the Wildlife Conservation Society in 1983, with the Bale Mountains Research Project being established shortly after. This was followed by a detailed, four-year field study, which prompted the IUCN/SSC Canid Specialist Group to produce an action plan in 1997. The plan called for the education of people in wolf-inhabited areas, wolf population monitoring, and the stemming of rabies in dog populations. The Ethiopian Wolf Conservation Programme was formed in 1995 by Oxford University, with donors including the Born Free Foundation, Frankfurt Zoological Society, and the Wildlife Conservation Network. The overall aim of the EWCP is to protect the wolf's Afroalpine habitat in Bale, and establish additional conservation areas in Menz and Wollo. The EWCP carries out education campaigns for people outside the wolf's range to improve dog husbandry and manage diseases within and around the park, as well as monitoring wolves in Bale, south and north Wollo. The program seeks to vaccinate up to 5,000 dogs a year to reduce rabies and distemper in wolf-inhabited areas. 
In 2016, the Korean company Sooam Biotech was reported to be attempting to clone the Ethiopian wolf using dogs as surrogate mothers to help conserve the species.
Biology and health sciences
Canines
Animals
https://en.wikipedia.org/wiki/Algebraic%20combinatorics
Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. History The term "algebraic combinatorics" was introduced in the late 1970s. Through the early or mid-1990s, typical combinatorial objects of interest in algebraic combinatorics either admitted a lot of symmetries (association schemes, strongly regular graphs, posets with a group action) or possessed a rich algebraic structure, frequently of representation theoretic origin (symmetric functions, Young tableaux). This period is reflected in the area 05E, Algebraic combinatorics, of the AMS Mathematics Subject Classification, introduced in 1991. Scope Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group theory and representation theory, lattice theory and commutative algebra are commonly used. Important topics Symmetric functions The ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric groups. Association schemes An association scheme is a collection of binary relations satisfying certain compatibility conditions. Association schemes provide a unified approach to many topics, for example combinatorial designs and coding theory. 
In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups. Strongly regular graphs A strongly regular graph is defined as follows. Let G = (V,E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that: Every two adjacent vertices have λ common neighbours. Every two non-adjacent vertices have μ common neighbours. A graph of this kind is sometimes said to be a srg(v, k, λ, μ). Some authors exclude graphs which satisfy the definition trivially, namely those graphs which are the disjoint union of one or more equal-sized complete graphs, and their complements, the Turán graphs. Young tableaux A Young tableau (pl.: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley. Matroids A matroid is a structure that captures and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid, the most significant being in terms of independent sets, bases, circuits, closed sets or flats, closure operators, and rank functions. Matroid theory borrows extensively from the terminology of linear algebra and graph theory, largely because it is the abstraction of various notions of central importance in these fields. 
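The independent-set definition of a matroid can be checked directly on a small example. The following sketch (function and variable names are illustrative, not from any particular library) brute-forces the independence exchange axiom for the uniform matroid U(2, 4), in which a subset of a 4-element ground set is independent exactly when it has at most 2 elements:

```python
from itertools import combinations

def is_independent(S, k=2):
    """In the uniform matroid U(k, n), a set is independent iff |S| <= k."""
    return len(S) <= k

n, k = 4, 2
ground = range(n)
independents = [frozenset(S) for r in range(k + 1)
                for S in combinations(ground, r)]

# Exchange axiom: if A and B are independent and |A| < |B|, then some
# element x of B - A can be added to A while preserving independence.
for A in independents:
    for B in independents:
        if len(A) < len(B):
            assert any(is_independent(A | {x}, k) for x in B - A)
print("U(2, 4) satisfies the independence exchange axiom")
```

The same brute-force pattern works for any matroid small enough to enumerate, which makes it a convenient sanity check when experimenting with the equivalent axiom systems mentioned above.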
Matroids have found applications in geometry, topology, combinatorial optimization, network theory and coding theory. Finite geometries A finite geometry is any geometric system that has only a finite number of points. The familiar Euclidean geometry is not finite, because a Euclidean line contains infinitely many points. A geometry based on the graphics displayed on a computer screen, where the pixels are considered to be the points, would be a finite geometry. While there are many systems that could be called finite geometries, attention is mostly paid to the finite projective and affine spaces because of their regularity and simplicity. Other significant types of finite geometry are finite Möbius or inversive planes and Laguerre planes, which are examples of a general type called Benz planes, and their higher-dimensional analogs such as higher finite inversive geometries. Finite geometries may be constructed via linear algebra, starting from vector spaces over a finite field; the affine and projective planes so constructed are called Galois geometries. Finite geometries can also be defined purely axiomatically. Most common finite geometries are Galois geometries, since any finite projective space of dimension three or greater is isomorphic to a projective space over a finite field (that is, the projectivization of a vector space over a finite field). However, dimension two has affine and projective planes that are not isomorphic to Galois geometries, namely the non-Desarguesian planes. Similar results hold for other kinds of finite geometries.
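The vector-space construction of Galois geometries described above can be carried out by hand in the smallest case, PG(2, 2), the Fano plane, built from GF(2)^3. A minimal sketch, encoding each nonzero vector as a 3-bit integer so that XOR plays the role of vector addition (over GF(2) each 1-dimensional subspace contains a single nonzero vector, so points and nonzero vectors coincide):

```python
from itertools import combinations

# Points of PG(2, 2): the 7 nonzero vectors of GF(2)^3, as integers 1..7.
points = list(range(1, 8))

# Lines are the 2-dimensional subspaces: {a, b, a ^ b} for distinct a, b.
lines = {frozenset({a, b, a ^ b}) for a, b in combinations(points, 2)}

assert len(points) == 7 and len(lines) == 7      # 7 points and 7 lines
assert all(len(l) == 3 for l in lines)           # 3 points on every line
for p, q in combinations(points, 2):             # any 2 points lie on
    assert sum(1 for l in lines if p in l and q in l) == 1  # exactly 1 line
print("Fano plane: 7 points, 7 lines, 3 points per line")
```

Replacing GF(2) by a larger finite field GF(q) yields q^2 + q + 1 points and lines, which is the general count for a projective plane of order q.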
Mathematics
Combinatorics
https://en.wikipedia.org/wiki/Beaver%20dam
Beaver dam
A beaver dam or beaver impoundment is a dam built by beavers; it creates a pond which protects against predators such as coyotes, wolves and bears, and holds their food during winter. These structures modify the natural environment in such a way that the overall ecosystem builds upon the change, making beavers a keystone species and ecosystem engineers. They build prolifically at night, carrying mud with their forepaws and timber between their teeth. Construction A minimum water level of is required to keep the underwater entrance to beaver lodges from being blocked by ice during the winter. In lakes, rivers and large streams with deep enough water, beavers may not build dams, and live in bank burrows and lodges. Beavers start construction by diverting the stream to lessen the water's flow pressure. Branches and logs are then driven into the mud of the stream bed to form a base. Then sticks, bark (from deciduous trees), rocks, mud, grass, leaves, masses of plants, and anything else available are used to build the superstructure. Beavers can transport their own weight in material; they drag logs along mudslides and float them through canals to get them in place. Once the dam has flooded enough area to the proper depth to form a protective moat for the lodge (often covering many acres), beavers begin construction of the lodge. Trees approaching a diameter of may be used to construct a dam, although the average is . Log length depends on the diameter of the tree and the size of the beaver. There are recorded cases of beavers felling trees of tall and in diameter. Logs of this size are not intended to be used as structural members of the dam; rather, the bark is used for food, and sometimes to get to upper branches. It takes a beaver about 20 minutes to cut down a wide aspen, by gnawing a groove around the trunk in an hourglass shape. A beaver's jaws are powerful enough to cut a sapling in one bite. Maintenance work on the dam and lodges is often done in autumn. 
If beavers are considered central place foragers, their canals may be considered an extension of their "central place" far beyond the lodge, according to a 2004–2012 study that mapped beaver ponds and cut stumps. Some people consider that by building dams beavers are expressing tool use behaviour. Size Beaver dams typically range in length from a few meters to about . Canals can be over long. The largest known beaver dam is in Wood Buffalo National Park in Alberta, Canada, and is long. Satellite photos provided by NASA WorldWind show the dam did not exist in 1975, but it appeared in subsequent images. It has two or more lodges and is a combination of two original dams. Google Earth images show new dams being built which could ultimately join the main dam and increase the overall length by another during the next decade. Coordinates: . Another large beaver dam measuring long, high and thick at the base was found in Three Forks, Montana. Effects Dam building can help to restore damaged wetlands. Wetland benefits include flood control downstream, biodiversity (by providing habitat for different species), and water cleansing, both by the breakdown of toxins such as pesticides and the retention of silt by beaver dams. Beaver dams reduce erosion and decrease the turbidity that can be a limiting factor for some aquatic life. The benefits may be long-term and largely unnoticed unless a catchment is monitored closely. Almost half of endangered and threatened species in North America rely upon wetlands. In 2012, a systematic review was conducted on the impacts of beaver dams on fishes and fish habitat (biased to North America (88%)). The most frequently cited benefits of beaver dams were increased habitat heterogeneity, rearing and overwintering habitat, flow refuge, and invertebrate production. Impeded fish movement because of dams, siltation of spawning habitat and low oxygen levels in ponds were the most often cited negative impacts. 
Benefits (184) were cited more frequently than costs (119). Flood control A beaver dam may have a freeboard above the water level. When heavy rains occur, the river or lake fills up. Afterward the dam gradually releases the extra stored water, thus somewhat reducing the height of the flood wave moving down the river. The surface of any stream intersects the surrounding water table. By raising the stream level, the gradient of the surface of the water table above the beaver dam is reduced, and water near the beaver dam flows more slowly into the stream. This may also help in reducing flood waves, and increasing water flow when there is no rain. In other words, beaver dams smooth out water flow by increasing the area wetted by the stream. This allows more water to seep into the ground where its flow is slowed. This water eventually finds its way back to the stream. Rivers with beaver dams in their head waters have lower high water and higher low water levels. By raising the water table in wetlands such as peatlands, they can stabilize a fluctuating water table, which influences the levels of both carbon and water. In a 2017 study of beaver dam hydrology, monitored beaver dams in a Rocky Mountain peatland were found to increase groundwater storage and regional water balance, which can be beneficial for preventing drought. The study also suggested potential to improve carbon sequestration. Excess nutrient removal Beaver ponds can cause the removal of nutrients from the stream flow. Farming along the banks of rivers often increases the loads of phosphates, nitrates and other nutrients, which can cause eutrophication and may contaminate drinking water. Besides silt, the beaver dam collects twigs and branches from the beavers' activity as well as leaves, notably in the autumn. The main component of this material is cellulose, a polymer of β-glucose monomers. (This creates a more crystalline structure than is found in starch, which is composed of α-glucose monomers. 
Cellulose is a type of polysaccharide.) Many bacteria produce cellulase which can split off the glucose and use it for energy. Just as algae receive energy from sunlight, these bacteria derive energy from cellulose, and form the base of a very similar food chain. Additionally, bacterial populations absorb nitrogen and phosphorus compounds as they pass by in the water stream and keep these and other nutrients in the beaver pond and the surrounding ecology. Pesticide and herbicide removal Agriculture introduces herbicides and pesticides into streams. Some of these toxicants are metabolized and decomposed by the bacteria in the cellulose-rich bottom of a beaver dam. Denitrification Some scientists believe that the nitrogen cascade, the production of more fixed nitrogen than the natural cycles can turn back into nitrogen gas, may be as much of a problem to Earth's ecology as carbon dioxide production. Studies have shown that beaver dams along a stream contribute to denitrification (the conversion of nitrogen compounds back into nitrogen). Bacteria in the dirt and the plant debris, which collects at the dams, turns nitrates into nitrogen gas. The gas bubbles to the surface and mixes with the atmosphere once more. Salmon and trout Beaver dams and the associated ponds can provide nurseries for salmon and trout. An early indication of this was seen following the 1818 agreement between the British government of Canada and the government of America allowing Americans access to the Columbia watershed. The Hudson's Bay Company, in a fit of pique, instructed its trappers to extirpate the fur-bearing animals in the area. The beaver was the first to be made locally extinct. Salmon runs fell precipitously in the following years, even though none of the factors associated with the decline of salmon runs were extant at that time. There are several reasons why beaver dams increase salmon runs. 
They produce ponds that are deep enough for juvenile salmon to hide from predatory wading birds. They trap nutrients in their ecology, notably the nutrient pulse represented by the migration of the adult salmon upstream. These nutrients help feed the juveniles after the yolk sac has been digested. The dams provide calm water which means that the young salmon can use energy for growth rather than for navigating currents; larger smolts with a food reserve have a better rate of survival when they reach the sea. Finally, beaver dams keep the water clear which favours all salmonids. Frogs Beaver dams have been shown to be beneficial to frog and toad populations, likely because they provide protected areas for larvae to mature in warmer, well-oxygenated water. A study in Alberta, Canada, showed that "Pitfall traps on beaver ponds captured 5.7 times more newly metamorphosed wood frogs, 29 times more western toads and 24 times more boreal chorus frogs than on nearby free-flowing streams." Birds Beaver dams help migrating songbirds. By stimulating the growth of species of plants that are critical to populations of songbirds in decline, beaver dams help create food and habitat. The presence of beaver dams has been shown to be associated with an increased diversity of songbirds. They can also have positive effects on local waterfowl, such as ducks, that are in need of standing water habitats. Disruption Beaver dams can be disruptive; the flooding can cause extensive property damage, and, when the flooding occurs next to a railroad roadbed, it can cause derailments by washing out the tracks. When a beaver dam bursts, the resulting flash flood may overwhelm a culvert. Traditional solutions to beaver problems have been focused on the trapping and removal of all the beavers in the area. 
While this is sometimes necessary, it is typically a short-lived solution, as beaver populations have made a remarkable comeback in the United States (after near extirpation in the nineteenth century) and are likely to continually recolonize suitable habitat. Modern solutions include relatively cost-effective and low maintenance flow devices. Introduced to an area without its natural predators, as in Tierra del Fuego, beavers have flooded thousands of acres of land and are considered a plague. One notable difference in Tierra del Fuego from most of North America is that the trees in Tierra del Fuego cannot be coppiced as can willows, poplars, aspens, and other North American trees. Thus the damage by the beavers seems more severe. The beaver's disruption is not limited to human geography; beavers can destroy nesting habitat for endangered species. Warming temperatures in the Arctic allow beavers to extend their habitat further north, where their dams impair boat travel, impact access to food, affect water quality, and endanger downstream fish populations. Pools formed by the dams store heat, thus changing local hydrology and causing localized thawing of permafrost that in turn contributes to global warming. Stream life cycle Wetland creation If a beaver pond becomes too shallow due to sediment accumulation, or the tree supply is depleted, beavers will abandon the site. Eventually the dam will be breached and the water will drain out. The rich thick layer of silt, branches, and dead leaves behind the old dam is an ideal habitat for some wetland species. Meadow creation As the wetland fills up with plant debris and dries out, pasture species colonize it and the wetland may eventually become a meadow suitable for grazing in a previously forested area. This provides a valuable niche for many animals which otherwise would be excluded. 
Beaver dam creation also encourages the plants the dams are made from (such as willows) to reproduce by cutting, stimulating the growth of adventitious roots. Riverine forest Finally the meadow will be colonized by riverine trees, typically aspens, willows and such species which are favoured by the beaver. Beavers are then likely to recolonize the area, and the cycle begins again. Bottomland Each time the stream life cycle repeats itself another layer of organic soil is added to the bottom of the valley. The valley slowly fills and the flat area at the bottom widens. Research is sparse, but it seems likely that parts of the bottomland in North America were created, or at least added to, by the efforts of the generations of beavers that lived there. Analogs Humans sometimes build structures similar to beaver dams in streams, either to get the benefits of beaver dams in places without beavers, or to encourage beavers to settle in a particular area. These are often called "beaver dam analogs" (BDA) although other names are also used. When the goal is to attract beavers, sometimes the site is unsuitable in its present condition, such as being too eroded for beavers to build a dam in their usual way. BDA builders may use construction techniques beyond the beaver's capabilities, such as driving wooden posts into the stream bed to brace horizontal branches that would otherwise be washed away. The hope is that beavers who wander by or are brought in will choose to live there and take over construction and maintenance of the dam.
Biology and health sciences
Shelters and structures
Animals
https://en.wikipedia.org/wiki/Tropical%20savanna%20climate
Tropical savanna climate
Tropical savanna climate or tropical wet and dry climate is a tropical climate sub-type that corresponds to the Köppen climate classification categories Aw (for a dry "winter") and As (for a dry "summer"). The driest month has less than of precipitation and also less than mm of precipitation. This latter fact is in a direct contrast to a tropical monsoon climate, whose driest month sees less than of precipitation but has more than of precipitation. In essence, a tropical savanna climate tends to either see less overall rainfall than a tropical monsoon climate or have more pronounced dry season(s). It is impossible for a tropical savanna climate to have more than as such would result in a negative value in that equation. In tropical savanna climates, the dry season can become severe, and often drought conditions prevail during the course of the year. Tropical savanna climates often feature tree-studded grasslands due to its dryness, rather than thick jungle. It is this widespread occurrence of tall, coarse grass (called savanna) which has led to Aw and As climates often being referred to as the tropical savanna. However, there is some doubt whether tropical grasslands are climatically induced. Additionally, pure savannas, without trees, are the exception rather than the rule. Versions There are generally four types of tropical savanna climates: Distinct wet and dry seasons of relatively equal duration. Most of the region's annual rainfall is experienced during the wet season and very little precipitation falls during the dry season. A lengthy dry season and a relatively short wet season. This version features seven or more dry season months and five or fewer wet season months. There are more variations within this version: On one extreme, the region receives just enough precipitation during the short wet season to preclude it from a semi-arid climate classification. 
This drier variation of the tropical savanna climate is typically found adjacent to regions with hot semi-arid (BSh) climates, such as seen in places like India, the Sahel region in Africa and Brazil. On the other extreme, the climate features a lengthy dry season followed by a short but extremely rainy wet season. However, regions with this variation of the climate do not experience enough rainfall during the wet season to qualify as a tropical monsoon climate (Am). These can be found near tropical monsoon climates such as in Asia, Africa and the Americas. A lengthy, moderately wet season and a relatively short dry season. This version features seven or more wet season months and five or fewer dry season months. This version's precipitation pattern is similar to precipitation patterns observed in some tropical monsoon climates (as well as subhumid temperate climates farther poleward) but does not experience enough rainfall during the wet season to be classified as such, while the rainfall in the dry season is just low enough to preclude a tropical rainforest climate (Af) and temperatures in the winter months are warm enough to preclude a humid subtropical climate (Cwa) or subtropical highland climate (Cwb) classification. This is often found near the poleward margins of the tropical savanna climates. A dry season with a noticeable amount of rainfall followed by a rainy wet season. In essence, this version mimics the precipitation patterns more commonly found in a tropical monsoon climate, but do not receive enough precipitation during either the dry season or the year to be classified as such. Distribution Tropical savanna climates are most commonly found in Africa, Asia, Central America, and South America. The climate is also prevalent in sections of northern Australia, the Pacific Islands, in extreme southern North America in south Florida, and some islands in the Caribbean. 
Most places that have this climate are found at the outer margins of the tropical zone, but occasionally an inner-tropical location (e.g., San Marcos, Antioquia, Colombia) also qualifies. Similarly, the Caribbean coast, eastward from the Gulf of Urabá on the Colombia – Panamá border to the Orinoco river delta, on the Atlantic Ocean (ca. ), has long dry periods (the extreme is the BSh climate (see below), characterized by very low, unreliable precipitation, present, for instance, in extensive areas in the Guajira and Coro, western Venezuela, the northernmost peninsulas in South America, which receive < total annual precipitation, practically all in two or three months). This condition extends to the Lesser Antilles and Greater Antilles, forming the Circumcaribbean dry belt. The length and severity of the dry season diminish inland (southward); at the latitude of the Amazon river—which flows eastward, just south of the equatorial line—the climate is Af. East from the Andes, between the arid Caribbean and the ever-wet Amazon, are the Orinoco river Llanos or savannas, from which this climate takes its name. Sometimes As is used in place of Aw if the dry season occurs during the time of higher sun and longer days. This is typically due to a rain shadow effect that cuts off ITCZ-triggered summer precipitation in a tropical area while winter precipitation remains sufficient to preclude a hot semi-arid climate (BSh), and temperatures in the summer months are warm enough to preclude a Mediterranean climate (Csa/Csb) classification. This is the case in parts of Hawaii, East Africa (Mombasa, Kenya, Somalia), Sri Lanka (Trincomalee), coastal regions of Northeastern Brazil (from São Luís through Natal to Maceió) and southeast India, for instance. The difference between "summer" and "winter" in such tropical locations is usually so slight that a distinction between an As and Aw climate is trivial. 
In most places that have tropical wet and dry climates, however, the dry season occurs during the time of lower sun and shorter days because of reduction of or lack of convection, which in turn is due to the meridional shifts of the Intertropical Convergence Zone during the entire course of the year, depending on which hemisphere the location sits in. Cities with a tropical savanna climate Abidjan, Ivory Coast (Aw) Abuja, Nigeria (Aw) Bahir Dar, Ethiopia (Aw, bordering on Cwb) Bamako, Mali (Aw) Bangalore, India (Aw) Bangkok, Thailand (Aw) Bangui, Central African Republic (Aw) Banjul, The Gambia (Aw) Barranquilla, Colombia (Aw) Belo Horizonte, Brazil (Aw) Bhubaneswar, Odisha, India (Aw) Bissau, Guinea-Bissau (Aw) Bobo-Dioulasso, Burkina Faso (Aw) Brasília, Brazil (Aw) Brazzaville, Republic of the Congo (Aw) Bridgetown, Barbados (Aw) Bujumbura, Burundi (Aw) Cancún, Mexico (Aw, bordering on Am) Caracas, Venezuela (Aw) Cartagena, Colombia (Aw) Cape Coast, Ghana (Aw/As) Chipata, Zambia (Aw) Chennai, Tamil Nadu, India (As, bordering on Aw) Cotonou, Benin (Aw) Cuernavaca, Mexico (Aw, bordering on Cwa) Dar es Salaam, Tanzania (Aw) Darwin, Northern Territory, Australia (Aw) Denpasar, Indonesia (Aw) Dhaka, Bangladesh (Aw) Dili, East Timor (Aw) Dongfang, Hainan, China (Aw) Fortaleza, Brazil (As) Guatemala City, Guatemala (Aw, bordering on Cwa) Guayaquil, Ecuador (Aw) Haikou, Hainan, China (Aw, bordering on Cwa) Havana, Cuba (Aw, bordering on Af) Ho Chi Minh City, Vietnam (Aw) Hyderabad, Telangana, India (Aw, bordering on BSh) Jaffna, Sri Lanka (As) Juba, South Sudan (Aw) Kano, Nigeria (Aw) Kaohsiung, Taiwan (Aw) Kapalua, Hawaii, United States (As) Key West, Florida, United States (Aw) Khulna, Bangladesh (Aw, bordering on Am) Kingston, Jamaica (Aw, bordering on BSh) Kigali, Rwanda (Aw) Kinshasa, Democratic Republic of the Congo (Aw) Kolkata, India (Aw) Kumasi, Ghana (Aw) Kupang, West Timor, Indonesia (Aw) Lagos, Nigeria (Aw) Lanai City, Hawaii, United States (As, bordering Csb) Lomé, 
Togo (Aw) Malanje, Angola (Aw, bordering on Cwa and Cwb) Managua, Nicaragua (Aw) Mandalay, Myanmar (Aw, bordering on BSh) Maputo, Mozambique (Aw, bordering on BSh) Mengla, Yunnan, China (Aw, bordering on Cwa) Minamitorishima, Japan (Aw) Mombasa, Kenya (As) Moundou, Chad (Aw) Mumbai, India (Aw, bordering on Am) Naples, Florida, United States (Aw) Natal, Brazil (As) Nha Trang, Vietnam (As) Nouméa, New Caledonia (As) Phnom Penh, Cambodia (Aw) Port-au-Prince, Haiti (Aw) Port Louis, Mauritius (Aw) Port Moresby, Papua New Guinea (Aw) Rio de Janeiro, Brazil (Aw, bordering on Am) San Cristóbal Island, Ecuador (Aw) San José, Costa Rica (Aw) San Pedro Sula, Honduras (Aw) San Salvador, El Salvador (Aw) São Tomé, São Tomé and Principe (As) Sanya, Hainan, China (Aw) St. John's, Antigua and Barbuda (Aw) Surabaya, Indonesia (Aw) Tangail, Bangladesh (Aw) Tegucigalpa, Honduras (Aw) Townsville, Queensland, Australia (Aw) Trincomalee, Sri Lanka (As) Veracruz, Mexico (Aw) Vientiane, Laos (Aw) Wake Island, United States (Aw) Yaoundé, Cameroon (Aw) Ziguinchor, Senegal (Aw) Some examples of tropical savanna climates
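The Aw/As versus Am distinction discussed in this article can be sketched in Python. This is a minimal illustration assuming the standard Köppen thresholds (a 60 mm driest-month cutoff and the 100 − annual/25 monsoon boundary) for the values elided in the text above; the function name is my own:

```python
def koeppen_tropical_subtype(monthly_precip_mm, coldest_month_temp_c=20.0):
    """Classify a tropical climate as Af, Am, or Aw/As from twelve monthly
    precipitation totals (mm). Assumes the location is already tropical
    (every month averaging 18 degrees C or warmer)."""
    if coldest_month_temp_c < 18.0:
        raise ValueError("not a tropical (A) climate")
    annual = sum(monthly_precip_mm)
    driest = min(monthly_precip_mm)
    if driest >= 60.0:
        return "Af"                       # tropical rainforest: no real dry season
    monsoon_line = 100.0 - annual / 25.0  # goes negative once annual > 2500 mm,
                                          # so very wet climates can never be Aw/As
    if driest >= monsoon_line:
        return "Am"                       # tropical monsoon
    return "Aw/As"                        # tropical savanna

# Example: a strongly seasonal regime - long dry season, wet summer.
darwin_like = [420, 370, 320, 100, 20, 2, 1, 5, 15, 70, 140, 250]
print(koeppen_tropical_subtype(darwin_like))  # -> Aw/As
```

The sign of the monsoon line also shows why a savanna classification becomes impossible above a certain annual total: once the line is negative, any driest month (which cannot be below zero) satisfies the Am criterion.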
Physical sciences
Climates
Earth science
4308142
https://en.wikipedia.org/wiki/Guan%20%28bird%29
Guan (bird)
The guans are a number of bird genera which make up the largest group in the family Cracidae. They are found mainly in northern South America, southern Central America, and a few adjacent Caribbean islands. There is also the peculiar horned guan (Oreophasis derbianus), which is not a true guan but a very distinct and ancient cracid with no close living relatives (Pereira et al. 2002). Systematics and evolution The evolution of the group is fairly well resolved due to comprehensive analyses of morphology, biogeography, and mt and nDNA sequences (Pereira et al. 2002, Grau et al. 2005). The position of Penelopina and Chamaepetes (peculiar genera of which the former, uniquely among guans and more in line with the curassows, shows pronounced sexual dimorphism) relative to each other is not determinable with certainty at present, but all evidence suggests that they are the basalmost guans. Their distribution is fairly far northwards, with 2 of their 3 species living in Central America. This indicates that the guans' origin is in the northern Andes region, in the general area of Colombia or perhaps Ecuador; the date of their initial radiation is not well resolved due to the lack of fossil evidence, but can be very roughly placed around 40–25 mya (Oligocene, perhaps some time earlier). The two basal lineages diverged during the Burdigalian, around 20–15 mya (Pereira et al. 2002). The two larger genera diverged around the same time, spreading mainly southwards all over tropical South America in the process (Pereira et al. 2002). It appears as if the present-day distribution of the piping-guans is largely relictual, due to climate changes fragmenting lowland habitat. Aburria was apparently driven into refugia of suitable habitat time and again during the Late Pliocene by a combination of this and, possibly, competition with the more diverse and generally more adaptable Penelope (Grau et al. 2005). 
If treated as a subfamily, the group also includes the chachalacas, but the horned guan is excluded and placed in its own monotypic subfamily. Genera and species
Biology and health sciences
Galliformes
Animals
9701472
https://en.wikipedia.org/wiki/Timema
Timema
Timema is a genus of relatively short-bodied, stout and wingless stick insects native to the far western United States, and the sole extant member of the family Timematidae. The genus was first described in 1895 by Samuel Hubbard Scudder, based on observations of the species Timema californicum. Compared to other stick insects (order Phasmatodea), the genus Timema is considered basal; that is, the earliest "branch" to diverge from the phylogenetic tree that includes all Phasmatodea. To emphasize this outgroup status, all stick insects not included in Timema are sometimes described as "Euphasmatodea." Five of the twenty-one species of Timema are parthenogenetic, including two species that have not engaged in sexual reproduction for one million years, the longest known asexual period for any insect. Description Timema spp. differ from other Phasmatodea in that their tarsi have three segments rather than five. For stick insects, they have relatively small, stout bodies, so that they look somewhat like earwigs (order Dermaptera). Cryptic coloration and camouflage Timema walking sticks are night-feeders who spend daytime resting on the leaves or bark of the plants they feed on. Timema colors (primarily green, gray, or brown) and patterns (which may be stripes, scales, or dots) match their typical background, a form of crypsis. In 2008, researchers studying the presence or absence of a dorsal stripe suggested that it has independently evolved several times in Timema species and is an adaptation for crypsis on needle-like leaves. All of the eight Timema species with a dorsal stripe have at least one host plant with needle-like foliage. Of the thirteen unstriped species, seven feed only on broadleaf plants. Four (T. ritense, T. podura, T. genevievae, and T. coffmani) rest during the day on the host plant's trunk rather than its leaves and have bodies that are brown, gray, or tan. Only two species (T. nakipa and T. 
boharti) have green unstriped morphs that feed on needle-like foliage; both are generalist feeders that also feed on broadleaf hosts. The species Timema cristinae exhibits both striped and unstriped populations depending on the host plant, a form of polymorphism that clearly illustrates the camouflage function of the stripe. The earliest ancestors of this species were generalists that fed on plants belonging to both the genera Adenostoma and Ceanothus. They eventually diverged into two distinct ecotypes with a more specialist host plant preference. One ecotype prefers to feed on Adenostoma while the other ecotype prefers to feed on Ceanothus. The Adenostoma ecotype possesses a white dorsal stripe, an adaptation to blend in with the needle-like leaves of the plant, while the Ceanothus ecotype does not (Ceanothus spp. have broad leaves). The Adenostoma ecotype is also smaller, with a wider head and shorter legs. These characteristics are genetically inherited and have been interpreted as the early stages of the speciation process. The two ecotypes will eventually become separate species once reproductive isolation is achieved. At the moment, both ecotypes are still capable of interbreeding and producing viable offspring, and as such they are still considered a single species. Life cycle and reproduction Timema eggs are soft, ellipsoidal, and about two mm long, with a lid-like structure at one end (the operculum) through which the nymph will emerge. Timema females use particles of dirt, which they have previously ingested, to coat their eggs. The eggs of many stick insects, including Timema, are attractive to ants, which carry them away to their burrows to feed on the egg's capitulum, while leaving the rest of the egg intact to hatch. The emerging nymph passes through six or seven instars before reaching adulthood. Timema males, in sexual species of Timema, show a consistent pattern of courting behavior. 
The male climbs onto the back of the female and, after a short display of vibrating and waving, they proceed to mate. (Rejection by the female is possible but uncommon.) The male then rides on the female's back for up to five days, a behavior often referred to as "guarding" the female. Several species of Timema are parthenogenetic: that is, females can reproduce asexually, producing viable eggs without male participation. According to Tanja Schwander, "Timema are indeed the oldest insects for which there is good evidence that they have been asexual for long periods of time." She heads a team of researchers who found that five Timema species (T. douglasi, T. monikense, T. shepardi, T. tahoe and T. genevievae) have used only asexual reproduction for more than 500,000 years, with T. tahoe and T. genevievae reproducing asexually for over one million years. Genetic analysis, published in 2023, of four asexual Timema species suggested that males, which are rare but not entirely absent, do in fact engage in sexual reproduction with some females. Habitat The geographic range of Timema is limited to mountainous regions of western North America between 30° and 42° N. They are found primarily in California, as well as in a few other neighboring states (Oregon, Nevada, Arizona) and in northern Mexico. All are herbivores, primarily feeding on host plants found in chaparral. Host plants of the different Timema species include Pseudotsuga menziesii (Douglas fir), Sequoia sempervirens (Californian redwood), Arctostaphylos spp. (manzanita), Ceanothus spp., Adenostoma fasciculatum (chamise), Abies concolor (white fir), Quercus spp. (oak), Heteromeles arbutifolia (toyon), Cercocarpus spp. (mountain-mahogany), Eriogonum sp. (buckwheat), and Juniperus spp. (juniper). Phylogeny General phylogenetic relationships within Timema (Law & Crespi, 2002). Species marked with ♀ are parthenogenetic (female only). 
Classification Timema is the only extant member of the family Timematidae and the suborder Timematodea. Their clade is considered basal to the order Phasmatodea; that is, many scientists believe that Timema-type stick insects represent the earliest "branch" to diverge from the phylogenetic tree that gave rise to all the stick insects of Phasmatodea. This basal distinction is referenced by the name "Euphasmatodea", which is given to all the clades of Phasmatodea other than the suborder Timematodea. Although Timema was formerly the only member of the family, in 2019 two fossil genera were described from Cenomanian-aged Burmese amber of Myanmar. Twenty-one species have been described; in addition there are at least two undescribed species known to exist: Timema bartmani Timema boharti Timema californicum Timema chumash Timema coffmani Timema cristinae Timema dorotheae Timema douglasi Timema genevievae Timema knulli Timema landelsense Timema monikense Timema morongense Timema nakipa Timema nevadense Timema petita Timema podura Timema poppense Timema ritense Timema shepardi Timema tahoe Timema sp. nov. on limber pine Timema sp. nov. on Sargent cypress
Biology and health sciences
Insects: General
Animals
3166203
https://en.wikipedia.org/wiki/Luminosity%20distance
Luminosity distance
Luminosity distance DL is defined in terms of the relationship between the absolute magnitude M and apparent magnitude m of an astronomical object: M = m − 5 log10(DL/10 pc), which gives: DL = 10^((m − M)/5 + 1), where DL is measured in parsecs. For nearby objects (say, in the Milky Way) the luminosity distance gives a good approximation to the natural notion of distance in Euclidean space. The relation is less clear for distant objects like quasars far beyond the Milky Way, since the apparent magnitude is affected by spacetime curvature, redshift, and time dilation. Calculating the relation between the apparent and actual luminosity of an object requires taking all of these factors into account. The object's actual luminosity is determined using the inverse-square law and the proportions of the object's apparent distance and luminosity distance. Another way to express the luminosity distance is through the flux–luminosity relationship, F = L/(4π DL²), where F is flux (W·m−2) and L is luminosity (W). From this the luminosity distance (in meters) can be expressed as: DL = sqrt(L/(4πF)). The luminosity distance is related to the "comoving transverse distance" DM by DL = (1 + z) DM, and to the angular diameter distance DA by Etherington's reciprocity theorem: DL = (1 + z)² DA, where z is the redshift. DM is a factor that allows calculation of the comoving distance between two objects with the same redshift but at different positions on the sky; if the two objects are separated by an angle Δθ, the comoving distance between them would be DM Δθ. In a spatially flat universe, the comoving transverse distance DM is exactly equal to the radial comoving distance DC, i.e. the comoving distance from ourselves to the object.
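The distance-modulus and flux–luminosity relations can be sketched numerically. A minimal Python illustration (the function names are my own):

```python
import math

def dl_from_magnitudes(m, M):
    """Luminosity distance in parsecs from apparent (m) and absolute (M)
    magnitude, via the distance modulus m - M = 5 log10(DL / 10 pc)."""
    return 10 ** ((m - M) / 5 + 1)

def dl_from_flux(luminosity_w, flux_w_m2):
    """Luminosity distance in meters from luminosity L and observed flux F,
    inverting F = L / (4 pi DL^2)."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# An object with distance modulus m - M = 5 sits at 100 pc:
print(dl_from_magnitudes(5.0, 0.0))  # -> 100.0
```

Note that the two functions return different units (parsecs versus meters), mirroring the conventions used for the two relations above.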
Physical sciences
Basics
Astronomy
3167641
https://en.wikipedia.org/wiki/Ammonium%20acetate
Ammonium acetate
Ammonium acetate, also known as spirit of Mindererus in aqueous solution, is a chemical compound with the formula NH4CH3CO2. It is a white, hygroscopic solid and can be derived from the reaction of ammonia and acetic acid. It is available commercially. History The synonym spirit of Mindererus is named after R. Minderer, a physician from Augsburg. Uses It is the main precursor to acetamide: NH4CH3CO2 → CH3C(O)NH2 + H2O It is also used as a diuretic. Buffer As the salt of a weak acid and a weak base, ammonium acetate is often used with acetic acid to create a buffer solution. Ammonium acetate is volatile at low pressures. Because of this, it has been used to replace cell buffers that contain non-volatile salts in preparing samples for mass spectrometry. It is also popular as a buffer for mobile phases for HPLC with ELSD and CAD-based detection for this reason. Other volatile salts that have been used for this include ammonium formate. When dissolving ammonium acetate in pure water, the resulting solution typically has a pH of 7, because the equal amounts of acetate and ammonium neutralize each other. However, ammonium acetate is a dual-component buffer system, which buffers around pH 4.75 ± 1 (acetate) and pH 9.25 ± 1 (ammonium), but it has no significant buffer capacity at pH 7, contrary to common misconception. Other Ammonium acetate also serves as: a biodegradable de-icing agent; a catalyst in the Knoevenagel condensation and a source of ammonia in the Borch reaction in organic synthesis; a protein-precipitating reagent in dialysis to remove contaminants via diffusion; a reagent in agricultural chemistry for determination of soil CEC (cation exchange capacity) and of available potassium in soil, wherein the ammonium ion acts as a replacement cation for potassium; part of Calley's method for lead artifact conservation. Food additive Ammonium acetate is also used as a food additive as an acidity regulator; INS number 264. It is approved for usage in Australia and New Zealand. 
Production Ammonium acetate is produced by the neutralization of acetic acid with ammonium carbonate or by saturating glacial acetic acid with ammonia. Obtaining crystalline ammonium acetate is difficult on account of its hygroscopic nature.
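The two buffering regions described above can be illustrated with the Henderson–Hasselbalch equation. This is a sketch using the textbook pKa values of about 4.76 for acetic acid and 9.25 for ammonium; the function name is my own:

```python
def buffer_ratio(ph, pka):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid]).
    Returns the conjugate base / acid ratio at a given pH."""
    return 10 ** (ph - pka)

# Near each pKa the conjugate pair is present in comparable amounts,
# so the solution resists pH changes there:
print(buffer_ratio(4.76, 4.76))  # acetate / acetic acid -> 1.0
print(buffer_ratio(9.25, 9.25))  # ammonia / ammonium    -> 1.0

# At pH 7, both pairs are lopsided by a factor of ~170 (each in the
# opposite direction), which is why ammonium acetate has little
# buffering capacity at neutral pH:
print(round(buffer_ratio(7.0, 4.76)))  # -> 174
```

The lopsided ratios at pH 7 make the "no significant buffer capacity at pH 7" point quantitative: almost no reserve of the minor species remains to absorb added acid or base.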
Physical sciences
Acetates
Chemistry
3168368
https://en.wikipedia.org/wiki/Cygnus%20Loop
Cygnus Loop
The Cygnus Loop (radio source W78, or Sharpless 103) is a large supernova remnant (SNR) in the constellation Cygnus, an emission nebula measuring nearly 3° across. Some arcs of the loop, known collectively as the Veil Nebula or Cirrus Nebula, emit in the visible electromagnetic range. Radio, infrared, and X-ray images reveal the complete loop. Visual components: the Veil Nebula The visual portion of the Cygnus Loop is known as the Veil Nebula, also called the Cirrus Nebula or the Filamentary Nebula. Several components have separate names and identifiers, including the "Western Veil" or "Witch's Broom", the "Eastern Veil", and Pickering's Triangle. NGC 6960 NGC 6960, the Western Veil, is the western part of the remnant, also known as the "Witch's Broom", located at J2000 RA Dec . As the westernmost NGC object in the nebula (first in right ascension), its number is sometimes used as an NGC identifier for the nebula as a whole. NGC 6992, NGC 6995, and IC 1340 These three luminous areas make up the Eastern Veil. NGC 6992 is an HI shell located along the north-eastern edge of the loop at J2000 RA Dec . NGC 6995 is located farther south at J2000 RA Dec , and IC 1340 even farther south at J2000 RA Dec . Pickering's Triangle Also known as Pickering's Wedge, or Pickering's Triangular Wisp, this segment of relatively faint nebulosity was discovered photographically in 1904 by Williamina Fleming at Harvard Observatory, where Edward Charles Pickering was director at the time. The Triangle is brightest along the northern side of the loop, though photographs show the nebulosity extending into the central area as well. NGC 6974 and NGC 6979 These two objects are generally identified today (as by the NGC/IC Project and Uranometria) with two brighter knots of nebulosity in a cloud at the northern edge of the loop, to the east of the northern edge of Pickering's Triangle. 
NGC 6979 was reported by William Herschel, and while the coordinates he recorded for Veil objects were somewhat imprecise, his position for this one is tolerably close to the knot at J2000 RA Dec . The identifier NGC 6979 is sometimes taken to refer to Pickering's Triangle, but the Triangle is probably not what Herschel saw or what the Catalogue intended for this entry: it was discovered only photographically, after the Catalogue was published, and long after Herschel's observation. NGC 6974 was reported by Lord Rosse, but the position he gave lies in an empty region inside the main loop. It was assumed that he recorded the position incorrectly, and the New General Catalogue gives Rosse's object as the other knot in the northern cloud, located at J2000 RA Dec , one degree north of Rosse's position. (This position is farther east than NGC 6979, even though NGC objects are generally ordered by increasing RA.) These filaments in the north-central area are sometimes known as the "carrot". The spectrum at 34.5 MHz of the region associated with NGC 6974 ranges straight over the entire frequency range 25 to 5000 MHz. Southeastern knot The southeastern knot is located at J2000 RA Dec on the southeastern rim of the Cygnus Loop. The knot has been identified as an encounter between the blast wave from the supernova and a small isolated cloud. The knot is a prominent X-ray feature, consisting of a number of filaments correlated with visual line emission. By combining visual and X-ray data, it can be shown that the southeastern knot is an indentation on the surface of the blast wave, not a small cloud but the tip of a larger cloud. The presence of a reverse shock is evidence that the knot represents an early stage of a blast wave encountering a large cloud. Distance Until 1999, the most often-quoted distance to the supernova remnant was a 1958 estimate made by R. Minkowski, combining his radial velocity measurements with E. 
Hubble's proper motion study of the remnant's optical filaments to calculate a distance of 770 parsecs or 2500 light-years. However, in 1999, William Blair, assuming that the shock wave should be expanding at the same rate in all directions, compared the angular expansion along the sides of the bubble (visible in Hubble Space Telescope images) with direct line-of-sight measurements of the radial expansion towards the Earth and concluded that the actual size of the bubble was about 40% smaller than the conventional value, leading to a distance of about 1470 ly. A larger revised value of 540 pc (1760 ly) appeared to be corroborated by Blair's later discovery, via the Far Ultraviolet Spectroscopic Explorer (FUSE), of a star seemingly behind the Veil. A UV spectrum of this star, KPD 2055+3111 of spectral type sdOB, showed absorption lines indicating that its light is partially intercepted by the supernova remnant. With an estimated (but uncertain) distance of about 1860 ly, this star seemed to support the revised estimate of 1760 ly. More recent investigations of the Cygnus Loop's distance using Gaia parallax measurements of several stars seen toward the Cygnus Loop have led to more accurate distance estimates. One of these stars, a 9.6 magnitude B8 star (BD+31 4224) located near the remnant's northwestern rim, shows evidence of interactions of its stellar wind with the Cygnus Loop's shock wave, thereby indicating it is actually located inside the remnant. This star's Gaia estimated distance of around 730 pc, along with two other stars both at about 740 pc which exhibit spectral features indicating they must lie behind the remnant, leads to a new distance of 725 ± 15 pc or around 2400 light-years. The Gaia estimated distance to the sdOB star KPD 2055+3111 is 819 pc (2,670 ly). 
This new distance, surprisingly close to the value estimated some 60 years ago by Minkowski, means the Cygnus Loop is physically some 37 pc (120 ly) in diameter and has an age of around 20,000 years. Astronomical ultraviolet source The brightest far-ultraviolet sources of the Cygnus Loop occur in the north-east edge of the remnant. The first flight of the High Resolution Emission Line Spectrometer (HIRELS), a wide-field, far-ultraviolet nebular spectrometer tuned to OVI emission lines, was launched aboard a Nike-Black Brant from White Sands Missile Range to observe the Cygnus Loop, the first observed galactic OVI emission line source. X-ray source The X-ray source Cygnus X-5 coincides with SNR G074.0-08.6 (the Cygnus Loop), located at J2000 RA Dec , observed by Uhuru as 4U 2046+31. This source also has catalogue numbers 1E 2049.4+3050, 1H 2050+310, and 1M 2051+309, having been observed by the Einstein Observatory, HEAO 1, and OSO 7, respectively. The Cygnus Loop is a strong source of soft X-rays. The center of the supernova shell as determined from X-ray data lies at J1950 RA Dec . A characteristic thermal temperature averaged over the loop from X-ray spectral data is Tx = 2.9 ± 1.5 × 10^6 K. An X-ray surface brightness map of the loop was obtained with a one-dimensional X-ray telescope flown aboard an Aerobee 170 sounding rocket launched on March 30, 1973, from the White Sands Missile Range. Searches for a compact stellar remnant Most stars that produce supernovae leave behind compact stellar remnants: a neutron star or black hole, typically depending on the mass of the original star. Various techniques based on the features of the supernova remnant estimate the Cygnus Loop progenitor star's mass at 12 to 15 solar masses, a value that puts the expected remnant firmly within neutron star boundaries. However, despite many searches, no compact stellar remnant has been confidently identified since the identification of the supernova remnant. 
A noted anomaly is that in X-rays, the nebula appears perfectly spherical aside from a "blowout region" to the south. Searches for a compact stellar remnant have been largely concentrated here, as the hole may have been caused by the violent ejection of a neutron star. A detailed 2012 study of the blowout region identified a possible pulsar wind nebula, as well as a point-like source within it. Although at almost exactly the same position as a known Seyfert galaxy, the slight offset combined with a lack of a radio counterpart makes the point-like source probably unrelated to the galaxy. Whether the feature is a pulsar wind nebula, and if so whether it is related to the Cygnus Loop, is still unknown for certain. If it is indeed the compact stellar remnant of the supernova, the neutron star would have to have been ejected from the center of the nebula at a speed of roughly , depending on the precise age and distance of the remnant. Fiction In the novel Mindbridge by Joe Haldeman, the Cygnus Loop is the remains of the home star of an omnipotent, immortal race that ultimately decided to destroy itself.
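The relation between the remnant's angular size, distance, and physical diameter quoted above can be checked with the small-angle approximation. A sketch using this article's numbers (a loop spanning nearly 3° at roughly 725 pc):

```python
import math

def physical_size_pc(angular_deg, distance_pc):
    """Small-angle approximation: linear size = distance * angle (in radians)."""
    return distance_pc * math.radians(angular_deg)

# At the Gaia-based distance of ~725 pc, an angular diameter of nearly
# 3 degrees corresponds to roughly the ~37 pc quoted in the text:
print(round(physical_size_pc(2.9, 725), 1))  # -> 36.7
```

The same one-liner, run with the older 770 pc distance, shows why the earlier literature implied a proportionally larger physical diameter for the same angular size.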
Physical sciences
Notable nebulae
Astronomy
3172179
https://en.wikipedia.org/wiki/Medical%20specialty
Medical specialty
A medical specialty is a branch of medical practice that is focused on a defined group of patients, diseases, skills, or philosophy. Examples include those branches of medicine that deal exclusively with children (pediatrics), cancer (oncology), laboratory medicine (pathology), or primary care (family medicine). After completing medical school or other basic training, physicians or surgeons and other clinicians usually further their medical education in a specific specialty of medicine by completing a multiple-year residency to become a specialist. History of medical specialization To a certain extent, medical practitioners have long been specialized. According to Galen, specialization was common among Roman physicians. The particular system of modern medical specialties evolved gradually during the 19th century. Informal social recognition of medical specialization evolved before the formal legal system. The particular subdivision of the practice of medicine into various specialties varies from country to country, and is somewhat arbitrary. Classification of medical specialization Medical specialties can be classified along several axes. These are: Surgical or internal medicine Age range of patients Diagnostic or therapeutic Organ-based or technique-based Throughout history, the most important has been the division into surgical and internal medicine specialties. The surgical specialties are those in which an important part of diagnosis and treatment is achieved through major surgical techniques. The internal medicine specialties are the specialties in which the main diagnosis and treatment is never major surgery. In some countries, anesthesiology is classified as a surgical discipline, since it is vital in the surgical process, though anesthesiologists never perform major surgery themselves. Many specialties are organ-based. Many symptoms and diseases come from a particular organ. 
Others are based mainly around a set of techniques, such as radiology, which was originally based around X-rays. The age range of patients seen by any given specialist can be quite variable. Pediatricians handle most complaints and diseases in children that do not require surgery, and there are several subspecialties (formally or informally) in pediatrics that mimic the organ-based specialties in adults. Pediatric surgery may or may not be a separate specialty that handles some kinds of surgical complaints in children. A further subdivision is the diagnostic versus therapeutic specialties. While the diagnostic process is of great importance in all specialties, some specialists perform mainly or only diagnostic examinations, such as pathology, clinical neurophysiology, and radiology. This line is becoming somewhat blurred with interventional radiology, an evolving field that uses image expertise to perform minimally invasive procedures. Specialties that are common worldwide List of specialties recognized in the European Union and European Economic Area The European Union publishes a list of specialties recognized in the European Union, and by extension, the European Economic Area. There is substantial overlap between some of the specialties and it is likely that for example "Clinical radiology" and "Radiology" refer to a large degree to the same pattern of practice across Europe. 
Accident and emergency medicine Allergist Anaesthetics Cardiology Child psychiatry Clinical biology Clinical chemistry Clinical microbiology Clinical neurophysiology Craniofacial surgery Dermatology Endocrinology Family and General Medicine Gastroenterologic surgery Gastroenterology General Practice General surgery Geriatrics Hematology Immunology Infectious diseases Internal medicine Laboratory medicine Nephrology Neuropsychiatry Neurology Neurosurgery Nuclear medicine Obstetrics and gynaecology Occupational medicine Oncology Ophthalmology Oral and maxillofacial surgery Orthopaedics Otorhinolaryngology Paediatric surgery Paediatrics Pathology Pharmacology Physical medicine and rehabilitation Plastic surgery Podiatric surgery Preventive medicine Psychiatry Public health Radiation Oncology Radiology Respiratory medicine Rheumatology Stomatology Thoracic surgery Tropical medicine Urology Vascular surgery Venereology List of North American medical specialties and others In this table, as in many healthcare arenas, medical specialties are organized into the following groups: Surgical specialties focus on manually operative and instrumental techniques to treat disease. Medical specialties that focus on the diagnosis and non-surgical treatment of disease. Diagnostic specialties focus more purely on diagnosis of disorders. Salaries According to the 2022 Medscape Physician Compensation Report, physicians on average earn $339K annually. Primary care physicians earn $260K annually while specialists earned $368K annually. The table below details the average range of salaries for physicians in the US of medical specialties: Specialties by country Australia and New Zealand There are 15 recognised specialty medical Colleges in Australia. The majority of these are Australasian Colleges and therefore also oversee New Zealand specialist doctors. 
These Colleges are: The Royal Australasian College of Dental Surgeons supervises the training of specialist medical practitioners specializing in oral and maxillofacial surgery, in addition to its role in the training of dentists. There are approximately 260 faciomaxillary surgeons in Australia. The Royal New Zealand College of General Practitioners is a body distinct from the Royal Australian College of General Practitioners. There are approximately 5100 members of the RNZCGP.

Within some of the larger colleges there are sub-faculties, such as the Australasian Faculty of Rehabilitation Medicine within the Royal Australasian College of Physicians.

Some collegiate bodies in Australia are not officially recognised as specialties by the Australian Medical Council but have a college structure for members, such as the Australasian College of Physical Medicine.

There are also collegiate bodies in Australia of allied health (non-medical) practitioners with specialisation. They are not recognised as medical specialists, but can be treated as such by private health insurers; an example is the Australasian College of Podiatric Surgeons.

Canada

Specialty training in Canada is overseen by the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada. For specialists working in the province of Quebec, the Collège des médecins du Québec also oversees the process.

Germany

In Germany, these doctors use the title Facharzt.

India

Specialty training in India is overseen by the Medical Council of India, which is responsible for recognition of postgraduate training, and by the National Board of Examinations. Education in Ayurveda is overseen by the Central Council of Indian Medicine (CCIM), which conducts UG and PG courses all over India, while the Central Council of Homoeopathy does the same in the field of Homeopathy.

Sweden

In Sweden, a medical license is required before commencing specialty training.
Those graduating from Swedish medical schools are first required to complete a rotational internship of about 1.5 to 2 years in various specialties before attaining a medical license. The specialist training then lasts five years.

United States

Three agencies or organizations in the United States collectively oversee board certification of MD and DO physicians across the 26 approved medical specialties recognized in the country. These are the American Board of Medical Specialties (ABMS) with the American Medical Association (AMA); the American Osteopathic Association Bureau of Osteopathic Specialists (AOABOS) with the American Osteopathic Association; and the American Board of Physician Specialties (ABPS) with the American Association of Physician Specialists (AAPS). Each of these agencies and its associated national medical organization functions through its various specialty academies, colleges, and societies.

All boards of certification now require that medical practitioners demonstrate, by examination, continuing mastery of the core knowledge and skills of their chosen specialty. Recertification intervals vary by specialty between every seven and every ten years.

In the United States there is a hierarchy of medical specialties across the cities of a region: small towns and cities have primary care, mid-sized cities offer secondary care, and metropolitan cities have tertiary care. Income, population size, population demographics, and distance to the doctor all influence the numbers and kinds of specialists and physicians located in a city.

Demography

A population's income level determines whether sufficient physicians can practice in an area and whether public subsidy is needed to maintain the health of the population. Developing countries and poor areas usually have shortages of physicians and specialties, and those in practice usually locate in larger cities.
For some underlying theory regarding physician location, see central place theory.

The proportion of men and women in different medical specialties varies greatly; such sex segregation is largely due to differential application.

Satisfaction and burnout

A survey of physicians in the United States found that dermatologists are the most satisfied with their choice of specialty, followed by radiologists, oncologists, plastic surgeons, and gastroenterologists. In contrast, primary care physicians were the least satisfied, followed by nephrologists, obstetricians/gynecologists, and pulmonologists. Surveys have also revealed high levels of depression among medical students (25–30%) as well as among physicians in training (22–43%), which for many specialties continue into regular practice. A UK survey of cancer-related specialties conducted in 1994 and 2002 found higher job satisfaction in those specialties with more patient contact. Rates of burnout also varied by specialty.