id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
14,778,844
https://en.wikipedia.org/wiki/Online%20engineering
Online engineering (OE; sometimes also referred to as remote engineering) is a current trend in engineering and science that aims to allow and organize the shared use of equipment and resources, as well as of specialized software (for example, simulators). Reasons for the growing importance of sharing engineering resources are: the growing complexity of engineering tasks; increasingly specialized and expensive equipment, software tools and simulators; the necessary use of expensive equipment and software tools/simulators in short-time projects; the application of high-tech equipment also in SMEs; the need for highly qualified staff to operate recent equipment; and the demands of globalization and division of labor. The International Association of Online Engineering (IAOE) is an international non-profit organization with the objective of encouraging the wider development, distribution and application of online engineering. The main forum of the online engineering community is the annual International Conference on Remote Engineering and Virtual Instrumentation (REV). See also Electronics Virtual instrumentation Virtual education Virtual learning environment References External links International Association of Online Engineering (IAOE) International Conference on Remote Engineering and Virtual Instrumentation (REV) Engineering OnLine Continuous Learning Engineering concepts E-Science
Online engineering
[ "Engineering" ]
232
[ "nan" ]
14,779,081
https://en.wikipedia.org/wiki/Microthermal%20analysis
Microthermal analysis is a materials characterization technique which combines the thermal analysis principles of differential scanning calorimetry (DSC) with the high spatial resolution of scanning probe microscopy. The instrument consists of a thermal probe, a fine platinum/rhodium alloy wire (5 micrometres in diameter) coated by a sheath of silver (Wollaston wire). The wire is bent into a V-shape, and the silver sheath is etched away to form a fine-pointed tip. The probe acts as both heater and temperature sensor. The probe is attached to a conventional scanning probe microscope and can be scanned over the sample surface to resolve the thermal behavior of the sample spatially. This technique has been widely used for localized thermal analysis, where the probe is heated rapidly to avoid thermal diffusion through the sample and the response of the substance in immediate proximity to the tip is measured as a function of temperature. Micro-thermal analysis was launched commercially in March 1998. Microthermal analysis has been extended to nanothermal analysis, which uses microfabricated self-heating silicon cantilevers to probe thermomechanical properties of materials with sub-100 nm spatial resolution. References External links Application notes from TA Instruments Materials science
Microthermal analysis
[ "Physics", "Materials_science", "Engineering" ]
253
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
14,779,772
https://en.wikipedia.org/wiki/Cedar%20Grove%20Weir
The Cedar Grove Weir is a weir located across the Logan River in the South East region of Queensland, Australia. The main purpose of the weir is for potable water storage. Location and features The weir is located northwest of , and southwest of and is connected to the Wyaralong Dam on Teviot Brook and the Bromelton Offstream Storage facility. The project was completed in December 2007 at a cost of 18.5 million and provides up to of water per day. Initially managed by Queensland Water Infrastructure, a body set up to administer major water infrastructure projects, the weir is now managed by SEQ Water. In conjunction with the South Maclean Weir and water treatment plants, the Cedar Grove Weir acts as a pumping pool for captured releases from Wyaralong Dam located upstream; and provides immediate supplies to the Beaudesert region. The weir also connects to the regional South East Queensland Water Grid. The weir overflowed for the first time on 4 January 2008. See also List of dams and weirs in Queensland References External links Logan River Dams in Queensland Weirs Dams completed in 2008 2008 establishments in Australia Logan City
Cedar Grove Weir
[ "Environmental_science" ]
228
[ "Hydrology", "Weirs" ]
14,780,124
https://en.wikipedia.org/wiki/Rasta%20filtering
RASTA filtering and mean subtraction was introduced to support perceptual linear prediction (PLP) preprocessing. It uses bandpass filtering in the log spectral domain. RASTA filtering then removes slow channel variations. It has also been applied to cepstrum feature-based preprocessing with both log spectral and cepstral domain filtering. In general, a RASTA filter is defined by a transfer function of the form H(z) = (Σ_{n=0}^{N−1} ((N−1)/2 − n) z^{−n}) / (1 − ρ z^{−1}). The numerator is a regression filter with N being the order (must be odd), and the denominator is an integrator with time decay. The pole ρ controls the lower limit of frequency and is normally around 0.9. RASTA filtering can be changed to use mean subtraction, implementing a moving average filter. Filtering is normally performed in the cepstral domain. The mean becomes the long-term cepstrum and is typically computed on the speech part for each separate utterance. A period of silence is necessary to detect each utterance. References https://labrosa.ee.columbia.edu/~dpwe/papers/HermM94-rasta.pdf Signal processing
Rasta filtering
[ "Technology", "Engineering" ]
231
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
14,781,500
https://en.wikipedia.org/wiki/Thermal%20dose%20unit
A Thermal dose unit (TDU) is a unit of measurement used in the oil and gas industry to measure exposure to thermal radiation. It is a function of intensity (power per unit area) and exposure time: 1 TDU = 1 (kW/m²)^(4/3)·s. Results of exposure References Units of measurement
Thermal dose unit
[ "Mathematics" ]
67
[ "Quantity", "Units of measurement" ]
14,781,807
https://en.wikipedia.org/wiki/Lerch%20Bates
Lerch Bates is an international consulting services company specializing in the design and management of building systems, with 36 offices in North America, Europe, the Middle East and India. Founded in 1947 with a focus on elevator consulting, the firm works with architects, developers, building investors, owners and managers on the design, sustainability and continuous use of building systems. Specific services offered include: planning and design (including CAD), survey and evaluation of existing systems and equipment, contracting management, development of specifications, bidding assistance and negotiation, project management and administration, testing, and training. Projects Some of the buildings and projects that Lerch Bates has contributed to: Burj Khalifa, Dubai Freedom Tower, New York City 201 Bishopsgate, London Petronas Towers, Kuala Lumpur Taipei 101, Taipei Russia Tower, Moscow Bank of China Tower, Hong Kong Trump International Hotel and Tower, Dubai Signature Tower, Nashville Moscow City Tower, Moscow Marina 101, Dubai Indira Gandhi International Airport, New Delhi Google Headquarters, Mountain View Company history In 1947, Charles W. Lerch started a company performing maintenance and repairs, C. W. Lerch Co (widely accepted as the first independent elevator consulting company), in Chicago. He later added elevator consulting as a side business. In 1964 Charles W. Lerch was joined by Vane Q. Bates, and both men worked full-time on elevator consulting. The firm moved its offices to Denver, Colorado, that same year, taking space downtown in the Patterson Building on 17th Street (since demolished). The company has moved a number of times but has maintained its headquarters in and around Denver. In 1974 the company was incorporated as Lerch Bates & Associates, and by the building boom of the 1980s, under the leadership of Quent Bates (C. W. Lerch having died in the early 1980s), Lerch Bates had 15 offices in North America. In 1985 Lerch Bates Limited in London was formed, and in 1990 it created the first "Performance Related" maintenance contract, which related equipment downtime and traffic performance to maintenance premiums. In 1994, under the direction of Quent Bates and his quest to reward the employees, Lerch Bates became an employee-owned company and is now considered one of the best and longest-running ESOPs in the US. In 1998 Lerch Bates designed the world's fastest elevators for the then world's tallest building, the Taipei Financial Centre / Taipei 101. References External links Lerch Bates Corporate Website Design companies established in 1947 Engineering consulting firms of the United States International engineering consulting firms 1947 establishments in Illinois
Lerch Bates
[ "Engineering" ]
505
[ "Engineering consulting firms", "International engineering consulting firms" ]
14,781,912
https://en.wikipedia.org/wiki/Polymer%20nanocomposite
Polymer nanocomposites (PNC) consist of a polymer or copolymer having nanoparticles or nanofillers dispersed in the polymer matrix. These may be of different shape (e.g., platelets, fibers, spheroids), but at least one dimension must be in the range of 1–50 nm. These PNCs belong to the category of multi-phase systems (MPS, viz. blends, composites, and foams) that consume nearly 95% of plastics production. These systems require controlled mixing/compounding, stabilization of the achieved dispersion, and orientation of the dispersed phase, and the compounding strategies for all MPS, including PNC, are similar. Alternatively, polymer can be infiltrated into a 1D, 2D, or 3D preform, creating high-content polymer nanocomposites. Polymer nanoscience is the study and application of nanoscience to polymer-nanoparticle matrices, where nanoparticles are those with at least one dimension of less than 100 nm. The transition from micro- to nano-particles leads to changes in physical as well as chemical properties. Two of the major factors in this are the increase in the ratio of surface area to volume, and the size of the particle. The surface area-to-volume ratio, which increases as the particles get smaller, leads to an increasing dominance of the behavior of atoms on the surface of the particle over that of those in the interior of the particle. This affects the properties of the particles when they are reacting with other particles. Because of the higher surface area of the nano-particles, the interaction with the other particles within the mixture is greater, and this increases the strength, heat resistance and other properties of the mixture. An example of a nanopolymer is silicon nanospheres, which show quite different characteristics; their size is 40–100 nm and they are much harder than silicon, their hardness being between that of sapphire and diamond. Polymer nanocomposites can be prepared using sequential infiltration synthesis (SIS), in which inorganic nanomaterials are grown within a polymer substrate via diffusion of vapor-phase precursors into the matrix. Bio-hybrid polymer nanofibers Many technical applications of biological objects like proteins, viruses or bacteria, such as chromatography, optical information technology, sensorics, catalysis and drug delivery, require their immobilization. Carbon nanotubes, gold particles and synthetic polymers are used for this purpose. This immobilization has been achieved predominantly by adsorption or by chemical binding and to a lesser extent by incorporating these objects as guests in host matrices. In the guest-host systems, an ideal method for the immobilization of biological objects and their integration into hierarchical architectures should be structured on a nanoscale to facilitate the interactions of biological nano-objects with their environment. The large number of natural and synthetic polymers available and the advanced techniques developed to process such systems into nanofibres, rods, tubes, etc. make polymers a good platform for the immobilization of biological objects. Bio-hybrid nanofibres by electrospinning Polymer fibers are, in general, produced on a technical scale by extrusion, i.e., a polymer melt or a polymer solution is pumped through cylindrical dies and spun/drawn by a take-up device. The resulting fibers have diameters typically on the 10-μm scale or above.
To come down in diameter into the range of several hundred nanometers, or even down to a few nanometers, electrospinning is today still the leading polymer processing technique available. A strong electric field of the order of 10^3 V/cm is applied to the polymer solution droplets emerging from a cylindrical die. The electric charges, which are accumulated on the surface of the droplet, cause droplet deformation along the field direction, even though the surface tension counteracts droplet evolution. In supercritical electric fields, the field strength overcomes the surface tension and a fluid jet emanates from the droplet tip. The jet is accelerated towards the counter electrode. During this transport phase, the jet is subjected to strong electrically driven circular bending motions that cause a strong elongation and thinning of the jet and evaporation of the solvent until, finally, the solid nanofibre is deposited on the counter electrode. Bio-hybrid polymer nanotubes by wetting Electrospinning, co-electrospinning, and the template methods based on nanofibres yield nano-objects which are, in principle, infinitely long. For a broad range of applications including catalysis, tissue engineering, and surface modification of implants, this infinite length is an advantage. But in some applications, like inhalation therapy or systemic drug delivery, a well-defined length is required. The template method described in the following has the advantage that it allows the preparation of nanotubes and nanorods with very high precision. The method is based on the use of well-defined porous templates, such as porous aluminum or silicon. The basic concept of this method is to exploit wetting processes. A polymer melt or solution is brought into contact with the pores located in materials characterized by high-energy surfaces such as aluminum or silicon. Wetting sets in and covers the walls of the pores with a thin film with a thickness of the order of a few tens of nanometers. Gravity does not play a role, as is obvious from the fact that wetting takes place independently of the orientation of the pores relative to the direction of gravity. The exact process is still not understood theoretically in detail, but it is known from experiments that low molar mass systems tend to fill the pores completely, whereas polymers of sufficient chain length just cover the walls. This process happens typically within a minute for temperatures about 50 K above the melting temperature or glass transition temperature, even for highly viscous polymers, such as, for instance, polytetrafluoroethylene, and this holds even for pores with an aspect ratio as large as 10,000. The complete filling, on the other hand, takes days. To obtain nanotubes, the polymer/template system is cooled down to room temperature or the solvent is evaporated, yielding pores covered with solid layers. The resulting tubes can be removed by mechanical forces for tubes up to 10 μm in length, i.e., by just drawing them out from the pores, or by selectively dissolving the template. The diameter of the nanotubes, the distribution of the diameter, the homogeneity along the tubes, and the lengths can be controlled. Applications The nanofibres, nanocomposites, hollow nanofibres, core–shell nanofibres, and nanorods or nanotubes produced have a great potential for a broad range of applications including homogeneous and heterogeneous catalysis, dental restorative materials, sensorics, filter applications, and optoelectronics.
Here we will just consider a limited set of applications related to life science. Tissue engineering This is mainly concerned with the replacement of tissues which have been destroyed by sickness, accidents or other artificial means. Examples are skin, bone, cartilage, blood vessels and maybe even organs. This technique involves providing a scaffold onto which cells are added, and the scaffold should provide favorable conditions for the growth of the same. Nanofibres have been found to provide very good conditions for the growth of such cells, one of the reasons being that fibrillar structures can be found on many tissues, which allow the cells to attach strongly to the fibers and grow along them. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites. Potentially, these nanocomposites may be used to create novel, mechanically strong, lightweight composite bone implants. The results suggest that mechanical reinforcement is dependent on the nanostructure morphology, defects, dispersion of nanomaterials in the polymer matrix, and the cross-linking density of the polymer. In general, two-dimensional nanostructures can reinforce the polymer better than one-dimensional nanostructures, and inorganic nanomaterials are better reinforcing agents than carbon-based nanomaterials. Delivery from compartmented nanotubes Nanotubes are also used for carrying drugs in general therapy and in tumor therapy in particular. Their role is to protect the drugs from destruction in the blood stream, to control the delivery with well-defined release kinetics, and, in ideal cases, to provide vector-targeting properties or a release mechanism triggered by external or internal stimuli. Rod- or tube-like, rather than nearly spherical, nanocarriers may offer additional advantages in terms of drug delivery systems. Such drug carrier particles possess an additional choice of the axial ratio, the curvature, and the "all-sweeping" hydrodynamic-related rotation, and they can be modified chemically at the inner surface, the outer surface, and at the end planes in a very selective way. Nanotubes prepared with a responsive polymer attached to the tube opening allow the control of access to and release from the tube. Furthermore, nanotubes can also be prepared showing a gradient in their chemical composition along the length of the tube. Compartmented drug release systems were prepared based on nanotubes or nanofibres. Nanotubes and nanofibres which contained fluorescent albumin with dog-fluorescein isothiocyanate, for instance, were prepared as a model drug, as well as superparamagnetic nanoparticles composed of iron oxide or nickel ferrite. The presence of the magnetic nanoparticles allowed, first of all, the guiding of the nanotubes to specific locations in the body by external magnetic fields. Superparamagnetic particles are known to display strong interactions with external magnetic fields, leading to large saturation magnetizations. In addition, by using periodically varying magnetic fields, the nanoparticles were heated up to provide a trigger for drug release.
The presence of the model drug was established by fluorescence spectroscopy, and the same holds for the analysis of the model drug released from the nanotubes. Immobilization of proteins Core–shell fibers of nanoparticles with fluid cores and solid shells can be used to entrap biological objects such as proteins, viruses or bacteria in conditions which do not affect their functions. This effect can be used, among others, for biosensor applications. For example, Green Fluorescent Protein is immobilized in nanostructured fibres providing large surface areas and short distances for the analyte to approach the sensor protein. With respect to using such fibers for sensor applications, the fluorescence of the core–shell fibers was found to decay rapidly as the fibers were immersed into a solution containing urea: urea permeates through the wall into the core, where it causes denaturation of the GFP. This simple experiment reveals that core–shell fibers are promising objects for preparing biosensors based on biological objects. Polymer nanostructured fibers, core–shell fibers, hollow fibers, and nanorods and nanotubes provide a platform for a broad range of applications both in materials science and in life science. Biological objects of different complexity and synthetic objects carrying specific functions can be incorporated into such nanostructured polymer systems while keeping their specific functions vital. Biosensors, tissue engineering, drug delivery, and enzymatic catalysis are just a few of the possible examples. The incorporation of viruses and bacteria, all the way up to microorganisms, should not really pose a problem, and the applications coming from such biohybrid systems should be tremendous. Engineering applications Polymer nanocomposites for the automotive tire industry Polymer nanocomposites are important for the automotive tire industry due to the possibility of achieving a higher fuel efficiency by designing polymer nanocomposites with suitable properties. The most common type of filler particles utilized by the tire industry had traditionally been carbon black (Cb), produced from the incomplete combustion of coal tar and ethylene. The main reason is that the addition of Cb to rubbers enables the manufacturing of tires of a smaller rolling resistance, which accounts for about 4% of the worldwide CO2 emissions from fossil fuels. A decrease in the rolling resistance of the car tires produced worldwide is anticipated to decrease the overall fuel consumption of cars, due to the fact that a vehicle with tires of a smaller rolling resistance requires less energy to move forward. However, a smaller rolling resistance also leads to a lower wet grip performance, which raises concerns about passenger safety. The problem can be partially solved by replacing Cb with silica, because it enables the production of "green" tires that display both improved wet grip properties and a smaller rolling resistance. The main difference in the relevant properties of Cb and silica is that Cb is hydrophobic (as are the polymers used in the manufacturing of car tires) whereas silica is hydrophilic. So, in order to increase the compatibility between the silica fillers and the polymer matrix, the silica is usually functionalized with coupling agents, which gives the possibility of tuning the filler-polymer interactions and thus producing nanocomposites of specific properties.
Overall, the main unresolved issue on the mechanical properties of filled rubbers is the elucidation of the exact mechanism of their mechanical reinforcement and of the so-called Payne effect; owing to a lack of suitable theoretical and experimental approaches, both are still poorly understood. Polymer nanocomposites for high temperature applications Polymer nanocomposites aided with carbon quantum dots have been found to show remarkable heat resistance. These nanocomposites can be used in environments where heat resistance is a requirement. Size and pressure effects on nanopolymers The size- and pressure-dependent glass transition temperature of free-standing films, or of supported films having weak interactions with substrates, decreases with decreasing pressure and size. However, the glass transition temperature of supported films having strong interactions with substrates increases with increasing pressure and decreasing size. Different models, such as the two-layer model, the three-layer model, Tg(D, 0) ∝ 1/D, and further models relating specific heat, density and thermal expansion, are used to describe the experimental results on nanopolymers, and some observations, such as freezing of films due to memory effects in the visco-elastic eigenmodes of the films and finite effects of the small-molecule glass, are also made. To describe the Tg(D, 0) function of polymers more generally, a simple and unified model has recently been provided based on the size-dependent melting temperature of crystals and Lindemann's criterion, where σg is the root mean squared displacement of surface and interior molecules of glasses at Tg(D, 0), and α = σs²(D, 0) / σv²(D, 0) with subscripts s and v denoting surface and volume, respectively. For a nanoparticle, D has the usual meaning of diameter; for a nanowire, D is taken as its diameter; and for a thin film, D denotes its thickness. D0 denotes a critical diameter at which all molecules of a low-dimensional glass are located on its surface. Conclusion Devices that use the properties of low-dimensional objects, such as nanoparticles, are promising due to the possibility of tailoring a number of mechanical, electrophysical, optical and magnetic properties through some degree of control over the size of nanoparticles during synthesis. In the case of polymer nanocomposites we can use the properties of disordered systems. Here recent developments in the field of polymer nano-composites and some of their applications have been reviewed. Though there is much use in this field, there are many limitations also. For example, the release of drugs from nanofibres cannot be controlled independently, and a burst release is usually the case, whereas a more linear release is required. Let us now consider future aspects in this field. There is a possibility of building ordered arrays of nanoparticles in the polymer matrix. A number of possibilities also exist to manufacture nanocomposite circuit boards. An even more attractive method exists to use polymer nanocomposites for neural network applications. Another promising area of development is optoelectronics and optical computing. The single-domain nature and superparamagnetic behavior of nanoparticles containing ferromagnetic metals could possibly be used for manufacturing magneto-optical storage media. See also Nanocomposite Biopolymer Copolymer Electroactive polymers Nanocarbon tubes References Nanomaterials Polymer material properties
Polymer nanocomposite
[ "Chemistry", "Materials_science" ]
3,531
[ "Polymer material properties", "Nanotechnology", "Polymer chemistry", "Nanomaterials" ]
14,782,003
https://en.wikipedia.org/wiki/Body%20schema
Body schema is an organism's internal model of its own body, including the position of its limbs. The neurologist Sir Henry Head originally defined it as a postural model of the body that actively organizes and modifies 'the impressions produced by incoming sensory impulses in such a way that the final sensation of body position, or of locality, rises into consciousness charged with a relation to something that has happened before'. As a postural model that keeps track of limb position, it plays an important role in control of action. It involves aspects of both central (brain processes) and peripheral (sensory, proprioceptive) systems. Thus, a body schema can be considered the collection of processes that registers the posture of one's body parts in space. The schema is updated during body movement. This is typically a non-conscious process, and is used primarily for spatial organization of action. It is therefore a pragmatic representation of the body’s spatial properties, which includes the length of limbs and limb segments, their arrangement, the configuration of the segments in space, and the shape of the body surface. Body schema also plays an important role in the integration and use of tools by humans. Body schema is different from body image; the distinction between them has developed over time. History Henry Head, an English neurologist who conducted pioneering work into the somatosensory system and sensory nerves, together with British neurologist Gordon Morgan Holmes, first described the concept in 1911. The concept was first termed "postural schema" to describe the disordered spatial representation of patients following damage to the parietal lobe of the brain. Head and Holmes discussed two schemas (or schemata): one body schema for the registration of posture or movement and another body schema for the localization of stimulated locations on the body surface. "Body schema" became the term used for the "organized models of ourselves". The term and definition first suggested by Head and Holmes has endured nearly a century of research with clarifications as more has become known about neuroscience and the brain. Properties Neuroscientists Patrick Haggard and Daniel Wolpert have identified seven fundamental properties of the body schema. It is spatially coded, modular, adaptable, supramodal, coherent, interpersonal, and updated with movement. Spatial encoding The body schema represents both position and configuration of the body as a 3-dimensional object in space. A combination of sensory information, primarily tactile and visual, contributes to the representation of the limbs in space. This integration allows for stimuli to be localized in external space with respect to the body. An example by Haggard and Wolpert shows the combination of tactile sensation of the hand with information about the joint angles of the arm, which allow for rapid movements of said arm to swat a fly. Modular The body schema is not represented wholly in a single region of the brain. Recent fMRI (functional Magnetic Resonance Imaging) studies confirm earlier results. For example, the schema for feet and hands are coded by different regions of the brain, while the fingers are represented by a separate part entirely. Adaptable Plastic changes to the body schema are active and continuous. For example, gradual changes to the body schema must occur over the lifetime of an individual as he or she grows and absolute and relative sizes of body parts change over his or her life span. 
The development of the body schema has also been shown to occur in young children. One study showed that with these children (9-, 14-, and 19-month-olds), older children handled spoons so as to optimally and comfortably grip them for use, whereas younger children tended to reach with their dominant hand, regardless of the orientation of the spoon and eventual ease of use. Short-term plasticity has been shown with the integration of tools into the body schema. The rubber hand illusion has also shown the rapid reorganization of the body schema on the timescale of seconds, showing the high level of plasticity and speed with which the body schema reorganizes. In the Illusion, participants view a dummy hand being stroked with a paintbrush, while their own hand is stroked identically. Participants may feel that the touches on their hand are coming from the dummy hand, and even that the dummy hand is, in some way, their own hand. Supramodal By its nature, body schema integrates proprioceptive, (the sense of the relative position of neighbouring parts of one's body), and tactile information to maintain a three-dimensional body representation. However, other sensory information, particularly visual, can be in the same representation of the body. This simultaneous participation means there are combined representations within the body schema, which suggests the involvement of a process to translate primary information (e.g. visual, tactile, etc.) into a single sensory modality or an abstract, amodal form. Coherent The body schema, to function properly, must be able to maintain coherent organization continuously. To do so, it must be able to resolve any differences between sensory inputs. Resolving these inter-sensory inconsistencies can result in interesting sensations, such as those experienced during the Rubber Hand Illusion. Interpersonal It is thought that an individual's body schema is used to represent both one's own body and the bodies of others. Mirror neurons are thought to play a role in the interpersonal characteristics of body schema. Interpersonal projection of one's body schema plays an important role in successfully imitating motions such as hand gestures, especially while maintaining the handedness and location of the gesture, but not necessarily copying the exact motion itself. Updated with movement A working body schema must be able to interactively track the movements and positions of body parts in space. Neurons in the premotor cortex may contribute to this function. A class of neuron in the premotor cortex is multisensory. Each of these multisensory neurons responds to tactile stimuli and also to visual stimuli. The neuron has a tactile receptive field (responsive region on the body surface) typically on the face, arms, or hands. The same neuron also responds to visual stimuli in the space near the tactile receptive field. For example, if a neuron's tactile receptive field covers the arm, the same neuron will respond to visual stimuli in the space near the arm. As shown by Graziano and colleagues, the visual receptive field will update with arm movement, translating through space as the arm moves. Similar body-part-centered neuronal receptive fields relate to the face. These neurons apparently monitor the location of body parts and the location of nearby objects with respect to body parts. Similar neuronal properties may also be important for the ability to incorporate external objects into the body schema, such as in tool use. 
Extended body schema The idea of the extended body schema is that, aside from the proprioceptive, visual, and sensory components that contribute to making a mental conception of one's body, the same processes that contribute to a body schema are also able to incorporate external objects into the mental conception of one's body. Part philosophical and part neuroscience, this concept builds upon the ideas of plasticity and adaptation to attempt to answer the question of where the body schema ends. There is debate as to whether this concept truly exists, with one side arguing that the body schema does not extend past the body and the other side believing otherwise. Supporting arguments The perspective shared by those who agree with the theory of the extended body schema follow reasoning in line with such that supports theories on tool use. In some studies, attempts at understanding tool assimilation are used to argue for the existence of the extended body schema. In an experiment involving the use and interaction with wool objects, subjects were tested on their ability to perceive afterimages of wool objects in varying contexts. Subjects accustomed their eyes to a dark room and then were shown a brief (1 millisecond) flash of light, intending to produce an afterimage effect of their arms which they held out in front of them during the experiment. Moving an arm afterwards would make the afterimage "fade" or disappear as it moved, thus indicating that the feature (the arm) was being tracked and integrated into the person's body schema. To test integration of the meaningless wool objects, subjects experienced four different contexts. Subjects held the wool objects in each hand and one hand (the active hand) would move, still holding the object (the active object). Using the active hand, the active wool object would be dropped once an afterimage was perceived. Using the active hand, one would grab the active wool object once an afterimage was perceived. The subjects were to hold onto a mechanical device which held the wool object. Once an afterimage was perceived, a subject's active hand would cause the mechanical device to drop the wool object. In all situations but the fourth, the subjects experienced the same "fading" effect as they did with their arm alone. This would thus indicate that the wool objects had been integrated into their body schema and contributes support towards the idea of the body's using proprioceptive and visual elements to create an extended body schema. The mechanical device acted as an intermediate between the subject and the active object, and the subjects' failure to detect an afterimage in that context indicates that this concept of extension is limited to being sensitive to only what the body is directly in contact with. Dissenting arguments The alternate perspective is that the body is the limit of any sort of body schema. An example of this division is found in a study and discussion on personal and extrapersonal attention, where personal relates to the body's sense of itself (the body schema) and extrapersonal relates to all external of such. Some research supports the claim that these two categories are purely distinct and do not intermingle, contrary to what the extended body schema theory describes. Evidence for such is primarily found in subjects with unilateral neglect, such as in the case of E.D.S., who was a middle-aged man with right hemisphere brain damage. 
When he was tested for hemispatial neglect using traditional measures such as sentence reading and cancellation tests, E.D.S. showed few signs and upon later examination showed no signs whatsoever, leading doctors to believe he was normal. However, he constantly had issues with physical therapy because he would claim to not be able to see his left leg; upon further examination, E.D.S. was known to have a particular type of hemispatial neglect that only affected the perception of his body. The motor function of the left side of his body was negatively affected though not totally compromised, yet when attempting tasks such as shaving, he would invariably not shave the left side of his face. This led some researchers to believe that there is a distinction between personal and extrapersonal neglect, which would thus reflect a similar distinction with body schema itself. Associated disorders Deafferentation The most direct of related disorders, deafferentation occurs when sensory input from the body is reduced or absent, without affecting motor neurons. The most famous case of this disorder is "IW", who lost all sensory input from below the neck, resulting in temporary paralysis. He was forced to learn to control his movement all over again using only his conscious body image and visual feedback. As a result, when constant visual input is lost during an activity, such as walking, it becomes impossible for him to complete the task, which may result in falling, or simply stopping. IW requires constant attention to tasks to be able to complete them accurately, demonstrating how automatic and subconscious the process of integrating touch and proprioception into the body schema actually is. Autotopagnosia Autotopagnosia typically occurs after left parietal lesions. Patients with this disorder make errors which result from confusion between adjacent body parts. For example, a patient may point to their knee when asked to point to their hip. Because the disorder involves the body schema, localization errors may be made both on the patient’s own body and that of others. The spatial unity of the body within the body schema has been damaged such that it has incorrectly been segmented in relation to its other modular parts. Phantom limb Phantom limbs are a phenomenon which occurs following amputation of a limb from an individual. In 90–98% of cases, amputees report feeling all or part of the limb or body part still there, taking up space. The amputee may perceive a limb under full control, or paralyzed. A common side effect of phantom limbs is phantom limb pain. The neurophysiological mechanisms by which phantom limbs occur is still under debate. A common theory posits that the afferent neurons, since deafferented due to amputation, typically remap to adjacent cortical regions within the brain. This can cause amputees to report feeling their missing limb being touched when a seemingly unrelated part of the body is stimulated (such as if the face is touched, but the amputee also feels their missing arm being stroked in a specific location). Another facet of phantom limbs is that the efferent copy (motor feedback) responsible for reporting on position to the body schema does not attenuate quickly. Thus the missing body part may be attributed by the amputee to still be in a fixed or movable position. Others Asomatognosia, somatoparaphrenia, anosognosia, anosodiaphoria, allochiria and hemispatial neglect all involve (or in some cases involve) aspects of impaired body schema. 
Hemispatial neglect is not uncommon because strokes sometimes cause it. Tool use Not only is it necessary for the body schema to be able to integrate and form a three-dimensional representation of the body, but it also plays an important role in tool use. Studies recording neuronal activity in the intraparietal cortex in macaques have shown that, with training, the macaque body schema updates to include tools, such as those used for reaching, into the body schema. In humans, body schema plays an important role in both simple and complex tool use, far beyond that of macaques. Extensive training is also not necessary for this integration. The mechanisms by which tools are integrated into the body schema are not fully understood. However, studies with long-term training have shown interesting phenomena. When wielding tools in both hands in a crossed posture, behavioral effects reverse in a similar way to when only hands are crossed. Thus, sensory stimuli are delivered the same way be it to the hands directly or indirectly via the tools. These studies suggest the mind incorporates the tools into the same or similar areas as it does the adjacent hands. Recent research into the short term plasticity of the body schema used individuals without any prior training with tools. These results, derived from the relation between afterimages and body schema, show that tools are incorporated into the body schema within seconds, regardless of length of training, though the results do not extend to other species besides humans. Confusion with body image Historically, body schema and body image were generally lumped together, used interchangeably, or ill-defined. In science and elsewhere, the two terms are still commonly misattributed or confused. Efforts have been made to distinguish the two and define them in clear and differentiable ways. A body image consists of perceptions, attitudes, and beliefs concerning one's body. In contrast, body schema consists of sensory-motor capacities that control movement and posture. Body image may involve a person’s conscious perception of his or her own physical appearance. It is how individuals see themselves when picturing themselves in their mind, or when perceiving themselves in a mirror. Body image differs from body schema as perception differs from movement. Both may be involved in action, especially when learning new movements. See also Body image (medicine) Multisensory integration Body transfer illusion Peripheral neuropathy Schema (psychology) Attention schema theory References Motor cognition Motor control Cognitive science
Body schema
[ "Biology" ]
3,345
[ "Behavior", "Motor control" ]
14,782,200
https://en.wikipedia.org/wiki/Clayton%20Lake%20State%20Park
Clayton Lake State Park is a state park of New Mexico, United States, featuring a recreational reservoir and a fossil trackway of dinosaur footprints. It is located north of Clayton, close to New Mexico's border with Colorado, Oklahoma, and Texas. The park is accessed via New Mexico State Road 455. The landscape is characterized by rolling grasslands, volcanic rocks, and sandstone bluffs, set on the western edge of the Great Plains. The park area was a stopover point for travelers along the Cimarron Cutoff of the Santa Fe Trail. Visitor activities include picnicking, camping, and fishing at the lake, as well as viewing one of the most extensive dinosaur trackways in North America. Clayton Lake was created by the New Mexico Department of Game and Fish in 1955 as a fishing lake and winter waterfowl resting area. A dam was constructed across Seneca Creek, which is actually a series of seeps except after heavy rains. During the fishing season, which usually runs from March to October each year, the lake is a popular spot for anglers hoping to catch trout, catfish, bass, and walleye. Boats are allowed on the lake, but are restricted to trolling speeds. The lake is closed to fishing during the winter, when it serves as a stopover for waterfowl. The park offers a group shelter and a modern comfort station. The dinosaur tracks are embedded in rock near the lake. They can be observed on the dam spillway at the end of a gentle trail. The best times to view the tracks are in the morning and the late afternoon. A sheltered gazebo and a boardwalk trail provide extensive information regarding the dinosaurs. In 2010 the Clayton Lake State Park with the Star Point Observatory was designated by the International Dark-Sky Association as a Dark Sky Park. References External links State site Clayton Lake State Park local site Dark-sky preserves in the United States Fossil parks in the United States Fossil trackways in the United States State parks of New Mexico Parks in Union County, New Mexico Protected areas established in 1965 Paleontology in New Mexico
Clayton Lake State Park
[ "Astronomy" ]
418
[ "Dark-sky preserves in the United States", "Dark-sky preserves" ]
14,782,277
https://en.wikipedia.org/wiki/Transport%20Direct%20Portal
The Transport Direct Portal was a distributed Internet-based multi-modal journey planner providing information for travel in England, Wales and Scotland. It was managed by Transport Direct, a division of the Department for Transport. It was launched in 2004, was operated by a consortium led by Atos, and was later enhanced to include a cycle journey planning function. The portal closed on 30 September 2014 after an announcement earlier that month. Operation The portal offered a door-to-door journey planner which allowed the user to compare different transport modes including car, rail, bus and coach, air, walking and cycling. Specific features included: Routing using one or more modes: local bus, coaches, rail and walking. Cycle journey planning for 32 towns and counties. Car routing that takes into account likely traffic congestion based on past experience and can route to a suitable car park or park and ride site. Internal flight routing in Great Britain, with links to the operator to price your journey. Rail-only all-day journey plans with the ability to search for the train journeys with the cheaper fares. Comparison of journey opportunities between selected major British cities and transport interchanges by different modes. Live travel and traffic news – travel and traffic incidents – both planned and unplanned – as well as real-time train running information. Real-time train running information that can be accessed from a mobile phone and an interactive television. A white label version which was available through the BBC and Visit Britain. An exposed Web service (Service-oriented architecture) was used by a number of partners including the UK Department for Work and Pensions. Links back to the service are provided from Google Transit and National Rail Enquiries. Estimation of CO2 emissions for your journey, whether by car or public transport. A batch journey planner to support the creation of Travel Plans. A mobile-friendly version of the door-to-door planner was added in August 2013. The planner used a distributed approach based on the Traveline journey planners for each Traveline region, with the JourneyWeb protocol being used to manage journeys between Traveline regions. Each journey planner integrated information from many different transport operators and sources. Cost and usage The 10 millionth user session took place on 1 December 2006, with the number of sessions steadily growing over time; 1.126 million user sessions were recorded for August 2007. By March 2010, a total of 70 million user sessions had been provided, with the total by the start of 2011 being 81 million, meaning the portal was used over 11 million times in a year. Operation of the Portal cost £5.9 million for the period April 2006 to March 2007. The total cost of the Transport Direct Programme was £55 million to March 2007. History Transport Direct established the Traveline organisation in 2000 to develop a number of regional call centres, initially using paper timetables, and to provide a national public transport information service. It was divided into 11 different areas (regions), each of which developed computerised journey planners. Transport Direct wanted to be able to provide a single point of access to this service, and a contract to develop the Transport Direct Portal was awarded in 2002 to Atos Origin, which would provide an integrated point of access to the regional journey planners using the JourneyWeb protocol, which was developed for the purpose.
Following a two-year period of development and testing, the portal was officially launched by Alistair Darling, the Secretary of State for Transport, on 31 December 2004. At the time it was claimed to be the first national door-to-door travel service to provide details of both public transport and car journeys. By way of historical comparison, Google Maps was not launched until early the following year. Google Transit was released in Google Labs in December 2005 and was not integrated into Google Maps until October 2007. A number of issues with the underlying data, resulting in poor results, were picked up by the national media. During 2005, new service delivery channels (mobile phone, personal digital assistant and interactive television) were introduced. Functionality to find cheaper rail fares and a day trip planner were added, as well as information about the location of car parks and other points of interest. The product was further enhanced in 2006 to accommodate the wider range of services and provide easier access from the home page to the core door-to-door and live travel news services. In March 2009, Transport Direct added cycle journey planning to the Portal for Manchester and Merseyside. The cycle data collected for Transport Direct was subsequently converted into an OpenStreetMap-compatible format. This was released on GitHub by CycleStreets in January 2018. Usage of the Transport Direct Portal grew significantly after its launch, and by 2008 it was operating at an annual rate of about 18.5 million user sessions. By 2014 it had served more than 160 million travel information requests. The Department for Transport reviewed Transport Direct in 2014 and decided to close the portal as there were plenty of equivalent services provided by the private sector. References External links Transport Direct web site Atos Transport Division DfT Site on Transport Direct Cycling England on the Journey Planner British travel websites E-government in the United Kingdom Non-profit organisations based in the United Kingdom Public transport information systems Route planning websites Transport in the United Kingdom Web Map Services
Transport Direct Portal
[ "Technology" ]
1,033
[ "Public transport information systems", "Information systems" ]
14,782,739
https://en.wikipedia.org/wiki/Alfred%20Stock%20Memorial%20Prize
The Alfred-Stock Memorial Prize or Alfred-Stock-Gedächtnispreis (From 2023: Marianne Baudler Prize) is an award for "an outstanding independent scientific experimental investigation in the field of inorganic chemistry." It is awarded biennially (originally annually) by the German Chemical Society (Gesellschaft Deutscher Chemiker). The award, consisting of a gold medal and money, was created in 1950 in recognition of the pioneering achievements in inorganic chemistry by the German chemist Alfred Stock. In 2022, the GDCh board decided to change the name of the previous Alfred Stock Memorial Prize. The new name is Marianne Baudler Prize. Recipients Source: 1950 Egon Wiberg, München 1951 Walter Hieber, München 1952 Robert Schwarz, Aachen 1953 Josef Goubeau, Stuttgart 1954 Harry Julius Emeléus, Cambridge 1955 Ulrich Hofmann, Darmstadt 1956 Hermann Irving Schlesinger, Chicago 1958 Rudolf Scholder, Karlsruhe 1959 Ernst Otto Fischer, München 1961 Margot Becke-Goehring, Heidelberg 1963 Friedrich Seel, Saarbrücken 1964 Werner Fischer, Hannover 1967 Harald Schäfer, Münster 1970 Gerhard Fritz, Karlsruhe 1972 Max Schmidt, Würzburg 1974 Rudolf Hoppe, Gießen 1976 Heinrich Nöth, München 1979 Ulrich Wannagat, Braunschweig 1981 Hans Georg von Schnering, Stuttgart 1982 Hubert Schmidbaur, München 1983 Eugene G. Rochow, Captiva, Florida, USA 1986 Marianne Baudler, Köln 1988 Helmut Werner, Würzburg 1990 Herbert W. Roesky, Göttingen 1992 Gottfried Huttner, Heidelberg 1994 Otto J. Scherer, Kaiserslautern 1996 Martin Jansen, Bonn 1998 Peter Paetzold, Aachen 2000 Achim Müller, Bielefeld 2002 Peter Jutzi, Bielefeld 2004 Hansgeorg Schnöckel, Karlsruhe 2006 Karl Otto Christe, Los Angeles/USA 2008 Michael Lappert, Brighton/GB 2010 Matthias Driess, Berlin 2012 Werner Uhl, Münster 2014 Wolfgang Kaim, Stuttgart 2016 Holger Braunschweig, Würzburg 2018 Christian Limberg, Berlin 2020 Stefanie Dehnen, Marburg 2022 Franc Meyer, Göttingen (GDCh Prize for Inorganic Chemistry) See also List of chemistry awards References Chemistry awards German awards Awards established in 1950 1950 establishments in West Germany
Alfred Stock Memorial Prize
[ "Technology" ]
473
[ "Science and technology awards", "Chemistry awards", "Science award stubs" ]
14,783,675
https://en.wikipedia.org/wiki/AlphaIC
AlphaIC is a method for assessing the value of information technology (IT) investments that goes beyond simple ROI analyses and looks at how IT affects an organization's intellectual capital. The methodology was developed in 2003–2004 by technologist Paolo Magrassi and economist Alessandro Cravera, based on the observation of two ongoing trends: On one side, research on the information technology (IT) 'productivity paradox' and the quantitative assessment of IT's impact as a general purpose technology. This was mainly stimulated by Erik Brynjolfsson's works in 1998–2002; On the other side, research and practitioners' work on intangible assets (a.k.a. a company's 'intellectual capital'), such as that by Karl-Erik Sveiby at Skandia AV in 1986, Baruch Lev at NYU's Stern in 1996, and Cravera in 1999–2000. The two trends were merged in order to develop a 'value of IT' assessment methodology that goes beyond simple return on investment (ROI) analyses as well as other existing methodologies, all of which are unable to capture the true advantage provided by IT. The methodology is copyrighted; however, a simple and concise description, including that of its application in real-world organizations, is to be found in. References Financial ratios
AlphaIC
[ "Mathematics" ]
270
[ "Financial ratios", "Quantity", "Metrics" ]
14,785,284
https://en.wikipedia.org/wiki/Bo%20Thid%C3%A9
Bo Yngve Thidé (born 8 January 1948) is a Swedish physicist and professor emeritus at Uppsala University. He has studied radio waves and other electromagnetic radiation in space, particularly their interaction with matter and fields. Thidé was born in Gothenburg, Sweden. He received his B.Sc. in 1972, his M.Sc. in 1973, and defended his Ph.D. thesis on semiclassical quantum theory at Uppsala University in 1979. His Ph.D. was obtained under the supervision of professor Per Olof Fröman at the Department of Theoretical Physics, Uppsala University. He has worked at the Swedish Institute of Space Physics in Uppsala since 1980, where he has been a professor since 2000. Bo Thidé discovered electromagnetic emissions stimulated by powerful radio waves in the ionosphere during experiments in August 1981 at the EISCAT facility in Tromsø, Norway. For the first time it was shown that the plasma turbulence excited by powerful radio waves in the ionosphere radiates secondary electromagnetic radiation that can be detected and analysed on the ground. These stimulated electromagnetic emissions (SEE) exhibit a rich spectral structure, particularly near harmonics of the ionospheric electron gyro frequency. The SEE technique is now a useful tool in plasma turbulence research. For his discovery, Thidé was awarded the Edlund Prize of the Royal Swedish Academy of Sciences in 1991. In the mid-1980s, Thidé published a series of papers together with Bengt Lundborg on a highly accurate analytic approximation method to calculate the full three-dimensional wave pattern, spin angular momentum (polarization) and other properties of radio waves propagating in an inhomogeneous, magnetized, collisional plasma. Thidé was involved in a project in the Netherlands where 12,500 antennas designed to analyse solar storms were built. In 2001 he led the LOIS project, which placed tens of thousands of antennas in southern Sweden. The project was a subsidiary of the multinational LOFAR project headed by the Netherlands and aimed to find traces of the first hydrogen atoms formed after the Big Bang. Together with colleagues from Italy and Spain, Thidé discovered in 2010 a new phenomenon in general relativity which allows the detection of spinning black holes by analysing the orbital angular momentum and optical vortex structure of radiation from the accretion disk near the black holes. The results were published in Nature Physics. He was later part of a team which discovered that radio beams from fast-spinning black holes are twisted. Thidé has advocated orbital angular momentum multiplexing for radio transmissions, opening up additional degrees of freedom. Thidé is the author of the book Electromagnetic Field Theory, which is used in the course Classical Electrodynamics at Uppsala University and the University of Padua. He has also worked on fiber optics technology. Since 2016, he has lived outside Söderhamn, where he continues his research from home. See also Orbital angular momentum multiplexing Optical vortex References External links : On-line Textbook by Bo Thidé 1948 births Living people Quantum physicists 20th-century Swedish physicists 21st-century Swedish physicists Uppsala University alumni Scientists from Gothenburg
Bo Thidé
[ "Physics" ]
626
[ "Quantum physicists", "Quantum mechanics" ]
14,786,510
https://en.wikipedia.org/wiki/Spinning%20cone
Spinning cone columns are used in a form of low temperature vacuum steam distillation to gently extract volatile chemicals from liquid foodstuffs while minimising the effect on the taste of the product. For instance, the columns can be used to remove some of the alcohol from wine, to remove "off" smells from cream, and to capture aroma compounds that would otherwise be lost in coffee processing. Mechanism The columns are made of stainless steel. Conical vanes are attached alternately to the wall of the column and to a central rotating shaft. The product is poured in at the top under vacuum, and steam is pumped into the column from below. The vanes provide a large surface area over which volatile compounds can evaporate into the steam, and the rotation ensures a thin layer of the product is constantly moved over the moving cone. It typically takes 20 seconds for the liquid to move through the column, and industrial columns might process . The temperature and pressure can be adjusted depending on the compounds targeted. Wine controversy Improvements in viticulture and warmer vintages have led to increasing levels of sugar in wine grapes, which have translated to higher levels of alcohol - which can reach over 15% ABV in Zinfandels from California. Some producers feel that this unbalances their wine, and use spinning cones to reduce the alcohol by 1-2 percentage points. In this case the wine is passed through the column once to distill out the most volatile aroma compounds which are then put to one side while the wine goes through the column a second time at higher temperature to extract alcohol. The aroma compounds are then mixed back into the wine. Some producers such as Joel Peterson of Ravenswood argue that technological "fixes" such as spinning cones remove a sense of terroir from the wine; if the wine has the tannins and other components to balance 15% alcohol, Peterson argues that it should be accepted on its own terms. The use of spinning cones, and other technologies such as reverse osmosis, was banned in the EU until recently, although for many years they could freely be used in wines imported into the EU from certain New World wine producing countries such as Australia and the USA. In November 2007, the Wine Standards Branch (WSB) of the UK's Food Standards Agency banned the sale of a wine called Sovio, made from Spanish grapes that would normally produce wines of 14% ABV. Sovio runs 40-50% of the wine over spinning cones to reduce the alcohol content to 8%, which means that under EU law it could not be sold as wine as it was below 8.5%; above that, under the rules prevailing at the time, it would be banned because spinning cones could not be used in EU winemaking. Subsequently, the EU legalized dealcoholization with a 2% adjustment limit in its Code of Winemaking Practices, publishing that in its Commission Regulation (EC) No 606/2009 and stipulating that the dealcoholization must be accomplished by physical separation techniques which would embrace the spinning cone method. More recently, in International Organisation of Vine and Wine Resolutions OIV-OENO 394A-2012 and OIV-OENO 394B-2012 of June 22, 2012 EU recommended winemaking procedures were modified to permit use of the spinning cone column and membrane techniques such as reverse osmosis on wine, subject to a limitation on the adjustment. That limitation is currently under review following the proposal by some EU members that it be eliminated altogether. 
The limitation is applicable only to products formally labeled as "wine". See also Winemaking Distillation Spinning band distillation References Further reading External links Flavourtech manufactures spinning cone columns. Oenology Wine terminology Distillation Chemical equipment Separation processes
Spinning cone
[ "Chemistry", "Engineering" ]
765
[ "Chemical equipment", "Distillation", "nan", "Separation processes" ]
14,786,985
https://en.wikipedia.org/wiki/Media%20technology
Media technology may refer to: Data storage devices Art media technology – :Category:Visual arts media Print media technology – :Category:Printing Digital media technology – :Category:Digital media Electronic media technology – :Category:Digital media or :Category:Electronic publishing Media technology university programmes Media psychology, the field of study that examines media, technology and the effect on human behavior
Media technology
[ "Technology" ]
75
[ "Information and communications technology", "Mass media technology" ]
14,787,365
https://en.wikipedia.org/wiki/Glass%20batch%20calculation
Glass batch calculation or glass batching is used to determine the correct mix of raw materials (batch) for a glass melt. Principle The raw materials mixture for glass melting is termed "batch". The batch must be measured properly to achieve a given, desired glass formulation. This batch calculation is based on the common linear regression equation NB = (B^T·B)^−1·B^T·NG, with NB and NG being the molarity 1-column matrices of the batch and glass components respectively, and B being the batching matrix. The symbol "T" stands for the matrix transpose operation, "−1" indicates matrix inversion, and the sign "·" means the scalar product. From the molarity matrices N, percentages by weight (wt%) can easily be derived using the appropriate molar masses. Example calculation An example batch calculation may be demonstrated here. The desired glass composition in wt% is: 67 SiO2, 12 Na2O, 10 CaO, 5 Al2O3, 1 K2O, 2 MgO, 3 B2O3, and as raw materials are used sand, trona, lime, albite, orthoclase, dolomite, and borax. The calculation requires the chemical formulas and molar masses of the glass and batch components. The batching matrix B indicates the relation of the molarity in the batch (columns) and in the glass (rows). For example, the batch component SiO2 adds 1 mol SiO2 to the glass; therefore, the intersection of the first column and row shows "1". Trona adds 1.5 mol Na2O to the glass; albite adds 6 mol SiO2, 1 mol Na2O, and 1 mol Al2O3, and so on. For the example given above, the complete batching matrix is constructed accordingly. The molarity matrix NG of the glass is simply determined by dividing the desired wt% concentrations by the appropriate molar masses, e.g., for SiO2 67/60.0843 = 1.1151. The resulting molarity matrix of the batch, NB, then follows from the regression equation. After multiplication with the appropriate molar masses of the batch ingredients one obtains the batch mass fraction matrix MB. The matrix MB, normalized to sum up to 100%, contains the final batch composition in wt%: 39.216 sand, 16.012 trona, 10.242 lime, 16.022 albite, 4.699 orthoclase, 7.276 dolomite, 6.533 borax. If this batch is melted to a glass, the desired composition given above is obtained. During glass melting, carbon dioxide (from trona, lime, dolomite) and water (from trona, borax) evaporate. A simple glass batch calculation can be found at the website of the University of Washington. Advanced batch calculation by optimization If the number of glass and batch components is not equal, if it is impossible to exactly obtain the desired glass composition using the selected batch ingredients, or if the matrix equation is not solvable for other reasons (i.e., the rows/columns are linearly dependent), the batch composition must be determined by optimization techniques. See also Glass ingredients Calculation of glass properties References Glass engineering and science Glass chemistry
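A minimal numerical sketch of this calculation, using the batching matrix described above and assumed approximate molar masses (e.g. about 60.08 g/mol for SiO2 and 226.03 g/mol for trona); with these assumed values the normalized result reproduces the percentages quoted above to within rounding.

```python
# Sketch of the glass batch calculation N_B = (B^T B)^-1 B^T N_G with NumPy.
# Molar masses are approximate reference values assumed for illustration.
import numpy as np

oxides = ["SiO2", "Na2O", "CaO", "Al2O3", "K2O", "MgO", "B2O3"]
molar_glass = np.array([60.084, 61.979, 56.077, 101.961, 94.196, 40.304, 69.620])
target_wt = np.array([67.0, 12.0, 10.0, 5.0, 1.0, 2.0, 3.0])   # desired glass, wt%

batch = ["sand", "trona", "lime", "albite", "orthoclase", "dolomite", "borax"]
molar_batch = np.array([60.084, 226.03, 100.09, 524.45, 556.66, 184.40, 381.37])

# Batching matrix B: rows = glass oxides, columns = batch components,
# entries = moles of oxide contributed per mole of batch component.
B = np.array([
    [1, 0,   0, 6, 6, 0, 0],   # SiO2
    [0, 1.5, 0, 1, 0, 0, 1],   # Na2O
    [0, 0,   1, 0, 0, 1, 0],   # CaO
    [0, 0,   0, 1, 1, 0, 0],   # Al2O3
    [0, 0,   0, 0, 1, 0, 0],   # K2O
    [0, 0,   0, 0, 0, 1, 0],   # MgO
    [0, 0,   0, 0, 0, 0, 2],   # B2O3
])

N_G = target_wt / molar_glass                 # molarity matrix of the glass
# Least-squares solution; here B is square and invertible, so it is exact.
N_B, *_ = np.linalg.lstsq(B, N_G, rcond=None)
M_B = N_B * molar_batch                       # batch mass fractions
M_B = 100 * M_B / M_B.sum()                   # normalize to 100 wt%

for name, wt in zip(batch, M_B):
    print(f"{name:10s} {wt:6.3f} wt%")
```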
Glass batch calculation
[ "Chemistry", "Materials_science", "Engineering" ]
701
[ "Glass engineering and science", "Glass chemistry", "Materials science" ]
14,788,918
https://en.wikipedia.org/wiki/1-Pentyne
1-Pentyne is an organic compound with the formula . It is a terminal alkyne, in fact the smallest that is liquid at room temperature. The compound is a common terminal alkyne substrate in diverse studies of catalysis. See also 2-Pentyne, an isomer References External links NIST Chemistry WebBook page for 1-pentyne Alkynes
1-Pentyne
[ "Chemistry" ]
81
[ "Organic compounds", "Alkynes" ]
14,789,332
https://en.wikipedia.org/wiki/SPECpower
SPECpower_ssj2008 is the first industry-standard benchmark that evaluates the power and performance characteristics of volume server class computers. It is available from the Standard Performance Evaluation Corporation (SPEC). SPECpower_ssj2008 is SPEC's first attempt at defining server power measurement standards. It was introduced in December, 2007. Several SPEC member companies contributed to the development of the new power-performance measurement standard, including AMD, Dell, Fujitsu Siemens Computers, HP, Intel, IBM, and Sun Microsystems. See also Average CPU power EEMBC EnergyBench IT energy management Performance per watt References Official SPEC website Benchmarks (computing) Evaluation of computers
SPECpower
[ "Technology" ]
167
[ "Computing comparisons", "Evaluation of computers", "Computer performance", "Benchmarks (computing)", "Computers" ]
62,342
https://en.wikipedia.org/wiki/RSX-11
RSX-11 is a discontinued family of multi-user real-time operating systems for PDP-11 computers created by Digital Equipment Corporation. In widespread use through the late 1970s and early 1980s, RSX-11 was influential in the development of later operating systems such as VMS and Windows NT. As the original Real-Time System Executive name suggests, RSX was designed (and commonly used) for real time use, with process control a major use. It was also popular for program development and general computing. History Name and origins RSX-11 began as a port to the PDP-11 architecture of the earlier RSX-15 operating system for the PDP-15 minicomputer, first released in 1971. The main architect for RSX-15 (later renamed XVM/RSX) was Dennis “Dan” Brevik. Commenting on the RSX acronym, Brevik says: RSX-11D and IAS The porting effort first produced small paper tape based real-time executives (RSX-11A, RSX-11C) which later gained limited support for disks (RSX-11B). RSX-11B then evolved into the fully fledged RSX-11D disk-based operating system, which first appeared on the PDP-11/40 and PDP-11/45 in early 1973. The project leader for RSX-11D up to version 4 was Henry Krejci. While RSX-11D was being completed, Digital set out to adapt it for a small memory footprint, giving birth to RSX-11M, first released in 1973. From 1971 to 1976, the RSX-11M project was spearheaded by noted operating system designer Dave Cutler, then at his first project. Principles first tried in RSX-11M appear also in later designs led by Cutler, DEC's VMS and MICA and Microsoft's Windows NT. Under the direction of Ron McLean a derivative of RSX-11M, called RSX-20F, was developed to run on the PDP-11/40 front-end processor for the KL10 PDP-10 CPU. Meanwhile, RSX-11D saw further developments: under the direction of Garth Wolfendale (project leader 1972–1976) the system was redesigned and saw its first commercial release. Support for the 22-bit PDP-11/70 system was added. Wolfendale, originally from the UK, also set up the team that designed and prototyped the Interactive Application System (IAS) operating system in the UK; IAS was a variant of RSX-11D more suitable for time sharing. Later development and release of IAS was led by Andy Wilson, in Digital's UK facilities. Release dates Below are estimated release dates for RSX-11 and IAS. Data is taken from the printing date of the associated documentation. General availability date is expected to come closely after. When manuals have different printing dates, the latest date is used. RSX-11S is a proper subset of RSX-11M, so release dates are always assumed to be the same as the corresponding version of RSX-11M. On the other side, RSX-11M Plus is an enhanced version of RSX-11M, so it is expected to be later than the corresponding version of RSX-11M. Legal ownership, development model and availability RSX-11 is proprietary software. Copyright is asserted in binary files, source code and documentation alike. It was entirely developed internally by Digital. Therefore, no part of it is open source. However a copy of the kernel source is present in every RSX distribution, because it was used during the system generation process. The notable exception to this rule is Micro-RSX, which came with a pre-generated autoconfiguring binary kernel. Full sources was available as a separate product to those who already had a binary license, for reference purposes. Ownership of RSX-11S, RSX-11M, RSX-11M Plus and Micro/RSX was transferred from Digital to Mentec Inc. 
in March 1994 as part of a broader agreement. Mentec Inc. was the US subsidiary of Mentec Limited, an Irish firm specializing in PDP-11 hardware and software support. In 2006 Mentec Inc. was declared bankrupt while Mentec Ltd. was acquired by Irish firm Calyx in December 2006. The PDP-11 software, which was owned by Mentec Inc. was then bought by XX2247 LLC, which is the owner of the software today. It is unclear if new commercial licenses are possible to buy at this time. Hobbyists can run RSX-11M (version 4.3 or earlier) and RSX-11M Plus (version 3.0 or earlier) on the SIMH emulator thanks to a free license granted in May 1998 by Mentec Inc. Legal ownership of RSX-11A, RSX-11B, RSX-11C, RSX-11D, and IAS never changed hands; therefore it passed to Compaq when it acquired Digital in 1998 and then to Hewlett-Packard in 2002. In late 2015 Hewlett-Packard split into two separate companies (HP Inc. and Hewlett Packard Enterprise), so the current owner cannot be firmly established. No new commercial licenses have been issued since at least October 1979 (RSX-11A, RSX-11B, RSX-11C) or 1990 (IAS), and none of these operating systems have ever been licensed for hobbyist use. Versions Main versions RSX-11A, C – small paper tape real time executives RSX-11B – small real time executive based on RSX-11C with support for disk I/O. To start up the system, first DOS-11 was booted, and then RSX-11B was started. RSX-11B programs used DOS-11 macros to perform disk I/O. RSX-11D – a multiuser disk-based system, later evolved into IAS IAS – a timesharing-oriented variant of RSX-11D released at about the same time as the PDP-11/70. The first version of RSX to include DCL (Digital Command Language), which in IAS is known by its original name, PDS (Program Development System). RSX-11M – a multiuser version that was popular on all PDP-11s RSX-11S – a memory-resident version of RSX-11M used in embedded real-time applications. RSX-11S applications were developed under RSX-11M. RSX-11M-Plus – a much extended version of RSX-11M, originally designed to support the multi-processor PDP-11/74, a computer that was never released, but RSX-11M-Plus was then used widely as a standard operating system on the PDP-11/70. RSX-11M-Plus also ran on PDP-11/44, PDP-11/84, PDP-11/94 (Unibus machines), as well as PDP-11/73, PDP-11/83, and PDP-11/93 (Qbus machines). One of the advantages of RSX-11M-Plus over RSX-11M was that larger programs could be created. This was achieved by having the task builder (the linker) build the program to use the separate instruction and data space feature of some PDP-11 models to put executable code and data into separate address spaces. This also allowed programs to run faster, as it reduced the need for "overlays", in which you could overlay object modules at task build time, for very large programs. Overlays were specified in a task build command file. Hardware-specific variants RSX-20F – Customized version of RSX-11M, to be run on PDP-11/40 front end processor operating system for the DEC KL10 processor Micro/RSX – a pre-generated full version of RSX-11M-Plus with hardware autoconfiguration, implemented specifically for the Micro/PDP-11s, a low-cost multi-user system in a box, featuring ease of installation, no system generation, and a special documentation set. Later superseded by RSX-11M Plus. 
P/OS – A version of RSX-11M-Plus that was targeted to the DEC Professional line of PDP-11-based personal computers Clones in the USSR and other Eastern Bloc countries In 1968, the Soviet Government decided that manufacturing copies of IBM mainframes and DEC minicomputers, in cooperation with other COMECON countries, was more practical than pursuing original designs. Cloning of DEC designs began in 1974, under the name of SM EVM ( or ). As happened with ES EVM mainframes based on the System/360 architecture, the Russians and their allies sometimes significantly modified Western designs, and therefore many SM EVM machines were binary-incompatible with DEC offerings at the time. DOS/RV, , ОСРВM – Three names for an unauthorised clone of RSX-11M produced in the Eastern bloc. The name ОСРВ is an acronym for . This system appears to be an exact duplicate of RSX-11M except a different header in binary files. Differences between RSX and ОСРВ are due to hardware differences between SM and PDP computers and to bug-fixing done by Soviet engineers. However, the original RSX-11M was more used than its Russian clone ОСРВ, because the programmers modifying the original RSX-11M code were doing a better job, and patched RSX was more stable than ОСРВ. Other benefits included a faster update cycle for drivers and a larger choice of patches, made possible by a wider user community. A clone of the RSX-11M operating system ran on the Romanian-made CORAL series family of computers (such as CORAL 2030, a clone of PDP-11). Operation RSX-11 was often used for general-purpose timeshare computing, even though this was the target market for the competing RSTS/E operating system. RSX-11 provided features to ensure better than a maximum necessary response time to peripheral device input (i.e. real-time processing), its original intended use. These features included the ability to lock a process (called a task under RSX) into memory as part of system boot up and to assign a process a higher priority so that it would execute before any processes with a lower priority. In order to support large programs within the PDP-11's relatively small virtual address space of 64 KB, a sophisticated semi-automatic overlay system was used; for any given program, this overlay scheme was produced by RSX's taskbuilder program (called TKB). If the overlay scheme was especially complex, taskbuilding could take a rather long time (hours to days). The standard RSX prompt is ">" or "MCR>", (for the "Monitor Console Routine". All commands can be shortened to their first three characters when entered and correspondingly all commands are unique in their first three characters. Only the login command of "HELLO" can be executed by a user not yet logged in. "HELLO" was chosen as the login command because only the first three characters, "HEL", are relevant and this allows a non-logged in user to execute a "HELP" command. When run on certain PDP-11 processors, each DEC operating system displays a characteristic light pattern on the processor console panel when the system is idle. These patterns are created by an idle task running at the lowest level. The RSX-11M light pattern is two sets of lights that sweep outwards to the left and right from the center of the console (inwards if the IND indirect command file processor program was currently running on older versions of RSX). By contrast, the IAS light pattern was a single bar of lights that swept leftwards. 
Correspondingly, a jumbled light pattern (reflecting memory fetches) is a visible indication that the computer is under load (and the idle task is not being executed). Other PDP-11 operating systems such as RSTS/E have their own distinctive patterns in the console lights. See also Files-11, file system used in the RSX-11 and OpenVMS operating systems QIO AST RSTS/E RT-11 References External links Dan Brevik posted a history of precursors to RSX-11 in alt.sys.pdp11. - contains documents which trace RSX-11 back through RSX-15 and the real time executive written by John Neblett in the late 1950s for the RW-300 process control computer by TRW Al Kossow posted some further notes on RSX-11 in alt.sys.pdp11. DEC operating systems Real-time operating systems PDP-11 1972 software Discontinued operating systems
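A toy sketch of the three-character command matching described in the Operation section above; the dispatch table is illustrative (HELLO/HELP and TKB come from the text, the rest is assumed) and is not DEC's actual MCR implementation.

```python
# Illustrative only: commands are resolved by their first three characters.
COMMANDS = {
    "HEL": "HELLO / HELP",   # both match on "HEL", as described above
    "TKB": "task builder",
}

def dispatch(line: str) -> str:
    """Resolve a command by its first three characters only."""
    key = line.strip().upper()[:3]
    return COMMANDS.get(key, "unrecognized command")

for cmd in ("HELLO USER", "help", "TKB PROG=PROG", "FOO"):
    print(f"> {cmd:13s} -> {dispatch(cmd)}")
```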
RSX-11
[ "Technology" ]
2,741
[ "Real-time computing", "Real-time operating systems" ]
62,382
https://en.wikipedia.org/wiki/Catalan%27s%20conjecture
Catalan's conjecture (or Mihăilescu's theorem) is a theorem in number theory that was conjectured by the mathematician Eugène Charles Catalan in 1844 and proven in 2002 by Preda Mihăilescu at Paderborn University. The integers 2^3 and 3^2 are two perfect powers (that is, powers of exponent higher than one) of natural numbers whose values (8 and 9, respectively) are consecutive. The theorem states that this is the only case of two consecutive perfect powers. That is to say, the only solution in natural numbers of x^a − y^b = 1 for a, b > 1 and x, y > 0 is x = 3, a = 2, y = 2, b = 3. History The history of the problem dates back at least to Gersonides, who proved a special case of the conjecture in 1343 where (x, y) was restricted to be (2, 3) or (3, 2). The first significant progress after Catalan made his conjecture came in 1850 when Victor-Amédée Lebesgue dealt with the case b = 2. In 1976, Robert Tijdeman applied Baker's method in transcendence theory to establish a bound on a, b and used existing results bounding x, y in terms of a, b to give an effective upper bound for x, y, a, b. Michel Langevin computed a value of for the bound, resolving Catalan's conjecture for all but a finite number of cases. Catalan's conjecture was proven by Preda Mihăilescu in April 2002. The proof was published in the Journal für die reine und angewandte Mathematik, 2004. It makes extensive use of the theory of cyclotomic fields and Galois modules. An exposition of the proof was given by Yuri Bilu in the Séminaire Bourbaki. In 2005, Mihăilescu published a simplified proof. Pillai's conjecture Pillai's conjecture concerns a general difference of perfect powers: it is an open problem initially proposed by S. S. Pillai, who conjectured that the gaps in the sequence of perfect powers tend to infinity. This is equivalent to saying that each positive integer occurs only finitely many times as a difference of perfect powers: more generally, in 1931 Pillai conjectured that for fixed positive integers A, B, C the equation has only finitely many solutions (x, y, m, n) with (m, n) ≠ (2, 2). Pillai proved that for fixed A, B, x, y, and for any λ less than 1, we have uniformly in m and n. The general conjecture would follow from the ABC conjecture. Pillai's conjecture means that for every natural number n, there are only finitely many pairs of perfect powers with difference n. The list below shows, for n ≤ 64, all solutions for perfect powers less than 10^18, such that the exponent of both powers is greater than 1. The number of such solutions for each n is listed at . See also for the smallest solution (> 0). See also Beal's conjecture Equation x^y = y^x Fermat–Catalan conjecture Mordell curve Ramanujan–Nagell equation Størmer's theorem Tijdeman's theorem Thaine's theorem Notes References Predates Mihăilescu's proof. External links Ivars Peterson's MathTrek On difference of perfect powers Jeanine Daems: A Cyclotomic Proof of Catalan's Conjecture Conjectures Conjectures that have been proved Diophantine equations Theorems in number theory Abc conjecture
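The statements above can be checked empirically for small values with a brute-force sketch; the search bound of 10^7 below is arbitrary and chosen only to keep the run fast.

```python
# Enumerate perfect powers m^k (m >= 2, k >= 2) below a bound, then look for
# pairs at small differences, in the spirit of Catalan's and Pillai's statements.
def perfect_powers(limit):
    """All m^k < limit with m >= 2 and k >= 2, sorted ascending."""
    powers = set()
    m = 2
    while m * m < limit:
        v = m * m
        while v < limit:
            powers.add(v)
            v *= m
        m += 1
    return sorted(powers)

LIMIT = 10**7
pp = perfect_powers(LIMIT)
pp_set = set(pp)

# Catalan/Mihailescu: difference 1 should occur only for the pair (8, 9).
print("difference 1:", [(p, p + 1) for p in pp if p + 1 in pp_set])

# Pillai: for each fixed n, only finitely many pairs are expected.
for n in range(2, 11):
    pairs = [(p, p + n) for p in pp if p + n in pp_set]
    print(f"difference {n}: {len(pairs)} pairs below {LIMIT}: {pairs[:5]}")
```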
Catalan's conjecture
[ "Mathematics" ]
712
[ "Unsolved problems in mathematics", "Mathematical objects", "Equations", "Theorems in number theory", "Diophantine equations", "Conjectures", "Conjectures that have been proved", "Mathematical problems", "Abc conjecture", "Mathematical theorems", "Number theory" ]
62,389
https://en.wikipedia.org/wiki/Langmuir%20probe
A Langmuir probe is a device used to determine the electron temperature, electron density, and electric potential of a plasma. It works by inserting one or more electrodes into a plasma, with a constant or time-varying electric potential between the various electrodes or between them and the surrounding vessel. The measured currents and potentials in this system allow the determination of the physical properties of the plasma. I-V characteristic of the Debye sheath The beginning of Langmuir probe theory is the I–V characteristic of the Debye sheath, that is, the current density flowing to a surface in a plasma as a function of the voltage drop across the sheath. The analysis presented here indicates how the electron temperature, electron density, and plasma potential can be derived from the I–V characteristic. In some situations a more detailed analysis can yield information on the ion density (), the ion temperature , or the electron energy distribution function (EEDF) or . Ion saturation current density Consider first a surface biased to a large negative voltage. If the voltage is large enough, essentially all electrons (and any negative ions) will be repelled. The ion velocity will satisfy the Bohm sheath criterion, which is, strictly speaking, an inequality, but which is usually marginally fulfilled. The Bohm criterion in its marginal form says that the ion velocity at the sheath edge is simply the sound speed given by . The ion temperature term is often neglected, which is justified if the ions are cold. Z is the (average) charge state of the ions, and is the adiabatic coefficient for the ions. The proper choice of is a matter of some contention. Most analyses use , corresponding to isothermal ions, but some kinetic theory suggests that . For and , using the larger value results in the conclusion that the density is times smaller. Uncertainties of this magnitude arise several places in the analysis of Langmuir probe data and are very difficult to resolve. The charge density of the ions depends on the charge state Z, but quasineutrality allows one to write it simply in terms of the electron density as , where is the charge of an electron and is the number density of electrons. Using these results we have the current density to the surface due to the ions. The current density at large negative voltages is due solely to the ions and, except for possible sheath expansion effects, does not depend on the bias voltage, so it is referred to as the ion saturation current density and is given by where is as defined above. The plasma parameters, in particular, the density, are those at the sheath edge. Exponential electron current As the voltage of the Debye sheath is reduced, the more energetic electrons are able to overcome the potential barrier of the electrostatic sheath. We can model the electrons at the sheath edge with a Maxwell–Boltzmann distribution, i.e., , except that the high energy tail moving away from the surface is missing, because only the lower energy electrons moving toward the surface are reflected. The higher energy electrons overcome the sheath potential and are absorbed. The mean velocity of the electrons which are able to overcome the voltage of the sheath is , where the cut-off velocity for the upper integral is . is the voltage across the Debye sheath, that is, the potential at the sheath edge minus the potential of the surface. For a large voltage compared to the electron temperature, the result is . 
With this expression, we can write the electron contribution to the current to the probe in terms of the ion saturation current as , valid as long as the electron current is not more than two or three times the ion current. Floating potential The total current, of course, is the sum of the ion and electron currents: . We are using the convention that current from the surface into the plasma is positive. An interesting and practical question is the potential of a surface to which no net current flows. It is easily seen from the above equation that . If we introduce the ion reduced mass , we can write Since the floating potential is the experimentally accessible quantity, the current (below electron saturation) is usually written as . Electron saturation current When the electrode potential is equal to or greater than the plasma potential, then there is no longer a sheath to reflect electrons, and the electron current saturates. Using the Boltzmann expression for the mean electron velocity given above with and setting the ion current to zero, the electron saturation current density would be Although this is the expression usually given in theoretical discussions of Langmuir probes, the derivation is not rigorous and the experimental basis is weak. The theory of double layers typically employs an expression analogous to the Bohm criterion, but with the roles of electrons and ions reversed, namely where the numerical value was found by taking Ti=Te and γi=γe. In practice, it is often difficult and usually considered uninformative to measure the electron saturation current experimentally. When it is measured, it is found to be highly variable and generally much lower (a factor of three or more) than the value given above. Often a clear saturation is not seen at all. Understanding electron saturation is one of the most important outstanding problems of Langmuir probe theory. Effects of the bulk plasma The Debye sheath theory explains the basic behavior of Langmuir probes, but is not complete. Merely inserting an object like a probe into a plasma changes the density, temperature, and potential at the sheath edge and perhaps everywhere. Changing the voltage on the probe will also, in general, change various plasma parameters. Such effects are less well understood than sheath physics, but they can at least in some cases be roughly accounted. Pre-sheath The Bohm criterion requires the ions to enter the Debye sheath at the sound speed. The potential drop that accelerates them to this speed is called the pre-sheath. It has a spatial scale that depends on the physics of the ion source but which is large compared to the Debye length and often of the order of the plasma dimensions. The magnitude of the potential drop is equal to (at least) The acceleration of the ions also entails a decrease in the density, usually by a factor of about 2 depending on the details. Resistivity Collisions between ions and electrons will also affect the I-V characteristic of a Langmuir probe. When an electrode is biased to any voltage other than the floating potential, the current it draws must pass through the plasma, which has a finite resistivity. The resistivity and current path can be calculated with relative ease in an unmagnetized plasma. In a magnetized plasma, the problem is much more difficult. In either case, the effect is to add a voltage drop proportional to the current drawn, which shears the characteristic. 
The deviation from an exponential function is usually not possible to observe directly, so that the flattening of the characteristic is usually misinterpreted as a larger plasma temperature. Looking at it from the other side, any measured I-V characteristic can be interpreted as a hot plasma, where most of the voltage is dropped in the Debye sheath, or as a cold plasma, where most of the voltage is dropped in the bulk plasma. Without quantitative modeling of the bulk resistivity, Langmuir probes can only give an upper limit on the electron temperature. Sheath expansion It is not enough to know the current density as a function of bias voltage since it is the absolute current which is measured. In an unmagnetized plasma, the current-collecting area is usually taken to be the exposed surface area of the electrode. In a magnetized plasma, the projected area is taken, that is, the area of the electrode as viewed along the magnetic field. If the electrode is not shadowed by a wall or other nearby object, then the area must be doubled to account for current coming along the field from both sides. If the electrode dimensions are not small in comparison to the Debye length, then the size of the electrode is effectively increased in all directions by the sheath thickness. In a magnetized plasma, the electrode is sometimes assumed to be increased in a similar way by the ion Larmor radius. The finite Larmor radius allows some ions to reach the electrode that would have otherwise gone past it. The details of the effect have not been calculated in a fully self-consistent way. If we refer to the probe area including these effects as (which may be a function of the bias voltage) and make the assumptions , , and , and ignore the effects of bulk resistivity, and electron saturation, then the I-V characteristic becomes , where . Magnetized plasmas The theory of Langmuir probes is much more complex when the plasma is magnetized. The simplest extension of the unmagnetized case is simply to use the projected area rather than the surface area of the electrode. For a long cylinder far from other surfaces, this reduces the effective area by a factor of π/2 = 1.57. As mentioned before, it might be necessary to increase the radius by about the thermal ion Larmor radius, but not above the effective area for the unmagnetized case. The use of the projected area seems to be closely tied with the existence of a magnetic sheath. Its scale is the ion Larmor radius at the sound speed, which is normally between the scales of the Debye sheath and the pre-sheath. The Bohm criterion for ions entering the magnetic sheath applies to the motion along the field, while at the entrance to the Debye sheath it applies to the motion normal to the surface. This results in a reduction of the density by the sine of the angle between the field and the surface. The associated increase in the Debye length must be taken into account when considering ion non-saturation due to sheath effects. Especially interesting and difficult to understand is the role of cross-field currents. Naively, one would expect the current to be parallel to the magnetic field along a flux tube. In many geometries, this flux tube will end at a surface in a distant part of the device, and this spot should itself exhibit an I-V characteristic. The net result would be the measurement of a double-probe characteristic; in other words, electron saturation current equal to the ion saturation current. 
When this picture is considered in detail, it is seen that the flux tube must charge up and the surrounding plasma must spin around it. The current into or out of the flux tube must be associated with a force that slows down this spinning. Candidate forces are viscosity, friction with neutrals, and inertial forces associated with plasma flows, either steady or fluctuating. It is not known which force is strongest in practice, and in fact it is generally difficult to find any force that is powerful enough to explain the characteristics actually measured. It is also likely that the magnetic field plays a decisive role in determining the level of electron saturation, but no quantitative theory is as yet available. Electrode configurations Once one has a theory of the I-V characteristic of an electrode, one can proceed to measure it and then fit the data with the theoretical curve to extract the plasma parameters. The straightforward way to do this is to sweep the voltage on a single electrode, but, for a number of reasons, configurations using multiple electrodes or exploring only a part of the characteristic are used in practice. Single probe The most straightforward way to measure the I-V characteristic of a plasma is with a single probe, consisting of one electrode biased with a voltage ramp relative to the vessel. The advantages are simplicity of the electrode and redundancy of information, i.e. one can check whether the I-V characteristic has the expected form. Potentially additional information can be extracted from details of the characteristic. The disadvantages are more complex biasing and measurement electronics and a poor time resolution. If fluctuations are present (as they always are) and the sweep is slower than the fluctuation frequency (as it usually is), then the I-V is the average current as a function of voltage, which may result in systematic errors if it is analyzed as though it were an instantaneous I-V. The ideal situation is to sweep the voltage at a frequency above the fluctuation frequency but still below the ion cyclotron frequency. This, however, requires sophisticated electronics and a great deal of care. Double probe An electrode can be biased relative to a second electrode, rather than to the ground. The theory is similar to that of a single probe, except that the current is limited to the ion saturation current for both positive and negative voltages. In particular, if is the voltage applied between two identical electrodes, the current is given by; , which can be rewritten using as a hyperbolic tangent: . One advantage of the double probe is that neither electrode is ever very far above floating, so the theoretical uncertainties at large electron currents are avoided. If it is desired to sample more of the exponential electron portion of the characteristic, an asymmetric double probe may be used, with one electrode larger than the other. If the ratio of the collection areas is larger than the square root of the ion to electron mass ratio, then this arrangement is equivalent to the single tip probe. If the ratio of collection areas is not that big, then the characteristic will be in-between the symmetric double tip configuration and the single-tip configuration. If is the area of the larger tip then: Another advantage is that there is no reference to the vessel, so it is to some extent immune to the disturbances in a radio frequency plasma. 
On the other hand, it shares the limitations of a single probe concerning complicated electronics and poor time resolution. In addition, the second electrode not only complicates the system, but it makes it susceptible to disturbance by gradients in the plasma. Triple probe An elegant electrode configuration is the triple probe, consisting of two electrodes biased with a fixed voltage and a third which is floating. The bias voltage is chosen to be a few times the electron temperature so that the negative electrode draws the ion saturation current, which, like the floating potential, is directly measured. A common rule of thumb for this voltage bias is 3/e times the expected electron temperature. Because the biased tip configuration is floating, the positive probe can draw at most an electron current equal in magnitude and opposite in polarity to the ion saturation current drawn by the negative probe, given by: and as before the floating tip draws effectively no current: . Assuming that: 1.) the electron energy distribution in the plasma is Maxwellian, 2.) the mean free path of the electrons is greater than the ion sheath about the tips and larger than the probe radius, and 3.) the probe sheath sizes are much smaller than the probe separation, then the current to any probe can be considered composed of two parts: the high energy tail of the Maxwellian electron distribution, and the ion saturation current: where the current Ie is the thermal current. Specifically, , where S is surface area, Je is electron current density, and ne is electron density. Assuming that the ion and electron saturation currents are the same for each probe, the formulas for the current to each of the probe tips take the form . It is then simple to show that the relations from above, specifying that I+ = -I− and Ifl = 0, give , a transcendental equation in terms of applied and measured voltages and the unknown Te that, in the limit qeVBias = qe(V+-V−) >> kTe, becomes . That is, the voltage difference between the positive and floating electrodes is proportional to the electron temperature. (This was especially important in the sixties and seventies before sophisticated data processing became widely available.) More sophisticated analysis of triple probe data can take into account such factors as incomplete saturation, non-saturation, and unequal areas. Triple probes have the advantage of simple biasing electronics (no sweeping required), simple data analysis, excellent time resolution, and insensitivity to potential fluctuations (whether imposed by an rf source or inherent fluctuations). Like double probes, they are sensitive to gradients in plasma parameters. Special arrangements Arrangements with four (tetra probe) or five (penta probe) electrodes have sometimes been used, but the advantage over triple probes has never been entirely convincing. The spacing between probes must be larger than the Debye length of the plasma to prevent an overlapping Debye sheath. A pin-plate probe consists of a small electrode directly in front of a large electrode, the idea being that the voltage sweep of the large probe can perturb the plasma potential at the sheath edge and thereby aggravate the difficulty of interpreting the I-V characteristic. The floating potential of the small electrode can be used to correct for changes in potential at the sheath edge of the large probe.
Experimental results from this arrangement look promising, but experimental complexity and residual difficulties in the interpretation have prevented this configuration from becoming standard. Various geometries have been proposed for use as ion temperature probes, for example, two cylindrical tips that rotate past each other in a magnetized plasma. Since shadowing effects depend on the ion Larmor radius, the results can be interpreted in terms of ion temperature. The ion temperature is an important quantity that is very difficult to measure. Unfortunately, it is also very difficult to analyze such probes in a fully self-consistent way. Emissive probes use an electrode heated either electrically or by exposure to the plasma. When the electrode is biased more positive than the plasma potential, the emitted electrons are pulled back to the surface, so the I-V characteristic is hardly changed. As soon as the electrode is biased negative with respect to the plasma potential, the emitted electrons are repelled and contribute a large negative current. The onset of this current or, more sensitively, the onset of a discrepancy between the characteristics of an unheated and a heated electrode, is a sensitive indicator of the plasma potential. To measure fluctuations in plasma parameters, arrays of electrodes are used, usually one- but occasionally two-dimensional. A typical array has a spacing of 1 mm and a total of 16 or 32 electrodes. A simpler arrangement to measure fluctuations is a negatively biased electrode flanked by two floating electrodes. The ion-saturation current is taken as a surrogate for the density and the floating potential as a surrogate for the plasma potential. This allows a rough measurement of the turbulent particle flux. Cylindrical Langmuir probe in electron flow Most often, the Langmuir probe is a small-sized electrode inserted into a plasma which is connected to an external circuit that measures the properties of the plasma with respect to ground. The ground is typically an electrode with a large surface area and is usually in contact with the same plasma (very often the metallic wall of the chamber). This allows the probe to measure the I-V characteristic of the plasma. The probe measures the characteristic current of the plasma when the probe is biased with a potential . Relations between the probe I-V characteristic and parameters of isotropic plasma were found by Irving Langmuir, and they can be derived most simply for a planar probe of large surface area (ignoring the edge-effects problem). Let us choose a point in the plasma at a distance from the probe surface where the electric field of the probe is negligible, while each plasma electron passing this point could reach the probe surface without collisions with plasma components: , is the Debye length and is the electron free path calculated for its total cross section with plasma components. In the vicinity of this point we can imagine a small element of the surface area parallel to the probe surface.
The elementary current of plasma electrons passing throughout in a direction of the probe surface can be written in the form where is a scalar of the electron thermal velocity vector , is the element of the solid angle with its relative value , is the angle between perpendicular to the probe surface recalled from the point and the radius-vector of the electron thermal velocity forming a spherical layer of thickness in velocity space, and is the electron distribution function normalized to unity Taking into account uniform conditions along the probe surface (boundaries are excluded), , we can take double integral with respect to the angle , and with respect to the velocity , from the expression (), after substitution Eq. () in it, to calculate a total electron current on the probe where is the probe potential with respect to the potential of plasma , is the lowest electron velocity value at which the electron still could reach the probe surface charged to the potential , is the upper limit of the angle at which the electron having initial velocity can still reach the probe surface with a zero-value of its velocity at this surface. That means the value is defined by the condition Deriving the value from Eq. () and substituting it in Eq. (), we can obtain the probe I-V characteristic (neglecting the ion current) in the range of the probe potential in the form Differentiating Eq. () twice with respect to the potential , one can find the expression describing the second derivative of the probe I-V characteristic (obtained firstly by M. J. Druyvestein defining the electron distribution function over velocity in the evident form. M. J. Druyvestein has shown in particular that Eqs. () and () are valid for description of operation of the probe of any arbitrary convex geometrical shape. Substituting the Maxwellian distribution function: where is the most probable velocity, in Eq. () we obtain the expression From which the very useful in practice relation follows allowing one to derive the electron energy (for Maxwellian distribution function only!) by a slope of the probe I-V characteristic in a semilogarithmic scale. Thus in plasmas with isotropic electron distributions, the electron current on a surface of the cylindrical Langmuir probe at plasma potential is defined by the average electron thermal velocity and can be written down as equation (see Eqs. (), () at ) where is the electron concentration, is the probe radius, and is its length. It is obvious that if plasma electrons form an electron wind (flow) across the cylindrical probe axis with a velocity , the expression holds true. In plasmas produced by gas-discharge arc sources as well as inductively coupled sources, the electron wind can develop the Mach number . Here the parameter is introduced along with the Mach number for simplification of mathematical expressions. Note that , where is the most probable velocity for the Maxwellian distribution function, so that . Thus the general case where is of the theoretical and practical interest. Corresponding physical and mathematical considerations presented in Refs. [9,10] has shown that at the Maxwellian distribution function of the electrons in a reference system moving with the velocity across axis of the cylindrical probe set at plasma potential , the electron current on the probe can be written down in the form where and are Bessel functions of imaginary arguments and Eq. () is reduced to Eq. () at being reduced to Eq. () at . 
The second derivative of the probe I-V characteristic with respect to the probe potential can be presented in this case in the form (see Fig. 3), where the electron energy is expressed in eV. All parameters of the electron population: , , and in plasma can be derived from the second derivative of the experimental probe I-V characteristic by its least-squares best fit with the theoretical curve expressed by Eq. (). For details, and for the general case of non-Maxwellian electron distribution functions, see the cited references. Practical considerations For laboratory and technical plasmas, the electrodes are most commonly tungsten or tantalum wires several thousandths of an inch thick, because they have a high melting point but can be made small enough not to perturb the plasma. Although the melting point is somewhat lower, molybdenum is sometimes used because it is easier to machine and solder than tungsten. For fusion plasmas, graphite electrodes with dimensions from 1 to 10 mm are usually used because they can withstand the highest power loads (also sublimating at high temperatures rather than melting), and result in reduced bremsstrahlung radiation (with respect to metals) due to the low atomic number of carbon. The electrode surface exposed to the plasma must be defined, e.g. by insulating all but the tip of a wire electrode. If there can be significant deposition of conducting materials (metals or graphite), then the insulator should be separated from the electrode by a small gap to prevent short-circuiting. In a magnetized plasma, it appears to be best to choose a probe size a few times larger than the ion Larmor radius. A point of contention is whether it is better to use proud probes, where the angle between the magnetic field and the surface is at least 15°, or flush-mounted probes, which are embedded in the plasma-facing components and generally have an angle of 1 to 5°. Many plasma physicists feel more comfortable with proud probes, which have a longer tradition and possibly are less perturbed by electron saturation effects, although this is disputed. Flush-mounted probes, on the other hand, being part of the wall, are less perturbative. Knowledge of the field angle is necessary with proud probes to determine the fluxes to the wall, whereas it is necessary with flush-mounted probes to determine the density. In very hot and dense plasmas, as found in fusion research, it is often necessary to limit the thermal load to the probe by limiting the exposure time. A reciprocating probe is mounted on an arm that is moved into and back out of the plasma, usually in about one second by means of either a pneumatic drive or an electromagnetic drive using the ambient magnetic field. Pop-up probes are similar, but the electrodes rest behind a shield and are only moved the few millimeters necessary to bring them into the plasma near the wall. A Langmuir probe can be purchased off the shelf for on the order of 15,000 U.S. dollars, or it can be built by an experienced researcher or technician. When working at frequencies under 100 MHz, it is advisable to use blocking filters and take necessary grounding precautions. In low temperature plasmas, in which the probe does not get hot, surface contamination may become an issue. This effect can cause hysteresis in the I-V curve and may limit the current collected by the probe. A heating mechanism or a glow discharge plasma may be used to clean the probe and prevent misleading results.
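For orientation, the quantities discussed in the sections above can be estimated with standard textbook formulas rather than this article's own notation: the Bohm speed with the ion-temperature term neglected, the ion saturation current with an assumed presheath factor of about 0.6, the floating potential for a hydrogen plasma, and the triple-probe electron temperature from the transcendental relation quoted in the triple-probe section. The numerical inputs in the sketch below are made-up example values.

```python
# Rough estimates using standard Langmuir-probe formulas; all inputs are examples.
import math

e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg

def bohm_speed(Te_eV, mi=m_p):
    """Ion sound (Bohm) speed, neglecting the ion-temperature term."""
    return math.sqrt(e * Te_eV / mi)

def ion_saturation_current(ne, Te_eV, area, mi=m_p, presheath=0.61):
    """I_sat = presheath * e * n_e * c_s * A; the ~0.6 factor is the commonly
    assumed presheath density drop (about exp(-1/2)) relative to the bulk."""
    return presheath * e * ne * bohm_speed(Te_eV, mi) * area

def floating_potential(Te_eV, mi=m_p):
    """Floating potential relative to the plasma potential (negative)."""
    return -0.5 * Te_eV * math.log(mi / (2.0 * math.pi * m_e))

def triple_probe_Te(dV_plus_float, dV_bias):
    """Solve (1-exp(-dVpf/Te))/(1-exp(-dVbias/Te)) = 1/2 for Te in eV by bisection."""
    f = lambda Te: (1 - math.exp(-dV_plus_float / Te)) / (1 - math.exp(-dV_bias / Te)) - 0.5
    lo, hi = 1e-3, 1e3
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ne, Te, area = 1e18, 5.0, 1e-6        # m^-3, eV, m^2 (example numbers)
print(f"c_s             = {bohm_speed(Te):.3e} m/s")
print(f"I_sat           = {ion_saturation_current(ne, Te, area)*1e3:.3f} mA")
print(f"V_float - V_p   = {floating_potential(Te):.2f} V (hydrogen)")
print(f"triple-probe Te = {triple_probe_Te(3.6, 60.0):.2f} eV "
      f"(large-bias limit: {3.6/math.log(2):.2f} eV)")
```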
See also Dual segmented Langmuir probe Plasma parameters Further reading References External links Notes on Langmuir Probe Theory and Design by F.F. Chen Plasma diagnostics Measuring instruments
Langmuir probe
[ "Physics", "Technology", "Engineering" ]
5,498
[ "Plasma diagnostics", "Measuring instruments", "Plasma physics" ]
62,426
https://en.wikipedia.org/wiki/Tier%201%20network
A Tier 1 network is an Internet Protocol (IP) network that can reach every other network on the Internet solely via settlement-free interconnection (also known as settlement-free peering). Tier 1 networks can exchange traffic with other Tier 1 networks without paying any fees for the exchange of traffic in either direction. In contrast, some Tier 2 networks and all Tier 3 networks must pay to transmit traffic on other networks. There is no authority that defines tiers of networks participating in the Internet. The most common and well-accepted definition of a Tier 1 network is a network that can reach every other network on the Internet without purchasing IP transit or paying for peering. By this definition, a Tier 1 network must be a transit-free network (purchases no transit) that peers for no charge with every other Tier 1 network and can reach all major networks on the Internet. Not all transit-free networks are Tier 1 networks, as it is possible to become transit-free by paying for peering, and it is also possible to be transit-free without being able to reach all major networks on the Internet. The most widely quoted source for identifying Tier 1 networks is published by Renesys Corporation, but the base information to prove the claim is publicly accessible from many locations, such as the RIPE RIS database, the Oregon Route Views servers, Packet Clearing House, and others. It can be difficult to determine whether a network is paying for peering or transit, as these business agreements are rarely public information, or are covered under a non-disclosure agreement. The Internet peering community is roughly the set of peering coordinators present at the Internet exchange points on more than one continent. The subset representing Tier 1 networks is collectively understood in a loose sense, but not published as such. Common definitions of Tier 2 and Tier 3 networks: Tier 2 network: A network that peers for no charge with some networks, but still purchases IP transit or pays for peering to reach at least some portion of the Internet. Tier 3 network: A network that solely purchases transit/peering from other networks to participate in the Internet. History The original Internet backbone was the ARPANET when it provided the routing between most participating networks. The development of the British JANET (1984) and U.S. NSFNET (1985) infrastructure programs to serve their nations' higher education communities, regardless of discipline, resulted in the NSFNet backbone by 1989. The Internet could be defined as the collection of all networks connected and able to interchange Internet Protocol datagrams with this backbone. Such was the weight of the NSFNET program and its funding ($200 million from 1986 to 1995)—and the quality of the protocols themselves—that by 1990, when the ARPANET itself was finally decommissioned, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide. When the Internet was opened to the commercial markets, multiple for-profit Internet backbone and access providers emerged. The network routing architecture then became decentralized and this meant a need for exterior routing protocols: in particular, the Border Gateway Protocol emerged. New Tier 1 ISPs and their peering agreements supplanted the government-sponsored NSFNet, that program being officially terminated on April 30, 1995. The NSFnet-supplied regional networks then sought to buy national-scale Internet connectivity from these now-numerous private long-haul networks. 
Routing through peering A bilateral private peering agreement typically involves a direct physical link between two partners. Traffic from one network to the other is then primarily routed through that direct link. A Tier 1 network may have various such links to other Tier 1 networks. Peering is founded on the principle of equality of traffic between the partners, and as such disagreements may arise in which one of the partners unilaterally disconnects the link in order to force the other into a payment scheme. Such disruptive de-peering has happened several times during the first decade of the 21st century. When this involves large-scale networks with many millions of customers, it may effectively partition a part of the Internet involving those carriers, especially if they decide to disallow routing through alternate routes. This is not largely a technical issue but a commercial matter in which a financial dispute is fought out using the other party's customers as hostages to obtain a better negotiating position. In the worst case, single-homed customers of each network will not be able to reach the other network at all. The de-peering party then hopes that the other network's customers will be hurt more by the decision than its own customers, which may eventually conclude the negotiations in its favor. Lower tier ISPs and other parties not involved in the dispute may be unaffected by such a partition, as there typically exist multiple routes onto the same network. The disputes referenced have also typically involved transit-free peering in which one player only exchanged data with the other that involved each other's networks—there was no data transiting through the other's network destined for other parts of the Internet. By the strict definition of peering and the strict definition of a Tier 1 network, a Tier 1 network only peers with other Tier 1 networks and has no transit routes going anywhere. More practically speaking, Tier 1 networks serve as transit networks for lower tier networks and only peer with other Tier 1 networks that offer the same services on an adequate scale—effectively being "peers" in the truest sense of the word. More appropriately then, peering means the exchange of an equitable and fair amount of data-miles between two networks, agreements of which do not preclude any pay-for-transit contracts to exist between the very same parties. On the subject of routing, settlement-free peering involves conditions disallowing the abuse of the other's network by sending it traffic not destined for that network (i.e. intended for transit). Transit agreements, however, would typically cater for just such outbound packets. Tier 1 providers are more central to the Internet backbone and would only purchase transit from other Tier 1 providers, while selling transit to providers of all tiers. Given their huge networks, Tier 1 providers often do not participate in public Internet Exchanges but rather sell transit services to such participants and engage in private peering. In the most logical definition, a Tier 1 provider will never pay for transit, because the set of all Tier 1 providers sells transit to all of the lower tier providers everywhere, and because, by the peering agreement, all the customers of any Tier 1 provider already have access to all the customers of all the other Tier 1 providers without the Tier 1 provider itself having to pay transit costs to the other networks.
Effectively, the actual transit costs incurred by provider A on behalf of provider B are logically identical to the transit costs incurred by provider B on behalf of provider A—hence there not being any payment required. List of Tier 1 networks These networks are universally recognized as Tier 1 networks, because they can reach the entire internet (IPv4 and IPv6) via settlement-free peering. The CAIDA AS rank is a rank of importance on the internet. While most of these Tier 1 providers offer global coverage (based on the published network map on their respective public websites), there are some which are restricted geographically. However these do offer global coverage for mobiles and IP-VPN type services which are unrelated to being a Tier 1 provider. A 2008 report shows Internet traffic relying less on U.S. networks than previously. Regional Tier 1 networks A common point of contention regarding Tier 1 networks is the concept of a regional Tier 1 network. A regional Tier 1 network is a network which is not transit-free globally, but which maintains many of the classic behaviors and motivations of a Tier 1 network within a specific region. A typical scenario for this characteristic involves a network that was the incumbent telecommunications company in a specific country or region, usually tied to some level of government-supported monopoly. Within their specific countries or regions of origin, these networks maintain peering policies which mimic those of Tier 1 networks (such as lack of openness to new peering relationships and having existing peering with every other major network in that region). However, this network may then extend to another country, region, or continent outside of its core region of operations, where it may purchase transit or peer openly like a Tier 2 network. A commonly cited example of these behaviors involves the incumbent carriers within Australia, who will not peer with new networks in Australia under any circumstances, but who will extend their networks to the United States and peer openly with many networks. Less extreme examples of much less restrictive peering requirements being set for regions in which a network peers, but does not sell services or have a significant market share, are relatively common among many networks, not just regional Tier 1 networks. While the classification regional Tier 1 holds some merit for understanding the peering motivations of such a network within different regions, these networks do not meet the requirements of a true global Tier 1 because they are not transit-free globally. Other major networks This is a list of networks that are often considered and close to the status of Tier 1, because they can reach the majority (50+%) of the internet via settlement-free peering with their global rings. However, routes to one or more Tier 1 are missing or paid. Therefore, they are technically Tier 2, though practically something in between. See also Optical Carrier transmission rates Interconnect agreement Internet exchange point List of Internet exchange points References Internet architecture
Tier 1 network
[ "Technology" ]
1,916
[ "Internet architecture", "IT infrastructure" ]
62,437
https://en.wikipedia.org/wiki/SCADA
SCADA (an acronym for supervisory control and data acquisition) is a control system architecture comprising computers, networked data communications and graphical user interfaces for high-level supervision of machines and processes. It also covers sensors and other devices, such as programmable logic controllers, which interface with process plant or machinery. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA computer system. The subordinated operations, e.g. the real-time control logic or controller calculations, are performed by networked modules connected to the field sensors and actuators. The SCADA concept was developed to be a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes that can span multiple sites, and work over large distances. It is one of the most commonly used types of industrial control systems. Control operations The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices. Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves. Level 1 contains the industrialised input/output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets. Level 4 is the production scheduling level. Level 1 contains the programmable logic controllers (PLCs) or remote terminal units (RTUs). Level 2 contains the SCADA supervisory software; data acquisition begins at the RTU or PLC level, with readings and equipment status reports communicated up to level 2 SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the human-machine interface (HMI) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a historian, often built on a commodity database management system, to allow trending and other analytical auditing. SCADA systems typically use a tag database, which contains data elements called tags or points, which relate to specific instrumentation or actuators within the process system. Data is accumulated against these unique process control equipment tag references. Components A SCADA system usually consists of the following main elements: Supervisory computers This is the core of the SCADA system, gathering data on the process and sending control commands to the field connected devices. It refers to the computer and software responsible for communicating with the field connection controllers, which are RTUs and PLCs, and includes the HMI software running on operator workstations. In smaller SCADA systems, the supervisory computer may be composed of a single PC, in which case the HMI is a part of this computer. In larger SCADA systems, the master station may include several HMIs hosted on client computers, multiple servers for data acquisition, distributed software applications, and disaster recovery sites.
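Returning briefly to the tag database mentioned at the end of the Control operations description above (the discussion of the supervisory computers continues below), it can be pictured as a keyed store of the latest value reported for each instrument or actuator. The sketch below is only an illustration under assumed names; the tag references and fields are invented for the example, and real SCADA packages implement this with their own tag and historian engines.

```python
# Illustrative sketch of a SCADA tag/point database: each tag is keyed by a
# unique process-equipment reference and holds the latest reported value.
# Tag names and fields here are invented for the example.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Tag:
    name: str            # unique tag reference, e.g. a hypothetical "FT-101"
    description: str
    value: float = 0.0
    timestamp: datetime | None = None

class TagDatabase:
    def __init__(self) -> None:
        self._tags: dict[str, Tag] = {}

    def add(self, tag: Tag) -> None:
        self._tags[tag.name] = tag

    def update(self, name: str, value: float) -> None:
        """Record a new reading against an existing tag."""
        tag = self._tags[name]
        tag.value = value
        tag.timestamp = datetime.now(timezone.utc)

    def read(self, name: str) -> Tag:
        return self._tags[name]

# Example usage with a made-up flow-transmitter tag.
db = TagDatabase()
db.add(Tag("FT-101", "Inlet flow, m3/h"))
db.update("FT-101", 42.5)
print(db.read("FT-101"))
```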
To increase the integrity of the system the multiple servers will often be configured in a dual-redundant or hot-standby formation providing continuous control and monitoring in the event of a server malfunction or breakdown. Remote terminal units RTUs connect to sensors and actuators in the process, and are networked to the supervisory computer system. RTUs have embedded control capabilities and often conform to the IEC 61131-3 standard for programming and support automation via ladder logic, a function block diagram or a variety of other languages. Remote locations often have little or no local infrastructure so it is not uncommon to find RTUs running off a small solar power system, using radio, GSM or satellite for communications, and being ruggedised to survive from -20C to +70C or even -40C to +85C without external heating or cooling equipment. Programmable logic controllers PLCs are connected to sensors and actuators in the process, and are networked to the supervisory system. In factory automation, PLCs typically have a high speed connection to the SCADA system. In remote applications, such as a large water treatment plant, PLCs may connect directly to SCADA over a wireless link, or more commonly, utilise an RTU for the communications management. PLCs are specifically designed for control and were the founding platform for the IEC 61131-3 programming languages. For economical reasons, PLCs are often used for remote sites where there is a large I/O count, rather than utilising an RTU alone. Communication infrastructure This connects the supervisory computer system to the RTUs and PLCs, and may use industry standard or manufacturer proprietary protocols. Both RTUs and PLCs operate autonomously on the near-real time control of the process, using the last command given from the supervisory system. Failure of the communications network does not necessarily stop the plant process controls, and on resumption of communications, the operator can continue with monitoring and control. Some critical systems will have dual redundant data highways, often cabled via diverse routes. The HMI is the operator window of the supervisory system. It presents plant information to the operating personnel graphically in the form of mimic diagrams, which are a schematic representation of the plant being controlled, and alarm and event logging pages. The HMI is linked to the SCADA supervisory computer to provide live data to drive the mimic diagrams, alarm displays and trending graphs. In many installations the HMI is the graphical user interface for the operator, collects all data from external devices, creates reports, performs alarming, sends notifications, etc. Mimic diagrams consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols. Supervisory operation of the plant is by means of the HMI, with operators issuing commands using mouse pointers, keyboards and touch screens. For example, a symbol of a pump can show the operator that the pump is running, and a flow meter symbol can show how much fluid it is pumping through the pipe. The operator can switch the pump off from the mimic by a mouse click or screen touch. The HMI will show the flow rate of the fluid in the pipe decrease in real time. The HMI package for a SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. 
These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway. A historian is a software service within the HMI which accumulates time-stamped data, events, and alarms in a database which can be queried or used to populate graphic trends in the HMI. The historian is a client that requests data from a data acquisition server. Alarm handling An important part of most SCADA implementations is alarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared. Alarm conditions can be explicit—for example, an alarm point is a digital status point that has either the value NORMAL or ALARM that is calculated by a formula based on the values in other analogue and digital points—or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high and low- limit values associated with that point. Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that appropriate action can be taken. PLC/RTU programming "Smart" RTUs, or standard PLCs, are capable of autonomously executing simple logic processes without involving the supervisory computer. They employ standardized control programming languages (such as those under IEC 61131-3, a suite of five programming languages including function block, ladder, structured text, sequence function charts and instruction list), that are frequently used to create programs which run on these RTUs and PLCs. Unlike a procedural language like C or FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC. A programmable automation controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with that of a typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC functions. In many electrical substation SCADA applications, "distributed RTUs" use information processors or station computers to communicate with digital protective relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a traditional RTU. PLC commercial integration Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. 
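As a brief aside before the PLC integration discussion continues below: the implicit alarm condition described in the alarm-handling section above (an analogue value drifting outside configured high and low limits) reduces to a small comparison. The point name and limits in this sketch are assumed purely for illustration and are not taken from any real system.

```python
# Sketch of an implicit alarm check as described above: an analogue point is
# "in alarm" when its value lies outside its configured high/low limits.
# The limits and point name below are illustrative only.

def alarm_state(value: float, low_limit: float, high_limit: float) -> str:
    if value < low_limit or value > high_limit:
        return "ALARM"
    return "NORMAL"

# Example: a hypothetical tank level point with limits 10.0 .. 90.0 (%).
for reading in (55.0, 95.2, 7.3):
    state = alarm_state(reading, low_limit=10.0, high_limit=90.0)
    print(f"LT-200 = {reading:5.1f}%  ->  {state}")
    # As noted above, an operator would typically have to acknowledge an
    # ALARM before the corresponding indicator is cleared.
```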
Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software programmer. The Remote Terminal Unit (RTU) connects to physical equipment. Typically, an RTU converts the electrical signals from the equipment to digital values. By converting and sending these electrical signals out to equipment the RTU can control equipment. Communication infrastructure and methods SCADA systems have traditionally used combinations of radio and direct wired connections, although SONET/SDH is also frequently used for large systems such as railways and power stations. The remote management or monitoring function of a SCADA system is often referred to as telemetry. Some users want SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though. SCADA protocols are designed to be very compact. Many are designed to send information only when the master station polls the RTU. Typical legacy SCADA protocols include Modbus RTU, RP-570, Profibus and Conitel. These communication protocols, with the exception of Modbus (Modbus has been made open by Schneider Electric), are all SCADA-vendor specific but are widely adopted and used. Standard protocols are IEC 60870-5-101 or 104, IEC 61850 and DNP3. These communication protocols are standardized and recognized by all major SCADA vendors. Many of these protocols now contain extensions to operate over TCP/IP. Although the use of conventional networking specifications, such as TCP/IP, blurs the line between traditional and industrial networking, they each fulfill fundamentally differing requirements. Network simulation can be used in conjunction with SCADA simulators to perform various 'what-if' analyses. With increasing security demands (such as North American Electric Reliability Corporation (NERC) and critical infrastructure protection (CIP) in the US), there is increasing use of satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and can be engineered to the availability and reliability required by the SCADA system operator. Earlier experiences using consumer-grade VSAT were poor. Modern carrier-class systems provide the quality of service required for SCADA. RTUs and other automatic controller devices were developed before the advent of industry wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also the incentive to create their own protocol to "lock in" their customer base. A list of automation protocols is compiled here. An example of efforts by vendor groups to standardize automation protocols is the OPC-UA (formerly "OLE for process control" now Open Platform Communications Unified Architecture). Architecture development SCADA systems have evolved through four generations as follows: Early SCADA system computing was done by large minicomputers. Common network services did not exist at the time SCADA was developed. Thus SCADA systems were independent systems with no connectivity to other systems. The communication protocols used were strictly proprietary at that time. 
The first-generation SCADA system redundancy was achieved using a back-up mainframe system connected to all the Remote Terminal Unit sites and was used in the event of failure of the primary mainframe system. Some first generation SCADA systems were developed as "turn key" operations that ran on minicomputers such as the PDP-11 series. SCADA information and command processing were distributed across multiple stations which were connected through a LAN. Information was shared in near real time. Each station was responsible for a particular task, which reduced the cost as compared to First Generation SCADA. The network protocols used were still not standardized. Since these protocols were proprietary, very few people beyond the developers knew enough to determine how secure a SCADA installation was. Security of the SCADA installation was usually overlooked. Similar to a distributed architecture, any complex SCADA can be reduced to the simplest components and connected through communication protocols. In the case of a networked design, the system may be spread across more than one LAN network called a process control network (PCN) and separated geographically. Several distributed architecture SCADAs running in parallel, with a single supervisor and historian, could be considered a network architecture. This allows for a more cost-effective solution in very large scale systems. The growth of the internet has led SCADA systems to implement web technologies allowing users to view data, exchange information and control processes from anywhere in the world through web SOCKET connection. The early 2000s saw the proliferation of Web SCADA systems. Web SCADA systems use web browsers such as Google Chrome and Mozilla Firefox as the graphical user interface (GUI) for the operators HMI. This simplifies the client side installation and enables users to access the system from various platforms with web browsers such as servers, personal computers, laptops, tablets and mobile phones. Security SCADA systems that tie together decentralized facilities such as power, oil, gas pipelines, water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure. The move from proprietary technologies to more standardized and open solutions together with the increased number of connections between SCADA systems, office networks and the Internet has made them more vulnerable to types of network attacks that are relatively common in computer security. For example, United States Computer Emergency Readiness Team (US-CERT) released a vulnerability advisory warning that unauthenticated users could download sensitive configuration information including password hashes from an Inductive Automation Ignition system utilizing a standard attack type leveraging access to the Tomcat Embedded Web server. Security researcher Jerry Brown submitted a similar advisory regarding a buffer overflow vulnerability in a Wonderware InBatchClient ActiveX control. Both vendors made updates available prior to public vulnerability release. Mitigation recommendations were standard patching practices and requiring VPN access for secure connectivity. Consequently, the security of some SCADA-based systems has come into question as they are seen as potentially vulnerable to cyber attacks. 
In particular, security researchers are concerned about: The lack of concern about security and authentication in the design, deployment and operation of some existing SCADA networks The belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces The belief that SCADA networks are secure because they are physically secured The belief that SCADA networks are secure because they are disconnected from the Internet SCADA systems are used to control and monitor physical processes, examples of which are transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic lights, and other systems used as the basis of modern society. The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society far removed from the original compromise. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source. How security will affect legacy SCADA and new deployments remains to be seen. There are many threat vectors to a modern SCADA system. One is the threat of unauthorized access to the control software, whether it is human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. Another is the threat of packet access to the network segments hosting SCADA devices. In many cases, the control protocol lacks any form of cryptographic security, allowing an attacker to control a SCADA device by sending commands over a network. In many cases SCADA users have assumed that having a VPN offered sufficient protection, unaware that security can be trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial control vendors suggest approaching SCADA security like Information Security with a defense in depth strategy that leverages common IT practices. Apart from that, research has shown that the architecture of SCADA systems has several other vulnerabilities, including direct tampering with RTUs, communication links from RTUs to the control center, and IT software and databases in the control center. The RTUs could, for instance, be targets of deception attacks injecting false data or denial-of-service attacks. The reliable function of SCADA systems in our modern infrastructure may be crucial to public health and safety. As such, attacks on these systems may directly or indirectly threaten public health and safety. Such an attack has already occurred, carried out on Maroochy Shire Council's sewage control system in Queensland, Australia. Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. More critically, sewage flooded a nearby park and contaminated an open surface-water drainage ditch and flowed 500 meters to a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Initially this was believed to be a system bug. Monitoring of the system logs revealed the malfunctions were the result of cyber attacks. Investigators reported 46 separate instances of malicious outside interference before the culprit was identified. The attacks were made by a disgruntled ex-employee of the company that had installed the SCADA system. 
The ex-employee was hoping to be hired by the utility full-time to maintain the system. In April 2008, the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack issued a Critical Infrastructures Report which discussed the extreme vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and analysis, the Commission concluded: "SCADA systems are vulnerable to EMP insult. The large numbers and widespread reliance on such systems by all of the Nation’s critical infrastructures represent a systemic threat to their continued operation following an EMP event. Additionally, the necessity to reboot, repair, or replace large numbers of geographically widely dispersed systems will considerably impede the Nation’s recovery from such an assault." Many vendors of SCADA and control products have begun to address the risks posed by unauthorized access by developing lines of specialized industrial firewall and VPN solutions for TCP/IP-based SCADA networks as well as external SCADA monitoring and recording equipment. The International Society of Automation (ISA) started formalizing SCADA security requirements in 2007 with a working group, WG4. WG4 "deals specifically with unique technical requirements, measurements, and other features required to evaluate and assure security resilience and performance of industrial automation and control systems devices". The increased interest in SCADA vulnerabilities has resulted in vulnerability researchers discovering vulnerabilities in commercial SCADA software and more general offensive SCADA techniques presented to the general security community. In electric and gas utility SCADA systems, the vulnerability of the large installed base of wired and wireless serial communications links is addressed in some cases by applying bump-in-the-wire devices that employ authentication and Advanced Encryption Standard encryption rather than replacing all existing nodes. In June 2010, anti-virus security company VirusBlokAda reported the first detection of malware that attacks SCADA systems (Siemens' WinCC/PCS 7 systems) running on Windows operating systems. The malware is called Stuxnet and uses four zero-day attacks to install a rootkit which in turn logs into the SCADA's database and steals design and control files. The malware is also capable of changing the control system and hiding those changes. The malware was found on 14 systems, the majority of which were located in Iran. In October 2013 National Geographic released a docudrama titled American Blackout which dealt with an imagined large-scale cyber attack on SCADA and the United States' electrical grid. Uses Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands of control loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below: Industrial processes include manufacturing, process control, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes. Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electric power transmission and distribution, and wind farms. Facility processes, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation, and air conditioning systems (HVAC), access, and energy consumption. 
However, SCADA systems may have security vulnerabilities, so the systems should be evaluated to identify risks and solutions implemented to mitigate those risks. See also References External links UK SCADA security guidelines BBC NEWS | Technology | Spies 'infiltrate US power grid' Articles containing video clips Control engineering Telemetry Electric power
SCADA
[ "Physics", "Engineering" ]
4,671
[ "Physical quantities", "Power (physics)", "Control engineering", "Electric power", "Electrical engineering" ]
62,529
https://en.wikipedia.org/wiki/Sea%20level
Mean sea level (MSL, often shortened to sea level) is an average surface level of one or more among Earth's coastal bodies of water from which heights such as elevation may be measured. The global MSL is a type of vertical datum (a standardised geodetic datum) that is used, for example, as a chart datum in cartography and marine navigation, or, in aviation, as the standard sea level at which atmospheric pressure is measured to calibrate altitude and, consequently, aircraft flight levels. A common and relatively straightforward mean sea-level standard is instead a long-term average of tide gauge readings at a particular reference location. The term above sea level generally refers to the height above mean sea level (AMSL). The term APSL means above present sea level, comparing sea levels in the past with the level today. Earth's radius at sea level is 6,378.137 km (3,963.191 mi) at the equator. It is 6,356.752 km (3,949.903 mi) at the poles and 6,371.001 km (3,958.756 mi) on average. This flattened spheroid, combined with local gravity anomalies, defines the geoid of the Earth, which approximates the local mean sea level for locations in the open ocean. The geoid includes a significant depression in the Indian Ocean, whose surface dips as much as below the global mean sea level (excluding minor effects such as tides and currents). Measurement Precise determination of a "mean sea level" is difficult because of the many factors that affect sea level. Instantaneous sea level varies substantially on several scales of time and space. This is because the sea is in constant motion, affected by the tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity, and so forth. The mean sea level at a particular location may be calculated over an extended time period and used as a datum. For example, hourly measurements may be averaged over a full Metonic 19-year lunar cycle to determine the mean sea level at an official tide gauge. Still-water level or still-water sea level (SWL) is the level of the sea with motions such as wind waves averaged out. Then MSL implies the SWL further averaged over a period of time such that changes due to, e.g., the tides, also have zero mean. Global MSL refers to a spatial average over the entire ocean area, typically using large sets of tide gauges and/or satellite measurements. One often measures the values of MSL with respect to the land; hence a change in relative MSL (or relative sea level) can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates, or both. In the UK, the ordnance datum (the 0 metres height on UK maps) is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921. Before 1921, the vertical datum was MSL at the Victoria Dock, Liverpool. Since the time of the Russian Empire, sea level in Russia and in the now-independent states that were formerly part of it has been measured from the zero level of the Kronstadt Sea-Gauge. In Hong Kong, "mPD" is a surveying term meaning "metres above Principal Datum" and refers to height of above chart datum and below the average sea level. In France, the Marégraphe in Marseilles has measured sea level continuously since 1883 and offers the longest collated sea-level record. It is used for a part of continental Europe and the main part of Africa as the official sea level.
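Stepping back to the averaging procedure described at the start of the Measurement section above: determining a local mean sea level from a tide gauge (hourly readings averaged over a long epoch, classically a full 19-year Metonic cycle) amounts to a simple long-run mean. The sketch below assumes the readings are already available as a sequence of heights relative to the gauge benchmark; the sample values are invented for illustration, and a real determination would use roughly 19 years of hourly data.

```python
# Sketch of computing a local mean sea level (LMSL) as the average of hourly
# tide-gauge readings over a chosen epoch, as described above. The readings
# here are made-up numbers for illustration only.
from statistics import mean

def local_mean_sea_level(hourly_heights_m: list[float]) -> float:
    """Average of hourly still-water heights relative to the gauge zero."""
    return mean(hourly_heights_m)

# Hypothetical short sample (metres relative to the gauge benchmark):
sample = [1.82, 2.10, 2.35, 2.21, 1.95, 1.70, 1.55, 1.63]
print(f"LMSL over sample epoch: {local_mean_sea_level(sample):.3f} m")
```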
Spain uses the reference to measure heights below or above sea level at Alicante, while the European Vertical Reference System is calibrated to the Amsterdam Peil elevation, which dates back to the 1690s. Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992. A joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001 and the Ocean Surface Topography Mission on the Jason-2 satellite in 2008. Height above mean sea level Height above mean sea level (AMSL) is the elevation (on the ground) or altitude (in the air) of an object, relative to a reference datum for mean sea level (MSL). It is also used in aviation, where some heights are recorded and reported with respect to mean sea level (contrast with flight level), and in the atmospheric sciences, and in land surveying. An alternative is to base height measurements on a reference ellipsoid approximating the entire Earth, which is what systems such as GPS do. In aviation, the reference ellipsoid known as WGS84 is increasingly used to define heights; however, differences up to exist between this ellipsoid height and local mean sea level. Another alternative is to use a geoid-based vertical datum such as NAVD88 and the global EGM96 (part of WGS84). Details vary in different countries. When referring to geographic features such as mountains, on a topographic map variations in elevation are shown by contour lines. A mountain's highest point or summit is typically illustrated with the AMSL height in metres, feet or both. In unusual cases where a land location is below sea level, such as Death Valley, California, the elevation AMSL is negative. Difficulties in use It is often necessary to compare the local height of the mean sea surface with a "level" reference surface, or geodetic datum, called the geoid. In the absence of external forces, the local mean sea level would coincide with this geoid surface, being an equipotential surface of the Earth's gravitational field which, in itself, does not conform to a simple sphere or ellipsoid and exhibits gravity anomalies such as those measured by NASA's GRACE satellites. In reality, the geoid surface is not directly observed, even as a long-term average, due to ocean currents, air pressure variations, temperature and salinity variations, etc. The location-dependent but time-persistent separation between local mean sea level and the geoid is referred to as (mean) ocean surface topography. It varies globally in a typical range of ±. Dry land Several terms are used to describe the changing relationships between sea level and dry land. "relative" means change relative to a fixed point in the sediment pile. "eustatic" refers to global changes in sea level relative to a fixed point, such as the centre of the earth, for example as a result of melting ice-caps. "steric" refers to global changes in sea level due to thermal expansion and salinity variations. "isostatic" refers to changes in the level of the land relative to a fixed point in the earth, possibly due to thermal buoyancy or tectonic effects, disregarding changes in the volume of water in the oceans. The melting of glaciers at the end of ice ages results in isostatic post-glacial rebound, when land rises after the weight of ice is removed. Conversely, older volcanic islands experience relative sea level rise, due to isostatic subsidence from the weight of cooling volcanos. 
The subsidence of land due to the withdrawal of groundwater is another isostatic cause of relative sea level rise. On planets that lack a liquid ocean, planetologists can calculate a "mean altitude" by averaging the heights of all points on the surface. This altitude, sometimes referred to as a "sea level" or zero-level elevation, serves equivalently as a reference for the height of planetary features. Change Local and eustatic Local mean sea level (LMSL) is defined as the height of the sea with respect to a land benchmark, averaged over a period of time long enough that fluctuations caused by waves and tides are smoothed out, typically a year or more. One must adjust perceived changes in LMSL to account for vertical movements of the land, which can occur at rates similar to sea level changes (millimetres per year). Some land movements occur because of isostatic adjustment to the melting of ice sheets at the end of the last ice age. The weight of the ice sheet depresses the underlying land, and when the ice melts away the land slowly rebounds. Changes in ground-based ice volume also affect local and regional sea levels by the readjustment of the geoid and true polar wander. Atmospheric pressure, ocean currents and local ocean temperature changes can affect LMSL as well. Eustatic sea level change (global as opposed to local change) is due to change in either the volume of water in the world's oceans or the volume of the oceanic basins. Two major mechanisms are currently causing eustatic sea level rise. First, shrinking land ice, such as mountain glaciers and polar ice sheets, is releasing water into the oceans. Second, as ocean temperatures rise, the warmer water expands. Short-term and periodic changes Many factors can produce short-term changes in sea level, typically within a few metres, in timeframes ranging from minutes to months: Recent changes Aviation Pilots can estimate height above sea level with an altimeter set to a defined barometric pressure. Generally, the pressure used to set the altimeter is the barometric pressure that would exist at MSL in the region being flown over. This pressure is referred to as either QNH or "altimeter" and is transmitted to the pilot by radio from air traffic control (ATC) or an automatic terminal information service (ATIS). Since the terrain elevation is also referenced to MSL, the pilot can estimate height above ground by subtracting the terrain altitude from the altimeter reading. Aviation charts are divided into boxes and the maximum terrain altitude from MSL in each box is clearly indicated. Once above the transition altitude, the altimeter is set to the international standard atmosphere (ISA) pressure at MSL which is 1013.25 hPa or 29.92 inHg. See also (UK and Ireland) References External links Sea Level Rise:Understanding the past – Improving projections for the future Permanent Service for Mean Sea Level Global sea level change: Determination and interpretation Environment Protection Agency Sea level rise reports Properties of isostasy and eustasy Measuring Sea Level from Space Rising Tide Video: Scripps Institution of Oceanography Sea Levels Online: National Ocean Service (CO-OPS) Système d'Observation du Niveau des Eaux Littorales (SONEL) Sea level rise – How much and how fast will sea level rise over the coming centuries? Geodesy Physical oceanography Oceanographical terminology Vertical datums
Sea level
[ "Physics", "Mathematics" ]
2,204
[ "Applied mathematics", "Geodesy", "Applied and interdisciplinary physics", "Physical oceanography" ]
62,599
https://en.wikipedia.org/wiki/Event-driven%20programming
In computer programming, event-driven programming is a programming paradigm in which the flow of the program is determined by external events. UI events from mice, keyboards, touchpads and touchscreens, and external sensor inputs are common cases. Events may also be programmatically generated, such as from messages from other programs, notifications from other threads, or other network events. Event-driven programming is the dominant paradigm used in graphical user interface applications and network servers. In an event-driven application, there is generally an event loop that listens for events and then triggers a callback function when one of those events is detected. Event-driven programs can be written in any programming language, although the task is easier in languages that provide high-level abstractions. Although they do not exactly fit the event-driven model, interrupt handling and exception handling have many similarities. It is important to differentiate between event-driven and message-driven (aka queue-driven) paradigms: event-driven services (e.g. AWS SNS) are decoupled from their consumers, whereas queue/message-driven services (e.g. AWS SQS) are coupled with their consumers. Event loop Because the event loop for retrieving and dispatching events is common amongst applications, many programming frameworks take care of its implementation and expect the user to provide only the code for the event handlers. RPG, an early programming language from IBM, whose 1960s design concept was similar to the event-driven programming discussed above, provided a built-in main I/O loop (known as the "program cycle") where the calculations responded in accordance with 'indicators' (flags) that were set earlier in the cycle. Event handlers The actual logic is contained in event-handler routines. These routines handle the events to which the main program will respond. For example, a single left-button mouse-click on a command button in a GUI program may trigger a routine that will open another window, save data to a database or exit the application. Many IDEs provide the programmer with GUI event templates, allowing the programmer to focus on writing the event code. Keeping track of history is normally trivial in a sequential program. Because event handlers execute in response to external events, correctly structuring the handlers to work when called in any order can require special attention and planning in an event-driven program. In addition to writing the event handlers, the handlers also need to be bound to events so that the correct function is called when the event takes place. For UI events, many IDEs combine the two steps: double-click on a button, and the editor creates an (empty) event handler associated with the user clicking the button and opens a text window so you can edit the event handler. Common uses Most existing GUI architectures use event-driven programming. Windows has an event loop. The Java AWT framework processes all UI changes on a single thread, called the Event dispatching thread. Similarly, all UI updates in the Java framework JavaFX occur on the JavaFX Application Thread. Most network servers and frameworks such as Node.js are also event-driven.
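A minimal version of the pattern described above (an event loop that pulls pending events from a queue and dispatches each one to the handler callbacks bound to it) can be sketched as follows. The event names and handlers are invented for the example; in practice a framework such as a GUI toolkit or Node.js owns the loop and the programmer only supplies the handlers.

```python
# Minimal sketch of an event loop with registered handlers, as described above.
# Event names and handlers are illustrative only; real frameworks provide the
# loop and the binding mechanism themselves.
from collections import defaultdict
from queue import Queue, Empty

handlers = defaultdict(list)   # event name -> list of callbacks
events: Queue = Queue()        # pending events

def on(event_name, callback):
    """Bind a handler to an event (the 'binding' step mentioned above)."""
    handlers[event_name].append(callback)

def emit(event_name, payload=None):
    events.put((event_name, payload))

def run_event_loop():
    """Retrieve and dispatch events until the queue is drained."""
    while True:
        try:
            name, payload = events.get_nowait()
        except Empty:
            break              # a real loop would block and wait instead
        for callback in handlers[name]:
            callback(payload)

# Example usage: a hypothetical button-click event.
on("button_clicked", lambda payload: print(f"opening window for {payload}"))
emit("button_clicked", "settings")
run_event_loop()
```

Note that because handlers run only when their event arrives, they may fire in any order, which is why the surrounding text stresses structuring handlers so they do not depend on a fixed call sequence.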
Interrupt and exception handling See also Autonomous peripheral operation Dataflow programming DOM events Event-driven architecture Event stream processing (a similar concept) Hardware description language Interrupt Inversion of control Message-oriented middleware Programming paradigm Publish–subscribe pattern Reactor pattern Signal programming (a similar concept) Staged event-driven architecture (SEDA) Time-triggered system (an alternative architecture for computer systems) Virtual synchrony, a distributed execution model for event-driven programming References External links Concurrency patterns presentation given at scaleconf Event-Driven Programming: Introduction, Tutorial, History, tutorial by Stephen Ferg Event-Driven Programming, tutorial by Alan Gauld Event Collaboration, article by Martin Fowler Rethinking Swing Threading, article by Jonathan Simon The event-driven programming style , article by Chris McDonald Event Driven Programming using Template Specialization, article by Christopher Diggins Event-Driven Programming and Agents, chapter LabWindows/CVI Resources Distributed Publish/Subscribe Event System, an open-source example which is in production on MSN.com and Microsoft.com Javascript Event loop Programming paradigms Events (computing) Articles with example pseudocode
Event-driven programming
[ "Technology" ]
876
[ "Information systems", "Events (computing)" ]
62,610
https://en.wikipedia.org/wiki/Polyploidy
Polyploidy is a condition in which the cells of an organism have more than two paired sets of (homologous) chromosomes. Most species whose cells have nuclei (eukaryotes) are diploid, meaning they have two complete sets of chromosomes, one from each of two parents; each set contains the same number of chromosomes, and the chromosomes are joined in pairs of homologous chromosomes. However, some organisms are polyploid. Polyploidy is especially common in plants. Most eukaryotes have diploid somatic cells, but produce haploid gametes (eggs and sperm) by meiosis. A monoploid has only one set of chromosomes, and the term is usually only applied to cells or organisms that are normally diploid. Males of bees and other Hymenoptera, for example, are monoploid. Unlike animals, plants and multicellular algae have life cycles with two alternating multicellular generations. The gametophyte generation is haploid, and produces gametes by mitosis; the sporophyte generation is diploid and produces spores by meiosis. Polyploidy is the result of whole-genome duplication during the evolution of species. It may occur due to abnormal cell division, either during mitosis, or more commonly from the failure of chromosomes to separate during meiosis or from the fertilization of an egg by more than one sperm. In addition, it can be induced in plants and cell cultures by some chemicals: the best known is colchicine, which can result in chromosome doubling, though its use may have other less obvious consequences as well. Oryzalin will also double the existing chromosome content. Among mammals, a high frequency of polyploid cells is found in organs such as the brain, liver, heart, and bone marrow. It also occurs in the somatic cells of other animals, such as goldfish, salmon, and salamanders. It is common among ferns and flowering plants (see Hibiscus rosa-sinensis), including both wild and cultivated species. Wheat, for example, after millennia of hybridization and modification by humans, has strains that are diploid (two sets of chromosomes), tetraploid (four sets of chromosomes) with the common name of durum or macaroni wheat, and hexaploid (six sets of chromosomes) with the common name of bread wheat. Many agriculturally important plants of the genus Brassica are also tetraploids. Sugarcane can have ploidy levels higher than octaploid. Polyploidization can be a mechanism of sympatric speciation because polyploids are usually unable to interbreed with their diploid ancestors. An example is the plant Erythranthe peregrina. Sequencing confirmed that this species originated from E. × robertsii, a sterile triploid hybrid between E. guttata and E. lutea, both of which have been introduced and naturalised in the United Kingdom. New populations of E. peregrina arose on the Scottish mainland and the Orkney Islands via genome duplication from local populations of E. × robertsii. Because of a rare genetic mutation, E. peregrina is not sterile. On the other hand, polyploidization can also be a mechanism for a kind of 'reverse speciation', whereby gene flow is enabled following the polyploidy event, even between lineages that previously experienced no gene flow as diploids. This has been detailed at the genomic level in Arabidopsis arenosa and Arabidopsis lyrata. Each of these species experienced independent autopolyploidy events (within-species polyploidy, described below), which then enabled subsequent interspecies gene flow of adaptive alleles, in this case stabilising each young polyploid lineage. 
Such polyploidy-enabled adaptive introgression may allow polyploids at act as 'allelic sponges', whereby they accumulate cryptic genomic variation that may be recruited upon encountering later environmental challenges. Terminology Types Polyploid types are labeled according to the number of chromosome sets in the nucleus. The letter x is used to represent the number of chromosomes in a single set: haploid (one set; 1x), for example male European fire ants diploid (two sets; 2x), for example humans triploid (three sets; 3x), for example sterile saffron crocus, or seedless watermelons, also common in the phylum Tardigrada tetraploid (four sets; 4x), for example, Plains viscacha rat, Salmonidae fish, the cotton Gossypium hirsutum pentaploid (five sets; 5x), for example Kenai Birch (Betula kenaica) hexaploid (six sets; 6x), for example some species of wheat, kiwifruit heptaploid or septaploid (seven sets; 7x), for example some cultured Siberian sturgeon octaploid or octoploid, (eight sets; 8x), for example Acipenser (genus of sturgeon fish), dahlias decaploid (ten sets; 10x), for example certain strawberries dodecaploid or duodecaploid (twelve sets; 12x), for example the plants Celosia argentea and Spartina anglica or the amphibian Xenopus ruwenzoriensis. tetratetracontaploid (forty-four sets; 44x), for example black mulberry Classification Autopolyploidy Autopolyploids are polyploids with multiple chromosome sets derived from a single taxon. Two examples of natural autopolyploids are the piggyback plant, Tolmiea menzisii and the white sturgeon, Acipenser transmontanum. Most instances of autopolyploidy result from the fusion of unreduced (2n) gametes, which results in either triploid (n + 2n = 3n) or tetraploid (2n + 2n = 4n) offspring. Triploid offspring are typically sterile (as in the phenomenon of triploid block), but in some cases they may produce high proportions of unreduced gametes and thus aid the formation of tetraploids. This pathway to tetraploidy is referred to as the triploid bridge. Triploids may also persist through asexual reproduction. In fact, stable autotriploidy in plants is often associated with apomictic mating systems. In agricultural systems, autotriploidy can result in seedlessness, as in watermelons and bananas. Triploidy is also utilized in salmon and trout farming to induce sterility. Rarely, autopolyploids arise from spontaneous, somatic genome doubling, which has been observed in apple (Malus domesticus) bud sports. This is also the most common pathway of artificially induced polyploidy, where methods such as protoplast fusion or treatment with colchicine, oryzalin or mitotic inhibitors are used to disrupt normal mitotic division, which results in the production of polyploid cells. This process can be useful in plant breeding, especially when attempting to introgress germplasm across ploidal levels. Autopolyploids possess at least three homologous chromosome sets, which can lead to high rates of multivalent pairing during meiosis (particularly in recently formed autopolyploids, also known as neopolyploids) and an associated decrease in fertility due to the production of aneuploid gametes. Natural or artificial selection for fertility can quickly stabilize meiosis in autopolyploids by restoring bivalent pairing during meiosis. 
Rapid adaptive evolution of the meiotic machinery, resulting in reduced levels of multivalents (and therefore stable autopolyploid meiosis) has been documented in Arabidopsis arenosa and Arabidopsis lyrata, with specific adaptive alleles of these species shared between only the evolved polyploids. The high degree of homology among duplicated chromosomes causes autopolyploids to display polysomic inheritance. This trait is often used as a diagnostic criterion to distinguish autopolyploids from allopolyploids, which commonly display disomic inheritance after they progress past the neopolyploid stage. While most polyploid species are unambiguously characterized as either autopolyploid or allopolyploid, these categories represent the ends of a spectrum of divergence between parental subgenomes. Polyploids that fall between these two extremes, which are often referred to as segmental allopolyploids, may display intermediate levels of polysomic inheritance that vary by locus. About half of all polyploids are thought to be the result of autopolyploidy, although many factors make this proportion hard to estimate. Allopolyploidy Allopolyploids or amphipolyploids or heteropolyploids are polyploids with chromosomes derived from two or more diverged taxa. As in autopolyploidy, this primarily occurs through the fusion of unreduced (2n) gametes, which can take place before or after hybridization. In the former case, unreduced gametes from each diploid taxon – or reduced gametes from two autotetraploid taxa – combine to form allopolyploid offspring. In the latter case, one or more diploid F1 hybrids produce unreduced gametes that fuse to form allopolyploid progeny. Hybridization followed by genome duplication may be a more common path to allopolyploidy because F1 hybrids between taxa often have relatively high rates of unreduced gamete formation – divergence between the genomes of the two taxa result in abnormal pairing between homoeologous chromosomes or nondisjunction during meiosis. In this case, allopolyploidy can actually restore normal, bivalent meiotic pairing by providing each homoeologous chromosome with its own homologue. If divergence between homoeologous chromosomes is even across the two subgenomes, this can theoretically result in rapid restoration of bivalent pairing and disomic inheritance following allopolyploidization. However multivalent pairing is common in many recently formed allopolyploids, so it is likely that the majority of meiotic stabilization occurs gradually through selection. Because pairing between homoeologous chromosomes is rare in established allopolyploids, they may benefit from fixed heterozygosity of homoeologous alleles. In certain cases, such heterozygosity can have beneficial heterotic effects, either in terms of fitness in natural contexts or desirable traits in agricultural contexts. This could partially explain the prevalence of allopolyploidy among crop species. Both bread wheat and triticale are examples of an allopolyploids with six chromosome sets. Cotton, peanut, and quinoa are allotetraploids with multiple origins. In Brassicaceous crops, the Triangle of U describes the relationships between the three common diploid Brassicas (B. oleracea, B. rapa, and B. nigra) and three allotetraploids (B. napus, B. juncea, and B. carinata) derived from hybridization among the diploid species. A similar relationship exists between three diploid species of Tragopogon (T. dubius, T. pratensis, and T. porrifolius) and two allotetraploid species (T. mirus and T. miscellus). 
Complex patterns of allopolyploid evolution have also been observed in animals, as in the frog genus Xenopus. Aneuploid Organisms in which a particular chromosome, or chromosome segment, is under- or over-represented are said to be aneuploid (from the Greek words meaning "not", "good", and "fold"). Aneuploidy refers to a numerical change in part of the chromosome set, whereas polyploidy refers to a numerical change in the whole set of chromosomes. Endopolyploidy Polyploidy occurs in some tissues of animals that are otherwise diploid, such as human muscle tissues. This is known as endopolyploidy. Species whose cells do not have nuclei, that is, prokaryotes, may be polyploid, as seen in the large bacterium Epulopiscium fishelsoni. Hence ploidy is defined with respect to a cell. Monoploid A monoploid has only one set of chromosomes and the term is usually only applied to cells or organisms that are normally diploid. The more general term for such organisms is haploid. Temporal terms Neopolyploidy A polyploid that is newly formed. Mesopolyploidy That has become polyploid in more recent history; it is not as new as a neopolyploid and not as old as a paleopolyploid. It is a middle aged polyploid. Often this refers to whole genome duplication followed by intermediate levels of diploidization. Paleopolyploidy Ancient genome duplications probably occurred in the evolutionary history of all life. Duplication events that occurred long ago in the history of various evolutionary lineages can be difficult to detect because of subsequent diploidization (such that a polyploid starts to behave cytogenetically as a diploid over time) as mutations and gene translations gradually make one copy of each chromosome unlike the other copy. Over time, it is also common for duplicated copies of genes to accumulate mutations and become inactive pseudogenes. In many cases, these events can be inferred only through comparing sequenced genomes. Examples of unexpected but recently confirmed ancient genome duplications include baker's yeast (Saccharomyces cerevisiae), mustard weed/thale cress (Arabidopsis thaliana), rice (Oryza sativa), and two rounds of whole genome duplication (the 2R hypothesis) in an early evolutionary ancestor of the vertebrates (which includes the human lineage) and another near the origin of the teleost fishes. Angiosperms (flowering plants) have paleopolyploidy in their ancestry. All eukaryotes probably have experienced a polyploidy event at some point in their evolutionary history. Other similar terms Karyotype A karyotype is the characteristic chromosome complement of a eukaryote species. The preparation and study of karyotypes is part of cytology and, more specifically, cytogenetics. Although the replication and transcription of DNA is highly standardized in eukaryotes, the same cannot be said for their karyotypes, which are highly variable between species in chromosome number and in detailed organization despite being constructed out of the same macromolecules. In some cases, there is even significant variation within species. This variation provides the basis for a range of studies in what might be called evolutionary cytology. Homoeologous chromosomes Homoeologous chromosomes are those brought together following inter-species hybridization and allopolyploidization, and whose relationship was completely homologous in an ancestral species. For example, durum wheat is the result of the inter-species hybridization of two diploid grass species Triticum urartu and Aegilops speltoides. 
Both diploid ancestors had two sets of 7 chromosomes, which were similar in terms of size and genes contained on them. Durum wheat contains a hybrid genome with two sets of chromosomes derived from Triticum urartu and two sets of chromosomes derived from Aegilops speltoides. Each chromosome pair derived from the Triticum urartu parent is homoeologous to the opposite chromosome pair derived from the Aegilops speltoides parent, though each chromosome pair unto itself is homologous. Examples Animals Examples in animals are more common in non-vertebrates such as flatworms, leeches, and brine shrimp. Within vertebrates, examples of stable polyploidy include the salmonids and many cyprinids (i.e. carp). Some fish have as many as 400 chromosomes. Polyploidy also occurs commonly in amphibians; for example the biomedically important genus Xenopus contains many different species with as many as 12 sets of chromosomes (dodecaploid). Polyploid lizards are also quite common. Most are sterile and reproduce by parthenogenesis; others, like Liolaemus chiliensis, maintain sexual reproduction. Polyploid mole salamanders (mostly triploids) are all female and reproduce by kleptogenesis, "stealing" spermatophores from diploid males of related species to trigger egg development but not incorporating the males' DNA into the offspring. While some tissues of mammals, such as parenchymal liver cells, are polyploid, rare instances of polyploid mammals are known, but most often result in prenatal death. An octodontid rodent of Argentina's harsh desert regions, known as the plains viscacha rat (Tympanoctomys barrerae) has been reported as an exception to this 'rule'. However, careful analysis using chromosome paints shows that there are only two copies of each chromosome in T. barrerae, not the four expected if it were truly a tetraploid. This rodent is not a rat, but kin to guinea pigs and chinchillas. Its "new" diploid (2n) number is 102 and so its cells are roughly twice normal size. Its closest living relation is Octomys mimax, the Andean Viscacha-Rat of the same family, whose 2n = 56. It was therefore surmised that an Octomys-like ancestor produced tetraploid (i.e., 2n = 4x = 112) offspring that were, by virtue of their doubled chromosomes, reproductively isolated from their parents. Polyploidy was induced in fish by Har Swarup (1956) using a cold-shock treatment of the eggs close to the time of fertilization, which produced triploid embryos that successfully matured. Cold or heat shock has also been shown to result in unreduced amphibian gametes, though this occurs more commonly in eggs than in sperm. John Gurdon (1958) transplanted intact nuclei from somatic cells to produce diploid eggs in the frog, Xenopus (an extension of the work of Briggs and King in 1952) that were able to develop to the tadpole stage. The British scientist J. B. S. Haldane hailed the work for its potential medical applications and, in describing the results, became one of the first to use the word "clone" in reference to animals. Later work by Shinya Yamanaka showed how mature cells can be reprogrammed to become pluripotent, extending the possibilities to non-stem cells. Gurdon and Yamanaka were jointly awarded the Nobel Prize in 2012 for this work. Humans True polyploidy rarely occurs in humans, although polyploid cells occur in highly differentiated tissue, such as liver parenchyma, heart muscle, placenta and in bone marrow. Aneuploidy is more common. 
Polyploidy occurs in humans in the form of triploidy, with 69 chromosomes (sometimes called 69, XXX), and tetraploidy with 92 chromosomes (sometimes called 92, XXXX). Triploidy, usually due to polyspermy, occurs in about 2–3% of all human pregnancies and ~15% of miscarriages. The vast majority of triploid conceptions end as a miscarriage; those that do survive to term typically die shortly after birth. In some cases, survival past birth may be extended if there is mixoploidy with both a diploid and a triploid cell population present. There has been one report of a child surviving to the age of seven months with complete triploidy syndrome. He failed to exhibit normal mental or physical neonatal development, and died from a Pneumocystis carinii infection, which indicates a weak immune system. Triploidy may be the result of either digyny (the extra haploid set is from the mother) or diandry (the extra haploid set is from the father). Diandry is mostly caused by reduplication of the paternal haploid set from a single sperm, but may also be the consequence of dispermic (two sperm) fertilization of the egg. Digyny is most commonly caused by either failure of one meiotic division during oogenesis leading to a diploid oocyte or failure to extrude one polar body from the oocyte. Diandry appears to predominate among early miscarriages, while digyny predominates among triploid zygotes that survive into the fetal period. However, among early miscarriages, digyny is also more common in those cases less than weeks gestational age or those in which an embryo is present. There are also two distinct phenotypes in triploid placentas and fetuses that are dependent on the origin of the extra haploid set. In digyny, there is typically an asymmetric poorly grown fetus, with marked adrenal hypoplasia and a very small placenta. In diandry, a partial hydatidiform mole develops. These parent-of-origin effects reflect the effects of genomic imprinting. Complete tetraploidy is more rarely diagnosed than triploidy, but is observed in 1–2% of early miscarriages. However, some tetraploid cells are commonly found in chromosome analysis at prenatal diagnosis and these are generally considered 'harmless'. It is not clear whether these tetraploid cells simply tend to arise during in vitro cell culture or whether they are also present in placental cells in vivo. There are, at any rate, very few clinical reports of fetuses/infants diagnosed with tetraploidy mosaicism. Mixoploidy is quite commonly observed in human preimplantation embryos and includes haploid/diploid as well as diploid/tetraploid mixed cell populations. It is unknown whether these embryos fail to implant and are therefore rarely detected in ongoing pregnancies or if there is simply a selective process favoring the diploid cells. Fish A polyploidy event occurred within the stem lineage of the teleost fish. Plants Polyploidy is frequent in plants, some estimates suggesting that 30–80% of living plant species are polyploid, and many lineages show evidence of ancient polyploidy (paleopolyploidy) in their genomes. Huge explosions in angiosperm species diversity appear to have coincided with the timing of ancient genome duplications shared by many species. It has been established that 15% of angiosperm and 31% of fern speciation events are accompanied by ploidy increase. Polyploid plants can arise spontaneously in nature by several mechanisms, including meiotic or mitotic failures, and fusion of unreduced (2n) gametes. Both autopolyploids (e.g. 
potato) and allopolyploids (such as canola, wheat and cotton) can be found among both wild and domesticated plant species. Most polyploids display novel variation or morphologies relative to their parental species, that may contribute to the processes of speciation and eco-niche exploitation. The mechanisms leading to novel variation in newly formed allopolyploids may include gene dosage effects (resulting from more numerous copies of genome content), the reunion of divergent gene regulatory hierarchies, chromosomal rearrangements, and epigenetic remodeling, all of which affect gene content and/or expression levels. Many of these rapid changes may contribute to reproductive isolation and speciation. However, seed generated from interploidy crosses, such as between polyploids and their parent species, usually have aberrant endosperm development which impairs their viability, thus contributing to polyploid speciation. Polyploids may also interbreed with diploids and produce polyploid seeds, as observed in the agamic complexes of Crepis. Some plants are triploid. As meiosis is disturbed, these plants are sterile, with all plants having the same genetic constitution: Among them, the exclusively vegetatively propagated saffron crocus (Crocus sativus). Also, the extremely rare Tasmanian shrub Lomatia tasmanica is a triploid sterile species. There are few naturally occurring polyploid conifers. One example is the Coast Redwood Sequoia sempervirens, which is a hexaploid (6x) with 66 chromosomes (2n = 6x = 66), although the origin is unclear. Aquatic plants, especially the Monocotyledons, include a large number of polyploids. Crops The induction of polyploidy is a common technique to overcome the sterility of a hybrid species during plant breeding. For example, triticale is the hybrid of wheat (Triticum turgidum) and rye (Secale cereale). It combines sought-after characteristics of the parents, but the initial hybrids are sterile. After polyploidization, the hybrid becomes fertile and can thus be further propagated to become triticale. In some situations, polyploid crops are preferred because they are sterile. For example, many seedless fruit varieties are seedless as a result of polyploidy. Such crops are propagated using asexual techniques, such as grafting. Polyploidy in crop plants is most commonly induced by treating seeds with the chemical colchicine. Examples Triploid crops: some apple varieties (such as Belle de Boskoop, Jonagold, Mutsu, Ribston Pippin), banana, citrus, ginger, watermelon, saffron crocus, white pulp of coconut Tetraploid crops: very few apple varieties, durum or macaroni wheat, cotton, potato, canola/rapeseed, leek, tobacco, peanut, kinnow, Pelargonium Hexaploid crops: chrysanthemum, bread wheat, triticale, oat, kiwifruit Octaploid crops: strawberry, dahlia, pansies, sugar cane, oca (Oxalis tuberosa) Dodecaploid crops: some sugar cane hybrids Some crops are found in a variety of ploidies: tulips and lilies are commonly found as both diploid and triploid; daylilies (Hemerocallis cultivars) are available as either diploid or tetraploid; apples and kinnow mandarins can be diploid, triploid, or tetraploid. Fungi Besides plants and animals, the evolutionary history of various fungal species is dotted by past and recent whole-genome duplication events (see Albertin and Marullo 2012 for review). Several examples of polyploids are known: autopolyploid: the aquatic fungi of genus Allomyces, some Saccharomyces cerevisiae strains used in bakery, etc. 
allopolyploid: the widespread Cyathus stercoreus, the allotetraploid lager yeast Saccharomyces pastorianus, the allotriploid wine spoilage yeast Dekkera bruxellensis, etc. paleopolyploid: the human pathogen Rhizopus oryzae, the genus Saccharomyces, etc. In addition, polyploidy is frequently associated with hybridization and reticulate evolution that appear to be highly prevalent in several fungal taxa. Indeed, homoploid speciation (hybrid speciation without a change in chromosome number) has been evidenced for some fungal species (such as the basidiomycota Microbotryum violaceum). As for plants and animals, fungal hybrids and polyploids display structural and functional modifications compared to their progenitors and diploid counterparts. In particular, the structural and functional outcomes of polyploid Saccharomyces genomes strikingly reflect the evolutionary fate of plant polyploid ones. Large chromosomal rearrangements leading to chimeric chromosomes have been described, as well as more punctual genetic modifications such as gene loss. The homoealleles of the allotetraploid yeast S. pastorianus show unequal contribution to the transcriptome. Phenotypic diversification is also observed following polyploidization and/or hybridization in fungi, producing the fuel for natural selection and subsequent adaptation and speciation. Chromalveolata Other eukaryotic taxa have experienced one or more polyploidization events during their evolutionary history (see Albertin and Marullo, 2012 for review). The oomycetes, which are non-true fungi members, contain several examples of paleopolyploid and polyploid species, such as within the genus Phytophthora. Some species of brown algae (Fucales, Laminariales and diatoms) contain apparent polyploid genomes. In the Alveolata group, the remarkable species Paramecium tetraurelia underwent three successive rounds of whole-genome duplication and established itself as a major model for paleopolyploid studies. Bacteria Each Deinococcus radiodurans bacterium contains 4-8 copies of its chromosome. Exposure of D. radiodurans to X-ray irradiation or desiccation can shatter its genomes into hundred of short random fragments. Nevertheless, D. radiodurans is highly resistant to such exposures. The mechanism by which the genome is accurately restored involves RecA-mediated homologous recombination and a process referred to as extended synthesis-dependent strand annealing (SDSA). Azotobacter vinelandii can contain up to 80 chromosome copies per cell. However this is only observed in fast growing cultures, whereas cultures grown in synthetic minimal media are not polyploid. Archaea The archaeon Halobacterium salinarium is polyploid and, like Deinococcus radiodurans, is highly resistant to X-ray irradiation and desiccation, conditions that induce DNA double-strand breaks. Although chromosomes are shattered into many fragments, complete chromosomes can be regenerated by making use of overlapping fragments. The mechanism employs single-stranded DNA binding protein and is likely homologous recombinational repair. See also Diploidization Eukaryote hybrid genome Ploidy Polyploid complex Polysomy Reciprocal silencing Sympatry References Further reading External links Polyploidy on Kimball's Biology Pages The polyploidy portal a community-editable project with information, research, education, and a bibliography about polyploidy. Classical genetics Speciation he:פלואידיות#פוליפלואידיות fi:Ploidia#Polyploidia sv:Ploiditet#Polyploiditet
Polyploidy
[ "Biology" ]
6,576
[ "Evolutionary processes", "Speciation" ]
62,641
https://en.wikipedia.org/wiki/Vector%20field
In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space . A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three-dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point. The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow). A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space. Likewise, in n coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector). More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field. Definition Vector fields on subsets of Euclidean space Given a subset of , a vector field is represented by a vector-valued function in standard Cartesian coordinates . If each component of is continuous, then is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. One standard notation is to write for the unit vectors in the coordinate directions. In these terms, every smooth vector field on an open subset of can be written as for some smooth functions on . The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, , given by differentiating in the direction of the vector field. Example: The vector field describes a counterclockwise rotation around the origin in .
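The expressions referred to in the preceding definitions were lost from the text; in standard notation (a reconstruction consistent with the surrounding prose, not a verbatim restoration), they read:

```latex
% Component notation for a smooth vector field V on an open set S in R^n:
\[
V \;=\; V_{1}\,\frac{\partial}{\partial x_{1}} \;+\; \dots \;+\; V_{n}\,\frac{\partial}{\partial x_{n}},
\qquad V_{1},\dots,V_{n} \in C^{\infty}(S),
\]
% and the counterclockwise rotation field about the origin of the plane:
\[
V(x,y) \;=\; -y\,\frac{\partial}{\partial x} \;+\; x\,\frac{\partial}{\partial y},
\qquad\text{equivalently}\qquad V(x,y) = (-y,\; x).
\]
```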
To show that the function is rotationally invariant, compute: Given vector fields , defined on and a smooth function defined on , the operations of scalar multiplication and vector addition, make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise. Coordinate transformation law In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector. Thus, suppose that is a choice of Cartesian coordinates, in terms of which the components of the vector are and suppose that (y1,...,yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law () relating the different coordinate systems. Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes. Vector fields on manifolds Given a differentiable manifold , a vector field on is an assignment of a tangent vector to each point in . More precisely, a vector field is a mapping from into the tangent bundle so that is the identity mapping where denotes the projection from to . In other words, a vector field is a section of the tangent bundle. An alternative definition: A smooth vector field on a manifold is a linear map such that is a derivation: for all . If the manifold is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold is often denoted by or (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by (a fraktur "X"). Examples A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas. Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid. Streamlines, streaklines and pathlines are 3 types of lines that can be made from (time-dependent) vector fields. They are: streaklines: the line produced by particles passing through a specific fixed point over various times pathlines: showing the path that a given particle (of zero mass) would follow. streamlines (or fieldlines): the path of a particle influenced by the instantaneous field (i.e., the path of a particle if the field is held fixed). Magnetic fields. The fieldlines can be revealed using small iron filings. 
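The contravariant transformation law cited above was dropped from the text; its standard form (a reconstruction, with the old coordinates written as x^1,...,x^n and the new ones as y^1,...,y^n) is:

```latex
% Contravariant transformation of the components of a vector field under a
% change of coordinates from (x^1,...,x^n) to (y^1,...,y^n):
\[
\bar V^{\,i}(y^{1},\dots,y^{n}) \;=\; \sum_{j=1}^{n} \frac{\partial y^{i}}{\partial x^{j}}\, V^{j}(x^{1},\dots,x^{n}),
\qquad i = 1,\dots,n.
\]
```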
Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electric field. A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases. Gradient field in Euclidean spaces Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇). A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that The associated flow is called the gradient flow, and is used in the method of gradient descent. The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero: Central field in Euclidean spaces A -vector field over is called a central field if where is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0. The point 0 is called the center of the field. Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient. Operations on vector fields Line integral A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve. The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous. Given a vector field and a curve , parametrized by in (where and are real numbers), the line integral is defined as To show vector field topology one can use line integral convolution. Divergence The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence is defined by with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem. The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors. Curl in three dimensions The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative.
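The defining formulas in this passage (gradient field, closed-curve integral, line integral, and divergence) were stripped in extraction; their standard forms, consistent with the descriptions above, are:

```latex
% Gradient (conservative) field and the vanishing of its closed-path integrals:
\[
V = \nabla f, \qquad \oint_{\gamma} V \cdot d\mathbf{x} = 0 \quad\text{for every closed curve } \gamma\ (\gamma(0)=\gamma(1)).
\]
% Line integral of a vector field V along a curve gamma parametrized on [a, b]:
\[
\int_{\gamma} V \cdot d\mathbf{x} \;=\; \int_{a}^{b} V\bigl(\gamma(t)\bigr)\cdot \gamma'(t)\, dt.
\]
% Divergence of F = (F_1, F_2, F_3) in three dimensions:
\[
\operatorname{div}\mathbf{F} \;=\; \nabla\cdot\mathbf{F} \;=\;
\frac{\partial F_{1}}{\partial x} + \frac{\partial F_{2}}{\partial y} + \frac{\partial F_{3}}{\partial z}.
\]
```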
In three dimensions, it is defined by The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem. Index of a vector field The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity. Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n-1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere S^(n−1). This defines a continuous map from S to S^(n−1). The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself. The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)^k around a saddle that has k contracting dimensions and n−k expanding dimensions. The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes. For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem. For a vector field on a compact manifold with finitely many zeroes, the Poincaré–Hopf theorem states that the vector field's index is the manifold's Euler characteristic. Physical intuition Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory. In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field. In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm. Flow curves Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
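The curl formula referred to above (the "defined by" clause at the start of the preceding passage) was likewise stripped; in three dimensions it takes the standard form below, and the planar index values quoted above can be checked on two elementary fields (both are reconstructions, not verbatim from the source):

```latex
% Curl of F = (F_1, F_2, F_3) in three dimensions:
\[
\operatorname{curl}\mathbf{F} \;=\; \nabla\times\mathbf{F} \;=\;
\left(
\frac{\partial F_{3}}{\partial y}-\frac{\partial F_{2}}{\partial z},\;
\frac{\partial F_{1}}{\partial z}-\frac{\partial F_{3}}{\partial x},\;
\frac{\partial F_{2}}{\partial x}-\frac{\partial F_{1}}{\partial y}
\right).
\]
% Planar index examples: V(x,y) = (x, y) has an isolated zero at the origin that is
% a source (index +1), while V(x,y) = (x, -y) has a saddle there (index -1).
```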
Given a vector field defined on , one defines curves on such that for each in an interval , By the Picard–Lindelöf theorem, if is Lipschitz continuous there is a unique -curve for each point in so that, for some , The curves are called integral curves or trajectories (or less commonly, flow lines) of the vector field and partition into equivalence classes. It is not always possible to extend the interval to the whole real number line. The flow may for example reach the edge of in a finite time. In two or three dimensions one can visualize the vector field as giving rise to a flow on . If we drop a particle into this flow at a point it will move along the curve in the flow depending on the initial point . If is a stationary point of (i.e., the vector field is equal to the zero vector at the point ), then the particle will remain at . Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups. Complete vector fields By definition, a vector field on is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If is a complete vector field on , then the one-parameter group of diffeomorphisms generated by the flow along exists for all time; it is described by a smooth mapping On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field on the real line is given by . For, the differential equation , with initial condition , has as its unique solution if (and for all if ). Hence for , is undefined at so cannot be defined for all values of . The Lie bracket The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions : f-relatedness Given a smooth function between manifolds, , the derivative is an induced map on tangent bundles, . Given vector fields and , we say that is -related to if the equation holds. If is -related to , , then the Lie bracket is -related to . Generalizations Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields. Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras. See also Eisenbud–Levine–Khimshiashvili signature formula Field line Field strength Gradient flow and balanced flow in atmospheric dynamics Lie derivative Scalar field Time-dependent vector field Vector fields in cylindrical and spherical coordinates Tensor fields Slope field References Bibliography External links Online Vector Field Editor Vector field — Mathworld Vector field — PlanetMath 3D Magnetic field viewer Vector fields and field lines Vector field simulation An interactive application to show the effects of vector fields Differential topology Field Functions and mappings F
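A short numerical sketch can make the flow-curve discussion above concrete. The code below is an illustration only and assumes standard NumPy/SciPy; it is not part of the article. It integrates the rotation field's flow curve through (1, 0), and demonstrates the finite-time blow-up characteristic of an incomplete field such as V(x) = x² (the specific example in the passage was lost in extraction; x² is the standard choice consistent with the surrounding text).

```python
# Illustrative sketch only (not from the article): integral curves of a vector
# field computed numerically with SciPy's initial-value-problem solver.
import numpy as np
from scipy.integrate import solve_ivp

def rotation_field(t, p):
    """Counterclockwise rotation field V(x, y) = (-y, x)."""
    x, y = p
    return [-y, x]

# Flow curve through (1, 0): the exact solution is (cos t, sin t), so after
# one full period the curve returns to its starting point.
sol = solve_ivp(rotation_field, (0.0, 2 * np.pi), [1.0, 0.0], rtol=1e-9)
print(sol.y[:, -1])  # approximately [1.0, 0.0]

# An incomplete vector field on the real line, e.g. V(x) = x**2:
# with x(0) = 1 the exact solution x(t) = 1 / (1 - t) blows up at t = 1,
# so this flow curve cannot be extended to all real times.
def quadratic_field(t, x):
    return [x[0] ** 2]

blowup = solve_ivp(quadratic_field, (0.0, 0.99), [1.0], rtol=1e-9)
print(blowup.y[0, -1])  # roughly 100: the solution diverges as t -> 1
```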
Vector field
[ "Physics", "Mathematics" ]
3,473
[ "Mathematical analysis", "Functions and mappings", "Physical quantities", "Quantity", "Mathematical objects", "Topology", "Differential topology", "Vector physical quantities", "Mathematical relations" ]
62,694
https://en.wikipedia.org/wiki/Stromatolite
Stromatolites ( ) or stromatoliths () are layered sedimentary formations (microbialite) that are created mainly by photosynthetic microorganisms such as cyanobacteria, sulfate-reducing bacteria, and Pseudomonadota (formerly proteobacteria). These microorganisms produce adhesive compounds that cement sand and other rocky materials to form mineral "microbial mats". In turn, these mats build up layer by layer, growing gradually over time. This process generates the characteristic lamination of stromatolites, a feature that is hard to interpret, in terms of its temporal and environmental significance. Different styles of stromatolite lamination have been described, which can be studied through microscopic and mathematical methods. A stromatolite may grow to a meter or more. Fossilized stromatolites provide important records of some of the most ancient life. As of the Holocene, living forms are rare. Definition Stromatolites are layered, biochemical, accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains in biofilms (specifically microbial mats), through the action of certain microbial lifeforms, especially cyanobacteria. Ancient stromatolites Morphology Fossilized stromatolites exhibit a variety of forms and structures, or morphologies, including conical, stratiform, domal, columnar, and branching types. Stromatolites occur widely in the fossil record of the Precambrian but are rare today. Very few Archean stromatolites contain fossilized microbes, but fossilized microbes are sometimes abundant in Proterozoic stromatolites. While features of some ancient apparent stromatolites are suggestive of biological activity, others possess features that are more consistent with abiotic (non-biological) precipitation. Finding reliable ways to distinguish between biologically formed and abiotic stromatolites is an active area of research in geology. Multiple morphologies of stromatolites may exist in a single local or geological stratum, depending on specific conditions at the time of their formation, such as water depth. Most stromatolites are spongiostromate in texture, having no recognisable microstructure or cellular remains. A minority are porostromate, having recognisable microstructure; these are mostly unknown from the Precambrian but persist throughout the Palaeozoic and Mesozoic. Since the Eocene, porostromate stromatolites are known only from freshwater settings. Fossil record Some Archean rock formations show macroscopic similarity to modern microbial structures, leading to the inference that these structures represent evidence of ancient life, namely stromatolites. However, others regard these patterns as being the result of natural material deposition or some other abiogenic mechanism. Scientists have argued for a biological origin of stromatolites due to the presence of organic globule clusters within the thin layers of the stromatolites, of aragonite nanocrystals (both features of current stromatolites), and of other microstructures in older stromatolites that parallel those in younger stromatolites that show strong indications of biological origin. Stromatolites are a major constituent of the fossil record of the first forms of life on Earth. They peaked about 1.25 billion years ago (Ga) and subsequently declined in abundance and diversity, so that by the start of the Cambrian they had fallen to 20% of their peak. 
The most widely supported explanation is that stromatolite builders fell victim to grazing creatures (the Cambrian substrate revolution); this theory implies that sufficiently complex organisms were common around 1 Ga. Another hypothesis is that protozoa such as foraminifera were responsible for the decline, favoring formation of thrombolites over stromatolites through microscopic bioturbation. Proterozoic stromatolite microfossils (preserved by permineralization in silica) include cyanobacteria and possibly some forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia. The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the Late Ordovician mass extinction and Permian–Triassic extinction event decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes. While prokaryotic cyanobacteria reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. They are thought to be largely responsible for increasing the amount of oxygen in the primeval Earth's atmosphere through their continuing photosynthesis (see Great Oxygenation Event). They use water, carbon dioxide, and sunlight to create their food. A layer of polysaccharides often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the polysaccharide layer, which can be cemented together by the calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures formed by cyanobacteria, common in the fossil record and in modern sediments. There is evidence that thrombolites form in preference to stromatolites when foraminifera are part of the biological community. The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides a well-exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic period, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx. Modern occurrence Formation Time lapse photography of modern microbial mat formation in a laboratory setting gives some revealing clues to the behavior of cyanobacteria in stromatolites. Biddanda et al. (2015) found that cyanobacteria exposed to localized beams of light moved towards the light, or expressed phototaxis, and increased their photosynthetic yield, which is necessary for survival. 
In a novel experiment, the scientists projected a school logo onto a petri dish containing the organisms, which accreted beneath the lighted region, forming the logo in bacteria. The authors speculate that such motility allows the cyanobacteria to seek light sources to support the colony. In both light and dark conditions, the cyanobacteria form clumps that then expand outwards, with individual members remaining connected to the colony via long tendrils. In harsh environments where mechanical forces may tear apart the microbial mats, these substructures may provide evolutionary benefit to the colony, affording it at least some measure of shelter and protection. Lichen stromatolites are a proposed mechanism of formation of some kinds of layered rock structure that are formed above water, where rock meets air, by repeated colonization of the rock by endolithic lichens. Saline locations Modern stromatolites are mostly found in hypersaline lakes and marine lagoons where high saline levels prevent animal grazing. One such location where excellent modern specimens can be observed is Hamelin Pool Marine Nature Reserve, Shark Bay in Western Australia. In 2010, a fifth type of chlorophyll, namely chlorophyll f, was discovered by Min Chen from stromatolites in Shark Bay. Halococcus hamelinensis, a halophilic archaeon, occurs in living stromatolites in Shark Bay where it is exposed to extreme conditions of UV radiation, salinity and desiccation. H. hamelinensis possesses genes that encode enzymes employed in the repair of UV-induced damage in DNA by the processes of nucleotide excision repair and photoreactivation. Other locations include Pampa del Tamarugal National Reserve in Chile; Lagoa Salgada, Rio Grande do Norte, Brazil, where modern stromatolites can be observed as both bioherms (domal type) and beds; and in the Puna de Atacama of the Andes. Inland stromatolites can be found in saline waters in Cuatro Ciénegas Basin, a unique ecosystem in the Mexican desert. Alchichica Lake in Puebla, Mexico, has two distinct morphologic generations of stromatolites: columnar-dome-like structures, rich in aragonite, forming near the shore line, dated back to 1,100 years before present (ybp), and spongy-cauliflower-like thrombolytic structures that dominate the lake from top to bottom, mainly composed of hydromagnesite, huntite, calcite and dated back to 2,800 ybp. The only open marine environment where modern stromatolites are known to prosper is the Exuma Cays in the Bahamas. Freshwater locations Laguna de Bacalar in Mexico's southern Yucatán Peninsula has an extensive formation of living giant microbialites (that is, stromatolites or thrombolites). The microbialite bed is over long with a vertical rise of several meters in some areas. These may be the largest sized living freshwater microbialites, or any organism, on Earth. A 1.5 km stretch of reef-forming stromatolites (primarily of the genus Scytonema) occurs in Chetumal Bay in Belize, just south of the mouth of the Rio Hondo and the Mexican border. Large microbialite towers up to 40 m high were discovered in the largest soda lake on Earth, Lake Van, in eastern Turkey. They are composed of aragonite and grow by precipitation of calcite from sub-lacustrine karst-water. Freshwater stromatolites are found in Lake Salda in southern Turkey. The waters are rich in magnesium and the stromatolite structures are made of hydromagnesite. Two instances of freshwater stromatolites are found in Canada, at Pavilion Lake and Kelly Lake in British Columbia.
Pavilion Lake has the largest known freshwater stromatolites, and NASA has conducted xenobiology research there, called the "Pavilion Lake Research Project." The goal of the project is to better understand what conditions would likely harbor life on other planets. Microbialites have been discovered in an open pit pond at an abandoned asbestos mine near Clinton Creek, Yukon, Canada. These microbialites are extremely young and presumably began forming soon after the mine closed in 1978. The combination of a low sedimentation rate, high calcification rate, and low microbial growth rate appears to result in the formation of these microbialites. Microbialites at an historic mine site demonstrates that an anthropogenically constructed environment can foster microbial carbonate formation. This has implications for creating artificial environments for building modern microbialites including stromatolites. A very rare type of non-lake dwelling stromatolite lives in the Nettle Cave at Jenolan Caves, NSW, Australia. The cyanobacteria live on the surface of the limestone and are sustained by the calcium-rich dripping water, which allows them to grow toward the two open ends of the cave which provide light. Stromatolites composed of calcite have been found in both the Blue Lake in the dormant volcano, Mount Gambier and at least eight cenote lakes including the Little Blue Lake in the Lower South-East of South Australia. See also Banded iron formation Cotham Marble Gunflint Range Laguna Negra, Catamarca Microbially induced sedimentary structure Ojos de Mar Thrombolites References Further reading External links Stromatolite photo gallery, a teaching set from Ohio State University Trace fossils Cyanobacteria Sedimentary rocks
Stromatolite
[ "Biology" ]
2,640
[ "Algae", "Cyanobacteria" ]
62,696
https://en.wikipedia.org/wiki/Guinea%20pig
The guinea pig or domestic guinea pig (Cavia porcellus), also known as the cavy or domestic cavy, is a species of rodent belonging to the genus Cavia, family Caviidae. Breeders tend to use the name "cavy" for the animal, but "guinea pig" is more commonly used in scientific and laboratory contexts. Despite their name, guinea pigs are not native to Guinea, nor are they closely related to pigs. Instead, they originated in the Andes region of South America, where wild guinea pigs can still be found today. Studies based on biochemistry and DNA hybridization suggest they are domesticated animals that do not exist naturally in the wild, but are descendants of a closely related cavy species such as C. tschudii. Originally, they were domesticated as livestock (source of meat) in the Andean region and are still consumed in some parts of the world. In Western society, the guinea pig has enjoyed widespread popularity as a pet since its introduction to Europe and North America by European traders in the 16th century. Their docile nature, friendly responsiveness to handling and feeding, and the relative ease of caring for them have continued to make guinea pigs a popular choice of household pets. Consequently, organizations devoted to the competitive breeding of guinea pigs have been formed worldwide. Through artificial selection, many specialized breeds with varying coat colors and textures have been selected by breeders. Livestock breeds of guinea pig play an important role in folk culture for many indigenous Andean peoples, especially as a food source. They are not only used in folk medicine and in community religious ceremonies but also raised for their meat. Guinea pigs are an important culinary staple in the Andes Mountains, where they are known as cuy. Lately, marketers have tried to increase their consumption outside South America. Biological experimentation on domestic guinea pigs has been carried out since the 17th century. The animals were used so frequently as model organisms in the 19th and 20th centuries that the epithet guinea pig came into use to describe a human test subject. Since that time, they have mainly been replaced by other rodents, such as mice and rats. However, they are still used in research, primarily as models to study such human medical conditions as juvenile diabetes, tuberculosis, scurvy (like humans, they require dietary intake of vitamin C), and pregnancy complications. History Cavia porcellus is not found naturally in the wild; it is likely descended from closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii. These closely related species are still commonly found in various regions of South America. Studies from 2007 to 2010 applying molecular markers, and morphometric studies on the skull and skeletal morphology of current and mummified animals, revealed the ancestor to be most likely C. tschudii. Some species of cavy, identified in the 20th century as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild. Regionally known as cuy (a Spanish word derived from Quechua quwi), the guinea pig was first domesticated as early as 5000 BC for food by tribes in the Andean region of South America (the present-day southern part of Colombia, Ecuador, Peru, and Bolivia), some thousands of years after the domestication of the South American camelids. The Moche people of ancient Peru worshipped animals and often depicted the guinea pig in their art.
Early accounts from Spanish settlers state that guinea pigs were the preferred sacrificial animal of the Inca people native to Peru. These claims are supported by archaeological digs and transcribed Quechua mythology, providing evidence that sacrificial rituals involving guinea pigs served many purposes in society such as appeasing the gods, accompanying the dead, or reading the future. From about 1200 to the Spanish conquest in 1532, the indigenous people used selective breeding to develop many varieties of domestic guinea pigs, forming the basis for some modern domestic breeds. They continue to be a food source in the region; many households in the Andean highlands raise the animal. In the early 1500s, Spanish, Dutch, and English traders took guinea pigs to Europe, where they quickly became popular as exotic pets among the upper classes and royalty, including Queen Elizabeth I. The earliest known written account of the guinea pig dates from 1547, in a description of the animal from Santo Domingo. Because cavies are not native to Hispaniola, the animal was believed to have been earlier introduced there by Spanish travelers. However, based on more recent excavations on West Indian islands, the animal may have been introduced to the Caribbean around 500 BC by ceramic-making horticulturalists from South America. It was present in the Ostionoid period on Puerto Rico, for example, long before the advent of the Spaniards. The guinea pig was first described in the West in 1554 by the Swiss naturalist Conrad Gessner. Its binomial scientific name was first used by Erxleben in 1777; it is an amalgam of Pallas' generic designation (1766) and Linnaeus' specific conferral (1758). The earliest-known European illustration of a domestic guinea pig is a painting (artist unknown) in the collection of the National Portrait Gallery in London, dated to 1580, which shows a girl in a typical Elizabethan dress holding a tortoise-shell guinea pig in her hands. She is flanked by her two brothers, one of whom holds a pet bird. The picture dates from the same period as the oldest recorded guinea pig remains in England, which are a partial cavy skeleton found at Hill Hall, an Elizabethan manor house in Essex, and dated to around 1575. Nomenclature Latin name The scientific name of the common species is Cavia porcellus, with being Latin for "little pig". Cavia is Neo-Latin; it is derived from cabiai, the animal's name in the language of the Galibi tribes once native to French Guiana. Cabiai may be an adaptation of the Portuguese çavia (now savia), which is itself derived from the Tupi word saujá, meaning rat. Guinea pig The origin of "guinea" in "guinea pig" is hard to explain. One proposed explanation is that the animals were brought to Europe by way of Guinea, leading people to think they had originated there. "Guinea" was also frequently used in English to refer generally to any far-off, unknown country, so the name may be a colorful reference to the animal's exotic origins. Another hypothesis suggests the "guinea" in the name is a corruption of "Guiana", an area in South America. A common misconception is that they were so named because they were sold for the price of a guinea coin. This hypothesis is untenable because the guinea was first struck in England in 1663, and William Harvey used the term "Ginny-pig" as early as 1653. Others believe "guinea" may be an alteration of the word coney (rabbit); guinea pigs were referred to as "pig coneys" in Edward Topsell's 1607 treatise on quadrupeds. 
How the animals came to be called "pigs" is not clear. They are built somewhat like pigs, with large heads relative to their bodies, stout necks, and rounded rumps with no tail of any consequence; some of the sounds they emit are very similar to those made by pigs, and they spend a large amount of time eating. They can survive for long periods in small quarters, like a "pig pen," and were easily transported by ship to Europe. Other languages Guinea pigs are called quwi or jaca in Quechua and cuy or cuyo (plural cuyes, cuyos) in the Spanish of Ecuador, Peru, and Bolivia. The animal's name alludes to pigs in many European languages. The German word for them is , literally "little sea pig", in Polish they are called , in Hungarian as , and in . The German word derives from the Middle High German name Merswin. This word originally meant "dolphin" and was used because of the animals' grunting sounds (which were thought to be similar). Many other, possibly less scientifically based, explanations of the German name exist. For example, sailing ships stopping to reprovision in the New World would pick up guinea pig stores, providing an easily transportable source of fresh meat. The French term is cochon d'Inde (Indian pig), or cobaye; the Dutch called it Guinees biggetje (Guinean piglet), or cavia (in some Dutch dialects it is called Spaanse rat); and in Portuguese, the guinea pig is variously referred to as cobaia, from the Tupi word via its Latinization, or as porquinho da Índia (little Indian pig). This association with pigs is not universal among European terms; for example, the common word in Spanish is conejillo de Indias (little rabbit of the Indies). The Chinese refer to the animal as (túnshǔ, "pig mouse"), and sometimes as (hélánzhū, 'Netherlands pig') or (tiānzhúshǔ, "Indian mouse"). The Japanese word for guinea pig is (morumotto), which derives from the name of another mountain-dwelling rodent, the marmot. This word is how the guinea pigs were called by Dutch traders, who first brought them to Nagasaki in 1843. The other, and less common, Japanese word for guinea pig, using kanji, is 天竺鼠 (てんじくねずみ or tenjiku-nezumi), which translates as "India rat". Biology Guinea pigs are relatively large for rodents. In pet breeds, adults typically weigh between and measure between in length. Some livestock breeds weigh when full grown. Pet breeds live an average of four to five years but may live as long as eight years. According to Guinness World Records, , the longest-lived guinea pig was 14 years, 10 months, and 2 weeks old. Most guinea pigs have fur, but one laboratory breed adopted by some pet owners, the skinny pig, is mostly furless. In contrast, several breeds have long fur, such as the Peruvian, the Silkie, and the Texel. They have four front teeth and small back teeth. Their front teeth grow continuously, so guinea pigs chew on materials such as wood to wear them down to prevent them from becoming too long. In the 1990s, a minority scientific opinion emerged proposing that caviomorphs such as guinea pigs, chinchillas, and degus are not actually rodents, and should be reclassified as a separate order of mammals (similar to the rodent-like lagomorphs which includes rabbits and hares). Subsequent research using wider sampling restored the consensus among mammalian biologists regarding the current classification of rodents, including guinea pigs, as monophyletic. Wild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle. 
They are social animals, living in the wild in small groups ("herds") that consist of several females ("sows"), a male ("boar"), and their young ("pups" not "piglets," a break with the preceding porcine nomenclature). Herds of animals move together, eating grass or other vegetation, yet do not store food. While they do not burrow themselves or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation. They are crepuscular and tend to be most active during dawn and dusk when it is harder for predators to spot them. Male and female guinea pigs do not significantly differ in appearance apart from general size. The position of the anus is very close to the genitals in both sexes. Sexing animals at a young age must be done by someone trained in the differences. Female genitals are distinguished by a Y-shaped configuration formed from a vulvar flap. While male genitals may look similar, with the penis and anus forming a similar shape, the penis will protrude if pressure is applied to the surrounding hair anterior to the genital region. The male's testes may also be visible externally from scrotal swelling. Behavior Guinea pigs can learn complex paths to food and can accurately remember a learned path for months. Their most robust problem-solving strategy is motion. While guinea pigs can jump small obstacles, they cannot jump very high. Most of them are poor climbers and are not particularly agile. They startle easily, and when they sense danger, they either freeze in place for long periods or run for cover with rapid, darting motions. Larger groups of startled guinea pigs "stampede," running in haphazard directions as a means of confusing predators. When happily excited, guinea pigs may (often repeatedly) perform little hops in the air (a movement known as "popcorning"), analogous to the ferret's war dance or rabbit happy hops (binkies). Guinea pigs are also good swimmers, although they do not like being wet and infrequently need bathing. Like many rodents, guinea pigs sometimes participate in social grooming and regularly self-groom. A milky-white substance is secreted from their eyes and rubbed into the hair during the grooming process. Groups of boars often chew each other's hair, but this is a method of establishing hierarchy within a group, rather than a social gesture. Dominance is also established through biting (especially of the ears), piloerection, aggressive noises, head thrusts, and leaping attacks. Non-sexual simulated mounting for dominance is also common among same-sex groups. Guinea pig eyesight is not as good as that of a human in terms of distance and color, but they have a wider angle of vision (about 340°) and see in partial color (dichromacy). They have well-developed senses of hearing, smell, and touch. Guinea pigs have developed a different biological rhythm from their wild counterparts and have longer periods of activity followed by short sleep in between. Activity is scattered randomly throughout the day; aside from an avoidance of intense light, no regular circadian patterns are apparent.Guinea pigs do not generally thrive when housed with other species. Larger animals may regard guinea pigs as prey, though some dogs and cats can be trained to accept them. Opinion is divided over the cohousing of guinea pigs and rabbits. Some published sources say that guinea pigs and rabbits complement each other well when sharing a cage. 
However, rabbits have different nutritional requirements; as lagomorphs, they synthesize their own Vitamin C, so the two species will not thrive if fed the same food when housed together. Rabbits may also harbor diseases (such as respiratory infections from Bordetella and Pasteurella), to which guinea pigs are susceptible. Housing guinea pigs with other rodents such as gerbils and hamsters may increase instances of respiratory and other infections, and such rodents may act aggressively toward guinea pigs. Vocalization Vocalization is the primary means of communication between members of the species. These are the most common sounds made by the guinea pig: A "wheek" is a loud noise, the name of which is onomatopoeic, also known as a whistle. An expression of general excitement may occur in response to the presence of its owner or feeding. It is sometimes used to find other guinea pigs if they are running. If a guinea pig is lost, it may wheek for assistance. A bubbling or purring sound is made when the guinea pig enjoys itself, such as when petting and holding. It may also make this sound when grooming, crawling around to investigate a new place, or when given food. A rumbling sound is normally related to dominance within a group, though it can also come as a response to being scared or angry. In the case of being scared, the rumble often sounds higher, and the body vibrates shortly. While courting, a male usually purrs deeply, swaying and circling the female in a behavior called rumblestrutting. A low rumble while walking away reluctantly shows passive resistance. Chutting and whining are sounds made in pursuit situations by the pursuer and pursuee, respectively. A chattering sound is made by rapidly gnashing the teeth, and is generally a sign of warning. Guinea pigs tend to raise their heads when making this sound. Squealing or shrieking is a high-pitched sound of discontent in response to pain or danger. Chirping, a less common sound likened to bird song, seems to be related to stress or discomfort or when a baby guinea pig wants to be fed. Very rarely, the chirping will last for several minutes. Reproduction Males (boars) reach sexual maturity in 3–5 weeks. Similarly, females (sows) can be fertile as early as four weeks old and carry litters before becoming fully grown adults. A sow can breed year-round (with spring being the peak). A sow can have as many as five litters in a year, but six is theoretically possible. Unlike the offspring of most rodents, which are altricial at birth, newborn cavy pups are precocial, and are well-developed with hair, teeth, claws, and partial eyesight. The pups are immediately mobile and begin eating solid food immediately, though they continue to suckle. Sows can once again become pregnant 6–48 hours after giving birth, but it is not healthy for a female to be constantly pregnant. The gestation period lasts from , with an average of . Because of the long gestation period and the large size of the pups, pregnant sows may become large and eggplant-shaped, although the change in size and shape varies depending upon the size of the litter. Litter size ranges from one to six, with three being the average; the largest recorded litter size is 9. The guinea pig mother only has two nipples, but she can readily raise the more average-sized litters of 2 to 4 pups. In smaller litters, difficulties may occur during labour due to oversized pups. 
Large litters result in higher incidences of stillbirth, but because the pups are delivered at an advanced stage of development, lack of access to the mother's milk has little effect on the mortality rate of newborns. Cohabitating females assist in mothering duties if lactating; guinea pigs practice alloparental care, in which a sow may adopt the pups of another. This might take place if the original parents die or are, for some reason, separated from them. This behavior is common and is seen in many other animal species, such as the elephant. Toxemia of pregnancy (hypertension) is a common problem and kills many pregnant females. Signs of toxemia include anorexia (loss of appetite), lack of energy, excessive salivation, a sweet or fruity breath odor due to ketones, and seizures in advanced cases. Pregnancy toxemia appears to be most common in hot climates. Other serious complications during pregnancy can include a prolapsed uterus, hypocalcaemia, and mastitis. Females that do not give birth may develop an irreversible fusing or calcified cartilage of the pubic symphysis, a joint in the pelvis, which may occur after six months of age. If they become pregnant after this has happened, the birth canal may not widen sufficiently, which may lead to dystocia and death as they attempt to give birth. Husbandry Living environment Domestic guinea pigs generally live in cages, although some owners of large numbers of cavies dedicate entire rooms to their pets. Wire mesh floors can cause injury and may be associated with an infection commonly known as bumblefoot (ulcerative pododermatitis), so cages with solid bottoms, where the animal walks directly on the bedding, are typically used. Large cages allow for adequate running space and can be constructed from wire grid panels and plastic sheeting, a style known as C&C, or "cubes and coroplast." Red cedar (Eastern or Western) and pine, both softwoods, were commonly used as bedding. Still, these materials are believed to contain harmful phenols (aromatic hydrocarbons) and oils. Bedding materials made from hardwoods (such as aspen), paper products, and corn cobs are alternatives. Guinea pigs tend to be messy; they often jump into their food bowls or kick bedding and feces into them, and their urine sometimes crystallizes on cage surfaces, making it difficult to remove. After its cage has been cleaned, a guinea pig typically urinates and drags its lower body across the floor of the cage to mark its territory. Male guinea pigs may mark their territory in this way when they are put back into their cages after being taken out. Guinea pigs thrive in groups of two or more; groups of sows or groups of one or more sows and a neutered boar are common combinations, but boars can sometimes live together. Guinea pigs learn to recognize and bond with other individual guinea pigs, and tests show that a boar's neuroendocrine stress response to a strange environment is significantly lowered in the presence of a bonded female but not with unfamiliar females. Groups of boars may also get along, provided their cage has enough space, they are introduced at an early age, and no females are present. In Switzerland, where owning a single guinea pig is considered harmful to its well-being, keeping a guinea pig without a companion is illegal. There is a service to rent guinea pigs, to temporarily replace a dead cage-mate. Sweden has similar laws against keeping a guinea pig by itself. 
Diet The guinea pig's natural diet is grass; their molars are particularly suited for grinding plant matter and grow continuously throughout their life. Most mammals that graze are large and have a long digestive tract. Guinea pigs have much longer colons than most rodents. Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. But to get nutrients out of hard-to-digest fiber, guinea pigs ferment fiber in the cecum (in the GI tract) and then expel the contents as cecotropes, which are reingested (cecotrophy). The cecotropes are then absorbed in the small intestine to utilize the nutrients. The cecotropes are eaten directly from the anus unless the guinea pig is pregnant or obese. They share this behavior with lagomorphs (rabbits, hares, pikas) and some other animals. In geriatric boars or sows (rarely in young ones), the muscles which allow the cecotropes to be expelled from the anus can become weak. This creates a condition known as fecal impaction, which prevents the animal from redigesting cecotropes even though harder pellets may pass through the impacted mass. The condition may be temporarily alleviated by a human carefully removing the impacted feces from the anus. Guinea pigs benefit from a diet of fresh grass hay, such as timothy hay, in addition to food pellets, which are often based on timothy hay. Alfalfa hay is also a popular food choice, and most guinea pigs will eat large amounts of alfalfa when offered it, though some controversy exists over offering alfalfa to adult guinea pigs. Some pet owners and veterinary organizations have advised that, as a legume rather than a grass hay, alfalfa consumed in large amounts may lead to obesity, as well as bladder stones from the excess calcium in all animals except for pregnant and very young guinea pigs. However, published scientific sources mention alfalfa as a food source that can replenish protein, amino acids, and fiber. Like humans, but unlike most other mammals, guinea pigs cannot synthesize vitamin C and must obtain this vital nutrient from food. If guinea pigs do not ingest enough vitamin C, they can suffer from potentially fatal scurvy. They require about 10 mg of vitamin C daily (20 mg if pregnant), which can be obtained through fresh, raw fruits and vegetables (such as broccoli, apple, cabbage, carrot, celery, and spinach) or dietary supplements or by eating fresh pellets designed for guinea pigs, if they have been handled properly. Healthy diets for guinea pigs require a complex balance of calcium, magnesium, phosphorus, potassium, and hydrogen ions; but adequate amounts of vitamins A, D, and E are also necessary. Poor diets for guinea pigs have been associated with muscular dystrophy, metastatic calcification, difficulties with pregnancy, vitamin deficiencies, and teeth problems. Guinea pigs tend to be fickle eaters when it comes to fresh fruits and vegetables after having learned early in life what is and is not appropriate to consume. Their eating habits may be difficult to change after maturity. They do not respond well to sudden changes in their diet, and they may stop eating and starve rather than accept new food types. A constant supply of hay is generally recommended, as guinea pigs feed continuously and may develop bad habits if food is not present, such as chewing on their hair. Being rodents, as their teeth grow constantly (as do their nails, like humans), they routinely gnaw on things, lest their teeth become too large for their jaw (a common problem in rodents). 
Guinea pigs chew on cloth, paper, plastic, and rubber if available. Guinea pig owners may "Guinea Pig proof" their household, especially if they are free to roam, to avoid any destruction or harm to the guinea pig itself. Some plants are poisonous to guinea pigs, including bracken, bryony, buttercup, charlock, deadly nightshade, foxglove, hellebore, hemlock, lily of the valley, mayweed, monkshood, privet, ragwort, rhubarb, speedwell, toadflax (both Linaria vulgaris and Linaria dalmatica), and wild celery. Additionally, any plant which grows from a bulb (e.g., tulip or onion) is normally considered poisonous, as well as ivy and oak tree leaves. Health problems Common ailments in domestic guinea pigs include respiratory tract infections, diarrhea, scurvy (vitamin C deficiency, typically characterized by sluggishness), abscesses due to infection (often in the neck, due to hay embedded in the throat, or from external scratches), and infections by lice, mites, or fungus. Mange mites (Trixacarus caviae) are a common cause of hair loss, and other symptoms may also include excessive scratching, unusually aggressive behavior when touched (due to pain), and, in some instances, seizures. Guinea pigs may also suffer from "running lice" (Gliricola porcelli), a small, white insect that can be seen moving through the hair; their eggs, which appear as black or white specks attached to the hair, are sometimes referred to as "static lice." Other causes of hair loss can be hormonal upsets caused by underlying medical conditions such as ovarian cysts. Foreign bodies, especially tiny pieces of hay or straw, can become lodged in the eyes of guinea pigs, resulting in excessive blinking, tearing, and, in some cases, an opaque film over the eye due to corneal ulcer. Hay or straw dust can also cause sneezing. While it is normal for guinea pigs to sneeze periodically, frequent sneezing may be a symptom of pneumonia, especially in response to atmospheric changes. Pneumonia may also be accompanied by torticollis and can be fatal. Because the guinea pig has a stout, compact body, it more easily tolerates excessive cold than excessive heat. Its normal body temperature is , so its ideal ambient air temperature range is similar to a human's, about . Consistent ambient temperatures in excess of have been linked to hyperthermia and death, especially among pregnant sows. Guinea pigs are not well suited to environments that feature wind or frequent drafts, and respond poorly to extremes of humidity outside of the range of 30–70%. Guinea pigs are prey animals whose survival instinct is to mask pain and signs of illness, and many times, health problems may not be apparent until a condition is severe or in its advanced stages. Treatment of disease is made more difficult by the extreme sensitivity guinea pigs have to most antibiotics, including penicillin, which kill off the intestinal flora and quickly bring on episodes of diarrhea and in some cases, death. Similar to the inherited genetic diseases of other breeds of animals (such as hip dysplasia in canines), some genetic abnormalities of guinea pigs have been reported. Most commonly, the roan coloration of Abyssinian guinea pigs is associated with congenital eye disorders and problems with the digestive system. Other genetic disorders include "waltzing disease" (deafness coupled with a tendency to run in circles), palsy, and tremor conditions. 
Importance As pets Social behaviors If handled correctly early in life, guinea pigs become amenable to being picked up and carried and seldom bite or scratch. They are timid explorers who often hesitate to escape their cage even when an opportunity presents itself. Still, they show considerable curiosity when allowed to walk freely, especially in familiar and safe terrain. Guinea pigs that become familiar with their owner will whistle on the owner's approach; they will also learn to whistle in response to the rustling of plastic bags or the opening of refrigerator doors, where their food is most commonly stored. Coats and grooming Domesticated guinea pigs occur in many breeds that have developed since their introduction to Europe and North America. These varieties vary in hair and color composition. The most common varieties found in pet stores are the English shorthair (also known as the American), which has a short, smooth coat, and the Abyssinian, whose coat is ruffled with cowlicks, or rosettes. Also popular among breeders are the Peruvian and the Sheltie (or Silkie), both straight longhair breeds, and the Texel, a curly longhair. Grooming of guinea pigs is primarily accomplished using combs or brushes. Shorthair breeds are typically brushed weekly, while longhair breeds may require daily grooming. Clubs and associations Cavy clubs and associations dedicated to the showing and breeding of guinea pigs have been established worldwide. The American Cavy Breeders Association, an adjunct to the American Rabbit Breeders' Association, is the governing body in the United States and Canada. The British Cavy Council governs cavy clubs in the United Kingdom. Similar organizations exist in Australia (Australian National Cavy Council) and New Zealand (New Zealand Cavy Council). Each club publishes its standard of perfection and determines which breeds are eligible for showing. Human allergies Allergic symptoms, including rhinitis, conjunctivitis, and asthma, have been documented in laboratory animal workers who come into contact with guinea pigs. Allergic reactions following direct exposure to guinea pigs in domestic settings have also been reported. Two major guinea pig allergens, Cav p I and Cav p II, have been identified in guinea pig fluids (urine and saliva) and guinea pig dander. People who are allergic to guinea pigs are usually allergic to hamsters and gerbils, as well. Allergy shots can successfully treat an allergy to guinea pigs. However, treatment can take up to 18 months. Traditional uses in Andean populations Folklore traditions involving guinea pigs are numerous; they are exchanged as gifts, used in customary social and religious ceremonies, and frequently referred to in spoken metaphors. They also are used in traditional healing rituals by folk doctors, or curanderos, who use the animals to diagnose diseases such as jaundice, rheumatism, arthritis, and typhus. They are rubbed against the bodies of the sick and are seen as a supernatural medium. Black guinea pigs are considered especially useful for diagnoses. The animal may be cut open and its entrails examined to determine whether the cure was effective. These methods are widely accepted in many parts of the Andes, where Western medicine is unavailable or distrusted. Peruvians consume an estimated 65 million guinea pigs each year. The animal is so entrenched in the culture that one famous painting of the Last Supper in the main cathedral in Cusco shows Christ and his disciples dining on guinea pig.
The animal remains an important aspect of certain religious events in both rural and urban areas of Peru. A religious celebration whose name translates as "collecting the cuys" is a major festival in many villages in the Antonio Raimondi province of eastern Peru and is celebrated in smaller ceremonies in Lima. It is a syncretistic event, combining elements of Catholicism and pre-Columbian religious practices, and revolves around the celebration of local patron saints. The exact form the celebration takes differs from town to town; in some localities, a sirvinti (servant) is appointed to go from door to door, collecting donations of guinea pigs, while in others, guinea pigs may be brought to a communal area to be released in a mock bullfight. Meals such as cuy chactado are always served as part of these festivities, and the killing and serving of the animal are framed by some communities as a symbolic satire of local politicians or important figures. In the Tungurahua and Cotopaxi provinces of central Ecuador, guinea pigs are employed in the celebrations surrounding the feast of Corpus Christi as part of the Ensayo, which is a community meal, and the Octava, where castillos (greased poles) are erected with prizes tied to the crossbars, from which several guinea pigs may be hung. The Peruvian town of Churin has an annual festival that involves dressing guinea pigs in elaborate costumes for competition. There are also guinea pig festivals held in Huancayo, Cusco, Lima, and Huacho, featuring costumes and guinea pig dishes. Most guinea pig celebrations occur on National Guinea Pig Day (Día Nacional del Cuy) across Peru on the second Friday of October. In popular culture and media As a result of their widespread popularity, especially in households with children, guinea pigs have shown a presence in culture and media. Some noted appearances of the animal in literature include the short story "Pigs Is Pigs" by Ellis Parker Butler, which is a tale of bureaucratic incompetence. Two guinea pigs held at a railway station breed unchecked while humans argue whether they are "pigs" or "pets" to determine freight charges. Butler's story, in turn, inspired the Star Trek: The Original Series episode "The Trouble with Tribbles", written by David Gerrold. In children's literature The Fairy Caravan, a novel by Beatrix Potter, and Michael Bond's Olga da Polga series for children both feature guinea pigs as protagonists. Another appearance is in The Magician's Nephew by C. S. Lewis: in the first (chronologically) of his The Chronicles of Narnia series, a guinea pig is the first creature to travel to the Wood between the Worlds. In Ursula Dubosarsky's Maisie and the Pinny Gig, a little girl has a recurrent dream about a giant guinea pig, while guinea pigs feature significantly in several of Dubosarsky's other books, including the young adult novel The White Guinea Pig and The Game of the Goose. In film and television Guinea pigs have also been featured in film and television. In the TV movie Shredderman Rules, the main character and the main character's crush both have guinea pigs, which play a minor part in the plot. A guinea pig named Rodney, voiced by Chris Rock, was a prominent character in the 1998 film Dr. Dolittle, and Linny the Guinea Pig is a co-star on Nick Jr.'s Wonder Pets. Guinea pigs were used in some major advertising campaigns in the 1990s and 2000s, notably for Egg Banking plc, Snapple, and Blockbuster Video.
In the South Park season 12 episode "Pandemic 2: The Startling", giant guinea pigs dressed in costumes rampage over the Earth. The 2009 Walt Disney Pictures movie G-Force features a group of highly intelligent guinea pigs trained as operatives of the U.S. government. As livestock In South America Guinea pigs (called cuy, cuye, or curí) were originally domesticated for their meat in the Andes. Traditionally, the animal was reserved for ceremonial meals and as a delicacy by indigenous people in the Andean highlands. Still, since the 1960s, it has become more socially acceptable for consumption by all people. It continues to be a significant part of the diet in Peru and Bolivia, particularly in the Andes Mountains highlands; it is also eaten in some areas of Ecuador (mainly in the Sierra) and in Colombia, mainly in the southwestern part of the country (Cauca and Nariño departments). Because guinea pigs require much less room than traditional livestock and reproduce extremely quickly, they are a more profitable source of food and income than many traditional stock animals, such as pigs and cattle; moreover, they can be raised in an urban environment. Both rural and urban families raise guinea pigs for supplementary income, and the animals are commonly bought and sold at local markets and large-scale municipal fairs. Guinea pig meat is high in protein and low in fat and cholesterol, and is described as being similar to rabbit and the dark meat of chicken. The animal may be served fried (chactado or frito), broiled (asado), or roasted (al horno), and in urban restaurants may also be served in a casserole or a fricassee. Ecuadorians commonly consume sopa or locro de cuy, a soup dish. Pachamanca or huatia, an earth oven cooking method, is also popular, and cuy cooked this way is usually served with chicha (corn beer) in traditional settings. In the United States, Europe, and Japan Andean immigrants in New York City raise and sell guinea pigs for meat, and some South American restaurants in major cities in the United States serve cuy as a delicacy. In the 1990s and 2000s, La Molina University began exporting large-breed guinea pigs to Europe, Japan, and the United States in the hope of increasing human consumption outside of countries in northern South America. Sub-Saharan Africa Efforts have been made to promote guinea pig husbandry in developing countries of West Africa, where they occur more widely than generally known because they are usually not covered by livestock statistics. However, it has not been known when and where the animals have been introduced to Africa. In Cameroon, they are widely distributed. In the Democratic Republic of the Congo, they can be found both in peri-urban environments as well as in rural regions, for example, in South Kivu. They are also frequently held in rural households in Iringa Region of southwestern Tanzania. Peruvian breeding program Peruvian research universities, especially La Molina National Agrarian University, began experimental programs in the 1960s intending to breed larger-sized guinea pigs. Subsequent university efforts have sought to change breeding and husbandry procedures in South America to make the raising of guinea pigs as livestock more economically sustainable. The variety of guinea pig produced by La Molina is fast-growing and can weigh . All the large breeds of guinea pig are known as cuy mejorados and the pet breeds are known as cuy criollos. 
The three original lines out of Peru were the Perú (weighing by 2 weeks), the Andina, and the Inti. In scientific research The use of guinea pigs in scientific experimentation dates back at least to the 17th century, when the Italian biologists Marcello Malpighi and Carlo Fracassati conducted vivisections of guinea pigs in their examinations of anatomic structures. In 1780, Antoine Lavoisier used a guinea pig in his experiments with the calorimeter, a device used to measure heat production. Guinea pigs played a major role in the establishment of germ theory in the late 19th century, through the experiments of Louis Pasteur, Émile Roux, and Robert Koch. Guinea pigs have been launched into orbital space flight several times, first by the USSR on the Sputnik 9 biosatellite of March 9, 1961 – with a successful recovery. China also launched and recovered a biosatellite in 1990 which included guinea pigs as passengers. Guinea pigs remained popular laboratory animals until the later 20th century: about 2.5 million guinea pigs were used annually in the U.S. for research in the 1960s, but that total decreased to about 375,000 by the mid-1990s. As of 2007, they constitute about 2% of the current total of laboratory animals. In the past, they were widely used to standardize vaccines and antiviral agents; they were also often employed in studies on the production of antibodies in response to extreme allergic reactions, or anaphylaxis. Less common uses included research in pharmacology and irradiation. Since the middle 20th century, they have been replaced in laboratory contexts primarily by mice and rats. This is in part because research into the genetics of guinea pigs has lagged behind that of other rodents, although geneticists W. E. Castle and Sewall Wright made some contributions to this area of study, especially regarding coat color. The guinea pig genome was sequenced in 2008 as part of the Mammalian Genome Project, but the guinea pig sequence scaffolds have not been assigned to chromosomes. The guinea pig was most extensively used in research and diagnosis of infectious diseases. Common uses included identification of brucellosis, Chagas disease, cholera, diphtheria, foot-and-mouth disease, glanders, Q fever, Rocky Mountain spotted fever, and various strains of typhus. They are still frequently used to diagnose tuberculosis since they are easily infected by human tuberculosis bacteria. Because guinea pigs are one of the few animals which, like humans and other primates, cannot synthesize vitamin C but must obtain it from their diet, they are ideal for researching scurvy. From the accidental discovery in 1907 that scurvy could be induced in guinea pigs to their use to prove the chemical structure of the "scorbutic factor" in 1932, the guinea pig model proved a crucial part of vitamin C research. Complement, an important component for serology, was first isolated from the blood of the guinea pig. Guinea pigs have an unusual insulin mutation, and are a suitable species for the generation of anti-insulin antibodies. Present at a level 10 times that found in other mammals, the insulin in guinea pigs may be important in growth regulation, a role usually played by growth hormone. Additionally, guinea pigs have been identified as model organisms for the study of juvenile diabetes and, because of the frequency of pregnancy toxemia, of pre-eclampsia in human females. 
Their placental structure is similar to that of humans, and their gestation period can be divided into trimesters that resemble the stages of fetal development in humans. Guinea pig strains used in scientific research are primarily outbred strains. Aside from the typical American or English stock, the two main outbred strains in laboratory use are the Hartley and Dunkin-Hartley; these English strains are albino, although pigmented strains are also available. Inbred strains are less common and are usually used for very specific research, such as immune system molecular biology. Of the inbred strains that have been created, the two still used with any frequency are, following Sewall Wright's designations, "Strain 2" and "Strain 13". Hairless breeds of guinea pigs have been used in scientific research since the 1980s, particularly for dermatological studies. A hairless and immunodeficient breed was the result of a spontaneous genetic mutation in inbred laboratory strains from the Hartley stock at the Eastman Kodak Company in 1979. An immunocompetent hairless breed was also identified by the Institute Armand Frappier in 1978, and Charles River Laboratories has reproduced this breed for research since 1982. Cavy fanciers then began acquiring hairless breeds, and the pet hairless varieties are referred to as "skinny pigs." Metaphorical usage In English, the term "guinea pig" is commonly used as a metaphor for a subject of scientific experimentation, or in modern times a subject of any experiment or test. This usage dates back to the early 20th century: the earliest examples cited by the Oxford English Dictionary date from 1913 and 1920. In 1933, Consumers Research founders F. J. Schlink and Arthur Kallet wrote a book entitled 100,000,000 Guinea Pigs, extending the metaphor to consumer society. The book became a national bestseller in the United States, thus further popularizing the term, and spurred the growth of the consumer protection movement. During World War II, the Guinea Pig Club was established at Queen Victoria Hospital, East Grinstead, Sussex, England, as a social club and mutual support network for the patients of plastic surgeon Archibald McIndoe, who were undergoing previously untested reconstruction procedures. The negative connotation of the term was later employed in the novel The Guinea Pigs (1970) by Czech author Ludvík Vaculík as an allegory for Soviet totalitarianism. See also Rodents as pets Peter Gurney, guinea pig rights advocate Save the Newchurch Guinea Pigs, against breeding for animal research Kurloff cell, special cells found in the blood and organs of guinea pigs References Sources External links American Cavy Breeders' Association (ACBA) View the guinea pig genome on Ensembl Cavies Domesticated animals Rodents of South America Fauna of the Andes Mammals of Peru Mammals of Bolivia Mammals of Colombia Bolivian cuisine Ecuadorian cuisine Peruvian cuisine Mammals described in 1758 Taxa named by Carl Linnaeus Animal models Pleistocene rodents Quaternary mammals of South America Extant Pleistocene first appearances
Guinea pig
[ "Biology" ]
9,487
[ "Model organisms", "Animal models" ]
62,709
https://en.wikipedia.org/wiki/Autogyro
An autogyro (from Greek autos and gyros, "self-turning"), or gyroplane, is a class of rotorcraft that uses an unpowered rotor in free autorotation to develop lift. Part 1 (Definitions and Abbreviations) of Subchapter A of Chapter I of Title 14 of the U.S. Code of Federal Regulations states that gyroplane "means a rotorcraft whose rotors are not engine-driven, except for initial starting, but are made to rotate by action of the air when the rotorcraft is moving; and whose means of propulsion, consisting usually of conventional propellers, is independent of the rotor system." While similar to a helicopter rotor in appearance, the autogyro's unpowered rotor disc must have air flowing upward across it to make it rotate. Forward thrust is provided independently, by an engine-driven propeller. It was originally named the autogiro by its Spanish inventor and engineer, Juan de la Cierva, in his attempt to create an aircraft that could fly safely at low speeds. He first flew one on 9 January 1923, at Cuatro Vientos Airport in Madrid. The aircraft resembled the fixed-wing aircraft of the day, with a front-mounted engine and propeller. The term became trademarked by the Cierva Autogiro Company. De la Cierva's Autogiro is considered the predecessor of the modern helicopter. The term gyrocopter (derived from helicopter) was used by E. Burke Wilford, who developed the Reiseler Kreiser feathering-rotor-equipped gyroplane in the first half of the twentieth century. Gyroplane was later adopted as a trademark by Bensen Aircraft. The success of the Autogiro garnered the interest of industrialists, and under license from de la Cierva in the 1920s and 1930s, the Pitcairn & Kellett companies made further innovations. Late-model autogyros patterned after Etienne Dormoy's Buhl A-1 Autogyro and Igor Bensen's designs feature a rear-mounted engine and propeller in a pusher configuration. Principle of operation An autogyro is characterized by a free-spinning rotor that turns because of the passage of air through the rotor from below. The vertical component of the total aerodynamic reaction of the rotor gives lift to the vehicle, sustaining it in the air. A separate propeller provides forward thrust and can be placed in a puller configuration, with the engine and propeller at the front of the fuselage, or in a pusher configuration, with the engine and propeller at the rear of the fuselage. Whereas a helicopter works by forcing the rotor blades through the air, drawing air from above, the autogyro rotor blade generates lift in the same way as a glider's wing, by changing the angle of the air as the air moves upward and backward relative to the rotor blade. The free-spinning blades turn by autorotation; the rotor blades are angled so that they not only give lift, but the angle of the blades causes the lift to accelerate the blades' rotation rate until the rotor turns at a stable speed with the drag force and the thrust force in balance. Because the craft must be moving forward with respect to the surrounding air to force air through the overhead rotor, autogyros are generally not capable of vertical takeoff (except in a strong headwind). A few types, such as the Air & Space 18A, have demonstrated short takeoff and landing capability. Pitch control is achieved by tilting the rotor fore and aft, and roll control by tilting it laterally. The tilt of the rotor can be effected by utilizing a tilting hub (Cierva), a swashplate (Air & Space 18A), or servo-flaps. A rudder provides yaw control.
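Relating to the autorotation principle described above, the following is a minimal illustrative sketch; it is not drawn from the article itself, and every symbol in it is an assumption introduced only for this example. In steady flight the unpowered rotor settles at the rotational speed at which the net aerodynamic torque on the blades is zero, while the vertical component of the rotor's total reaction carries the aircraft's weight. In a simplified, lumped-coefficient form this balance can be written as

\[
L \;=\; \tfrac{1}{2}\,\rho\,(\Omega R)^{2}\,A\,C_{L,\mathrm{eff}} \;=\; m g,
\qquad
Q_{\mathrm{aero}}(\Omega) \;=\; 0 \;\;\Rightarrow\;\; \dot{\Omega} \;=\; 0,
\]

where \(\rho\) is the air density, \(\Omega\) the rotor angular speed, \(R\) the rotor radius, \(A = \pi R^{2}\) the disc area, \(C_{L,\mathrm{eff}}\) an effective lift coefficient, and \(m\) the aircraft mass. Within the normal working range, a rotor that slows down sees a larger blade angle of attack, which tilts the blade force vector forward and accelerates the rotor back toward its equilibrium speed; this is why an autorotating rotor holds a stable rotational speed rather than running away or stopping.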
On pusher configuration autogyros, the rudder is typically placed in the propeller slipstream to maximize yaw control at low airspeed (but not always, as seen in the McCulloch J-2, with twin rudders placed outboard of the propeller arc). Flight controls There are three primary flight controls: control stick, rudder pedals, and throttle. Typically, the control stick is termed the cyclic and tilts the rotor in the desired direction to provide pitch and roll control (some autogyros do not tilt the rotor relative to the airframe, or only do so in one dimension, and have conventional control surfaces to vary the remaining degrees of freedom). The rudder pedals provide yaw control, and the throttle controls engine power. Secondary flight controls include the rotor transmission clutch, also known as a pre-rotator, which when engaged drives the rotor to start it spinning before takeoff, and collective pitch to reduce blade pitch before driving the rotor. Collective pitch controls are not usually fitted to autogyros but can be found on the Air & Space 18A, McCulloch J-2 and the Westermayer Tragschrauber, and can provide near VTOL performance. Pusher vs tractor configuration Modern autogyros typically follow one of two basic configurations. The most common design is the pusher configuration, where the engine and propeller are located behind the pilot and rotor mast, such as in the Bensen "Gyrocopter". Its main advantages are the simplicity and lightness of its construction and the unobstructed visibility. It was developed in the decades following World War II by Igor Bensen, who also founded the Popular Rotorcraft Association (PRA) to help it become more widespread. Less common today is the tractor configuration. In this version, the engine and propeller are located at the front of the aircraft, ahead of the pilot and rotor mast. This was the primary configuration in early autogyros but became less common. Nonetheless, the tractor configuration has some advantages compared to a pusher, namely greater yaw stability (as the center of mass is farther away from the rudder), and greater ease in aligning the center of thrust with the center of mass to prevent "bunting" (engine thrust overwhelming the pitch control). History Juan de la Cierva was a Spanish engineer, inventor, pilot, and aeronautical enthusiast. In 1921, he participated in a design competition to develop a bomber for the Spanish military. De la Cierva designed a three-engined aircraft, but during an early test flight, the bomber stalled and crashed. De la Cierva was troubled by the stall phenomenon and vowed to develop an aircraft that could fly safely at low airspeeds. The result was the first successful rotorcraft, which he named autogiro in 1923. De la Cierva's autogiro used an airplane fuselage with a forward-mounted propeller and engine, an un-powered rotor mounted on a mast, and a horizontal and vertical stabilizer. His aircraft became the predecessor of the modern helicopter. Early development After four years of experimentation, de la Cierva invented the first practical rotorcraft, the autogyro (autogiro in Spanish), in 1923. His first three designs (C.1, C.2, and C.3) were unstable because of aerodynamic and structural deficiencies in their rotors. His fourth design, the C.4, made the first documented flight of an autogyro on 17 January 1923, piloted by Alejandro Gomez Spencer at Cuatro Vientos airfield in Madrid, Spain (9 January according to de la Cierva).
De la Cierva had fitted the rotor of the C.4 with flapping hinges to attach each rotor blade to the hub. The flapping hinges allowed each rotor blade to flap, or move up and down, to compensate for dissymmetry of lift, the difference in lift produced between the right and left sides of the rotor as the autogyro moves forward. Three days later, the engine failed shortly after takeoff and the aircraft descended slowly and steeply to a safe landing, validating de la Cierva's efforts to produce an aircraft that could be flown safely at low airspeeds. De la Cierva developed his C.6 model with the assistance of Spain's Military Aviation establishment, having expended all his funds on the development and construction of the first five prototypes. The C.6 first flew in February 1925, piloted by Captain Joaquín Loriga, including a flight of from Cuatro Vientos airfield to Getafe airfield in about eight minutes, a significant accomplishment for any rotorcraft of the time. Shortly after de la Cierva's success with the C.6, he accepted an offer from Scottish industrialist James G. Weir to establish the Cierva Autogiro Company in England, following a demonstration of the C.6 before the British Air Ministry at RAE Farnborough, on 20 October 1925. Britain had become the world centre of autogyro development. A crash in February 1926, caused by blade root failure, led to an improvement in rotor hub design. A drag hinge was added in conjunction with the flapping hinge to allow each blade to move fore and aft and relieve in-plane stresses, generated as a byproduct of the flapping motion. This development led to the Cierva C.8, which, on 18 September 1928, made the first rotorcraft crossing of the English Channel followed by a tour of Europe. United States industrialist Harold Frederick Pitcairn, on learning of the successful flights of the autogyro, visited de la Cierva in Spain. In 1928, he visited him again, in England, after taking a C.8 L.IV test flight piloted by Arthur H.C.A. Rawson. Being particularly impressed with the autogyro's safe vertical descent capability, Pitcairn purchased a C.8 L.IV with a Wright Whirlwind engine. Arriving in the United States on 11 December 1928 accompanied by Rawson, this autogyro was redesignated C.8W. Subsequently, production of autogyros was licensed to several manufacturers, including the Pitcairn Autogiro Company in the United States and Focke-Wulf of Germany. In 1927, German engineer Engelbert Zaschka invented a combined helicopter and autogyro. The principal advantage of the Zaschka machine is its ability to remain motionless in the air for any length of time and to descend in a vertical line so that a landing could be accomplished on the flat roof of a large house. In appearance, the machine does not differ much from the ordinary monoplane, but the carrying wings revolve around the body. Development of the autogyro continued in the search for a means to accelerate the rotor before takeoff (called prerotating). Rotor drives initially took the form of a rope wrapped around the rotor axle and then pulled by a team of men to accelerate the rotor; this was followed by a long taxi to bring the rotor up to speed sufficient for takeoff. The next innovation was flaps on the tail to redirect the propeller slipstream into the rotor while on the ground. This design was first tested on a C.19 in 1929. Efforts in 1930 had shown that the development of a light and efficient mechanical transmission was not a trivial undertaking.
In 1932 the Pitcairn-Cierva Autogiro Company of Willow Grove, Pennsylvania, United States, solved this problem with a transmission driven by the engine. Buhl Aircraft Company produced its Buhl A-1, the first autogyro with a propulsive rear motor, designed by Etienne Dormoy and meant for aerial observation (motor behind pilot and camera). It had its maiden flight on 15 December 1931. De la Cierva's early autogyros were fitted with fixed rotor hubs, small fixed wings, and control surfaces like those of a fixed-wing aircraft. At low airspeeds, the control surfaces became ineffective and could readily lead to loss of control, particularly during landing. In response, de la Cierva developed a direct control rotor hub, which could be tilted in any direction by the pilot. De la Cierva's direct control was first developed on the Cierva C.19 Mk.V and saw production in the Cierva C.30 series of 1934. In March 1934, this type of autogyro became the first rotorcraft to take off and land on the deck of a ship, when a C.30 performed trials on board the Spanish navy seaplane tender Dédalo off Valencia. Later that year, during the leftist Asturias revolt in October, an autogyro made a reconnaissance flight for the loyal troops, marking the first military employment of a rotorcraft. When improvements in helicopters made them practical, autogyros became largely neglected. Also, they were susceptible to ground resonance. They were, however, used in the 1930s by major newspapers, and by the United States Postal Service for the mail service between cities in the northeast. Winter War During the Winter War of 1939–1940, the Red Army Air Force used armed Kamov A-7 autogyros to provide fire correction for artillery batteries, carrying out 20 combat flights. The A-7 was the first rotary-wing aircraft designed for combat, armed with one 7.62×54mmR PV-1 machine gun, a pair of Degtyaryov machine guns, and six RS-82 rockets or four FAB-100 bombs. World War II The Avro Rota autogyro, a military version of the Cierva C.30, was used by the Royal Air Force to calibrate coastal radar stations during and after the Battle of Britain. In World War II, Germany pioneered a very small gyroglider rotor kite, the Focke-Achgelis Fa 330 "Bachstelze" (wagtail), towed by U-boats to provide aerial surveillance. The Imperial Japanese Army developed the Kayaba Ka-1 autogyro for reconnaissance, artillery-spotting, and anti-submarine uses. The Ka-1 was based on the Kellett KD-1 first imported to Japan in 1938. The craft was initially developed for use as an observation platform and for artillery spotting duties. The army liked the craft's short take-off run, and especially its low maintenance requirements. Production began in 1941, with the machines assigned to artillery units for spotting the fall of shells. These carried two crewmen: a pilot and a spotter. Later, the Japanese Army commissioned two small aircraft carriers intended for coastal antisubmarine (ASW) duties. The spotter's position on the Ka-1 was modified to carry one small depth charge. Ka-1 ASW autogyros operated from shore bases as well as the two small carriers. They appear to have been responsible for at least one submarine sinking. With the beginning of the German invasion of the USSR in June 1941, the Soviet Air Force organized new courses for training Kamov A-7 aircrew and ground support staff.
In August 1941, by decision of the Red Army's chief artillery directorate, the trained flight group and five combat-ready A-7 autogyros were formed into the 1st autogyro artillery spotting squadron, which was attached to the 24th Army of the Soviet Air Force and was active in combat in the area around Elnya near Smolensk. From 30 August to 5 October 1941 the autogyros made 19 combat sorties for artillery spotting. Not one autogyro was lost in action, but the unit was disbanded in 1942 due to the shortage of serviceable aircraft. Postwar developments The autogyro was resurrected after World War II when Dr. Igor Bensen, a Russian immigrant in the United States, saw a captured German U-boat's Fa 330 gyroglider and was fascinated by its characteristics. At work, he was tasked with the analysis of the British military Rotachute gyro glider designed by an expatriate Austrian, Raoul Hafner. This led him to adapt the design for his purposes and eventually market the Bensen B-7 in 1955. Bensen submitted an improved version, the Bensen B-8M, for testing to the United States Air Force, which designated it the X-25. The B-8M was designed to use surplus McCulloch engines used on flying unmanned target drones. Ken Wallis developed a miniature autogyro craft, the Wallis autogyro, in England in the 1960s, and autogyros built to a similar design appeared for many years. Ken Wallis' designs have been used in various scenarios, including military training, police reconnaissance, and in a search for the Loch Ness Monster, as well as an appearance in the 1967 James Bond movie You Only Live Twice. Three different autogyro designs have been certified by the Federal Aviation Administration for commercial production: the Umbaugh U-18/Air & Space 18A of 1965, the Avian 2/180 Gyroplane of 1967, and the McCulloch J-2 of 1972. All were commercial failures, for various reasons. The Kaman KSA-100 SAVER (Stowable Aircrew Vehicle Escape Rotorseat) is an aircraft-stowable gyroplane escape device designed and built for the United States Navy. Designed to be installed in naval combat aircraft as part of the ejection sequence, only one example was built and it did not enter service. It was powered by a Williams WRC-19 turbofan, making it the first jet-powered autogyro. Bensen Gyrocopter The basic Bensen Gyrocopter design is a simple frame of square aluminium or galvanized steel tubing, reinforced with triangles of lighter tubing. It is arranged so that the stress falls on the tubes, or special fittings, not the bolts. A front-to-back keel mounts a steerable nosewheel, seat, engine, and vertical stabilizer. Outlying mainwheels are mounted on an axle. Some versions may mount seaplane-style floats for water operations. Bensen-type autogyros use a pusher configuration for simplicity and to increase visibility for the pilot. Power can be supplied by a variety of engines. McCulloch drone engines, Rotax marine engines, Subaru automobile engines, and other designs have been used in Bensen-type designs. The rotor is mounted atop the vertical mast. The rotor system of all Bensen-type autogyros is of a two-blade teetering design. There are some disadvantages associated with this rotor design, but the simplicity of the rotor design lends itself to ease of assembly and maintenance and is one of the reasons for its popularity. Aircraft-quality birch was specified in early Bensen designs, and a wood/steel composite is used in the world-speed-record-holding Wallis design.
Gyroplane rotor blades are made from other materials such as aluminium and GRP-based composite. Bensen's success triggered several other designs, some of them fatally flawed by an offset between the centre of gravity and the thrust line, risking a power push-over (PPO, or buntover) that could kill the pilot and giving gyroplanes in general a poor reputation, in contrast to de la Cierva's original intention and the early statistics. Most new autogyros are now safe from PPO. 21st-century development and use In 2002, a Groen Brothers Aviation Hawk 4 provided perimeter patrol for the Winter Olympics and Paralympics in Salt Lake City, Utah. The aircraft completed 67 missions and accumulated 75 hours of maintenance-free flight time during its 90-day operational contract. Worldwide, over 1,000 autogyros are used by authorities for military and law enforcement purposes. The first U.S. police authorities to evaluate an autogyro were the Tomball, Texas, police, on a $40,000 grant from the U.S. Department of Justice together with city funds, costing much less than a helicopter to buy ($75,000) and operate ($50/hour). Although it is able to land in 40-knot crosswinds, a minor accident happened when the rotor was not kept under control in a wind gust. Since 2009, several projects in Iraqi Kurdistan have been carried out. In 2010, the first autogyro was handed over to the Kurdish Minister of the Interior, Mr. Karim Sinjari. The project for the interior ministry was to train pilots to control and monitor the approach and takeoff paths of the airports in Erbil, Sulaymaniyah, and Dohuk to prevent terrorist encroachments. The gyroplane pilots also form the backbone of the Kurdish police's flying crew, who are trained on Eurocopter EC 120 B helicopters. In 18 months from 2009 to 2010, the German pilot couple Melanie and Andreas Stütz undertook the first world tour by autogyro, in which they flew several different gyroplane types in Europe, southern Africa, Australia, New Zealand, the United States, and South America. The adventure was documented in the book "WELTFLUG: The Gyroplane Dream" and in the film "Weltflug.tv – The Gyrocopter World Tour". Helicopter autogyration While autogyros are not helicopters, helicopters are capable of autorotation. If a helicopter suffers a power failure, the pilot can adjust the collective pitch to keep the rotor spinning, generating enough lift via autorotation of the rotor disc to touch down and skid in a relatively soft landing. Certification by national aviation authorities United Kingdom certification Some autogyros, such as the Rotorsport MT03, MTO Sport (open tandem), and Calidus (enclosed tandem), and the Magni Gyro M16C (open tandem) & M24 (enclosed side by side) have type approval by the United Kingdom Civil Aviation Authority (CAA) under British Civil Airworthiness Requirements CAP643 Section T. Others operate under a permit to fly issued by the Popular Flying Association, similar to the U.S. experimental aircraft certification. However, the CAA's assertion that autogyros have a poor safety record means that a permit to fly will be granted only to existing types of autogyro. All new types of autogyro must be submitted for full type approval under CAP643 Section T. The CAA allows gyro flight over congested areas. In 2005, the CAA issued a mandatory permit directive (MPD) which restricted operations for single-seat autogyros; these restrictions were subsequently integrated into CAP643 Issue 3, published on 12 August 2005.
The restrictions are concerned with the offset between the centre of gravity and thrust line and apply to all aircraft unless evidence is presented to the CAA that the CG/Thrust Line offset is less than 2 inches (5 cm) in either direction. The restrictions are summarised as follows: Aircraft with a cockpit/nacelle may be operated only by pilots with more than 50 hours of solo flight experience following the issue of their licence. Open-frame aircraft are restricted to a minimum speed of , except in the flare. All aircraft are restricted to a Vne (maximum airspeed) of . Flight is not permitted when surface winds exceed , or if the gust spread exceeds . Flight is not permitted in moderate, severe, or extreme turbulence, and airspeed must be reduced to if turbulence is encountered mid-flight. These restrictions do not apply to autogyros with type approval under CAA CAP643 Section T, which are subject to the operating limits specified in the type approval. United States certification A certificated autogyro must meet mandated stability and control criteria; in the United States these are outlined in Federal Aviation Regulations Part 27: Airworthiness Standards: Normal Category Rotorcraft. The U.S. Federal Aviation Administration issues a Standard Airworthiness Certificate to qualified autogyros. Amateur-built or kit-built aircraft are operated under a Special Airworthiness Certificate in the Experimental category. Per FAR 1.1, the FAA uses the term "gyroplane" for all autogyros, regardless of the type of airworthiness certificate. World records In 1931, Amelia Earhart (U.S.) flew a Pitcairn PCA-2 to a women's world altitude record of 18,415 ft (5,613 m). Wing Commander Ken Wallis (U.K.) held most of the autogyro world records during his autogyro flying career. These include a time-to-climb record, a speed record of 189 km/h (111.7 mph), and the straight-line distance record of . On 16 November 2002, at 89 years of age, Wallis increased the speed record to 207.7 km/h (129.1 mph) – and simultaneously set another world record as the oldest pilot to set a world record. Until 2019, the autogyro was one of the last remaining types of aircraft which had not yet circumnavigated the globe. The 2004 Expedition Global Eagle was the first attempt to do so using an autogyro. The expedition set the record for the longest flight over water by an autogyro during the segment from Muscat, Oman, to Karachi. The attempt was finally abandoned because of bad weather after having covered . Andrew Keech (U.S.) holds several records. He made a transcontinental flight in his self-built Little Wing Autogyro "Woodstock" from Kitty Hawk, North Carolina, to San Diego, California, in October 2003, breaking the record set 72 years earlier by Johnny Miller in a Pitcairn PCA-2. He also set three world records for speed over a recognized course. On 9 February 2006 he broke two of his world records and set a record for distance, ratified by the Fédération Aéronautique Internationale (FAI): Speed over a closed circuit of without payload: , speed over a closed circuit of without payload: , and distance over a closed circuit without landing: . On 7 November 2015, the Italian astrophysicist and pilot Donatella Ricci took off with a MagniGyro M16 from the Caposile aerodrome in Venice, aiming to set a new altitude world record. She reached an altitude of 8,138.46 m (26,701 ft), breaking the women's world altitude record held for 84 years by Amelia Earhart.
The following day, she increased the altitude by a further 261 m, reaching 8,399 m (27,556 ft), setting the new altitude world record for an autogyro. She improved on the preceding record, established by Andrew Keech in 2004, by 350 m (+4.3%). Norman Surplus, from Larne in Northern Ireland, became the second person to attempt a world circumnavigation by gyroplane/autogyro type aircraft on 22 March 2010, flying a Rotorsport UK MT-03 Autogyro, registered G-YROX. Surplus was unable to get permission to enter Russian airspace from Japan, but he established nine world autogyro records, ratified by the FAI, on his flight between Northern Ireland and Japan between 2010 and 2011. G-YROX was delayed (by the Russian impasse) in Japan for over three years before being shipped across the Pacific to the state of Oregon, United States. From 1 June 2015, Surplus flew from McMinnville, Oregon, across the continental United States, through northern Canada/Greenland, and in late July/August made the first crossing of the North Atlantic by autogyro aircraft to land back in Larne, Northern Ireland, on 11 August 2015. He established a further ten FAI World Records during this phase of the circumnavigation flight. After a nine-year wait (since 2010), permission to fly U.K.-registered gyroplanes through the Russian Federation was finally approved, and on 22 April 2019, Surplus and G-YROX continued eastwards from Larne, Northern Ireland, to cross Northern Europe and rendezvous with fellow gyroplane pilot James Ketchell, piloting the Magni M16 gyroplane G-KTCH. Flying in loose formation, the two aircraft together made the first trans-Russia flight by gyroplane to reach the Bering Sea. To cross the Bering Strait, the two aircraft took off from Provideniya Bay, Russia, on 7 June 2019 and landed at Nome, Alaska, on 6 June, having also made the first gyroplane crossing of the International Date Line. After crossing Alaska and western Canada, on 28 June 2019, Surplus, piloting G-YROX, became the first person to circumnavigate the world in a gyroplane upon returning to the Evergreen Aviation and Space Museum, McMinnville, Oregon, U.S. Over the nine years it had taken Surplus to complete the task, G-YROX flew through 32 countries. The first physical circumnavigation of the globe by an autogyro, Oregon to Oregon, had taken Surplus and G-YROX four years and 28 days to complete, after being dogged by long diplomatic delays in gaining the necessary permission to fly across Russian Federation airspace. However, as the flight had been severely stalled and interrupted en route by lengthy delays, it was no longer deemed eligible for a first, continuously flown, speed record around the world; that task was left to James Ketchell, who set the first official round-the-world speed record for an autogyro-type aircraft some three months later. Subsequently, on 22 September 2019, Ketchell was awarded the world record by Guinness World Records for the first circumnavigation of the world in an autogyro and by the Fédération Aéronautique Internationale for the first certified "Speed around the World, Eastbound" circumnavigation in an E-3a Autogyro. He completed his journey in 175 days. See also CarterCopter / Carter PAV ELA Aviación Fairey Rotodyne Gyrodyne PAL-V Piasecki Aircraft ARC Aerosystems References Further reading "Development of the Autogiro: A Technical Perspective": J. Gordon Leishman: Hofstra University, New York, 2003.
Jeff Lewis' in-depth history of the Autogyro Popular Rotorcraft Association (United States) Aircraft configurations Spanish inventions Vehicles introduced in 1923
Autogyro
[ "Engineering" ]
6,295
[ "Aircraft configurations", "Aerospace engineering" ]
62,728
https://en.wikipedia.org/wiki/Juan%20de%20la%20Cierva
Juan de la Cierva y Codorníu, 1st Count of la Cierva (; 21 September 1895 – 9 December 1936), was a Spanish civil engineer, pilot and a self-taught aeronautical engineer. His most famous accomplishment was the invention in 1920 of a rotorcraft called Autogiro, a single-rotor type of aircraft that came to be called autogyro in the English language. In 1923, after four years of experimentation, De la Cierva developed the articulated rotor, which resulted in the world's first successful flight of a stable rotary-wing aircraft, with his C.4 prototype. Early life Juan de la Cierva was born to a wealthy, aristocratic Spanish family, and for a time his father was the war minister. At the age of eight he was spending his pocket money with his friends on experiments with gliders in one of his father's work sheds. In their teens they constructed an aeroplane from the wreckage they had bought from a French aviator who had crashed the plane. The final aeroplane used wood from a Spanish bar counter for the propeller. He eventually earned a civil engineering degree and after building and testing the first successful autogyro, moved to the United Kingdom in 1925, where, with the support of Scottish industrialist James G. Weir, he established the Cierva Autogiro Company. At the outbreak of the Spanish Civil War, De la Cierva supported the Nationalist coalition forces, helping the rebels to obtain the De Havilland DH-89 'Dragon Rapide' which flew General Franco from the Canary Islands to Spanish Morocco. His brother was summarily executed by the Republican army in Paracuellos del Jarama. The gyroplane (autogyro) De la Cierva started building aircraft in 1912. In 1914, he designed and built a tri-motor aeroplane which was accepted by the Spanish government. In 1919 he started to consider the use of a rotor to generate lift at low airspeed, and eliminate the risk of stall. In order to achieve this, he used the ability of a lifting rotor to autorotate, whereby at a suitable pitch setting, a rotor will continue to rotate without mechanical drive, sustained by the torque equilibrium of the lift and drag forces acting on the blades. With De la Cierva's autogyro, the rotor was drawn through the air by means of a conventional propeller, with the result that the rotor generated sufficient lift to sustain level flight, climb and descent. Before this could be satisfactorily achieved, De la Cierva experienced several failures primarily associated with the unbalanced rolling movement generated when attempting take-off, due to dissymmetry of lift between the advancing and retreating blades. This major difficulty was resolved by the introduction of the flapping hinge. In 1923, De la Cierva's first successful autogyro was flown at Getafe aerodrome in Spain by Lt. Gomez Spencer. This pioneering work was carried out in De la Cierva's native Spain. In 1925, he brought his C.6 to Britain and demonstrated it to the Air Ministry at Farnborough, Hampshire. This machine had a four blade rotor with flapping hinges but relied upon conventional airplane controls for pitch, roll and yaw. It was based upon an Avro 504K fuselage, initial rotation of the rotor was achieved by the rapid uncoiling of a rope passed around stops on the undersides of the blades. The Farnborough demonstration was a great success, and resulted in an invitation to continue the work in the UK. As a direct result, and with the assistance of the Scottish industrialist James George Weir, the Cierva Autogiro Company, Ltd., was formed the following year. 
From the outset De la Cierva concentrated upon the design and manufacture of rotor systems, relying on other established aircraft manufacturers, predominantly the A.V. Roe Company, to produce the airframes. The Avro-built C.8 was a refinement of the C.6, with the more powerful 180 hp Lynx radial engine, and several C.8s were built. The C.8R incorporated drag hinges, because blade flapping motion caused high blade root stresses in the rotor plane of rotation; this modification introduced other problems, such as ground resonance, for which drag hinge dampers were fitted. The resolution of these fundamental rotor problems opened the way to progress, confidence built up rapidly, and after several cross-country flights a C.8L4 was entered for the 1928 King's Cup Air Race. Although forced to withdraw, the C.8L4 subsequently completed a tour of the British Isles. Later that year it flew from London to Paris, thus becoming the first rotating-wing aircraft to cross the English Channel. The tour was subsequently extended to include Berlin, Brussels and Amsterdam. A predominant problem with the autogyro was driving the rotor prior to takeoff. Several methods were attempted in addition to the coiled rope system, which could take the rotor speed to 50% of that required, at which point movement along the ground to reach flying speed was necessary, while tilting the rotor to establish autorotation. Another approach was to tilt the tail stabiliser to deflect engine slipstream up through the rotor. The most acceptable solution was finally achieved with the C.19 Mk.4, which was produced in some quantities; a direct drive from the engine to the rotor was fitted, through which the rotor could be accelerated up to speed. The system was then declutched prior to executing the take-off run. As De la Cierva's autogyros achieved success and acceptance, others began to follow, and with them came further innovation. Most important was the development of direct rotor control through cyclic pitch variation, achieved initially by tilting the rotor hub and subsequently by Raoul Hafner through the application of a spider mechanism that acted directly on each rotor blade. The first production direct control autogyro was the C.30, produced in quantity by Avro, Lioré et Olivier, and Focke-Wulf. This machine allowed a change of motion in any direction – upwards, downwards or sideways – by tilting the rotor, and it also dispensed with some of the controls used in more conventional aircraft of the period. Development of cyclic pitch variation was also influenced by the Dutch helicopter pioneer Albert Gillis von Baumhauer, who adopted the swashplate principle in his designs and probably influenced De la Cierva when they met in 1928. The introduction of jump take-off was another major improvement in capability. The rotor was accelerated in no-lift pitch until the rotor speed required for flight was achieved, and then declutched. The loss of torque caused the blades to swing forward on angled drag hinges with a resultant increase in collective pitch, causing the aircraft to leap into the air. With all the engine power now applied to the forward-thrusting propeller, it was possible to continue in forward flight with the rotor in autorotation. The C.40 was the first production jump takeoff autogyro. Autogyros were built in many countries under De la Cierva licences, including France, Germany, Japan, Russia and the United States.
De la Cierva's motivation was to produce an aircraft that would not stall, but near the end of his life he accepted the advantages offered by the helicopter and began the initial work towards that end. In 1936, the Cierva Autogiro Company, Ltd. responded to a British Air Ministry specification for a Royal Navy helicopter with the gyrodyne. Death On the morning of 9 December 1936, he boarded a Dutch DC-2 of KLM at Croydon Airfield, bound for Amsterdam. After a delay caused by heavy fog, the airliner took off at about 10:30 am but drifted slightly off course after takeoff and exploded after flying into a house on gently rising terrain to the south of the airport, killing 15 people, among them de la Cierva. Legacy Juan de la Cierva's work on rotor-wing dynamics made possible the modern helicopter, whose development as a practical means of flight had been prevented by a lack of understanding of these matters. The understanding that he established is applicable to all rotor-winged aircraft; though lacking true vertical flight capability, work on the autogyro forms the basis for helicopter analysis. De la Cierva's death in an aeroplane crash in December 1936 prevented him from fulfilling his recent decision to build a useful and reliable aircraft capable of true vertical flight for the Royal Navy, but it was his work on the autogyro that was used to achieve this goal. Technology developed for the autogyro was used in the development of the experimental Fw 61 helicopter, which was flown in 1936 by Cierva Autogiro Company licensee Focke-Achgelis. His pioneering work also led to the development of a third type of rotorcraft, the gyrodyne, a concept of his former technical assistant and successor as chief technical officer of the Cierva Autogiro Company, Dr. James Allan Jamieson Bennett. In 1966, Juan de la Cierva was inducted into the International Aerospace Hall of Fame for his innovations in rotor blade technology, using the blades to generate lift and to control the aircraft's attitude with precision. The Juan de la Cierva scholarship from the Spanish Ministry of Science is named after him. See also Cierva C.1 Cierva C.2 Cierva C.3 Cierva C.4 Cierva C.6 Cierva C.8 Cierva C.9 Cierva C.12 Cierva C.17 Cierva C.19 Cierva C.24 Cierva Air Horse Cierva W.9 Cierva CR Twin References Further reading Brooks, Peter W.: Cierva Autogiros. Smithsonian Institution Press, Washington 1988 Ord-Hume, Arthur W. J. G. (2011) Juan de la Cierva and his Autogiros. Catrine, Ayrshire: Stenlake Publishing External links Juan de la Cierva History of the autogyro and gyrodyne Cierva, Pitcairn and the Legacy of Rotary-Wing Flight U.S. Centennial of Flight – Juan de la Cierva "It is Easy to Fly Autogiro Declares Inventor" Popular Mechanics, January 1930 p. 45 and see drawings by scrolling up to p. 44 1895 births 1936 deaths Aerospace engineers Aviators killed in aviation accidents or incidents in England Victims of aviation accidents or incidents in 1936 Counts of Spain People from Francoist Spain 20th-century Spanish engineers Spanish aviation pioneers Spanish inventors Aircraft designers Fellows of the Royal Aeronautical Society Members of the Early Birds of Aviation People from Murcia Spanish people of the Spanish Civil War (National faction) Accidental deaths in London Burials at Cementerio de la Almudena People educated at Instituto San Isidro Spanish scientists
Juan de la Cierva
[ "Engineering" ]
2,244
[ "Aerospace engineers", "Aerospace engineering" ]
62,729
https://en.wikipedia.org/wiki/Sense%20of%20balance
The sense of balance or equilibrioception is the perception of balance and spatial orientation. It helps prevent humans and nonhuman animals from falling over when standing or moving. Equilibrioception is the result of a number of sensory systems working together; the eyes (visual system), the inner ears (vestibular system), and the body's sense of where it is in space (proprioception) ideally need to be intact. The vestibular system, the region of the inner ear where three semicircular canals converge, works with the visual system to keep objects in focus when the head is moving. This is called the vestibulo-ocular reflex (VOR). The balance system works with the visual and skeletal systems (the muscles and joints and their sensors) to maintain orientation or balance. Visual signals sent to the brain about the body's position in relation to its surroundings are processed by the brain and compared to information from the vestibular and skeletal systems. Vestibular system In the vestibular system, equilibrioception is determined by the level of a fluid called endolymph in the labyrinth, a complex set of tubing in the inner ear. Dysfunction When the sense of balance is interrupted, it causes dizziness, disorientation and nausea. Balance can be upset by Ménière's disease, superior canal dehiscence syndrome, an inner ear infection, a bad common cold affecting the head, or a number of other medical conditions, including but not limited to vertigo. It can also be temporarily disturbed by quick or prolonged acceleration, for example, riding on a merry-go-round. Blows can also affect equilibrioception, especially those to the side of the head or directly to the ear. Most astronauts find that their sense of balance is impaired when in orbit because they are in a constant state of weightlessness. This causes a form of motion sickness called space adaptation syndrome. System overview This overview also explains acceleration, as its processes are interconnected with balance. Mechanical There are five sensory organs innervated by the vestibular nerve: three semicircular canals (Horizontal SCC, Superior SCC, Posterior SCC) and two otolith organs (saccule and utricle). Each semicircular canal (SCC) is a thin tube that doubles in thickness briefly at a point called the osseous ampulla. At their center-base, each contains an ampullary cupula. The cupula is a gelatin bulb connected to the stereocilia of hair cells, affected by the relative movement of the endolymph it is bathed in. Since the cupula is part of the bony labyrinth, it rotates along with actual head movement; by itself, without the endolymph, it cannot be stimulated and therefore could not detect movement. Endolymph follows the rotation of the canal; however, due to inertia its movement initially lags behind that of the bony labyrinth. The delayed movement of the endolymph bends and activates the cupula. When the cupula bends, the connected stereocilia bend along with it, activating chemical reactions in the hair cells surrounding the crista ampullaris and eventually creating action potentials carried by the vestibular nerve, signaling to the body that it has moved in space. After any extended rotation, the endolymph catches up to the canal, and the cupula returns to its upright position and resets. When extended rotation ceases, however, the endolymph continues to move (due to inertia), which bends and activates the cupula once again to signal a change in movement.
Pilots doing long banked turns begin to feel upright (no longer turning) as the endolymph matches the canal rotation; once the pilot exits the turn, the cupula is once again stimulated, causing the feeling of turning the other way rather than flying straight and level. The horizontal SCC handles head rotations about a vertical axis (e.g. looking side to side), the superior SCC handles head movement about a lateral axis (e.g. head to shoulder), and the posterior SCC handles head rotation about a rostral-caudal axis (e.g. nodding). The SCCs send adaptive signals, unlike the two otolith organs, the saccule and utricle, whose signals do not adapt over time. A shift in the otolithic membrane that stimulates the cilia is considered the state of the body until the cilia are once again stimulated. For example, lying down stimulates the cilia and standing up stimulates the cilia; however, for the time spent lying down, the signal that you are lying remains active, even though the membrane resets. Otolithic organs have a thick, heavy gelatin membrane that, due to inertia (like endolymph), lags behind and continues ahead past the macula it overlays, bending and activating the contained cilia. The utricle responds to linear accelerations and head-tilts in the horizontal plane (head to shoulder), whereas the saccule responds to linear accelerations and head-tilts in the vertical plane (up and down). The otolithic organs update the brain on head location when not moving; the SCCs update it during movement. The kinocilium is the longest cilium in the bundle and is positioned (one per 40–70 regular stereocilia) at the end of the bundle. If the stereocilia bend towards the kinocilium, depolarization occurs, causing more neurotransmitter release and more vestibular nerve firing, compared to when the stereocilia tilt away from the kinocilium (hyperpolarization, less neurotransmitter, less firing). Neural First-order vestibular nuclei (VN) project to the lateral vestibular nucleus (LVN), medial vestibular nucleus (MVN), and superior vestibular nucleus (SVN). The inferior cerebellar peduncle is the largest center through which balance information passes. It is the area of integration between proprioceptive and vestibular inputs, aiding in the unconscious maintenance of balance and posture. The inferior olivary nucleus aids in complex motor tasks by encoding and coordinating the timing of sensory information; this is decoded and acted upon in the cerebellum. The cerebellum has three main functional parts. The vestibulocerebellum regulates eye movements by integrating visual information provided by the superior colliculus with balance information. The spinocerebellum integrates visual, auditory, proprioceptive, and balance information to act out body and limb movements. It receives input from the trigeminal nerve, the dorsal column of the spinal cord, the midbrain, the thalamus, the reticular formation, and the outputs of the vestibular nuclei in the medulla. Lastly, the cerebrocerebellum plans, times, and initiates movement after evaluating sensory input, primarily from motor cortex areas, via the pons and the cerebellar dentate nucleus. It outputs to the thalamus, motor cortex areas, and red nucleus. The flocculonodular lobe is a cerebellar lobe that helps maintain body equilibrium by modifying muscle tone (the continuous and passive muscle contractions). The MVN and IVN are in the medulla; the LVN and SVN are smaller and lie in the pons. The SVN, MVN, and IVN ascend within the medial longitudinal fasciculus. The LVN descends the spinal cord within the lateral vestibulospinal tract and ends at the sacrum.
The MVN also descends the spinal cord, within the medial vestibulospinal tract, ending at the level of the first lumbar vertebra. The thalamic reticular nucleus distributes information to various other thalamic nuclei, regulating the flow of information. It is speculatively able to stop signals, ending the transmission of unimportant information. The thalamus relays information between the pons (the cerebellum link), the motor cortices, and the insula. The insula is also heavily connected to the motor cortices; the insula is likely where balance is brought into perception. The oculomotor nuclear complex refers to fibers going to the tegmentum (eye movement), red nucleus (gait (natural limb movement)), substantia nigra (reward), and cerebral peduncle (motor relay). The nucleus of Cajal is one of the named oculomotor nuclei; it is involved in eye movements and reflex gaze coordination. The abducens nerve solely innervates the lateral rectus muscle of the eye, moving the eye together with the trochlear nerve. The trochlear nerve solely innervates the superior oblique muscle of the eye. Together, the muscles innervated by the trochlear and abducens nerves contract and relax to simultaneously direct the pupil towards an angle and depress the globe on the opposite side of the eye (e.g. looking down directs the pupil down and depresses (towards the brain) the top of the globe). The pupil is not only directed, but often rotated, by these muscles. (See visual system.) The thalamus and superior colliculus are connected via the lateral geniculate nucleus. The superior colliculus (SC) is the topographical map for balance and quick orienting movements, with primarily visual inputs. The SC integrates multiple senses. Other animals Some animals have better equilibrioception than humans; for example, a cat uses its inner ear and tail to walk on a thin fence. Equilibrioception in many marine animals is achieved with an entirely different organ, the statocyst, which detects the position of tiny calcareous stones to determine which way is "up". In plants Plants could be said to exhibit a form of equilibrioception, in that when rotated from their normal attitude the stems grow in the direction that is upward (away from gravity) while their roots grow downward (in the direction of gravity). This phenomenon is known as gravitropism, and it has been shown that, for example, poplar stems can detect reorientation and inclination. See also Proprioception Vertigo References External links Vestibular system Sensory systems Motor control
Sense of balance
[ "Biology" ]
2,136
[ "Behavior", "Motor control" ]
62,737
https://en.wikipedia.org/wiki/Finger%20%28protocol%29
In computer networking, the Name/Finger protocol and the Finger user information protocol are simple network protocols for the exchange of human-oriented status and user information. Name/Finger protocol The Name/Finger protocol is based on Request for Comments document RFC 742 (December 1977) as an interface to the name and finger programs that provide status reports on a particular computer system or a particular person at network sites. The finger program was written in 1971 by Les Earnest, who created the program to meet the need of users who wanted information on other users of the network. Information on who is logged in was useful for checking the availability of a person to meet. This was probably the earliest form of presence information for remote network users. Prior to the finger program, the only way to get this information on WAITS was with a WHO program that showed IDs and terminal line numbers (the server's internal number of the communication line over which the user's terminal is connected) for logged-in users. In reference to the name FINGER, Les Earnest wrote that he saw users of the WAITS time-sharing system run their fingers down the output of the WHO command. Finger user information protocol The finger daemon runs on TCP port 79. The client will (in the case of remote hosts) open a connection to port 79. An RUIP (Remote User Information Program) is started on the remote end of the connection to process the request. The local host sends the RUIP a one-line query based upon the Finger query specification, and waits for the RUIP to respond. The RUIP receives and processes the query, returns an answer, then initiates the close of the connection. The local host receives the answer and the close signal, then proceeds to close its end of the connection. The Finger user information protocol is based on RFC 1288 (The Finger User Information Protocol, December 1991). Typically the server side of the protocol is implemented by a program called fingerd or in.fingerd (for finger daemon), while the client side is implemented by the name and finger programs, which are supposed to return a friendly, human-oriented status report on either the system at the moment or a particular person in depth. There is no required format, and the protocol consists mostly of specifying a single command line. The program would supply information such as whether a user is currently logged on, their e-mail address, full name, etc. As well as standard user information, finger displays the contents of the .project and .plan files in the user's home directory. Often these files (maintained by the user) contain either useful information about the user's current activities, similar to micro-blogging, or alternatively all manner of humor. Security concerns Supplying such detailed information as e-mail addresses and full names was considered acceptable and convenient in the early days of networking, but later was considered questionable for privacy and security reasons. Finger information has been used by hackers as a way to initiate a social engineering attack on a company's computer security system. By using a finger client to get a list of a company's employee names, email addresses, phone numbers, and so on, a hacker can call or email someone at a company requesting information while posing as another employee. The finger daemon has also had several exploitable security holes which crackers have used to break into systems. For example, in 1988 the Morris worm exploited an overflow vulnerability in fingerd (among others) to spread.
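To make the one-line query exchange described above concrete, the following is a minimal sketch of a finger client in Python. It is an illustration, not code from RFC 1288, and the host and user names are hypothetical placeholders: it opens a TCP connection to port 79, sends a single request line terminated by CRLF, and prints whatever the server returns before the connection is closed.

```python
# Minimal finger client sketch: connect to TCP port 79, send one query line
# terminated by CRLF, and read the server's reply until it closes the socket.
import socket

def finger(query: str, host: str, port: int = 79, timeout: float = 10.0) -> str:
    """Send a single finger query to `host` and return the raw response text."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # The request is a single line followed by CRLF (RFC 1288 style).
        sock.sendall(query.encode("ascii", errors="replace") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:          # the server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Hypothetical host and user, for illustration only.
    print(finger("alice", "example.org"))   # status report for one user
    print(finger("", "example.org"))        # empty query: list logged-in users
```

By convention an empty query asks the server for a list of logged-in users, and a query prefixed with the /W token requests more verbose output, although real servers vary in how much of this they implement.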
For these reasons, by the late 1990s the vast majority of sites on the Internet no longer offered the service. Application support It is implemented on Unix (like macOS), Unix-like systems (like Linux and FreeBSD), and current versions of Windows (finger.exe command). Other software has finger support: ELinks Lynx Minuet Kristall Lagrange See also LDAP Ph Protocol Social network service WebFinger References External links (December 1977) (December 1991) Internet protocols Internet Standards OS/2 commands Unix user management and support-related utilities Unix network-related software Windows administration 1977 software
Finger (protocol)
[ "Technology" ]
825
[ "Windows commands", "Computing commands", "OS/2 commands" ]
62,745
https://en.wikipedia.org/wiki/Lethal%20injection
Lethal injection is the practice of injecting one or more drugs into a person (typically a barbiturate, paralytic, and potassium solution) for the express purpose of causing rapid death. The main application for this procedure is capital punishment, but the term may also be applied in a broader sense to include euthanasia and other forms of suicide. The drugs cause the person to become unconscious, stop their breathing, and cause a heart arrhythmia, in that order. First developed in the United States, the method has become a legal means of execution in Mainland China, Thailand (since 2003), Guatemala, Taiwan, the Maldives, Nigeria, and Vietnam, though Guatemala abolished the death penalty for civilian cases in 2017 and has not conducted an execution since 2000, and the Maldives has never carried out an execution since its independence. Although Taiwan permits lethal injection as an execution method, no executions have been carried out in this manner; the same is true for Nigeria. Lethal injection was also used in the Philippines until the country re-abolished the death penalty in 2006. Although primarily introduced as a more "humane" method of execution, lethal injection has been subject to criticism, being described by some as cruel and unusual. Opponents in particular critique the operation of lethal injections by untrained corrections officers and the lack of guarantee that the victim will be unconscious in every individual case. There have been instances in which condemned individuals have been injected with paralytics, and then a cardiac arrest-inducing agent, while still conscious; this has been compared to torture. Proponents often say that there is no reasonable or less cruel alternative. History Lethal injection gained popularity in the late 20th century as a form of execution intended to supplant electrocution, gas inhalation, hanging, and the firing squad, which were considered to be less humane. It is now the most common form of legal execution in the United States. Lethal injection was proposed on January 17, 1888, by Julius Mount Bleyer, a New York doctor who praised it as being cheaper than hanging. Bleyer's idea was never used, due to a series of botched executions and the eventual rise of public disapproval of electrocutions. Lethal injections were first used by Nazi Germany to execute prisoners during World War II. Nazi Germany developed the Action T4 euthanasia program led by Karl Brandt as one method to dispose of Lebensunwertes Leben ("life unworthy of life"). During the war, lethal injections were also administered to children detained at the Sisak concentration camp by the camp's commander, the physician Antun Najžer. The Royal Commission on Capital Punishment (1949–1953) also considered lethal injection, but eventually ruled it out after pressure from the British Medical Association (BMA). Implementation On May 10, 1977, Oklahoma became the first U.S. state to approve lethal injection, as Governor David Boren signed a bill into law. Episcopal Reverend Bill Wiseman had introduced the method into the Oklahoma legislature, where it passed and was quickly sent to the Governor's desk (Title 22, Section 1014(A)). The next day, Texas became the second U.S. state to approve a lethal injection law. Between then and 2004, 37 of the 38 states using capital punishment introduced lethal injection statutes (the last state, Nebraska, maintaining electrocution as its single method until adopting injection in 2009, after its Supreme Court deemed the electric chair unconstitutional).
On May 11, 1977, the day after the new method had become state law, Oklahoma's state medical examiner Jay Chapman proposed a new, less painful procedure for carrying it out, known as Chapman's protocol: "An intravenous saline drip shall be started in the prisoner's arm, into which shall be introduced a lethal injection consisting of an ultrashort-acting barbiturate in combination with a chemical paralytic." The Chapman protocol was approved by anesthesiologist Stanley Deutsch, formerly Head of the Department of Anaesthesiology of the Oklahoma University Medical School. On August 29, 1977, Texas adopted the new method of execution, switching from electrocution. On December 7, 1982, Texas became the first U.S. state, and the first jurisdiction in the world, to use lethal injection to carry out capital punishment, with the execution of Charles Brooks, Jr. The People's Republic of China began using this method in 1997, Guatemala in 1996, the Philippines in 1999, Thailand in 2003, and Taiwan in 2005. Vietnam first used this method in 2013. The Philippines abolished the death penalty in 2006, with their last execution being in 2000. All seven executions carried out there from 1999 to 2000 used lethal injection. Guatemalan law still allows for the death penalty, and lethal injection is the sole method allowed, but no executions have been carried out since 2000, when the country experienced the live televised double executions of Amílcar Cetino Pérez and Tomás Cerrate Hernández. The export of drugs to be used for lethal injection was banned by the European Union (EU) in 2011, together with other items under the EU Torture Regulation. Since then, pentobarbital followed thiopental into the European Union's ban. Complications of executions and cessation of supply of lethal injection drugs By early 2014, a number of botched executions involving lethal injection, and a rising shortage of suitable drugs, had some U.S. states reconsidering lethal injection as a form of execution. Tennessee, which had previously offered inmates a choice between lethal injection and the electric chair, passed a law in May 2014 which gave the state the option to use the electric chair if lethal injection drugs are either unavailable or declared unconstitutional. At the same time, Wyoming and Utah were considering the use of execution by firing squad in addition to other existing execution methods. In 2016, Pfizer joined over 20 American and European pharmaceutical manufacturers that had previously blocked the sale of their drugs for use in lethal injections, effectively closing the open market for FDA-approved manufacturers for any potential lethal execution drug. In the execution of Carey Dean Moore on August 14, 2018, the State of Nebraska used a novel drug cocktail comprising diazepam, fentanyl, cisatracurium, and potassium chloride, over the strong objections of the German pharmaceutical company Fresenius Kabi. Potassium acetate had been incorrectly used in place of potassium chloride in Oklahoma in January 2015 for the execution of Charles Frederick Warner. In August 2017, the State of Florida first used the drug in the execution of Mark James Asay, using a combination of etomidate, rocuronium bromide, and potassium acetate as part of a new protocol. Procedures Procedure in the United States In the United States, the typical lethal injection begins with the condemned person being strapped onto a gurney; two intravenous cannulas ("IVs") are then inserted, one in each arm.
Only one is necessary to carry out the execution; the other is reserved as a backup in the event the primary line fails. A line leading from the IV line in an adjacent room is attached to the prisoner's IV and secured so that the line does not snap during the injections. The arm of the condemned person is swabbed with alcohol before the cannula is inserted. The needles and equipment used are sterilized. Questions have been raised about why these precautions against infection are performed despite the purpose of the injection being death. The several explanations include: cannulae are sterilized and have their quality heavily controlled during manufacture, so using sterile ones is a routine medical procedure. Secondly, the prisoner could receive a stay of execution after the cannulae have been inserted, as happened in the case of James Autry in October 1983 (he was eventually executed on March 14, 1984). Third, use of unsterilized equipment would be a hazard to the prison personnel in case of an accidental needle stick injury. Following connection of the lines, saline drips are started in both arms. This, too, is standard medical procedure: it must be ascertained that the IV lines are not blocked, ensuring the chemicals have not precipitated in the IV lines and blocked the needle, preventing the drugs from reaching the subject. A heart monitor is attached to the inmate. In most states, the intravenous injection is a series of drugs given in a set sequence, designed to first induce unconsciousness followed by death through paralysis of respiratory muscles and/or by cardiac arrest through depolarization of cardiac muscle cells. The execution of the condemned in most states involves three separate injections (in sequential order): Sodium thiopental or pentobarbital: ultra-short-action barbiturate, an anesthetic agent used at a high dose that renders the person unconscious in less than 30 seconds. Depression of respiratory activity is one of the characteristic actions of this drug. Consequently, the lethal-injection doses, as described in the sodium thiopental section below, will—even in the absence of the following two drugs—cause death due to lack of breathing, as happens with overdoses of opioids. Pancuronium bromide: non-depolarizing muscle relaxant, which causes complete, fast, and sustained paralysis of the striated skeletal muscles, including the diaphragm and the rest of the respiratory muscles; this would eventually cause death by asphyxiation. Potassium chloride: a potassium salt, which increases the blood and cardiac concentration of potassium to stop the heart via an abnormal heartbeat and thus cause death by cardiac arrest. The drugs are not mixed externally to avoid precipitation. A sequential injection is also key to achieve the desired effects in the appropriate order: administration of the pentobarbital renders the person unconscious; the infusion of the pancuronium bromide induces complete paralysis, including that of the lungs and diaphragm rendering the person unable to breathe. If the person being executed were not already completely unconscious, the injection of a highly concentrated solution of potassium chloride could cause severe pain at the site of the IV line, as well as along the punctured vein; it interrupts the electrical activity of the heart muscle and causes it to stop beating, bringing about the death of the person being executed. The intravenous tubing leads to a room next to the execution chamber, usually separated from the condemned by a curtain or wall. 
Typically, a prison employee trained in venipuncture inserts the needle, while a second prison employee orders, prepares, and loads the drugs into the lethal injection syringes. Two other staff members take each of the three syringes and secure them into the IVs. After the curtain is opened to allow the witnesses to see inside the chamber, the condemned person is then permitted to make a final statement. Following this, the warden signals that the execution may commence, and the executioners (either prison staff or private citizens, depending on the jurisdiction) then manually inject the three drugs in sequence. During the execution, the condemned's cardiac rhythm is monitored. Death is pronounced after cardiac activity stops. Death usually occurs within seven minutes, although, due to complications in finding a suitable vein, the whole procedure can take up to two hours, as was the case with the execution of Christopher Newton on May 24, 2007. According to state law, if a physician's participation in the execution is prohibited for reasons of medical ethics, then the death ruling can be made by the state medical examiner's office. After confirmation that death has occurred, a coroner signs the condemned's death certificate. Missouri and, before its abolition of capital punishment, Delaware use or used a lethal injection machine designed by Massachusetts-based Fred A. Leuchter, consisting of two components: the delivery module and the control module. The delivery module is in the execution chamber. It must be pre-loaded with the proper chemicals and operates the timing of the dosage. The control module is in the control room. This is the portion which officially starts the procedure. This is done by first arming the machine and then having station members simultaneously press their buttons on the panel to activate the delivery. The computer then deletes the record of who actually started the syringes, so the participants do not know whether their syringe contained saline or one of the drugs necessary for execution (to assuage guilt in a manner similar to the blank cartridge in execution by firing squad). The delivery module has eight syringes. The end syringes (i.e., syringes 7 and 8) contain saline; syringes 2, 4 and 6 contain the lethal drugs for the main line; and syringes 1, 3 and 5 contain the injections for the backup line. The system was used in New Jersey before the abolition of the death penalty in 2007. Illinois previously used the computer, and Missouri and Delaware use the manual injection switch on the delivery panel. Eleven states have switched, or have stated their intention to switch, to a one-drug lethal injection protocol. A one-drug method uses the single drug sodium thiopental to execute someone. The first state to switch to this method was Ohio, on December 8, 2009. In 2011, after pressure by activist organizations, the manufacturers of pentobarbital and sodium thiopental halted the supply of the drugs to U.S. prisons performing lethal injections and required all resellers to do the same. Procedure in China In the past, the People's Republic of China executed prisoners primarily by means of shooting. In recent years, lethal injection has become more common. The specific lethal injection procedures, including the drug or drugs used, are a state secret and not publicly known. Lethal injection in China was legalized in 1996.
The number of shooting executions slowly decreased, and in February 2009 the Supreme People's Court ordered the discontinuation of firing squads by the following year, concluding that injections were more humane to the prisoner. It has been suggested that the switch was also a response to executions being horrifying to the public. Lethal injections are less expensive than firing squads, with a single dose costing 300 yuan compared to 700 yuan for a shooting execution. Procedure in Vietnam Prior to 2013, shooting was the primary method of execution in Vietnam. The use of the lethal injection method was approved by the government in 2010, adopted in 2011, and first applied in 2013. Calls to replace shooting with another method of execution began earlier, in 2006, after concerns about the mental state of firing squad members following executions. The drugs used consist of pancuronium bromide (paralytic), potassium chloride (cardiotoxin), and sodium thiopental (anesthetic). The production of these substances, however, is low in Vietnam. This led to drug shortages and to consideration of using other domestically produced poisons or reintroducing shooting. The first prisoner in Vietnam to be executed by lethal injection, on August 6, 2013, was 27-year-old Nguyen Anh Tuan, arrested for murder and robbery. Between 2013 and 2016, 429 prisoners were executed by this method in the country. Drugs Conventional lethal injection protocol Typically, three drugs are used in lethal injection. Pancuronium bromide (Pavulon) is used to cause muscle paralysis and decreased neural transmission to the lungs, potassium chloride to stop the heart, and midazolam for sedation. Pancuronium bromide (Pavulon) Lethal injection dosage: 100 milligrams Pancuronium bromide (trade name: Pavulon), like the related drug curare, is a non-depolarizing muscle relaxant (a paralytic agent) that blocks the action of acetylcholine at the motor end-plate of the neuromuscular junction. Binding of acetylcholine to receptors on the end-plate causes depolarization and contraction of the muscle fiber; non-depolarizing neuromuscular blocking agents like pancuronium stop this binding from taking place. The typical dose for pancuronium bromide in capital punishment by lethal injection is 0.2 mg/kg, and the duration of paralysis is around 4 to 8 hours. Paralysis of the respiratory muscles will lead to death in a considerably shorter time. Pancuronium bromide is a derivative of the alkaloid malouetine from the plant Malouetia bequaertiana. Instead of pancuronium, other drugs in use are succinylcholine chloride and tubocurarine chloride. Potassium chloride Lethal injection dosage: 100 mEq (milliequivalents) Potassium is an electrolyte, 98% of which is intracellular. The 2% remaining outside the cell has great implications for cells that generate action potentials. Doctors prescribe potassium for patients when potassium levels in the blood are insufficient, a condition called hypokalemia. The potassium can be given orally, which is the safest route, or it can be given intravenously, in which case strict rules and hospital protocols govern the rate at which it is given. The usual intravenous dose of 10–20 mEq per hour is given slowly, since it takes time for the electrolyte to equilibrate into the cells. When used in state-sanctioned lethal injection, bolus potassium injection affects the electrical conduction of heart muscle and ultimately leads to cardiac arrest.
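As a rough illustration of why a rise in extracellular potassium depolarizes excitable cells, the standard Nernst equation from membrane physiology can be used; this is a sketch, and the intracellular concentration of roughly 140 mEq/L and the example extracellular values below are textbook approximations, not figures taken from this article.

```latex
% Nernst (equilibrium) potential for potassium at about 37 C, with z = 1:
E_K \;=\; \frac{RT}{zF}\,\ln\!\frac{[K^+]_o}{[K^+]_i}
    \;\approx\; 61.5\ \mathrm{mV}\times\log_{10}\!\frac{[K^+]_o}{[K^+]_i}
```

With an intracellular concentration of about 140 mEq/L, raising the extracellular concentration from a normal 4 mEq/L to 14 mEq/L shifts E_K from roughly −95 mV to roughly −61 mV. Since the resting membrane potential of cardiac cells closely tracks E_K, this shift is the depolarization — and the consequent inactivation of voltage-gated sodium channels — described in the following paragraph.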
The potassium bolus delivered for lethal injection causes a rapid onset of elevated extracellular potassium, also known as hyperkalemia, causing depolarization of the resting membrane potential of the heart muscle cells, particularly impacting the heart's pacemaker cells. However, potassium's effect on membrane potential is concentration dependent and ultimately occurs in two phases. Given the reference range for serum potassium is 3.5-5.5 mEq/L, concentrations up to 8 mEq/L shorten action potential duration and the refractory period due to an allosteric effect of potassium ions on potassium channels, leading to increased conduction velocity and subsequently quicker potassium efflux which contributes to quicker repolarization and the mentioned shortening of the refractory period. At approximately 8 mEq/L and beyond, the shortened refractory period and increased resting membrane potential diminishes the quantity of voltage-gated sodium channels ready to contribute to rapid phase 0 depolarization due to the inactivation gate requiring further repolarization to open back up. At potassium concentrations beyond 14mEq/L, enough sodium channels remain inactivated to no longer generate an action potential, ultimately leading to no heart beat. Heart potassium levels after lethal injection can reach 160.0 mEq/L. Depolarizing the muscle cell inhibits its ability to fire by reducing the available number of sodium channels (they are placed in an inactivated state). ECG changes vary depending on serum potassium concentrations and on the individual. Peaked T-waves signifying faster repolarization and potentially instances of early-repolarization and phase 2 re-entry (Brugada, Short QT, and Early-Repolarization Syndromes) are evident in the first phase of hyperkalemia. This progresses into a broadening and lengthening of the P wave and PR interval, then eventually disappearance of the P wave, widening of the QRS complex, and finally, asystole. This process can occur in the span of 30 to 60 seconds, but there have been cases of 'botched' procedures, leading to one inmate gasping for air for approximately 10 to 13 minutes. Sodium thiopental Lethal injection dosage: 2–5 grams Sodium thiopental (US trade name: Sodium Pentothal) is an ultra-short acting barbiturate, often used for anesthesia induction and for medically induced coma. The typical anesthesia induction dose is 0.35 grams. Loss of consciousness is induced within 30–45 seconds at the typical dose, while a 5 gram dose (14 times the normal dose) is likely to induce unconsciousness in 10 seconds. A full medical dose of thiopental reaches the brain in about 30 seconds. This induces an unconscious state. Five to twenty minutes after injection, approximately 15% of the drug is in the brain, with the rest in other parts of the body. The half-life of this drug is about 11.5 hours, and the concentration in the brain remains at around 5–10% of the total dose during that time. When a 'mega-dose' is administered, as in state-sanctioned lethal injection, the concentration in the brain during the tail phase of the distribution remains higher than the peak concentration found in the induction dose for anesthesia, because repeated doses—or a single very high dose as in lethal injection—accumulate in high concentrations in body fat, from which the thiopental is gradually released. This is the reason why an ultra-short acting barbiturate, such as thiopental, can be used for long-term induction of medical coma. 
Historically, thiopental has been one of the most commonly used and studied drugs for the induction of coma. Protocols vary for how it is given, but the typical doses are anywhere from 500 mg up to 1.5 grams. It is likely that this data was used to develop the initial protocols for state-sanctioned lethal injection, according to which one gram of thiopental was used to induce the coma. Now, most states use 5 grams to be absolutely certain the dosage is effective. Pentobarbital was introduced at the end of 2010 due to a shortage of sodium thiopental, and has since become the primary sedative in lethal injections in the United States. Barbiturates are the same class of drug used in medically assisted suicide. In euthanasia protocols, the typical dose of thiopental is 1.5 grams; the Dutch Euthanasia protocol indicates 1-1.5 grams or 2 grams in case of high barbiturate tolerance. The dose used for capital punishment is therefore about 3 times more than the dose used in euthanasia. New lethal injection protocols The Ohio protocol, developed after the incomplete execution of Romell Broom, aims to ensure the rapid and painless onset of anesthesia by only using sodium thiopental and eliminating the use of Pavulon and potassium as the second and third drugs, respectively. It also provides for a secondary fail-safe measure using intramuscular injection of midazolam, followed by sufentanil or hydromorphone in the event intravenous administration of the sodium thiopental proves problematic. The first state to switch to use midazolam as the first drug in a new three-drug protocol was Florida on October 15, 2013. Then on November 14, 2013, Ohio made the same move. Primary: Sodium thiopental, 5 grams, intravenous Secondary: Midazolam, 10 mg, intramuscular, sufentanil, 450 micrograms, intramuscular and/or hydromorphone, 40 mg, intramuscular In the brief for the U.S. courts written by accessories, the State of Ohio implies that they were unable to find any physicians willing to participate in development of protocols for executions by lethal injection, as this would be a violation of medical ethics, such as the Geneva Promise, and such physicians would be thrown out of the medical community and shunned for engaging in such deeds, even if they could not lawfully be stripped of their license. On December 8, 2009, Kenneth Biros became the first person executed using Ohio's new single-drug execution protocol. He was pronounced dead at 11:47 am EST, 10 minutes after receiving the injection. On September 10, 2010, Washington became the second state to use the single-drug Ohio protocol with the execution of Cal Coburn Brown, who was proclaimed dead within two minutes after receiving the single-drug injection of sodium thiopental. Seven states (Arizona, Georgia, Idaho, Missouri, Ohio, South Dakota, and Texas) have used the single-drug execution protocol. The state of Washington used this single drug method, but stopped when execution was abolished. Five additional states (Arkansas, Kentucky, Louisiana, North Carolina, and Tennessee) announced that they would switch to a single-drug protocol but, as of April 2014, had not executed anyone since switching protocols. After sodium thiopental began being used in executions, Hospira, the only American company that made the drug, stopped manufacturing it due to its use in executions. The subsequent nationwide shortage of sodium thiopental led states to seek other drugs to use in executions. 
Pentobarbital, often used for animal euthanasia, was used as part of a three-drug cocktail for the first time on December 16, 2010, when John David Duty was executed in Oklahoma. It was then used as the drug in a single-drug execution for the first time on March 10, 2011, when Johnnie Baston was executed in Ohio. Euthanasia protocol Lethal injection has also been used in cases of euthanasia to facilitate voluntary death in patients with terminal or chronically painful conditions. Euthanasia can be accomplished either through oral, intravenous, or intramuscular administration of drugs. In individuals who are incapable of swallowing lethal doses of medication, an intravenous route is preferred. The following is a Dutch protocol for parenteral (intravenous) administration to obtain euthanasia, with the old protocol listed first and the new protocol listed second: First a coma is induced by intravenous administration of 1 g sodium thiopental (Nesdonal), if necessary, 1.5–2.0 g of the product in case of strong tolerance to barbiturates. Then, 45 mg alcuronium chloride (Alloferin) or 18 mg pancuronium bromide (Pavulon) is injected. To ensure optimal availability, these agents are preferably given intravenously. However, they can also be injected intramuscularly. In severe hepatitis or cirrhosis of the liver, alcuronium is the agent of first choice. Intravenous administration is the most reliable and rapid way to accomplish euthanasia, so can be safely recommended. A coma is first induced by intravenous administration of 20 mg/kg sodium thiopental in a small volume (10 ml physiological saline). Then, a triple intravenous dose of a nondepolarizing neuromuscular muscle relaxant is given, such as 20 mg pancuronium bromide or 20 mg vecuronium bromide (Norcuron). The muscle relaxant should preferably be given intravenously, to ensure optimal availability. Only for pancuronium dibromide, the agent may also be given intramuscularly in a dose of 40 mg. A euthanasia machine may allow an individual to perform the process alone. Constitutionality in the United States In 2006, the Supreme Court ruled in Hill v. McDonough that death-row inmates in the United States could challenge the constitutionality of states' lethal injection procedures through a federal civil rights lawsuit. Since then, numerous death-row inmates have brought such challenges in the lower courts, claiming that lethal injection as practiced violates the ban on "cruel and unusual punishment" found in the Eighth Amendment to the United States Constitution. Lower courts evaluating these challenges have reached opposing conclusions. For example, courts have found that lethal injection as practiced in California, Florida, and Tennessee is unconstitutional. Other courts have found that lethal injection as practiced in Missouri, Arizona, and Oklahoma is constitutionally acceptable. As of 2014, California has nearly 750 prisoners condemned to death by lethal injection despite the moratorium imposed when in 2006 a federal court found California's lethal injection procedures to be unconstitutional. A newer lethal injection facility has been constructed at San Quentin State Prison which cost over $800,000, but it has yet to be used because a state court found that the California Department of Corrections and Rehabilitation violated the California Administrative Procedure Act by attempting to prevent public oversight when new injection procedures were being created. 
On September 25, 2007, the United States Supreme Court agreed to hear a lethal-injection challenge arising from Kentucky, Baze v. Rees. In Baze, the Supreme Court addressed whether Kentucky's particular lethal-injection procedure (using the standard three-drug protocol) comports with the Eighth Amendment; it also determined the proper legal standard by which lethal-injection challenges in general should be judged, all in an effort to bring some uniformity to how these claims are handled by the lower courts. Although uncertainty over whether executions in the United States would be put on hold during the period in which the United States Supreme Court considers the constitutionality of lethal injection initially arose after the court agreed to hear Baze, no executions took place during the period between when the court agreed to hear the case and when its ruling was announced, with the exception of one lethal injection in Texas hours after the court made its announcement. On April 16, 2008, the Supreme Court rejected Baze v. Rees, thereby upholding Kentucky's method of lethal injection in a majority 7–2 decision. Justices Ruth Bader Ginsburg and David Souter dissented. Several states immediately indicated plans to proceed with executions. The U.S. Supreme Court also upheld a modified lethal-injection protocol in the 2015 case Glossip v. Gross. By the time of that case, Oklahoma had altered its execution protocol to use midazolam instead of thiopental or pentobarbital; the latter two drugs had become unavailable for executions due to the European embargo on selling them to prisons. Inmates on Oklahoma's death row alleged that the use of midazolam was unconstitutional, because the drug was not proven to render a person unconscious as thiobarbital would. The Supreme Court found that the prisoners failed to demonstrate that midazolam would create a high risk of severe pain, and that the prisoners had not provided an alternative, practical method of execution that would have a lower risk. Consequently, it ruled that the new method was permissible under the Eighth Amendment. On March 15, 2018, Russell Bucklew, a Missouri death-row inmate who had been scheduled to be executed on May 21, 2014, appealed the constitutionality of lethal injection on an as-applied basis. The basis for Bucklew's appeal was due to Bucklew's allegation that his rare medical condition would interfere with the effects of the drugs, potentially causing him to choke on his own blood. On April 1, 2019, The Supreme Court ruled against Bucklew on the grounds that his proposed alternative to lethal injection, nitrogen hypoxia, was neither "readily implemented" nor established to "significantly reduce a substantial risk of severe pain". Bucklew was executed on October 1, 2019. Ethics of lethal injection The American Medical Association (AMA) believes that a physician's opinion on capital punishment is a personal decision. Since the AMA is founded on preserving life, they argue that a doctor "should not be a participant" in executions in any professional capacity with the exception of "certifying death, provided that the condemned has been declared dead by another person" and "relieving the acute suffering of a condemned person while awaiting execution". The AMA, however, does not have the ability to enforce its prohibition of doctors from participation in lethal injection. As medical licensing is handled on the state level, it does not have the authority to revoke medical licenses. 
Typically, most states do not require that physicians administer the drugs for lethal injection, but most states do require doctors, nurses or paramedics to prepare the substances before their application and to attest the inmate's death after it. Some states specifically detail that participation in a lethal injection is not to be considered practicing medicine. For example, Delaware law reads "the administration of the required lethal substance or substances required by this section shall not be construed to be the practice of medicine and any pharmacist or pharmaceutical supplier is authorized to dispense drugs to the Commissioner or the Commissioner's designee, without prescription, for carrying out the provisions of this section, notwithstanding any other provision of law" (excerpt from Title 11, Chapter 42, § 4209). State law allows for the dispensing of the drugs/chemicals for lethal injection to the state's department of corrections without a prescription. However, states are still subject to DEA regulation with respect to lethal injection drugs. Controversy Opposition Opponents of lethal injection have voiced concerns that abuse, misuse and even criminal conduct is possible when there is not a proper chain of command and authority for the acquisition of death-inducing drugs. Awareness Opponents of lethal injection believe that it is not painless as practiced in the United States. They argue that thiopental is an ultrashort-acting barbiturate that may wear off (anesthesia awareness) and lead to consciousness and an uncomfortable death wherein the inmates are unable to express discomfort because they have been paralyzed by the paralytic agent. Opponents point to sodium thiopental's typical use as an induction agent and not in the maintenance phase of surgery because of its short-acting nature. Following the administration of thiopental, pancuronium bromide, a paralytic agent, is given. Opponents argue that pancuronium bromide not only dilutes the thiopental, but, as it paralyzes the inmate, also prevents the inmate from expressing pain. Additional concerns have been raised over whether inmates are administered an appropriate amount of thiopental owing to the rapid redistribution of the drug out of the brain to other parts of the body. Additionally, opponents argue that the method of administration is also flawed. They contend that because the personnel administering the lethal injection lack expertise in anesthesia, the risk of failure to induce unconsciousness is greatly increased. In reference to this issue, Jay Chapman, the creator of the American method, said, "It never occurred to me when we set this up that we'd have complete idiots administering the drugs". Opponents also argue that the dose of sodium thiopental must be set for each individual patient, and not restricted to a fixed protocol. Finally, they contend that remote administration may result in an increased risk that insufficient amounts of the lethal-injection drugs enter the inmate's bloodstream. In summary, opponents argue that the effect of dilution or of improper administration of thiopental is that the inmate dies an agonizing death through suffocation due to the paralytic effects of pancuronium bromide and the intense burning sensation caused by potassium chloride. Opponents of lethal injection, as practiced, argue that the procedure is designed to create the appearance of serenity and a painless death, rather than actually providing it. 
Specifically, opponents object to the use of pancuronium bromide, arguing that it serves no useful purpose in lethal injection since the inmate is physically restrained. Therefore, the default function of pancuronium bromide would be to suppress the autonomic nervous system, specifically to stop breathing. Research In 2005, University of Miami researchers, in cooperation with the attorney representing death-row inmates from Virginia, published a research letter in the medical journal The Lancet. The article presented protocol information from Texas, Virginia, and North and South Carolina which showed that executioners had no anesthesia training, drugs were administered remotely with no monitoring for anesthesia, data were not recorded, and no peer review was done. Their analysis of toxicology reports from Arizona, Georgia, North and South Carolina showed that post mortem concentrations of thiopental in the blood were lower than that required for surgery in 43 of 49 executed inmates (88%), and that 21 (43%) inmates had concentrations consistent with awareness. This led the authors to conclude that a substantial probability existed that some of the inmates were aware and suffered extreme pain and distress during execution. The authors attributed the risk of consciousness among inmates to the lack of training and monitoring in the process, but carefully made no recommendations on how to alter the protocol or how to improve the process. Indeed, the authors conclude, "because participation of doctors in protocol design or execution is ethically prohibited, adequate anesthesia cannot be certain. Therefore, to prevent unnecessary cruelty and suffering, cessation and public review of lethal injections is warranted". Paid expert consultants on both sides of the lethal-injection debate have found opportunity to criticize the 2005 Lancet article. Subsequent to the initial publication in the Lancet, three letters to the editor and a response from the authors extended the analysis. The issue of contention is whether thiopental, like many lipid-soluble drugs, may be redistributed from blood into tissues after death, effectively lowering thiopental concentrations over time, or whether thiopental may distribute from tissues into the blood, effectively increasing post mortem blood concentrations over time. Given the near absence of scientific, peer-reviewed data on the topic of thiopental post mortem pharmacokinetics, the controversy continues in the lethal-injection community and, in consequence, many legal challenges to lethal injection have not used the Lancet article. In 2007, the same group that authored the Lancet study extended its study of the lethal-injection process through a critical examination of the pharmacology of the barbiturate thiopental. This study – published in the online journal PLOS Medicine – confirmed and extended the conclusions made in the original article and goes further to disprove the assertion that the lethal-injection process is painless. To date, these two studies by the University of Miami team serve as the only critical peer-reviewed examination of the pharmacology of the lethal-injection process. Cruel and unusual On occasion, difficulties inserting the intravenous needles have also occurred, with personnel sometimes taking over half an hour to find a suitable vein. Typically, the difficulty is found in convicts with diabetes or a history of intravenous drug use. 
Opponents argue that excessive time taken to insert intravenous lines is tantamount to cruel and unusual punishment. In addition, opponents point to instances where the intravenous line has failed, or when adverse reactions to drugs or unnecessary delays have happened during the process of execution. On December 13, 2006, Angel Nieves Diaz was not executed successfully in Florida using a standard lethal-injection dose. Diaz was 55 years old and had been sentenced to death for murder. Diaz did not succumb to the lethal dose even after 35 minutes, necessitating a second dose of drugs to complete the execution. At first, a prison spokesman denied Diaz had suffered pain and claimed the second dose was needed because Diaz had some sort of liver disease. After performing an autopsy, the medical examiner, Dr. William Hamilton, stated that Diaz's liver appeared normal, but that the needle had pierced through Diaz's vein into his flesh. The deadly chemicals had subsequently been injected into soft tissue rather than into the vein. Two days after the execution, then-Governor Jeb Bush suspended all executions in the state and appointed a commission "to consider the humanity and constitutionality of lethal injections." The ban was lifted by Governor Charlie Crist when he signed the death warrant for Mark Dean Schwab on July 18, 2007. On November 1, 2007, the Florida Supreme Court unanimously upheld the state's lethal-injection procedures. A study published in 2007 in PLOS Medicine suggested that "the conventional view of lethal injection leading to an invariably peaceful and painless death is questionable". The execution of Romell Broom was abandoned in Ohio on September 15, 2009, after prison officials failed to find a vein after two hours of trying on his arms, legs, hands, and ankle. This stirred up more intense debate in the United States about lethal injection. Broom's execution was later rescheduled for March 2022, but he died in 2020 before the sentence could be carried out. Dennis McGuire was executed in Lucasville, Ohio, on January 17, 2014. According to reporters, McGuire's execution took more than 20 minutes, and he was gasping for air for 10–13 minutes after the drugs had been administered. It was the first use of a new drug combination which was introduced in Ohio after the European Union banned sodium thiopental exports. This reignited criticism of the conventional three-drug method. Clayton Lockett died of a heart attack during a failed execution attempt on April 29, 2014, at Oklahoma State Penitentiary in McAlester, Oklahoma. Lockett was administered an untested mixture of drugs that had not previously been used for executions in the U.S. He survived for 43 minutes before being pronounced dead. Lockett convulsed and spoke during the process and attempted to rise from the execution table 14 minutes into the procedure, despite having been declared unconscious. Lethal injection, by design, is outwardly ambiguous with respect to what can be seen by witnesses. The 8th amendment of the US constitution proscribes cruel punishment but only the punished can accurately gauge the experience of cruelty. In execution, the inmate is unable to be a witness to their own execution, so it falls on the assembled witnesses to decide. Eyewitnesses to execution report very different observations, and these differences range from an opinion that the execution was painless to comments that the execution was highly problematic. 
Post mortem examinations of inmates executed by lethal injection have revealed a common finding of heavily congested lungs consistent with pulmonary edema. The occurrence of pulmonary edema found at autopsy raises the question about the actual cruelty of lethal injection. If pulmonary edema occurs as a consequence of lethal injection, the experience of death may be more akin to drowning than simply the painless death described by lethal injection proponents. Pulmonary edema can only occur if the inmate has heart function and cannot occur after death. European Union export ban Due to its use for executions in the US, the UK introduced a ban on the export of sodium thiopental in December 2010, after it was established that no European supplies to the US were being used for any other purpose. The restrictions were based on "the European Union Torture Regulation (including licensing of drugs used in execution by lethal injection)". From December 21, 2011, the European Union extended trade restrictions to prevent the export of certain medicinal products for capital punishment, stating, "The Union disapproves of capital punishment in all circumstances and works towards its universal abolition". Support Commonality The combination of a barbiturate induction agent and a nondepolarizing paralytic agent is used in thousands of anesthetics every day. Supporters of the death penalty argue that unless anesthesiologists have been wrong for the past 40 years, the use of pentothal and pancuronium is safe and effective. In fact, potassium is given in heart bypass surgery to induce cardioplegia. Therefore, the combination of these three drugs remains in use. Supporters of the death penalty speculate that the designers of the lethal-injection protocols intentionally used the same drugs as are used in everyday surgery to avoid controversy. The only modification is that a massive coma-inducing dose of barbiturates is given. In addition, similar protocols have been used in countries that support euthanasia or physician-assisted suicide. Anesthesia awareness Thiopental is a rapid and effective drug for inducing unconsciousness, since it causes loss of consciousness upon a single circulation through the brain due to its high lipophilicity. Only a few other drugs, such as methohexital, etomidate, or propofol, have the capability to induce anesthesia so rapidly. (Narcotics such as fentanyl are inadequate as induction agents for anesthesia.) Supporters argue that since the thiopental is given at a much higher dose than for medically induced coma protocols, it is effectively impossible for the condemned to wake up. Anesthesia awareness occurs when general anesthesia is inadequately maintained, for a number of reasons. Typically, anesthesia is 'induced' with an intravenous drug, but 'maintained' with an inhaled anesthetic given by the anesthesiologist or nurse-anesthetist (note that there are several other methods for safely and effectively maintaining anesthesia). Barbiturates are used only for induction of anesthesia and although these drugs rapidly and reliably induce anesthesia, wear off quickly. A neuromuscular-blocking drug may then be given to cause paralysis which facilitates intubation, although this is not always required. The anesthesiologist or nurse-anesthetist is responsible for ensuring that the maintenance technique (typically inhalational) is started soon after induction to prevent the patient from waking up. General anesthesia is not maintained with barbiturate drugs because they are so short-acting. 
An induction dose of thiopental wears off after a few minutes because the thiopental redistributes from the brain to the rest of the body very quickly. Also thiopental has a long half-life and needs time for the drug to be eliminated from the body. If a very large initial dose is given, little or no redistribution takes place because the body is saturated with the drug; thus recovery of consciousness requires the drug to be eliminated from the body. Because this process is not only slow (taking many hours or days), but also unpredictable in duration, barbiturates are unsatisfactory for the maintenance of anesthesia. Thiopental has a half-life around 11.5 hours (but the action of a single dose is terminated within a few minutes by redistribution of the drug from the brain to peripheral tissues) and the long-acting barbiturate phenobarbital has a half-life around 4–5 days. In contrast, the inhaled anesthetics have extremely short half-lives and allow the patient to wake up rapidly and predictably after surgery. The average time to death once a lethal-injection protocol has been started is about 7–11 minutes. Because it takes only about 30 seconds for the thiopental to induce anesthesia, 30–45 seconds for the pancuronium to cause paralysis, and about 30 seconds for the potassium to stop the heart, death can theoretically be attained in as little as 90 seconds. Given that it takes time to administer the drug, time for the line to flush itself, time for the change of the drug being administered, and time to ensure that death has occurred, the whole procedure takes about 7–11 minutes. Procedural aspects in pronouncing death also contribute to delay, so the condemned is usually pronounced dead within 10–20 minutes of starting the drugs. Supporters of the death penalty say that a huge dose of thiopental, which is between 14 and 20 times the anesthetic-induction dose and which has the potential to induce a medical coma lasting 60 hours, could never wear off in only 10–20 minutes. Dilution effect Death-penalty supporters state that the claim that pancuronium dilutes the sodium thiopental dose is erroneous. Supporters argue that pancuronium and thiopental are commonly used together in everyday surgery and that if there were a dilution effect, it would be a known drug interaction. Drug interactions are a complex topic. Simplistically, drug interactions can be classified as either synergistic or inhibitory interactions. In addition, drug interactions can occur directly at the site of action through common pathways, or indirectly through metabolism of the drug in the liver or through elimination in the kidney. Pancuronium and thiopental have different sites of action, one in the brain and one at the neuromuscular junction. Since the half-life of thiopental is 11.5 hours, the metabolism of the drugs is not an issue when dealing with the short time frame in lethal injections. The only other plausible interpretation would be a direct one, or one in which the two compounds interact with each other. Supporters of the death penalty argue that this theory does not hold true. They state that even if the 100 mg of pancuronium directly prevented 500 mg of thiopental from working, sufficient thiopental to induce coma would be present for 50 hours. In addition, if this interaction did occur, then the pancuronium would be incapable of causing paralysis. 
Supporters of the death penalty state that the claim that the pancuronium prevents the thiopental from working, yet is still capable of causing paralysis, is not based on any scientific evidence and is a drug interaction that has never before been documented for any other drugs.
Single drug
Terminally ill patients in Oregon who have requested physician-assisted suicide have received lethal doses of barbiturates. The protocol has been highly effective in producing a so-called painless death, but the time required to cause death can be prolonged. Some patients have taken days to die, and a few patients have actually survived the process and have regained consciousness up to three days after taking the lethal dose. In a California legal proceeding addressing the issue of the lethal-injection cocktail being "cruel and unusual", state authorities said that the time to death following a single injection of a barbiturate could be as much as 45 minutes. Barbiturate overdoses typically cause death by depression of the respiratory center, but the effect is variable. Some patients may have complete cessation of respiratory drive, whereas others may only have depression of respiratory function. In addition, cardiac activity can last for a long time after cessation of respiration. Since death is pronounced after asystole, and given that the expectation is for a rapid death in lethal injection, multiple drugs are required, specifically potassium chloride to stop the heart. In fact, in the case of Clarence Ray Allen, a second dose of potassium chloride was required to attain asystole.
Stockpiling and sourcing of drugs
A 2017 study found that four U.S. states that allow capital punishment are stockpiling lethal-injection drugs that are in short supply and may be needed for life-saving medical procedures elsewhere. This stockpiling of lethal-injection drugs also extends to the federal level, with the source of such drugs being put into question. At least one alleged supplier, Absolute Standards, is neither registered with the FDA nor registered as a controlled-substances manufacturer with the DEA, and has been investigated over its alleged involvement.
See also
Capital punishment by country
Drug injection
Execution methods
Execution chamber
List of people executed by lethal injection
References
Additional references
External links
Death Penalty Worldwide, by Cornell Law School – Academic database on every death penalty country in the world
Lethalinjection.org, by UC Berkeley School of Law – Web-based information clearinghouse on lethal injection
Execution methods
Lethal injection
[ "Environmental_science" ]
10,467
[ "Toxicology", "Lethal injection" ]
62,754
https://en.wikipedia.org/wiki/H.%20A.%20Rey
H. A. Rey (born Hans Augusto Reyersbach; September 16, 1898 – August 26, 1977) was a German-born American illustrator and author, known best for the series of children's picture books that he and his wife Margret Rey created about Curious George. Early life Hans Augusto Reyersbach was born in Hamburg, German Empire on September 16, 1898. He and his wife, Margret, were both German Jews. They first met in Hamburg at Margret's sister's 16th birthday party. They met again in Brazil, where Rey was working as a salesman of bathtubs and Margret had gone to escape the rise of Nazism in Germany. They got married in 1935 and moved to Paris, France in August of that year. They lived in Montmartre and fled Paris in June 1940 on bicycles, carrying the Curious George manuscript with them. Curious George While in Paris, Rey's animal drawings came to the attention of a French publisher, who commissioned him to write a children's book. The characters in Cecily G. and the Nine Monkeys included an impish monkey named Curious George, and the couple then decided to write a book focused entirely on him. The outbreak of World War II interrupted their work. Being Jews, the Reys decided to flee Paris before the Nazis invaded the city. Hans assembled two bicycles, and they left the city just a few hours before it fell. Among the meager possessions they brought with them was the illustrated manuscript of Curious George. The Reys' odyssey took them to Bayonne, France, where they were issued life-saving visas signed by Portuguese Vice-Consul Manuel Vieira Braga (following instructions from Aristides de Sousa Mendes) on June 20, 1940. They crossed the Spanish border, where they bought train tickets to Lisbon. From there, they returned to Brazil, where they had met five years earlier, but this time they continued on to New York. The Reys escaped Europe carrying the manuscript to the first Curious George book, which was published in New York by Houghton Mifflin in 1941. They originally planned to use watercolor illustrations, but since they were responsible for the color separation, Rey changed these to the cartoon-like images that continue to be featured in each of the books. A collector's edition with the original watercolors has since been released. Curious George was an instant success, and the Reys were commissioned to write more adventures of the mischievous monkey and his friend, the Man with the Yellow Hat. They wrote seven stories in all, with Hans mainly doing the illustrations and Margret working mostly on the stories, though they both admitted to sharing the work and cooperating fully in every stage of development. At first, however, covers omitted Margret's name. In later editions, this was changed, and Margret now receives full credit for her role in developing the stories. Curious George Takes a Job was named to the Lewis Carroll Shelf Award list in 1960. In 1963, the Reys relocated to Cambridge, Massachusetts, in a house near Harvard Square, and lived there until Rey died on August 26, 1977. In the 1990s, friends of the Reys founded a children's bookstore named Curious George & Friends (formerly Curious George Goes to Wordsworth), which operated in Harvard Square until 2011. A new Curious George themed store opened in 2012, The World's Only Curious George Store, which moved to Central Square in 2019. Star charts Rey's interest in astronomy began during World War I and led to his desire to redraw constellation diagrams, which Rey found difficult to remember, so that they were more intuitive. 
This led to the 1952 publication of The Stars: A New Way to See Them (). His constellation diagrams were adopted widely and now appear in many astronomy guides, such as Donald H. Menzel's A Field Guide to the Stars and Planets. As of 2008 The Stars: A New Way to See Them and a simplified presentation for children called Find the Constellations are still in print. A new edition of Find the Constellations was released in 2008, updated with modern fonts, the new status of Pluto, and some more current measurements of planetary sizes and orbital radii. Collected papers The University of Oregon holds H. A. Rey papers dated 1940 to 1961, dominated by correspondence, primarily between Rey and his American and British publishers. The de Grummond Children's Literature Collection in Hattiesburg, Mississippi, holds more than 300 boxes of Rey papers dated 1973 to 2002. Dr. Lena Y. de Grummond, a professor in the field of library science at the University of Southern Mississippi, contacted the Reys in 1966 about USM's new children's literature collection. H. A. and Margret donated a pair of sketches at the time. When Margret Rey died in 1996, her will designated that the entire literary estate of the Reys be donated to the de Grummond Collection. Books written by H. A. Rey Cecily G. and the Nine Monkeys Curious George Curious George Takes a Job Curious George Rides a Bike Curious George Gets a Medal Curious George Learns the Alphabet Curious George Goes to the Hospital Feed the Animals Find the Constellations Elizabite - Adventures of a Carnivorous Plant How Do You Get There? Pretzel The Stars: A New Way to See Them Where's My Baby? See the Circus Tit for Tat Billy's Picture Whiteblack the Penguin Sees the World Au Clair de la Lune and other French Nursery Songs (1941) Spotty (1945) Mary had a Little Lamb and other Nursery Songs (1951) Humpty Dumpty and other Mother Goose Songs (©1943 Harper & Brothers) Books illustrated by H. A. Rey Dem Andenken Christian Morgensterns 12 Lithographien zu seinem Werk, von Hans Reyersbach (= H. A. Rey), signiert und mit Text in Bleistift HR 22 (1922) Die Sommerfrische: 10 Idyllen in Linol-Schnitt, von Hans Reyersbach (= H. A. Rey), Berlin (1923) Grotesken - 12 Lithographien zu Christian Morgensterns Grotesken von Hans Reyersbach (= H. A. Rey). Neue Folge. 400 Exemplare, Hamburg Kurt Enoch Verlag (1923) Curious George, written by Margret Rey (1941) Elizabite - The Adventures of a Carnivorous Plant (1942) Don't Frighten the Lion (1942) Katy No-Pocket (1944) Pretzel, written by Margret Rey (1944) We Three Kings and other Christmas Carols (1944) Curious George Takes a Job, written by Margret Rey (1947) Curious George Rides a Bike, written by Margret Rey (1952) Curious George Gets a Medal. written by Margret Rey (1957) Curious George Flies a Kite, written by Margret Rey (1958) Curious George Learns the Alphabet, written by Margret Rey (1963) Curious George Goes to the Hospital, written by Margret Rey (1966) Wordless Novel Zebrology. Chatto and Windus; London, England; (1937) References Citations New York Times: "How Curious George Escaped the Nazis" A curious tale of George's creators Jaeger, Roland: "H. A. und Margret Rey", in: Spalek, John M. / Feilchenfeldt, Konrad / Hawrylchak, Sandra H. (ed.): Deutschsprachige Exilliteratur seit 1933, vol. 3, USA, part 2; Bern/München 2000, p. 351−360 Jaeger, Roland: "Collecting Curious George. Children's Books Illustrated by H. A. Rey", in: Firsts. The Book Collector's Magazine, vol. 8, 1998, no. 12 (Dec.), p. 
50–57 Jaeger, Roland: "Der Schöpfer von 'Curious George': Kinderbuch-Illustrator H. A. Rey". in: Aus dem Antiquariat, 1997, No. 10, A543−A551 External links Margret and H. A. Rey Interactive Timeline: Life in Paris and a Narrow Escape Curious George Saves the Day: The Art of Margret and H. A. Rey, The Jewish Museum (New York), March 14, 2010 – August 1, 2010. See IMDB: Monkey Business: The Adventures of Curious George's Creators (2017) H.A. & Margret Rey Papers at the University of Southern Mississippi Libraries H.A. & Margret Rey Digital Collections at the University of Southern Mississippi Libraries H.A. Rey papers at the University of Oregon 1898 births 1977 deaths Artists from Cambridge, Massachusetts American academics of English literature American children's book illustrators German children's book illustrators American children's writers German children's writers German illustrators German male writers Curious George Jewish American artists Jewish American children's writers Jewish emigrants from Nazi Germany to the United States People associated with astronomy Writers from Cambridge, Massachusetts Writers from Hamburg Writers who illustrated their own writing Wordless novels
H. A. Rey
[ "Astronomy" ]
1,915
[ "People associated with astronomy" ]
62,775
https://en.wikipedia.org/wiki/Pre-abelian%20category
In mathematics, specifically in category theory, a pre-abelian category is an additive category that has all kernels and cokernels. Spelled out in more detail, this means that a category C is pre-abelian if: C is preadditive, that is enriched over the monoidal category of abelian groups (equivalently, all hom-sets in C are abelian groups and composition of morphisms is bilinear); C has all finite products (equivalently, all finite coproducts); note that because C is also preadditive, finite products are the same as finite coproducts, making them biproducts; given any morphism f: A → B in C, the equaliser of f and the zero morphism from A to B exists (this is by definition the kernel of f), as does the coequaliser (this is by definition the cokernel of f). Note that the zero morphism in item 3 can be identified as the identity element of the hom-set Hom(A,B), which is an abelian group by item 1; or as the unique morphism A → 0 → B, where 0 is a zero object, guaranteed to exist by item 2. Examples The original example of an additive category is the category Ab of abelian groups. Ab is preadditive because it is a closed monoidal category, the biproduct in Ab is the finite direct sum, the kernel is inclusion of the ordinary kernel from group theory and the cokernel is the quotient map onto the ordinary cokernel from group theory. Other common examples: The category of (left) modules over a ring R, in particular: the category of vector spaces over a field K. The category of (Hausdorff) abelian topological groups. The category of Banach spaces. The category of Fréchet spaces. The category of (Hausdorff) bornological spaces. These will give you an idea of what to think of; for more examples, see abelian category (every abelian category is pre-abelian). Elementary properties Every pre-abelian category is of course an additive category, and many basic properties of these categories are described under that subject. This article concerns itself with the properties that hold specifically because of the existence of kernels and cokernels. Although kernels and cokernels are special kinds of equalisers and coequalisers, a pre-abelian category actually has all equalisers and coequalisers. We simply construct the equaliser of two morphisms f and g as the kernel of their difference g − f; similarly, their coequaliser is the cokernel of their difference. (The alternative term "difference kernel" for binary equalisers derives from this fact.) Since pre-abelian categories have all finite products and coproducts (the biproducts) and all binary equalisers and coequalisers (as just described), then by a general theorem of category theory, they have all finite limits and colimits. That is, pre-abelian categories are finitely complete. The existence of both kernels and cokernels gives a notion of image and coimage. We can define these as im f := ker coker f; coim f := coker ker f. That is, the image is the kernel of the cokernel, and the coimage is the cokernel of the kernel. Note that this notion of image may not correspond to the usual notion of image, or range, of a function, even assuming that the morphisms in the category are functions. For example, in the category of topological abelian groups, the image of a morphism actually corresponds to the inclusion of the closure of the range of the function. For this reason, people will often distinguish the meanings of the two terms in this context, using "image" for the abstract categorical concept and "range" for the elementary set-theoretic concept. 
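The constructions just described can be written out explicitly; the following display is an added summary rather than a new definition, using only the notation already introduced (with eq and coeq denoting the equaliser and coequaliser):

\[
\operatorname{eq}(f,g) = \ker(g - f), \qquad \operatorname{coeq}(f,g) = \operatorname{coker}(g - f), \qquad \operatorname{im} f = \ker(\operatorname{coker} f), \qquad \operatorname{coim} f = \operatorname{coker}(\ker f).
\]

The first identity holds because, by bilinearity of composition, a morphism h satisfies f ∘ h = g ∘ h exactly when (g − f) ∘ h = 0, which is precisely the universal property defining the kernel of g − f; the second identity is dual.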
In many common situations, such as the category of sets, where images and coimages exist, their objects are isomorphic. Put more precisely, we have a factorisation of f: A → B as A → C → I → B, where the morphism on the left is the coimage, the morphism on the right is the image, and the morphism in the middle (called the parallel of f) is an isomorphism. In a pre-abelian category, this is not necessarily true. The factorisation shown above does always exist, but the parallel might not be an isomorphism. In fact, the parallel of f is an isomorphism for every morphism f if and only if the pre-abelian category is an abelian category. An example of a non-abelian, pre-abelian category is, once again, the category of topological abelian groups. As remarked, the image is the inclusion of the closure of the range; however, the coimage is a quotient map onto the range itself. Thus, the parallel is the inclusion of the range into its closure, which is not an isomorphism unless the range was already closed.
Exact functors
Recall that all finite limits and colimits exist in a pre-abelian category. In general category theory, a functor is called left exact if it preserves all finite limits and right exact if it preserves all finite colimits. (A functor is simply exact if it's both left exact and right exact.) In a pre-abelian category, exact functors can be described in particularly simple terms. First, recall that an additive functor is a functor F: C → D between preadditive categories that acts as a group homomorphism on each hom-set. Then it turns out that a functor between pre-abelian categories is left exact if and only if it is additive and preserves all kernels, and it's right exact if and only if it's additive and preserves all cokernels. Note that an exact functor, because it preserves both kernels and cokernels, preserves all images and coimages. Exact functors are most useful in the study of abelian categories, where they can be applied to exact sequences.
Maximal exact structure
On every pre-abelian category there exists an exact structure that is maximal in the sense that it contains every other exact structure. The exact structure consists of precisely those kernel-cokernel pairs (i, p) where i is a semi-stable kernel and p is a semi-stable cokernel. Here, a kernel i: A → B is semi-stable if, for each morphism f: A → A′, the pushout of i along f is again a kernel; dually, a cokernel p: B → C is semi-stable if, for every morphism f: C′ → C, the pullback of p along f is again a cokernel. A pre-abelian category is quasi-abelian if and only if all kernel-cokernel pairs form an exact structure. An example for which this is not the case is the category of (Hausdorff) bornological spaces. The result is also valid for additive categories that are not pre-abelian but Karoubian.
Special cases
An abelian category is a pre-abelian category such that every monomorphism and epimorphism is normal.
A quasi-abelian category is a pre-abelian category in which kernels are stable under pushouts and cokernels are stable under pullbacks.
A semi-abelian category is a pre-abelian category in which, for each morphism f, the induced morphism coim f → im f (the parallel of f) is always a monomorphism and an epimorphism.
The pre-abelian categories most commonly studied are in fact abelian categories; for example, Ab is an abelian category. Pre-abelian categories that are not abelian appear for instance in functional analysis.
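As a small worked example (added for illustration, and relying on the convention implicit in the discussion above that in Hausdorff topological abelian groups the cokernel of a morphism is the quotient by the closure of its range): take the dense inclusion

\[
f \colon \mathbb{Q} \hookrightarrow \mathbb{R},
\]

with \(\mathbb{Q}\) carrying the subspace topology. Then \(\ker f = 0\) and \(\operatorname{coker} f = \mathbb{R}/\overline{\mathbb{Q}} = \mathbb{R}/\mathbb{R} = 0\), so

\[
\operatorname{im} f = \ker(\mathbb{R} \to 0) = \mathbb{R}, \qquad \operatorname{coim} f = \operatorname{coker}(0 \to \mathbb{Q}) = \mathbb{Q},
\]

and the parallel of f is the inclusion \(\mathbb{Q} \to \mathbb{R}\) itself: a monomorphism and (because its range is dense and the groups involved are Hausdorff) an epimorphism, but not an isomorphism. This gives one concrete morphism witnessing that the category is pre-abelian but not abelian.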
Citations References Nicolae Popescu; 1973; Abelian Categories with Applications to Rings and Modules; Academic Press, Inc.; out of print Dennis Sieg and Sven-Ake Wegner, Maximal exact structures on additive categories, Math. Nachr. 284 (2011), 2093–2100. Septimu Crivei, Maximal exact structures on additive categories revisited, Math. Nachr. 285 (2012), 440–446. Additive categories
Pre-abelian category
[ "Mathematics" ]
1,756
[ "Mathematical structures", "Category theory", "Additive categories" ]
62,781
https://en.wikipedia.org/wiki/Complete%20category
In mathematics, a complete category is a category in which all small limits exist. That is, a category C is complete if every diagram F : J → C (where J is small) has a limit in C. Dually, a cocomplete category is one in which all small colimits exist. A bicomplete category is a category which is both complete and cocomplete. The existence of all limits (even when J is a proper class) is too strong to be practically relevant. Any category with this property is necessarily a thin category: for any two objects there can be at most one morphism from one object to the other. A weaker form of completeness is that of finite completeness. A category is finitely complete if all finite limits exists (i.e. limits of diagrams indexed by a finite category J). Dually, a category is finitely cocomplete if all finite colimits exist. Theorems It follows from the existence theorem for limits that a category is complete if and only if it has equalizers (of all pairs of morphisms) and all (small) products. Since equalizers may be constructed from pullbacks and binary products (consider the pullback of (f, g) along the diagonal Δ), a category is complete if and only if it has pullbacks and products. Dually, a category is cocomplete if and only if it has coequalizers and all (small) coproducts, or, equivalently, pushouts and coproducts. Finite completeness can be characterized in several ways. For a category C, the following are all equivalent: C is finitely complete, C has equalizers and all finite products, C has equalizers, binary products, and a terminal object, C has pullbacks and a terminal object. The dual statements are also equivalent. A small category C is complete if and only if it is cocomplete. A small complete category is necessarily thin. A posetal category vacuously has all equalizers and coequalizers, whence it is (finitely) complete if and only if it has all (finite) products, and dually for cocompleteness. Without the finiteness restriction a posetal category with all products is automatically cocomplete, and dually, by a theorem about complete lattices. Examples and nonexamples The following categories are bicomplete: Set, the category of sets Top, the category of topological spaces Grp, the category of groups Ab, the category of abelian groups Ring, the category of rings K-Vect, the category of vector spaces over a field K R''-Mod, the category of modules over a commutative ring R'' CmptH, the category of all compact Hausdorff spaces Cat, the category of all small categories Whl, the category of wheels sSet, the category of simplicial sets The following categories are finitely complete and finitely cocomplete but neither complete nor cocomplete: The category of finite sets The category of finite abelian groups The category of finite-dimensional vector spaces Any (pre)abelian category is finitely complete and finitely cocomplete. The category of complete lattices is complete but not cocomplete. The category of metric spaces, Met, is finitely complete but has neither binary coproducts nor infinite products. The category of fields, Field, is neither finitely complete nor finitely cocomplete. A poset, considered as a small category, is complete (and cocomplete) if and only if it is a complete lattice. The partially ordered class of all ordinal numbers is cocomplete but not complete (since it has no terminal object). A group, considered as a category with a single object, is complete if and only if it is trivial. 
A nontrivial group has pullbacks and pushouts, but not products, coproducts, equalizers, coequalizers, terminal objects, or initial objects. References Further reading Limits (category theory)
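The existence theorem cited in the Theorems section above can be made concrete; the following display is an added sketch (the labels s and t are chosen here purely for exposition). For a small diagram F: J → C in a category C with small products and equalizers, the limit may be computed as

\[
\lim F \;\cong\; \operatorname{eq}\!\left( s, t \colon \prod_{j \in \operatorname{Ob}(J)} F(j) \;\longrightarrow\; \prod_{(u\colon j \to k) \in \operatorname{Mor}(J)} F(k) \right),
\]

where, for each arrow u: j → k of J, the component of s indexed by u is the projection \(\pi_k\) and the component of t indexed by u is \(F(u) \circ \pi_j\). A cone over F is exactly a morphism into the product equalizing s and t, so the equalizer carries the universal property of the limit. Dually, small colimits can be built from coproducts and coequalizers.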
Complete category
[ "Mathematics" ]
849
[ "Mathematical structures", "Category theory", "Limits (category theory)" ]
62,798
https://en.wikipedia.org/wiki/Fabaceae
The Fabaceae () or Leguminosae, commonly known as the legume, pea, or bean family, are a large and agriculturally important family of flowering plants. It includes trees, shrubs, and perennial or annual herbaceous plants, which are easily recognized by their fruit (legume) and their compound, stipulate leaves. The family is widely distributed, and is the third-largest land plant family in number of species, behind only the Orchidaceae and Asteraceae, with about 765 genera and nearly 20,000 known species. The five largest genera of the family are Astragalus (over 3,000 species), Acacia (over 1,000 species), Indigofera (around 700 species), Crotalaria (around 700 species), and Mimosa (around 400 species), which constitute about a quarter of all legume species. The c. 19,000 known legume species amount to about 7% of flowering plant species. Fabaceae is the most common family found in tropical rainforests and dry forests of the Americas and Africa. Recent molecular and morphological evidence supports the fact that the Fabaceae is a single monophyletic family. This conclusion has been supported not only by the degree of interrelation shown by different groups within the family compared with that found among the Leguminosae and their closest relations, but also by all the recent phylogenetic studies based on DNA sequences. These studies confirm that the Fabaceae are a monophyletic group that is closely related to the families Polygalaceae, Surianaceae and Quillajaceae and that they belong to the order Fabales. Along with the cereals, some fruits and tropical roots, a number of Leguminosae have been a staple human food for millennia and their use is closely related to human evolution. The family Fabaceae includes a number of plants that are common in agriculture including Glycine max (soybean), Phaseolus (beans), Pisum sativum (pea), Cicer arietinum (chickpeas), Vicia faba (broad bean), Medicago sativa (alfalfa), Arachis hypogaea (peanut), Ceratonia siliqua (carob), Trigonella foenum-graecum (fenugreek), and Glycyrrhiza glabra (liquorice). A number of species are also weedy pests in different parts of the world, including Cytisus scoparius (broom), Robinia pseudoacacia (black locust), Ulex europaeus (gorse), Pueraria montana (kudzu), and a number of Lupinus species. Etymology The name 'Fabaceae' comes from the defunct genus Faba, now included in Vicia. The term "faba" comes from Latin, and appears to simply mean "bean". Leguminosae is an older name still considered valid, and refers to the fruit of these plants, which are called legumes. Description Fabaceae range in habit from giant trees (like Koompassia excelsa) to small annual herbs, with the majority being herbaceous perennials. Plants have indeterminate inflorescences, which are sometimes reduced to a single flower. The flowers have a short hypanthium and a single carpel with a short gynophore, and after fertilization produce fruits that are legumes. Growth habit The Fabaceae have a wide variety of growth forms, including trees, shrubs, herbaceous plants, and even vines or lianas. The herbaceous plants can be annuals, biennials, or perennials, without basal or terminal leaf aggregations. Many Legumes have tendrils. They are upright plants, epiphytes, or vines. The latter support themselves by means of shoots that twist around a support or through cauline or foliar tendrils. Plants can be heliophytes, mesophytes, or xerophytes. Leaves The leaves are usually alternate and compound. Most often they are even- or odd-pinnately compound (e.g. 
Caragana and Robinia respectively), often trifoliate (e.g. Trifolium, Medicago) and rarely palmately compound (e.g. Lupinus), in the Mimosoideae and the Caesalpinioideae commonly bipinnate (e.g. Acacia, Mimosa). They always have stipules, which can be leaf-like (e.g. Pisum), thornlike (e.g. Robinia) or be rather inconspicuous. Leaf margins are entire or, occasionally, serrate. Both the leaves and the leaflets often have wrinkled pulvini to permit nastic movements. In some species, leaflets have evolved into tendrils (e.g. Vicia). Many species have leaves with structures that attract ants which protect the plant from herbivore insects (a form of mutualism). Extrafloral nectaries are common among the Mimosoideae and the Caesalpinioideae, and are also found in some Faboideae (e.g. Vicia sativa). In some Acacia, the modified hollow stipules are inhabited by ants and are known as domatia. Roots Many Fabaceae host bacteria in their roots within structures called root nodules. These bacteria, known as rhizobia, have the ability to take nitrogen gas (N2) out of the air and convert it to a form of nitrogen that is usable to the host plant (NO3− or NH3). This process is called nitrogen fixation. The legume, acting as a host, and rhizobia, acting as a provider of usable nitrate, form a symbiotic relationship. Members of the Phaseoleae genus Apios form tubers, which can be edible. Flowers The flowers often have five generally fused sepals and five free petals. They are generally hermaphroditic and have a short hypanthium, usually cup-shaped. There are normally ten stamens and one elongated superior ovary, with a curved style. They are usually arranged in indeterminate inflorescences. Fabaceae are typically entomophilous plants (i.e. they are pollinated by insects), and the flowers are usually showy to attract pollinators. In the Caesalpinioideae, the flowers are often zygomorphic, as in Cercis, or nearly symmetrical with five equal petals, as in Bauhinia. The upper petal is the innermost one, unlike in the Faboideae. Some species, like some in the genus Senna, have asymmetric flowers, with one of the lower petals larger than the opposing one, and the style bent to one side. The calyx, corolla, or stamens can be showy in this group. In the Mimosoideae, the flowers are actinomorphic and arranged in globose inflorescences. The petals are small and the stamens, which can be more than just 10, have long, coloured filaments, which are the showiest part of the flower. All of the flowers in an inflorescence open at once. In the Faboideae, the flowers are zygomorphic, and have a specialized structure. The upper petal, called the banner or standard, is large and envelops the rest of the petals in bud, often reflexing when the flower blooms. The two adjacent petals, the wings, surround the two bottom petals. The two bottom petals are fused together at the apex (remaining free at the base), forming a boat-like structure called the keel. The stamens are always ten in number, and their filaments can be fused in various configurations, often in a group of nine stamens plus one separate stamen. Various genes in the CYCLOIDEA (CYC)/DICHOTOMA (DICH) family are expressed in the upper (also called dorsal or adaxial) petal; in some species, such as Cadia, these genes are expressed throughout the flower, producing a radially symmetrical flower. Fruit The ovary most typically develops into a legume. A legume is a simple dry fruit that usually dehisces (opens along a seam) on two sides. 
A common name for this type of fruit is a "pod", although that can also be applied to a few other fruit types. A few species have evolved samarae, loments, follicles, indehiscent legumes, achenes, drupes, and berries from the basic legume fruit.
Physiology and biochemistry
The Fabaceae are rarely cyanogenic. Where they are, the cyanogenic compounds are derived from tyrosine, phenylalanine or leucine. They frequently contain alkaloids. Proanthocyanidins can be present either as cyanidin or delphinidin or both at the same time. Flavonoids such as kaempferol, quercetin and myricetin are often present. Ellagic acid has never been found in any of the genera or species analysed. Sugars are transported within the plants in the form of sucrose. C3 photosynthesis has been found in a wide variety of genera. The family has also evolved a unique chemistry. Many legumes contain toxic and indigestible substances, antinutrients, which may be removed through various processing methods. Pterocarpans are a class of molecules (derivatives of isoflavonoids) found only in the Fabaceae. Forisome proteins are found in the sieve tubes of Fabaceae; uniquely, they are not dependent on ATP.
Evolution, phylogeny and taxonomy
Evolution
The order Fabales contains around 7.3% of eudicot species and the greatest part of this diversity is contained in just one of the four families that the order contains: Fabaceae. This clade also includes the families Polygalaceae, Surianaceae and Quillajaceae and its origins date back 94 to 89 million years, although it started its diversification 79 to 74 million years ago. The Fabaceae diversified during the Paleogene to become a ubiquitous part of the modern earth's biota, along with many other families belonging to the flowering plants. The Fabaceae have an abundant and diverse fossil record, especially for the Tertiary period. Fossils of flowers, fruit, leaves, wood and pollen from this period have been found in numerous locations. The earliest fossils that can be definitively assigned to the Fabaceae appeared in the early Palaeocene (approximately 65 million years ago). Representatives of the 3 sub-families traditionally recognised as being members of the Fabaceae – Caesalpinioideae, Papilionoideae and Mimosoideae – as well as members of the large clades within these sub-families – such as the genistoids – have been found in later periods, starting between 55 and 50 million years ago. In fact, a wide variety of taxa representing the main lineages in the Fabaceae have been found in the fossil record dating from the middle to the late Eocene, suggesting that the majority of the modern Fabaceae groups were already present and that a broad diversification occurred during this period. Therefore, the Fabaceae started their diversification approximately 60 million years ago and the most important clades separated 50 million years ago. The age of the main Caesalpinioideae clades has been estimated as between 56 and 34 million years and the basal group of the Mimosoideae as 44 ± 2.6 million years. The division between Mimosoideae and Faboideae is dated as occurring between 59 and 34 million years ago and the basal group of the Faboideae as 58.6 ± 0.2 million years ago. It has been possible to date the divergence of some of the groups within the Faboideae, even though diversification within each genus was relatively recent. For instance, Astragalus separated from Oxytropis 16 to 12 million years ago.
In addition, the separation of the aneuploid species of Neoastragalus started 4 million years ago. Inga, a genus of the Mimosoideae with approximately 350 species, seems to have diverged in the last 2 million years. It has been suggested, based on fossil and phylogenetic evidence, that legumes originally evolved in arid and/or semi-arid regions along the Tethys seaway during the Palaeogene Period. However, others contend that Africa (or even the Americas) cannot yet be ruled out as the origin of the family. The current hypothesis about the evolution of the genes needed for nodulation is that they were recruited from other pathways after a polyploidy event. Several different pathways have been implicated as donating duplicated genes to the pathways needed for nodulation. The main donors to the pathway were the genes associated with the arbuscular mycorrhiza symbiosis, the pollen tube formation genes and the haemoglobin genes. One of the main genes shown to be shared between the arbuscular mycorrhiza pathway and the nodulation pathway is SYMRK, which is involved in plant-bacterial recognition. Pollen tube growth is similar to infection thread development in that infection threads grow in a polar manner, much as a pollen tube's polar growth proceeds towards the ovules. Both pathways include the same type of enzymes: pectin-degrading cell wall enzymes. The enzymes needed to reduce nitrogen, nitrogenases, require a substantial input of ATP but at the same time are sensitive to free oxygen. To meet the requirements of this paradoxical situation, the plants express a type of haemoglobin called leghaemoglobin that is believed to have been recruited after a duplication event. These three genetic pathways are believed to have contributed genes that were duplicated and then recruited to work in nodulation.
Phylogeny and taxonomy
Phylogeny
The phylogeny of the legumes has been the object of many studies by research groups from around the world. These studies have used morphology, DNA data (the chloroplast intron trnL, the chloroplast genes rbcL and matK, or the ribosomal spacers ITS) and cladistic analysis in order to investigate the relationships between the family's different lineages. Fabaceae is consistently recovered as monophyletic. The studies further confirmed that the traditional subfamilies Mimosoideae and Papilionoideae were each monophyletic but both were nested within the paraphyletic subfamily Caesalpinioideae. All the different approaches yielded similar results regarding the relationships between the family's main clades. Following extensive discussion in the legume phylogenetics community, the Legume Phylogeny Working Group reclassified Fabaceae into six subfamilies, which necessitated the segregation of four new subfamilies from Caesalpinioideae and merging Caesalpinioideae sensu stricto with the former subfamily Mimosoideae. The exact branching order of the different subfamilies is still unresolved.
Taxonomy
The Fabaceae are placed in the order Fabales according to most taxonomic systems, including the APG III system. The family now includes six subfamilies:
Cercidoideae: 12 genera and ~335 species. Mainly tropical. Bauhinia, Cercis.
Detarioideae: 84 genera and ~760 species. Mainly tropical. Amherstia, Detarium, Tamarindus.
Duparquetioideae: 1 genus and 1 species. West and Central Africa. Duparquetia.
Dialioideae: 17 genera and ~85 species. Widespread throughout the tropics. Dialium.
Caesalpinioideae: 148 genera and ~4400 species. Pantropical.
Caesalpinia, Senna, Mimosa, Acacia. Includes the former subfamily Mimosoideae (80 genera and ~3200 species; mostly tropical and warm temperate Asia and America). Faboideae (Papilionoideae): 503 genera and ~14,000 species. Cosmopolitan. Astragalus, Lupinus, Pisum. Ecology Distribution and habitat The Fabaceae have an essentially worldwide distribution, being found everywhere except Antarctica and the high Arctic. The trees are often found in tropical regions, while the herbaceous plants and shrubs are predominant outside the tropics. Biological nitrogen fixation Biological nitrogen fixation (BNF, performed by the organisms called diazotrophs) is a very old process that probably originated in the Archean eon when the primitive atmosphere lacked oxygen. It is only carried out by Euryarchaeota and just 6 of the more than 50 phyla of bacteria. Some of these lineages co-evolved together with the flowering plants establishing the molecular basis of a mutually beneficial symbiotic relationship. BNF is carried out in nodules that are mainly located in the root cortex, although they are occasionally located in the stem as in Sesbania rostrata. The spermatophytes that co-evolved with actinorhizal diazotrophs (Frankia) or with rhizobia to establish their symbiotic relationship belong to 11 families contained within the Rosidae clade (as established by the gene molecular phylogeny of rbcL, a gene coding for part of the RuBisCO enzyme in the chloroplast). This grouping indicates that the predisposition for forming nodules probably only arose once in flowering plants and that it can be considered as an ancestral characteristic that has been conserved or lost in certain lineages. However, such a wide distribution of families and genera within this lineage indicates that nodulation had multiple origins. Of the 10 families within the Rosidae, 8 have nodules formed by actinomyces (Betulaceae, Casuarinaceae, Coriariaceae, Datiscaceae, Elaeagnaceae, Myricaceae, Rhamnaceae and Rosaceae), and the two remaining families, Ulmaceae and Fabaceae have nodules formed by rhizobia. The rhizobia and their hosts must be able to recognize each other for nodule formation to commence. Rhizobia are specific to particular host species although a rhizobia species may often infect more than one host species. This means that one plant species may be infected by more than one species of bacteria. For example, nodules in Acacia senegal can contain seven species of rhizobia belonging to three different genera. The most distinctive characteristics that allow rhizobia to be distinguished apart are the rapidity of their growth and the type of root nodule that they form with their host. Root nodules can be classified as being either indeterminate, cylindrical and often branched, and determinate, spherical with prominent lenticels. Indeterminate nodules are characteristic of legumes from temperate climates, while determinate nodules are commonly found in species from tropical or subtropical climates. Nodule formation is common throughout the Fabaceae. It is found in the majority of its members that only form an association with rhizobia, which in turn form an exclusive symbiosis with the Fabaceae (with the exception of Parasponia, the only genus of the 18 Ulmaceae genera that is capable of forming nodules). Nodule formation is present in all the Fabaceae sub-families, although it is less common in the Caesalpinioideae. 
All types of nodule formation are present in the subfamily Papilionoideae: indeterminate (with the meristem retained), determinate (without meristem) and the type included in Aeschynomene. The latter two are thought to be the most modern and specialised types of nodule, as they are only present in some lines of the subfamily Papilionoideae. Even though nodule formation is common in the two monophyletic subfamilies Papilionoideae and Mimosoideae, they also contain species that do not form nodules. The presence or absence of nodule-forming species within the three sub-families indicates that nodule formation has arisen several times during the evolution of the Fabaceae and that this ability has been lost in some lineages. For example, within the genus Acacia, a member of the Mimosoideae, A. pentagona does not form nodules, while other species of the same genus readily form nodules, as is the case for Acacia senegal, which forms both rapidly growing and slow-growing rhizobial nodules.
Chemical ecology
A large number of species within many genera of leguminous plants, e.g. Astragalus, Coronilla, Hippocrepis, Indigofera, Lotus, Securigera and Scorpiurus, produce chemicals that derive from the compound 3-nitropropanoic acid (3-NPA, beta-nitropropionic acid). The free acid 3-NPA is an irreversible inhibitor of mitochondrial respiration, and thus the compound inhibits the tricarboxylic acid cycle. This inhibition caused by 3-NPA is especially toxic to nerve cells and represents a very general toxic mechanism, suggesting a profound ecological importance due to the large number of species producing this compound and its derivatives. A second and closely related class of secondary metabolites that occur in many species of leguminous plants is defined by isoxazolin-5-one derivatives. These compounds occur in particular together with 3-NPA and related derivatives at the same time in the same species, as found in Astragalus canadensis and Astragalus collinus. 3-NPA and isoxazolin-5-one derivatives also occur in many species of leaf beetles (see defense in insects).
Economic and cultural importance
Legumes are economically and culturally important plants due to their extraordinary diversity and abundance, the wide variety of edible vegetables they represent, and the variety of uses they can be put to: in horticulture and agriculture, as a food, for the compounds they contain that have medicinal uses, and for the oils and fats they contain that have a variety of uses.
Food and forage
The history of legumes is tied in closely with that of human civilization, appearing early in Asia, the Americas (the common bean, several varieties) and Europe (broad beans) by 6,000 BCE, where they became a staple, essential as a source of protein. Their ability to fix atmospheric nitrogen reduces fertilizer costs for farmers and gardeners who grow legumes, and means that legumes can be used in a crop rotation to replenish soil that has been depleted of nitrogen. Legume seeds and foliage have a comparatively higher protein content than non-legume materials, due to the additional nitrogen that legumes receive through nitrogen fixation. Legumes are commonly used as natural fertilizers. Some legume species perform hydraulic lift, which makes them ideal for intercropping. Farmed legumes can belong to numerous classes, including forage, grain, blooms, pharmaceutical/industrial, fallow/green manure and timber species, with most commercially farmed species filling two or more roles simultaneously. There are two broad types of forage legumes.
Some, like alfalfa, clover, vetch, and Arachis, are sown in pasture and grazed by livestock. Other forage legumes such as Leucaena or Albizia are woody shrub or tree species that are either broken down by livestock or regularly cut by humans to provide fodder. Grain legumes are cultivated for their seeds, and are also called pulses. The seeds are used for human and animal consumption or for the production of oils for industrial uses. Grain legumes include both herbaceous plants like beans, lentils, lupins, peas and peanuts, and trees such as carob, mesquite and tamarind. Lathyrus tuberosus, once extensively cultivated in Europe, forms tubers used for human consumption. Bloom legume species include species such as lupin, which are farmed commercially for their blooms, and thus are popular in gardens worldwide. Laburnum, Robinia, Gleditsia (honey locust), Acacia, Mimosa, and Delonix are ornamental trees and shrubs. Industrial farmed legumes include Indigofera, cultivated for the production of indigo, Acacia, for gum arabic, and Derris, for the insecticide action of rotenone, a compound it produces. Fallow or green manure legume species are cultivated to be tilled back into the soil to exploit the high nitrogen levels found in most legumes. Numerous legumes are farmed for this purpose, including Leucaena, Cyamopsis and Sesbania. Various legume species are farmed for timber production worldwide, including numerous Acacia species, Dalbergia species, and Castanospermum australe. Melliferous plants offer nectar to bees and other insects to encourage them to carry pollen from the flowers of one plant to others thereby ensuring pollination. Many Fabaceae species are important sources of pollen and nectar for bees, including for honey production in the beekeeping industry. Example Fabaceae such as alfalfa, and various clovers including white clover and sweet clover, are important sources of nectar and honey for the Western honey bee. Industrial uses Natural gums Natural gums are vegetable exudates that are released as the result of damage to the plant such as that resulting from the attack of an insect or a natural or artificial cut. These exudates contain heterogeneous polysaccharides formed of different sugars and usually containing uronic acids. They form viscous colloidal solutions. There are different species that produce gums. The most important of these species belong to the Fabaceae. They are widely used in the pharmaceutical, cosmetic, food, and textile sectors. They also have interesting therapeutic properties; for example gum arabic is antitussive and anti-inflammatory. The most well known gums are tragacanth (Astragalus gummifer), gum arabic (Acacia senegal) and guar gum (Cyamopsis tetragonoloba). Dyes Several species of Fabaceae are used to produce dyes. The heartwood of logwood, Haematoxylon campechianum, is used to produce red and purple dyes. The histological stain called haematoxylin is produced from this species. The wood of the Brazilwood tree (Caesalpinia echinata) is also used to produce a red or purple dye. The Madras thorn (Pithecellobium dulce) has reddish fruit that are used to produce a yellow dye. Indigo dye is extracted from the indigo plant Indigofera tinctoria that is native to Asia. In Central and South America dyes are produced from two species in the same genus: indigo and Maya blue from Indigofera suffruticosa and Natal indigo from Indigofera arrecta. 
Yellow dyes are extracted from Butea monosperma, commonly called flame of the forest and from dyer's greenweed, (Genista tinctoria). Ornamentals Legumes have been used as ornamental plants throughout the world for many centuries. Their vast diversity of heights, shapes, foliage and flower colour means that this family is commonly used in the design and planting of everything from small gardens to large parks. The following is a list of the main ornamental legume species, listed by subfamily. Subfamily Caesalpinioideae: Bauhinia forficata, Caesalpinia gilliesii, Caesalpinia spinosa, Ceratonia siliqua, Cercis siliquastrum, Gleditsia triacanthos, Gymnocladus dioica, Parkinsonia aculeata, Senna multiglandulosa. Subfamily Mimosoideae: Acacia caven, Acacia cultriformis, Acacia dealbata, Acacia karroo, Acacia longifolia, Acacia melanoxylon, Acacia paradoxa, Acacia retinodes, Acacia saligna, Acacia verticillata, Acacia visco, Albizzia julibrissin, Calliandra tweediei, Paraserianthes lophantha, Prosopis chilensis. Subfamily Faboideae: Clianthus puniceus, Cytisus scoparius, Erythrina crista-galli, Erythrina falcata, Laburnum anagyroides, Lotus peliorhynchus, Lupinus arboreus, Lupinus polyphyllus, Otholobium glandulosum, Retama monosperma, Robinia hispida, Robinia luxurians, Robinia pseudoacacia, Sophora japonica, Sophora macnabiana, Sophora macrocarpa, Spartium junceum, Teline monspessulana, Tipuana tipu, Wisteria sinensis. Emblematic Fabaceae The cockspur coral tree (Erythrina crista-galli), is the national flower of Argentina and Uruguay. The elephant ear tree (Enterolobium cyclocarpum) is the national tree of Costa Rica, by Executive Order of 31 August 1959. The brazilwood tree (Caesalpinia echinata) has been the national tree of Brazil since 1978. The golden wattle Acacia pycnantha is Australia's national flower. The Hong Kong orchid tree Bauhinia blakeana is the national flower of Hong Kong. Image gallery References External links Fabaceae at the Angiosperm Phylogeny Website LegumeWeb Database at the International Legume Database & Information Service (ILDIS) Fabaceae Nitrogen cycle Extant Paleocene first appearances Rosid families Soil improvers
Fabaceae
[ "Chemistry" ]
6,340
[ "Nitrogen cycle", "Metabolism" ]
62,810
https://en.wikipedia.org/wiki/Reelin
Reelin, encoded by the RELN gene, is a large secreted extracellular matrix glycoprotein that helps regulate processes of neuronal migration and positioning in the developing brain by controlling cell–cell interactions. Besides this important role in early development, reelin continues to work in the adult brain. It modulates synaptic plasticity by enhancing the induction and maintenance of long-term potentiation. It also stimulates dendrite and dendritic spine development in the hippocampus, and regulates the continuing migration of neuroblasts generated in adult neurogenesis sites of the subventricular and subgranular zones. It is found not only in the brain but also in the liver, thyroid gland, adrenal gland, fallopian tube, breast and in comparatively lower levels across a range of anatomical regions. Reelin has been suggested to be implicated in the pathogenesis of several brain diseases. The expression of the protein has been found to be significantly lower in schizophrenia and psychotic bipolar disorder, but the cause of this observation remains uncertain, as studies show that psychotropic medication itself affects reelin expression. Moreover, epigenetic hypotheses aimed at explaining the changed levels of reelin expression are controversial. Total lack of reelin causes a form of lissencephaly. Reelin may also play a role in Alzheimer's disease, temporal lobe epilepsy and autism. Reelin's name comes from the abnormal reeling gait of reeler mice, which were later found to have a deficiency of this brain protein and were homozygous for mutation of the RELN gene. The primary phenotype associated with loss of reelin function is a failure of neuronal positioning throughout the developing central nervous system (CNS). Mice heterozygous for the reelin gene, while having few neuroanatomical defects, display the endophenotypic traits linked to psychotic disorders. Discovery Mutant mice have provided insight into the underlying molecular mechanisms of the development of the central nervous system. Useful spontaneous mutations were first identified by scientists who were interested in motor behavior, and it proved relatively easy to screen littermates for mice that showed difficulties moving around the cage. A number of such mice were found and given descriptive names such as reeler, weaver, lurcher, nervous, and staggerer. The "reeler" mouse was first described in 1951 by D. S. Falconer at Edinburgh University as a spontaneous variant that had arisen in 1948 in a colony of at least mildly inbred, snowy-white-bellied mouse stock. Histopathological studies in the 1960s revealed that the cerebellum of reeler mice is dramatically decreased in size while the normal laminar organization found in several brain regions is disrupted. The 1970s brought about the discovery of cellular layer inversion in the mouse neocortex, which attracted more attention to the reeler mutation. In 1994, a new allele of reeler was obtained by means of insertional mutagenesis. This provided the first molecular marker of the locus, permitting the RELN gene to be mapped to chromosome 7q22 and subsequently cloned and identified. Japanese scientists at Kochi Medical School successfully raised antibodies against normal brain extracts in reeler mice; these antibodies were later found to be specific monoclonal antibodies for reelin and were termed CR-50 (Cajal-Retzius marker 50). They noted that CR-50 reacted specifically with Cajal-Retzius neurons, whose functional role was unknown until then.
The Reelin receptors, apolipoprotein E receptor 2 (ApoER2) and very-low-density lipoprotein receptor (VLDLR), were discovered by Trommsdorff, Herz and colleagues, who initially found that the cytosolic adaptor protein Dab1 interacts with the cytoplasmic domain of LDL receptor family members. They then went on to show that the double knockout mice for ApoER2 and VLDLR, which both interact with Dab1, had cortical layering defects similar to those in reeler. The downstream pathway of reelin was further clarified with the help of other mutant mice, including yotari and scrambler. These mutants have phenotypes similar to that of reeler mice, but without mutation in reelin. It was then demonstrated that the mouse disabled homologue 1 (Dab1) gene is responsible for the phenotypes of these mutant mice, as Dab1 protein was absent (yotari) or only barely detectable (scrambler) in these mutants. Targeted disruption of Dab1 also caused a phenotype similar to that of reeler. Pinpointing the DAB1 as a pivotal regulator of the reelin signaling cascade started the tedious process of deciphering its complex interactions. There followed a series of speculative reports linking reelin's genetic variation and interactions to schizophrenia, Alzheimer's disease, autism and other highly complex dysfunctions. These and other discoveries, coupled with the perspective of unraveling the evolutionary changes that allowed for the creation of human brain, highly intensified the research. As of 2008, some 13 years after the gene coding the protein was discovered, hundreds of scientific articles address the multiple aspects of its structure and functioning. Tissue distribution and secretion Studies show that reelin is absent from synaptic vesicles and is secreted via constitutive secretory pathway, being stored in Golgi secretory vesicles. Reelin's release rate is not regulated by depolarization, but strictly depends on its synthesis rate. This relationship is similar to that reported for the secretion of other extracellular matrix proteins. During the brain development, reelin is secreted in the cortex and hippocampus by the so-called Cajal-Retzius cells, Cajal cells, and Retzius cells. Reelin-expressing cells in the prenatal and early postnatal brain are predominantly found in the marginal zone (MZ) of the cortex and in the temporary subpial granular layer (SGL), which is manifested to the highest extent in human, and in the hippocampal stratum lacunosum-moleculare and the upper marginal layer of the dentate gyrus. In the developing cerebellum, reelin is expressed first in the external granule cell layer (EGL), before the granule cell migration to the internal granule cell layer (IGL) takes place. Having peaked just after the birth, the synthesis of reelin subsequently goes down sharply, becoming more diffuse compared with the distinctly laminar expression in the developing brain. In the adult brain, reelin is expressed by GABA-ergic interneurons of the cortex and glutamatergic cerebellar neurons, the glutamatergic stellate cells and fan cells in the superficial entorhinal cortex that are supposed to carry a role in encoding new episodic memories, and by the few extant Cajal-Retzius cells. Among GABAergic interneurons, reelin seems to be detected predominantly in those expressing calretinin and calbindin, like bitufted, horizontal, and Martinotti cells, but not parvalbumin-expressing cells, like chandelier or basket neurons. 
In the white matter, a minute proportion of interstitial neurons has also been found to stain positive for reelin expression. Outside the brain, reelin is found in adult mammalian blood, liver, pituitary pars intermedia, and adrenal chromaffin cells. In the liver, reelin is localized in hepatic stellate cells. The expression of reelin increases when the liver is damaged, and returns to normal following its repair. In the eyes, reelin is secreted by retinal ganglion cells and is also found in the endothelial layer of the cornea. Just as in the liver, its expression increases after an injury has taken place. The protein is also produced by the odontoblasts, which are cells at the margins of the dental pulp. Reelin is found here both during odontogenesis and in the mature tooth. Some authors suggest that odontoblasts play an additional role as sensory cells able to transduce pain signals to the nerve endings. According to the hypothesis, reelin participates in the process by enhancing the contact between odontoblasts and the nerve terminals. Structure Reelin is composed of 3461 amino acids with a relative molecular mass of 388 kDa. It also has serine protease activity. Murine RELN gene consists of 65 exons spanning approximately 450 kb. One exon, coding for only two amino acids near the protein's C-terminus, undergoes alternative splicing, but the exact functional impact of this is unknown. Two transcription initiation sites and two polyadenylation sites are identified in the gene structure. The reelin protein starts with a signaling peptide 27 amino acids in length, followed by a region bearing similarity to F-spondin (the reeler domain), marked as "SP" on the scheme, and by a region unique to reelin, marked as "H". Next comes 8 repeats of 300–350 amino acids. These are called reelin repeats and have an epidermal growth factor motif at their center, dividing each repeat into two subrepeats, A (the BNR/Asp-box repeat) and B (the EGF-like domain). Despite this interruption, the two subdomains make direct contact, resulting in a compact overall structure. The final reelin domain contains a highly basic and short C-terminal region (CTR, marked "+") with a length of 32 amino acids. This region is highly conserved, being 100% identical in all investigated mammals. It was thought that CTR is necessary for reelin secretion, because the Orleans reeler mutation, which lacks a part of 8th repeat and the whole CTR, is unable to secrete the misshaped protein, leading to its concentration in cytoplasm. However, other studies have shown that the CTR is not essential for secretion itself, but mutants lacking the CTR were much less efficient in activating downstream signaling events. Reelin is cleaved in vivo at two sites located after domains 2 and 6 – approximately between repeats 2 and 3 and between repeats 6 and 7, resulting in the production of three fragments. This splitting does not decrease the protein's activity, as constructs made of the predicted central fragments (repeats 3–6) bind to lipoprotein receptors, trigger Dab1 phosphorylation and mimic functions of reelin during cortical plate development. Moreover, the processing of reelin by embryonic neurons may be necessary for proper corticogenesis. Function The primary functions of Reelin are the regulation of corticogenesis and neuronal cell positioning in the prenatal period, but the protein also continues to play a role in adults. 
Reelin is found in numerous tissues and organs, and one could roughly subdivide its functional roles by the time of expression and by the localisation of its action. During development A number of non-nervous tissues and organs express reelin during development, with the expression going down sharply after organs have been formed. The role of the protein here is largely unexplored, because the knockout mice show no major pathology in these organs. Reelin's role in the growing central nervous system has been extensively characterized. It promotes the differentiation of progenitor cells into radial glia and affects the orientation of their fibers, which serve as the guides for the migrating neuroblasts. The position of the reelin-secreting cell layer is important, because the fibers orient themselves in the direction of its higher concentration. For example, reelin regulates the development of layer-specific connections in the hippocampus and entorhinal cortex. Mammalian corticogenesis is another process in which reelin plays a major role. In this process the temporary layer called the preplate is split into the marginal zone on the top and the subplate below, and the space between them is populated by neuronal layers in an inside-out pattern. Such an arrangement, where the newly created neurons pass through the settled layers and position themselves one step above, is a distinguishing feature of the mammalian brain, in contrast to the evolutionarily older reptile cortex, in which layers are positioned in an "outside-in" fashion. When reelin is absent, as in the mutant reeler mouse, the order of cortical layering becomes roughly inverted, with younger neurons unable to pass the settled layers. Subplate neurons fail to stop and invade the uppermost layer, creating the so-called superplate, in which they mix with Cajal-Retzius cells and some cells normally destined for the second layer. There is no agreement concerning the role of reelin in the proper positioning of cortical layers. The original hypothesis, that the protein is a stop signal for the migrating cells, is supported by its ability to induce dissociation, its role in maintaining the compact granule cell layer in the hippocampus, and by the fact that migrating neuroblasts evade the reelin-rich areas. However, an experiment in which murine corticogenesis proceeded normally despite a malpositioned reelin-secreting layer, together with the lack of evidence that reelin affects the growth cones and leading edges of neurons, caused some additional hypotheses to be proposed. According to one of them, reelin makes the cells more susceptible to some as yet undescribed positional signaling cascade. Reelin may also ensure correct neuronal positioning in the spinal cord: according to one study, the location and level of its expression affect the movement of sympathetic preganglionic neurons. The protein is thought to act on migrating neuronal precursors and thus to control correct cell positioning in the cortex and other brain structures. The proposed role is one of a dissociation signal for neuronal groups, allowing them to separate and go from tangential chain-migration to radial individual migration. Dissociation detaches migrating neurons from the glial cells that are acting as their guides, converting them into individual cells that can strike out alone to find their final position. Reelin takes part in the developmental change of NMDA receptor configuration, increasing the mobility of NR2B-containing receptors and thus decreasing the time they spend at the synapse.
It has been hypothesized that this may be a part of the mechanism behind the "NR2B-NR2A switch" that is observed in the brain during its postnatal development. Ongoing reelin secretion by GABAergic hippocampal neurons is necessary to keep NR2B-containing NMDA receptors at a low level. In adults In the adult nervous system, reelin plays an eminent role at the two most active neurogenesis sites, the subventricular zone and the dentate gyrus. In some species, the neuroblasts from the subventricular zone migrate in chains in the rostral migratory stream (RMS) to reach the olfactory bulb, where reelin dissociates them into individual cells that are able to migrate further individually. They change their mode of migration from tangential to radial, and begin using the radial glia fibers as their guides. There are studies showing that along the RMS itself the two receptors, ApoER2 and VLDLR, and their intracellular adapter DAB1 function independently of reelin, most likely under the influence of a newly proposed ligand, thrombospondin-1. In the adult dentate gyrus, reelin provides guidance cues for new neurons that are constantly arriving in the granule cell layer from the subgranular zone, keeping the layer compact. Reelin also plays an important role in the adult brain by modulating cortical pyramidal neuron dendritic spine expression density, the branching of dendrites, and the expression of long-term potentiation, as its secretion continues diffusely from the GABAergic cortical interneurons whose origin is traced to the medial ganglionic eminence. In the adult organism the non-neural expression is much less widespread, but goes up sharply when some organs are injured. The exact function of reelin upregulation following an injury is still being researched. Evolutionary significance Reelin-DAB1 interactions could have played a key role in the structural evolution of the cortex, which evolved from a single layer in the common predecessor of the amniotes into the multiple-layered cortex of contemporary mammals. Research shows that reelin expression goes up as the cortex becomes more complex, reaching its maximum in the human brain, in which the reelin-secreting Cajal-Retzius cells have significantly more complex axonal arbours. Reelin is present in the telencephalon of all the vertebrates studied so far, but the pattern of expression differs widely. For example, zebrafish have no Cajal-Retzius cells at all; instead, the protein is secreted by other neurons. These cells do not form a dedicated layer in amphibians, and radial migration in their brains is very weak. As the cortex becomes more complex and convoluted, migration along the radial glia fibers becomes more important for proper lamination. The emergence of a distinct reelin-secreting layer is thought to play an important role in this evolution. There are conflicting data concerning the importance of this layer, and these are explained in the literature either by the existence of an additional positional signaling mechanism that interacts with the reelin cascade, or by the assumption that the mice used in such experiments have redundant secretion of reelin compared with the more localized synthesis in the human brain. Cajal-Retzius cells, most of which disappear around the time of birth, coexpress reelin with the HAR1 gene, which is thought to have undergone the most significant evolutionary change in humans compared with the chimpanzee, being the most "evolutionarily accelerated" of the genes from the human accelerated regions.
There is also evidence that variants in the DAB1 gene have been involved in a recent selective sweep in Chinese populations. Mechanism of action Receptors Reelin's control of cell-cell interactions is thought to be mediated by binding of reelin to two members of the low-density lipoprotein receptor gene family: VLDLR and ApoER2. The two main reelin receptors seem to have slightly different roles: VLDLR conducts the stop signal, while ApoER2 is essential for the migration of late-born neocortical neurons. It has also been shown that the N-terminal region of reelin, a site distinct from the region of reelin shown to associate with VLDLR/ApoER2, binds to the alpha-3-beta-1 integrin receptor. The proposal that the protocadherin CNR1 behaves as a reelin receptor has been disproven. As members of the lipoprotein receptor superfamily, both VLDLR and ApoER2 have in their structure an internalization domain called the NPxY motif. After binding to the receptors, reelin is internalized by endocytosis, and the N-terminal fragment of the protein is re-secreted. This fragment may serve postnatally to prevent apical dendrites of cortical layer II/III pyramidal neurons from overgrowth, acting via a pathway independent of canonical reelin receptors. Reelin receptors are present on both neurons and glial cells. Furthermore, radial glia express the same amount of ApoER2 but are ten times less rich in VLDLR. Beta-1 integrin receptors on glial cells play a more important role in neuronal layering than the same receptors on the migrating neuroblasts. Reelin-dependent strengthening of long-term potentiation is caused by the interaction of ApoER2 with the NMDA receptor. This interaction happens when ApoER2 has a region coded by exon 19. The ApoER2 gene is alternatively spliced, with the exon 19-containing variant more actively produced during periods of activity. According to one study, hippocampal reelin expression rises rapidly when there is a need to store a memory, as demethylases open up the RELN gene. The activation of dendrite growth by reelin is apparently conducted through Src family kinases and is dependent upon the expression of Crk family proteins, consistent with the interaction of Crk and CrkL with tyrosine-phosphorylated Dab1. Moreover, a Cre-loxP recombination mouse model that lacks Crk and CrkL in most neurons was reported to have the reeler phenotype, indicating that Crk/CrkL lie between DAB1 and Akt in the reelin signaling chain. Signaling cascades Reelin activates the signaling cascade of Notch-1, inducing the expression of FABP7 and prompting progenitor cells to assume a radial glial phenotype. In addition, corticogenesis in vivo is highly dependent upon reelin being processed by embryonic neurons, which are thought to secrete some as yet unidentified metalloproteinases that free the central, signal-competent part of the protein. Some other unknown proteolytic mechanisms may also play a role. It is supposed that full-sized reelin sticks to the extracellular matrix fibers in the upper levels, while the central fragments, freed as reelin is broken up, are able to permeate to the lower levels. It is possible that as neuroblasts reach the higher levels they stop their migration, either because of the heightened combined expression of all forms of reelin, or due to the peculiar mode of action of the full-sized reelin molecules and their homodimers.
The intracellular adaptor DAB1 binds to the VLDLR and ApoER2 through an NPxY motif and is involved in transmission of Reelin signals through these lipoprotein receptors. It becomes phosphorylated by Src and Fyn kinases and apparently stimulates the actin cytoskeleton to change its shape, affecting the proportion of integrin receptors on the cell surface, which leads to the change in adhesion. Phosphorylation of DAB1 leads to its ubiquitination and subsequent degradation, and this explains the heightened levels of DAB1 in the absence of reelin. Such negative feedback is thought to be important for proper cortical lamination. Activated by two antibodies, VLDLR and ApoER2 cause DAB1 phosphorylation but seemingly without the subsequent degradation and without rescuing the reeler phenotype, and this may indicate that a part of the signal is conducted independently of DAB1. A protein having an important role in lissencephaly and accordingly called LIS1 (PAFAH1B1), was shown to interact with the intracellular segment of VLDLR, thus reacting to the activation of reelin pathway. Complexes Reelin molecules have been shown to form a large protein complex, a disulfide-linked homodimer. If the homodimer fails to form, efficient tyrosine phosphorylation of DAB1 in vitro fails. Moreover, the two main receptors of reelin are able to form clusters that most probably play a major role in the signaling, causing the intracellular adaptor DAB1 to dimerize or oligomerize in its turn. Such clustering has been shown in the study to activate the signaling chain even in the absence of Reelin itself. In addition, reelin itself can cut the peptide bonds holding other proteins together, being a serine protease, and this may affect the cellular adhesion and migration processes. Reelin signaling leads to phosphorylation of actin-interacting protein cofilin 1 at ser3; this may stabilize the actin cytoskeleton and anchor the leading processes of migrating neuroblasts, preventing their further growth. Interaction with Cdk5 Cyclin-dependent kinase 5 (Cdk5), a major regulator of neuronal migration and positioning, is known to phosphorylate DAB1 and other cytosolic targets of reelin signaling, such as Tau, which could be activated also via reelin-induced deactivation of GSK3B, and NUDEL, associated with Lis1, one of the DAB1 targets. LTP induction by reelin in hippocampal slices fails in p35 knockouts. P35 is a key Cdk5 activator, and double p35/Dab1, p35/RELN, p35/ApoER2, p35/VLDLR knockouts display increased neuronal migration deficits, indicating a synergistic action of reelin → ApoER2/VLDLR → DAB1 and p35/p39 → Cdk5 pathways in the normal corticogenesis. Possible pathological role Lissencephaly Disruptions of the RELN gene are considered to be the cause of the rare form of lissencephaly with cerebellar hypoplasia classed as a microlissencephaly called Norman-Roberts syndrome. The mutations disrupt splicing of the RELN mRNA transcript, resulting in low or undetectable amounts of reelin protein. The phenotype in these patients was characterized by hypotonia, ataxia, and developmental delay, with lack of unsupported sitting and profound mental retardation with little or no language development. Seizures and congenital lymphedema are also present. A novel chromosomal translocation causing the syndrome was described in 2007. 
Schizophrenia Reduced expression of reelin and its mRNA levels in the brains of schizophrenia sufferers had been reported in 1998 and 2000, and independently confirmed in postmortem studies of the hippocampus, cerebellum, basal ganglia, and cerebral cortex. The reduction may reach up to 50% in some brain regions and is coupled with reduced expression of GAD-67 enzyme, which catalyses the transition of glutamate to GABA. Blood levels of reelin and its isoforms are also altered in schizophrenia, along with mood disorders, according to one study. Reduced reelin mRNA prefrontal expression in schizophrenia was found to be the most statistically relevant disturbance found in the multicenter study conducted in 14 separate laboratories in 2001 by Stanley Foundation Neuropathology Consortium. Epigenetic hypermethylation of DNA in schizophrenia patients is proposed as a cause of the reduction, in agreement with the observations dating from the 1960s that administration of methionine to schizophrenic patients results in a profound exacerbation of schizophrenia symptoms in sixty to seventy percent of patients. The proposed mechanism is a part of the "epigenetic hypothesis for schizophrenia pathophysiology" formulated by a group of scientists in 2008 (D. Grayson; A. Guidotti; E. Costa). A postmortem study comparing a DNA methyltransferase (DNMT1) and Reelin mRNA expression in cortical layers I and V of schizophrenic patients and normal controls demonstrated that in the layer V both DNMT1 and Reelin levels were normal, while in the layer I DNMT1 was threefold higher, probably leading to the twofold decrease in the Reelin expression. There is evidence that the change is selective, and DNMT1 is overexpressed in reelin-secreting GABAergic neurons but not in their glutamatergic neighbours. Methylation inhibitors and histone deacetylase inhibitors, such as valproic acid, increase reelin mRNA levels, while L-methionine treatment downregulates the phenotypic expression of reelin. One study indicated the upregulation of histone deacetylase HDAC1 in the hippocampi of patients. Histone deacetylases suppress gene promoters; hyperacetylation of histones was shown in murine models to demethylate the promoters of both reelin and GAD67. DNMT1 inhibitors in animals have been shown to increase the expression of both reelin and GAD67, and both DNMT inhibitors and HDAC inhibitors shown in one study to activate both genes with comparable dose- and time-dependence. As one study shows, S-adenosyl methionine (SAM) concentration in patients' prefrontal cortex is twice as high as in the cortices of non-affected people. SAM, being a methyl group donor necessary for DNMT activity, could further shift epigenetic control of gene expression. Chromosome region 7q22 that harbours the RELN gene is associated with schizophrenia, and the gene itself was associated with the disease in a large study that found the polymorphism rs7341475 to increase the risk of the disease in women, but not in men. The women that have the single-nucleotide polymorphism (SNP) are about 1.4 times more likely to get ill, according to the study. Allelic variations of RELN have also been correlated with working memory, memory and executive functioning in nuclear families where one of the members suffers from schizophrenia. The association with working memory was later replicated. In one small study, nonsynonymous polymorphism Val997Leu of the gene was associated with left and right ventricular enlargement in patients. 
One study showed that patients have decreased levels of one of the reelin receptors, VLDLR, in peripheral lymphocytes. After six months of antipsychotic therapy the expression went up; according to the authors, peripheral VLDLR levels may serve as a reliable peripheral biomarker of schizophrenia. Considering the role of reelin in promoting dendritogenesis, suggestions have been made that the localized dendritic spine deficit observed in schizophrenia could be in part connected with the downregulation of reelin. The reelin pathway could also be linked to schizophrenia and other psychotic disorders through its interaction with risk genes. One example is the neuronal transcription factor NPAS3, disruption of which is linked to schizophrenia and learning disability. Knockout mice lacking NPAS3 or the similar protein NPAS1 have significantly lower levels of reelin; the precise mechanism behind this is unknown. Another example is the schizophrenia-linked gene MTHFR, with murine knockouts showing decreased levels of reelin in the cerebellum. Along the same lines, it is worth noting that the gene coding for the NR2B subunit, which is presumably affected by reelin in the NR2B->NR2A developmental change of NMDA receptor composition, stands as one of the strongest risk gene candidates. Another shared aspect between NR2B and RELN is that both can be regulated by the TBR1 transcription factor. The heterozygous reeler mouse, which is haploinsufficient for the RELN gene, shares several neurochemical and behavioral abnormalities with schizophrenia and bipolar disorder, but the exact relevance of these murine behavioral changes to the pathophysiology of schizophrenia remains debatable. As previously described, reelin plays a crucial role in modulating early neuroblast migration during brain development. Evidence of altered neural cell positioning in post-mortem schizophrenia patient brains, and changes to gene regulatory networks that control cell migration, suggest a potential link between altered reelin expression in patient brain tissue and disrupted cell migration during brain development. To model the role of reelin in the context of schizophrenia at a cellular level, olfactory neurosphere-derived cells were generated from the nasal biopsies of schizophrenia patients and compared to cells from healthy controls. Schizophrenia patient-derived cells have reduced levels of reelin mRNA and protein when compared to healthy control cells, but express the key reelin receptors and the DAB1 accessory protein. When grown in vitro, schizophrenia patient-derived cells were unable to respond to reelin coated onto tissue culture surfaces; in contrast, cells derived from healthy controls were able to alter their cell migration when exposed to reelin. This work went on to show that the lack of cell migration response in patient-derived cells was caused by the cells' inability to produce enough focal adhesions of the appropriate size when in contact with extracellular reelin. More research into schizophrenia cell-based models is needed to examine the function of reelin, or the lack of it, in the pathophysiology of schizophrenia. Bipolar disorder A decrease in RELN expression with concurrent upregulation of DNMT1 is typical of bipolar disorder with psychosis, but is not characteristic of patients with major depression without psychosis, which could indicate a specific association of the change with psychoses.
One study suggests that unlike in schizophrenia, such changes are found only in the cortex and do not affect the deeper structures in psychotic bipolar patients, as their basal ganglia were found to have normal levels of DNMT1 and, consequently, both reelin and GAD67 levels were within the normal range. In a genetic study conducted in 2009, preliminary evidence requiring further replication suggested that variation of the RELN gene (SNP rs362719) may be associated with susceptibility to bipolar disorder in women. Autism Autism is a neurodevelopmental disorder that is generally believed to be caused by mutations in several locations, likely triggered by environmental factors. The role of reelin in autism is not yet decided. Reelin was first implicated in autism in a 2001 study finding associations between autism and a polymorphic GGC/CGG repeat preceding the 5' ATG initiator codon of the RELN gene in an Italian population. Longer triplet repeats in the 5' region were associated with an increase in autism susceptibility. However, another study of 125 multiple-incidence families and 68 single-incidence families from the subsequent year found no significant difference in the length of the polymorphic repeats between affected individuals and controls, although, using a family-based association test, larger reelin alleles were found to be transmitted more frequently than expected to affected children. An additional study examining 158 subjects of German lineage likewise found no evidence of triplet repeat polymorphisms associated with autism. A larger study from 2004, consisting of 395 families, found no association between autistic subjects and the CGG triplet repeat, nor between allele size and age of first word. In 2010, a large study using data from four European cohorts found some evidence for an association between autism and the rs362780 RELN polymorphism. Studies of transgenic mice have been suggestive of an association, but not definitive. Temporal lobe epilepsy: granule cell dispersion Decreased reelin expression in hippocampal tissue samples from patients with temporal lobe epilepsy was found to be directly correlated with the extent of granule cell dispersion (GCD), a major feature of the disease that is noted in 45%–73% of patients. The dispersion, according to a small study, is associated with RELN promoter hypermethylation. According to one study, prolonged seizures in a rat model of mesial temporal lobe epilepsy led to the loss of reelin-expressing interneurons and subsequent ectopic chain migration and aberrant integration of newborn dentate granule cells. Without reelin, the chain-migrating neuroblasts failed to detach properly. Moreover, in a kainate-induced mouse epilepsy model, exogenous reelin prevented GCD, according to one study.
In the cortex of the patients, reelin levels were 40% higher compared with controls, but the cerebellar levels of the protein remained normal in the same patients. This finding is in agreement with an earlier study showing the presence of reelin associated with amyloid plaques in a transgenic AD mouse model. A large genetic study of 2008 showed that RELN gene variation is associated with an increased risk of Alzheimer's disease in women. The number of reelin-producing Cajal-Retzius cells is significantly decreased in the first cortical layer of patients. Reelin has been shown to interact with amyloid precursor protein and, according to one in-vitro study, is able to counteract the Aβ-induced dampening of NMDA-receptor activity. This is modulated by ApoE isoforms, which selectively alter the recycling of ApoER2 as well as AMPA and NMDA receptors. Cancer DNA methylation patterns are often changed in tumours, and the RELN gene could be affected: according to one study, in pancreatic cancer the expression is suppressed, along with other reelin pathway components. In the same study, disrupting the reelin pathway in cancer cells that still expressed reelin resulted in increased motility and invasiveness. In contrast, in prostate cancer RELN expression is excessive and correlates with Gleason score. Retinoblastoma presents another example of RELN overexpression. The gene has also been seen recurrently mutated in cases of acute lymphoblastic leukaemia. Other conditions One genome-wide association study indicates a possible role for RELN gene variation in otosclerosis, an abnormal growth of bone of the middle ear. In a statistical search for the genes that are differentially expressed in the brains of cerebral malaria-resistant versus cerebral malaria-susceptible mice, Delahaye et al. detected a significant upregulation of both RELN and DAB1 and speculated on possible protective effects of such over-expression. In 2020, a study reported a novel variant in the RELN gene (S2486G) that was associated with ankylosing spondylitis in a large family. This offered potential insight into the pathophysiological involvement of reelin, via inflammation and osteogenesis pathways, in ankylosing spondylitis, and could point toward new therapeutic strategies. A 2020 study from UT Southwestern Medical Center suggests circulating reelin levels might correlate with MS severity and stages, and that lowering reelin levels might be a novel way to treat MS. Factors affecting reelin expression The expression of reelin is controlled by a number of factors besides the sheer number of Cajal-Retzius cells. For example, the TBR1 transcription factor regulates RELN along with other T-element-containing genes. On a higher level, increased maternal care was found to correlate with reelin expression in rat pups; such a correlation was reported in the hippocampus and in the cortex. According to one report, prolonged exposure to corticosterone significantly decreased reelin expression in murine hippocampi, a finding possibly pertinent to the hypothetical role of corticosteroids in depression. One small postmortem study found increased methylation of the RELN gene in the neocortex of persons past puberty compared with those who had yet to enter the period of maturation. Psychotropic medication As reelin is implicated in a number of brain disorders and its expression is usually measured post mortem, assessing possible medication effects is important.
According to the epigenetic hypothesis, drugs that shift the balance in favour of demethylation have the potential to alleviate the proposed methylation-caused downregulation of RELN and GAD67. In one study, clozapine and sulpiride, but not haloperidol and olanzapine, were shown to increase the demethylation of both genes in mice pretreated with l-methionine. Valproic acid, a histone deacetylase inhibitor, when taken in combination with antipsychotics, is proposed to have some benefits. However, there are studies that conflict with the main premise of the epigenetic hypothesis, and a study by Fatemi et al. showed no increase in RELN expression with valproic acid, indicating the need for further investigation. Fatemi et al. conducted a study in which RELN mRNA and reelin protein levels were measured in rat prefrontal cortex following a 21-day course of intraperitoneal injections of several psychotropic drugs. In 2009, Fatemi et al. published more detailed work on rats using the same medications. Here, the cortical expression of several components of the signaling chain (VLDLR, DAB1, GSK3B) was measured in addition to reelin itself, along with the expression of GAD65 and GAD67. References Further reading External links Human RELN at WikiGenes Molecular neuroscience Developmental neuroscience Glycoproteins Biology of bipolar disorder Articles containing video clips Genes on human chromosome 7
Reelin
[ "Chemistry" ]
8,647
[ "Glycoproteins", "Glycobiology", "Molecular neuroscience", "Molecular biology" ]
62,844
https://en.wikipedia.org/wiki/Density%20matrix
In quantum mechanics, a density matrix (or density operator) is a matrix that describes an ensemble of physical systems as quantum states (even if the ensemble contains only one system). It allows for the calculation of the probabilities of the outcomes of any measurements performed upon the systems of the ensemble using the Born rule. It is a generalization of the more usual state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent mixed ensembles (sometimes ambiguously called mixed states). Mixed ensembles arise in quantum mechanics in two different situations: when the preparation of the systems lead to numerous pure states in the ensemble, and thus one must deal with the statistics of possible preparations, and when one wants to describe a physical system that is entangled with another, without describing their combined state; this case is typical for a system interacting with some environment (e.g. decoherence). In this case, the density matrix of an entangled system differs from that of an ensemble of pure states that, combined, would give the same statistical results upon measurement. Density matrices are thus crucial tools in areas of quantum mechanics that deal with mixed ensembles, such as quantum statistical mechanics, open quantum systems and quantum information. Definition and motivation The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by a choice of an orthonormal basis in the underlying space. In practice, the terms density matrix and density operator are often used interchangeably. Pick a basis with states , in a two-dimensional Hilbert space, then the density operator is represented by the matrix where the diagonal elements are real numbers that sum to one (also called populations of the two states , ). The off-diagonal elements are complex conjugates of each other (also called coherences); they are restricted in magnitude by the requirement that be a positive semi-definite operator, see below. A density operator is a positive semi-definite, self-adjoint operator of trace one acting on the Hilbert space of the system. This definition can be motivated by considering a situation where some pure states (which are not necessarily orthogonal) are prepared with probability each. This is known as an ensemble of pure states. The probability of obtaining projective measurement result when using projectors is given by which makes the density operator, defined as a convenient representation for the state of this ensemble. It is easy to check that this operator is positive semi-definite, self-adjoint, and has trace one. Conversely, it follows from the spectral theorem that every operator with these properties can be written as for some states and coefficients that are non-negative and add up to one. However, this representation will not be unique, as shown by the Schrödinger–HJW theorem. Another motivation for the definition of density operators comes from considering local measurements on entangled states. Let be a pure entangled state in the composite Hilbert space . The probability of obtaining measurement result when measuring projectors on the Hilbert space alone is given by where denotes the partial trace over the Hilbert space . This makes the operator a convenient tool to calculate the probabilities of these local measurements. It is known as the reduced density matrix of on subsystem 1. 
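The two constructions just described — a density operator assembled from an ensemble of pure states, and a reduced density matrix obtained by a partial trace — can be illustrated with a minimal numerical sketch. The example below uses Python with NumPy; the particular states, weights, and variable names are illustrative assumptions rather than a standard recipe.

```python
import numpy as np

# Density matrix of an ensemble of pure qubit states, and Born-rule probabilities
# p(i) = Tr(P_i rho). The states and weights below are illustrative.
ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

rho = 0.7 * np.outer(ket0, ket0.conj()) + 0.3 * np.outer(ket_plus, ket_plus.conj())

P0 = np.outer(ket0, ket0.conj())          # projector onto |0>
p0 = np.real(np.trace(P0 @ rho))          # probability of outcome "0": 0.7*1 + 0.3*0.5

# Reduced density matrix of one half of the entangled state (|00> + |11>)/sqrt(2),
# obtained by tracing out the second qubit.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_ab = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)
rho_a = np.einsum('ikjk->ij', rho_ab)     # partial trace over subsystem B

print(np.round(p0, 3))                    # 0.85
print(np.round(rho_a, 3))                 # 0.5 * identity: maximally mixed
```

Although the joint Bell state in this sketch is pure, its reduced density matrix is the maximally mixed state, which is exactly the situation described above for local measurements on an entangled pair.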
It is easy to check that this operator has all the properties of a density operator. Conversely, the Schrödinger–HJW theorem implies that all density operators can be written as for some state . Pure and mixed states A pure quantum state is a state that can not be written as a probabilistic mixture, or convex combination, of other quantum states. There are several equivalent characterizations of pure states in the language of density operators. A density operator represents a pure state if and only if: it can be written as an outer product of a state vector with itself, that is, it is a projection, in particular of rank one. it is idempotent, that is it has purity one, that is, It is important to emphasize the difference between a probabilistic mixture (i.e. an ensemble) of quantum states and the superposition of two states. If an ensemble is prepared to have half of its systems in state and the other half in , it can be described by the density matrix: where and are assumed orthogonal and of dimension 2, for simplicity. On the other hand, a quantum superposition of these two states with equal probability amplitudes results in the pure state with density matrix Unlike the probabilistic mixture, this superposition can display quantum interference. Geometrically, the set of density operators is a convex set, and the pure states are the extremal points of that set. The simplest case is that of a two-dimensional Hilbert space, known as a qubit. An arbitrary mixed state for a qubit can be written as a linear combination of the Pauli matrices, which together with the identity matrix provide a basis for self-adjoint matrices: where the real numbers are the coordinates of a point within the unit ball and Points with represent pure states, while mixed states are represented by points in the interior. This is known as the Bloch sphere picture of qubit state space. Example: light polarization An example of pure and mixed states is light polarization. An individual photon can be described as having right or left circular polarization, described by the orthogonal quantum states and or a superposition of the two: it can be in any state (with ), corresponding to linear, circular, or elliptical polarization. Consider now a vertically polarized photon, described by the state . If we pass it through a circular polarizer that allows either only polarized light, or only polarized light, half of the photons are absorbed in both cases. This may make it seem like half of the photons are in state and the other half in state , but this is not correct: if we pass through a linear polarizer there is no absorption whatsoever, but if we pass either state or half of the photons are absorbed. Unpolarized light (such as the light from an incandescent light bulb) cannot be described as any state of the form (linear, circular, or elliptical polarization). Unlike polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and it cannot be made polarized by passing it through any wave plate. However, unpolarized light can be described as a statistical ensemble, e. g. as each photon having either polarization or polarization with probability 1/2. The same behavior would occur if each photon had either vertical polarization or horizontal polarization with probability 1/2. These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. 
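As a small numerical counterpart to the Bloch-sphere picture (the light-polarization discussion continues below), the following sketch contrasts an equal statistical mixture of the basis states with an equal-amplitude superposition; the purity Tr(ρ²) and the Bloch vector distinguish them. All states and values here are illustrative assumptions.

```python
import numpy as np

# Mixture vs. superposition for a single qubit; purity Tr(rho^2) equals 1 only for pure states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

rho_mixed = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())
psi = (ket0 + ket1) / np.sqrt(2)
rho_super = np.outer(psi, psi.conj())

purity = lambda r: float(np.real(np.trace(r @ r)))

# Bloch vector components r_k = Tr(rho sigma_k); |r| = 1 on the sphere, < 1 inside.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bloch = lambda r: np.real([np.trace(r @ s) for s in (sx, sy, sz)])

print(purity(rho_mixed), np.round(bloch(rho_mixed), 3))  # 0.5, [0 0 0]: centre of the ball
print(purity(rho_super), np.round(bloch(rho_super), 3))  # 1.0, [1 0 0]: on the sphere
```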
For this example of unpolarized light, the density operator equals There are also other ways to generate unpolarized light: one possibility is to introduce uncertainty in the preparation of the photon, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the light beam acquire different polarizations. Another possibility is using entangled states: a radioactive decay can emit two photons traveling in opposite directions, in the quantum state . The joint state of the two photons together is pure, but the density matrix for each photon individually, found by taking the partial trace of the joint density matrix, is completely mixed. Equivalent ensembles and purifications A given density operator does not uniquely determine which ensemble of pure states gives rise to it; in general there are infinitely many different ensembles generating the same density matrix. Those cannot be distinguished by any measurement. The equivalent ensembles can be completely characterized: let be an ensemble. Then for any complex matrix such that (a partial isometry), the ensemble defined by will give rise to the same density operator, and all equivalent ensembles are of this form. A closely related fact is that a given density operator has infinitely many different purifications, which are pure states that generate the density operator when a partial trace is taken. Let be the density operator generated by the ensemble , with states not necessarily orthogonal. Then for all partial isometries we have that is a purification of , where is an orthogonal basis, and furthermore all purifications of are of this form. Measurement Let be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states occurs with probability . Then the corresponding density operator equals The expectation value of the measurement can be calculated by extending from the case of pure states: where denotes trace. Thus, the familiar expression for pure states is replaced by for mixed states. Moreover, if has spectral resolution where is the projection operator into the eigenspace corresponding to eigenvalue , the post-measurement density operator is given by when outcome i is obtained. In the case where the measurement result is not known the ensemble is instead described by If one assumes that the probabilities of measurement outcomes are linear functions of the projectors , then they must be given by the trace of the projector with a density operator. Gleason's theorem shows that in Hilbert spaces of dimension 3 or larger the assumption of linearity can be replaced with an assumption of non-contextuality. This restriction on the dimension can be removed by assuming non-contextuality for POVMs as well, but this has been criticized as physically unmotivated. Entropy The von Neumann entropy of a mixture can be expressed in terms of the eigenvalues of or in terms of the trace and logarithm of the density operator . Since is a positive semi-definite operator, it has a spectral decomposition such that , where are orthonormal vectors, , and . Then the entropy of a quantum system with density matrix is This definition implies that the von Neumann entropy of any pure state is zero. 
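A direct way to evaluate the entropy just defined is from the eigenvalues of the density matrix. The sketch below, with an illustrative helper function, applies it to a pure polarization state and to the maximally mixed (unpolarized) state.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i ln(lambda_i), with 0*ln(0) taken as 0.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

rho_unpolarized = 0.5 * np.eye(2, dtype=complex)          # maximally mixed polarization state
rho_vertical = np.array([[1, 0], [0, 0]], dtype=complex)  # pure vertically polarized state

print(round(von_neumann_entropy(rho_vertical), 4))     # 0.0
print(round(von_neumann_entropy(rho_unpolarized), 4))  # ln 2 = 0.6931
```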
If are states that have support on orthogonal subspaces, then the von Neumann entropy of a convex combination of these states, is given by the von Neumann entropies of the states and the Shannon entropy of the probability distribution : When the states do not have orthogonal supports, the sum on the right-hand side is strictly greater than the von Neumann entropy of the convex combination . Given a density operator and a projective measurement as in the previous section, the state defined by the convex combination which can be interpreted as the state produced by performing the measurement but not recording which outcome occurred, has a von Neumann entropy larger than that of , except if . It is however possible for the produced by a generalized measurement, or POVM, to have a lower von Neumann entropy than . Von Neumann equation for time evolution Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time. The von Neumann equation dictates that where the brackets denote a commutator. This equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference: where is some Heisenberg picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value comes out the same as in the Schrödinger picture. If the Hamiltonian is time-independent, the von Neumann equation can be easily solved to yield For a more general Hamiltonian, if is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by Wigner functions and classical analogies The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function, The equation for the time evolution of the Wigner function, known as Moyal equation, is then the Wigner-transform of the above von Neumann equation, where is the Hamiltonian, and is the Moyal bracket, the transform of the quantum commutator. The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of a vanishing Planck constant , reduces to the classical Liouville probability density function in phase space. Example applications Density matrices are a basic tool of quantum mechanics, and appear at least occasionally in almost any type of quantum-mechanical calculation. Some specific examples where density matrices are especially helpful and common are as follows: Statistical mechanics uses density matrices, most prominently to express the idea that a system is prepared at a nonzero temperature. Constructing a density matrix using a canonical ensemble gives a result of the form , where is the inverse temperature and is the system's Hamiltonian. The normalization condition that the trace of be equal to 1 defines the partition function to be . If the number of particles involved in the system is itself not certain, then a grand canonical ensemble can be applied, where the states summed over to make the density matrix are drawn from a Fock space. 
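Two of the formulas referred to above lend themselves to a compact numerical sketch: the solution of the von Neumann equation for a time-independent Hamiltonian, ρ(t) = exp(−iHt/ħ) ρ(0) exp(iHt/ħ), and the canonical density matrix ρ = exp(−βH)/Z. The Hamiltonians, evolution time, and inverse temperature below are illustrative assumptions, with ħ set to 1.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0

# Unitary evolution under a time-independent Hamiltonian: rho(t) = U rho(0) U^dagger.
H = np.array([[0, 1], [1, 0]], dtype=complex)       # illustrative sigma_x Hamiltonian
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)    # start in |0><0|
U = expm(-1j * H * (np.pi / 4) / hbar)
rho_t = U @ rho0 @ U.conj().T
print(np.round(np.real(np.trace(rho_t)), 3))        # trace stays 1 under unitary evolution

# Canonical (thermal) density matrix rho = exp(-beta H) / Z, with Z = Tr exp(-beta H).
beta = 2.0                                          # illustrative inverse temperature
H_sys = np.diag([0.0, 1.0]).astype(complex)         # two-level system, energy gap = 1
w = expm(-beta * H_sys)
Z = np.real(np.trace(w))                            # partition function, 1 + exp(-2)
rho_thermal = w / Z
print(np.round(np.diag(rho_thermal).real, 4))       # Boltzmann populations [0.8808, 0.1192]
```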
Quantum decoherence theory typically involves non-isolated quantum systems developing entanglement with other systems, including measurement apparatuses. Density matrices make it much easier to describe the process and calculate its consequences. Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible, as the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them. Similarly, in quantum computation, quantum information theory, open quantum systems, and other fields where state preparation is noisy and decoherence can occur, density matrices are frequently used. Noise is often modelled via a depolarizing channel or an amplitude damping channel. Quantum tomography is a process by which, given a set of data representing the results of quantum measurements, a density matrix consistent with those measurement results is computed. When analyzing a system with many electrons, such as an atom or molecule, an imperfect but useful first approximation is to treat the electrons as uncorrelated or each having an independent single-particle wavefunction. This is the usual starting point when building the Slater determinant in the Hartree–Fock method. If there are N electrons filling the N single-particle wavefunctions |f_i⟩, then the collection of N electrons together can be characterized by a density matrix Σ_{i=1}^{N} |f_i⟩⟨f_i|. C*-algebraic formulation of states It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable. For this reason, observables are identified with elements of an abstract C*-algebra A (that is, one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces that realize A as a subalgebra of operators. Geometrically, a pure state on a C*-algebra A is a state that is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A. The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics. The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables becomes an abelian C*-algebra. In that case the states become probability measures. History The formalism of density operators and matrices was introduced in 1927 by John von Neumann and independently, but less systematically, by Lev Landau and later in 1946 by Felix Bloch. Von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. 
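Two of the ideas mentioned above, a pure joint state whose subsystems are mixed, and the depolarizing channel as a noise model, can be demonstrated in a few lines. This is a minimal sketch rather than anything taken from the article; the form (1 − p)ρ + p·I/d is one common parameterization of the depolarizing channel, and the helper names are hypothetical.

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """Reduced state of subsystem A, tr_B(rho_AB), for a dA*dB-dimensional joint state."""
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

def depolarizing(rho, p):
    """Depolarizing channel: rho -> (1 - p) * rho + p * I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Two-qubit Bell state (|00> + |11>)/sqrt(2): the joint state is pure,
# but each qubit on its own is completely mixed.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_AB = np.outer(bell, bell.conj())

rho_A = partial_trace_B(rho_AB, 2, 2)
print(np.round(rho_A.real, 3))                           # I/2, the maximally mixed qubit

# Depolarizing noise turns a pure qubit state into a mixed one.
rho_pure = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
print(np.linalg.eigvalsh(depolarizing(rho_pure, 0.2)))   # eigenvalues 0.1 and 0.9
```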
The name density matrix itself relates to its classical correspondence to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics, which was introduced by Eugene Wigner in 1932. In contrast, the motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector. See also Atomic electron transition Density functional theory Green–Kubo relations Green's function (many-body theory) Lindblad equation Wigner quasi-probability distribution Notes and references Functional analysis Quantum information science Statistical mechanics Lev Landau
Density matrix
[ "Physics", "Mathematics" ]
3,472
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Statistical mechanics" ]
62,866
https://en.wikipedia.org/wiki/United%20States%20Department%20of%20Energy
The United States Department of Energy (DOE) is an executive department of the U.S. federal government that oversees U.S. national energy policy and energy production, the research and development of nuclear power, the military's nuclear weapons program, nuclear reactor production for the United States Navy, energy-related research, and energy conservation. The DOE was created in 1977 in the aftermath of the 1973 oil crisis. It sponsors more physical science research than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The DOE also directs research in genomics, with the Human Genome Project originating from a DOE initiative. The department is headed by the secretary of energy, who reports directly to the president of the United States and is a member of the Cabinet. The current secretary of energy is Jennifer Granholm, who has served in the position since February 2021. The department's headquarters are in southwestern Washington, D.C., in the James V. Forrestal Building, with additional offices in Germantown, Maryland. History Formation and consolidation In 1942, during World War II, the United States started the Manhattan Project to develop the atomic bomb under the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. The Energy Reorganization Act of 1974 split the responsibilities of the AEC into the new Nuclear Regulatory Commission, which was charged with regulating the nuclear power industry, and the Energy Research and Development Administration, which was assigned to manage the nuclear weapon, naval reactor, and energy development programs. The 1973 oil crisis called attention to the need to consolidate energy policy. In 1977, President Jimmy Carter signed into law the Department of Energy Organization Act, which established the Department of Energy. The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration, the Energy Research and Development Administration, the Federal Power Commission, and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary. President Carter proposed the Department of Energy with the goal of promoting energy conservation and energy independence, and developing alternative sources of energy to reduce the use of fossil fuels. With international energy's future uncertain for America, Carter acted quickly to have the department come into action the first year of his presidency. This was an extremely important issue of the time as the oil crisis was causing shortages and inflation. With the Three Mile Island accident, Carter was able to intervene with the help of the department. Through the DOE, Carter was able to make changes within the Nuclear Regulatory Commission, including improving management and procedures, since nuclear energy and weapons are responsibilities of the department. Weapon plans stolen In December 1999, the FBI was investigating how China obtained plans for a specific nuclear device. 
Wen Ho Lee was accused of stealing nuclear secrets from Los Alamos National Laboratory for the People's Republic of China. Federal officials, including then-Energy Secretary Bill Richardson, publicly named Lee as a suspect before he was charged with a crime. The U.S. Congress held hearings to investigate the Department of Energy's handling of his case. Republican senators thought that an independent agency should be in charge of nuclear weapons and security issues, rather than the DOE. All but one of the 59 charges against Lee were eventually dropped because the investigation proved the plans the Chinese obtained could not have come from Lee. Lee filed suit and won a $1.6 million settlement against the federal government and news agencies. The episode eventually led to the creation of the National Nuclear Security Administration, a semi-autonomous agency within the department. Loan guarantee program of 2005 In 2001, the American Solar Challenge was sponsored by the DOE and the National Renewable Energy Laboratory. After the 2005 race, the DOE discontinued its sponsorship. Title XVII of the Energy Policy Act of 2005 authorizes the DOE to issue loan guarantees to eligible projects that "avoid, reduce, or sequester air pollutants or anthropogenic emissions of greenhouse gases" and "employ new or significantly improved technologies as compared to technologies in service in the United States at the time the guarantee is issued". In loan guarantees, a conditional commitment requires the borrower to meet an equity commitment, as well as other conditions, before the loan guarantee is completed. In September 2008, the DOE, the Nuclear Threat Initiative (NTI), the Institute of Nuclear Materials Management (INMM), and the International Atomic Energy Agency (IAEA) partnered to develop and launch the World Institute for Nuclear Security (WINS), an international non-governmental organization designed to provide a forum to share best practices in strengthening the security and safety of nuclear and radioactive materials and facilities. In December 2024, the Loan Programs Office announced it would extend the largest loan ever sanctioned: a $15 billion (US) low-interest loan to support the modernization of Pacific Gas & Electric’s hydroelectric infrastructure and to enhance transmission lines critical for renewable energy integration, data center operations, and the growing fleet of electric vehicles. The loan was initially requested at $30 billion (US), but the amount was reduced due to concerns over the company’s repayment capacity. Organization In 2022, the department announced a reorganization with new titles for its under secretaries. The department is under the control and supervision of a United States Secretary of Energy, a political appointee of the President of the United States. The Energy Secretary is assisted in managing the department by a United States Deputy Secretary of Energy, also appointed by the president, who assumes the duties of the secretary in the secretary's absence. The department also has three under secretaries, each appointed by the president, who oversee the major areas of the department's work. The president also appoints seven officials with the rank of Assistant Secretary of Energy who have line management responsibility for major organizational elements of the department. The Energy Secretary assigns their functions and duties. 
Symbolism in the seal Excerpt from the Code of Federal Regulations, in Title 10: Energy: The official seal of the Department of Energy "includes a green shield bisected by a gold-colored lightning bolt, on which is emblazoned a gold-colored symbolic sun, atom, oil derrick, windmill, and dynamo. It is crested by the white head of an eagle, atop a white rope. Both appear on a blue field surrounded by concentric circles in which the name of the agency, in gold, appears on a green background." "The eagle represents the care in planning and the purposefulness of efforts required to respond to the Nation's increasing demands for energy. The sun, atom, oil derrick, windmill, and dynamo serve as representative technologies whose enhanced development can help meet these demands. The rope represents the cohesiveness in the development of the technologies and their link to our future capabilities. The lightning bolt represents the power of the natural forces from which energy is derived and the Nation's challenge in harnessing the forces." "The color scheme is derived from nature, symbolizing both the source of energy and the support of man's existence. The blue field represents air and water, green represents mineral resources and the earth itself, and gold represents the creation of energy in the release of natural forces. By invoking this symbolism, the color scheme represents the Nation's commitment to meet its energy needs in a manner consistent with the preservation of the natural environment." Facilities The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows: Ames National Laboratory Argonne National Laboratory Brookhaven National Laboratory Fermi National Accelerator Laboratory Idaho National Laboratory Lawrence Berkeley National Laboratory Lawrence Livermore National Laboratory Los Alamos National Laboratory National Energy Technology Laboratory National Renewable Energy Laboratory Oak Ridge National Laboratory Pacific Northwest National Laboratory Princeton Plasma Physics Laboratory Sandia National Laboratories (SNL) Savannah River National Laboratory DOE/SNL Scaled Wind Farm Technology (SWiFT) Facility SLAC National Accelerator Laboratory Thomas Jefferson National Accelerator Facility Albany Research Center Bettis Atomic Power Laboratory – under NNSA designs/develops nuclear-powered propulsion for the U.S. Navy Kansas City National Security Campus Knolls Atomic Power Laboratory – under NNSA designs/develops nuclear-powered propulsion for the U.S. Navy National Petroleum Technology Office Nevada National Security Site New Brunswick Laboratory Office of Fossil Energy Office of River Protection Pantex Plant Radiological and Environmental Sciences Laboratory Savannah River Site—separate from Savannah River National Laboratory Y-12 National Security Complex Yucca Mountain nuclear waste repository Other major DOE facilities include: Airstrip: Pahute Mesa Airstrip – Nye County, Nevada, part of Nevada National Security Site Nuclear weapons sites The DOE/NNSA has federal responsibility for the design, testing and production of all nuclear weapons. 
NNSA in turn uses contractors to carry out its responsibilities at the following government owned sites: Research, development, and manufacturing guidance: Los Alamos National Laboratory and Lawrence Livermore National Laboratory Engineering of the non-nuclear components and system integration: Sandia National Laboratories Manufacturing of key components: The Kansas City Plant, Savannah River Site and Y-12 National Security Complex. Testing: Nevada Test Site Final weapon and warhead assembling and dismantling: Pantex Related legislation 1920 – Federal Power Act 1935 – Public Utility Holding Company Act of 1935 1946 – Atomic Energy Act PL 79-585 (created the Atomic Energy Commission) [Superseded by the Atomic Energy Act of 1954] 1954 – Atomic Energy Act of 1954, as Amended PL 83-703 1956 – Colorado River Storage Project PL 84-485 1957 – Atomic Energy Commission Acquisition of Property PL 85-162 1957 – Price-Anderson Nuclear Industries Indemnity Act PL 85-256 1968 – Natural Gas Pipeline Safety Act PL 90-481 1973 – Mineral Leasing Act Amendments (Trans-Alaska Oil Pipeline Authorization) PL 93-153 1974 – Energy Reorganization Act PL 93-438 (Split the AEC into the Energy Research and Development Administration and the Nuclear Regulatory Commission) 1975 – Energy Policy and Conservation Act PL 94-163 1977 – Department of Energy Organization Act PL 95-91 (Dismantled ERDA and replaced it with the Department of Energy) 1978 – National Energy Act PL 95-617, 618, 619, 620, 621 1980 – Energy Security Act PL 96-294 1989 – Natural Gas Wellhead Decontrol Act PL 101-60 1992 – Energy Policy Act of 1992 PL 102-486 2000 – National Nuclear Security Administration Act PL 106-65 2005 – Energy Policy Act of 2005 PL 109-58 2007 – Energy Independence and Security Act of 2007 PL 110-140 2008 – Food, Conservation, and Energy Act of 2008 PL 110-234 Budget On May 7, 2009 President Barack Obama unveiled a $26.4 billion budget request for DOE for fiscal year (FY) 2010, including $2.3 billion for the DOE Office of Energy Efficiency and Renewable Energy (EERE). That budget aimed to substantially expand the use of renewable energy sources while improving energy transmission infrastructure. It also proposed significant investments in hybrids and plug-in hybrids, smart grid technologies, and scientific research and innovation. As part of the $789 billion economic stimulus package in the American Recovery and Reinvestment Act of 2009, Congress provided Energy with an additional $38.3 billion for fiscal years 2009 and 2010, adding about 75 percent to Energy's annual budgets. Most of the stimulus spending was in the form of grants and contracts. For fiscal year 2013, each of the operating units of the Department of Energy operated with the following budgets: In March 2018, Energy Secretary Rick Perry testified to a Senate panel about the Trump administration's DOE budget request for fiscal year 2019. The budget request prioritized nuclear security while making large cuts to energy efficiency and renewable energy programs. The proposal was a $500 million increase in funds over fiscal year 2017. It "promotes innovations like a new Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and gains for the Office of Fossil Energy. Investments would be made to strengthen the National Nuclear Security Administration and modernize the nuclear force, as well as in weapons activities and advanced computing." 
However, the budget for the Office of Energy Efficiency and Renewable Energy would be lowered to $696 million under the plan, down from $1.3 billion in fiscal year 2017. Overall, the department's energy and related programs would be cut by $1.9 billion. Programs and contracts Energy Savings Performance Contract Energy Savings Performance Contracts (ESPCs) are contracts under which a contractor designs, constructs, and obtains the necessary financing for an energy savings project, and the federal agency makes payments over time to the contractor from the savings in the agency's utility bills. The contractor guarantees the energy improvements will generate savings, and after the contract ends, all continuing cost savings accrue to the federal agency. Energy Innovation Hubs Energy Innovation Hubs are multi-disciplinary, meant to advance highly promising areas of energy science and technology from their early stages of research to the point that the risk level will be low enough for industry to commercialize the technologies. The Consortium for Advanced Simulation of Light Water Reactors (CASL) was the first DOE Energy Innovation Hub established in July 2010, for the purpose of providing advanced modeling and simulation (M&S) solutions for commercial nuclear reactors. The 2009 DOE budget includes $280 million to fund eight Energy Innovation Hubs, each of which is focused on a particular energy challenge. Two of the eight hubs are included in the EERE budget and will focus on integrating smart materials, designs, and systems into buildings to better conserve energy and on designing and discovering new concepts and materials needed to convert solar energy into electricity. Another two hubs, included in the DOE Office of Science budget, were created to tackle the challenges of devising advanced methods of energy storage and creating fuels directly from sunlight without the use of plants or microbes. Yet another hub was made to develop "smart" materials to allow the electrical grid to adapt and respond to changing conditions. In 2012, the DOE awarded $120 million to the Ames Laboratory to start a new EIH, the Critical Materials Institute, which will focus on improving the supply of rare earth elements. Advanced Research Projects Agency-Energy ARPA-E was officially created by the America COMPETES Act , authored by Congressman Bart Gordon, within the United States Department of Energy (DOE) in 2007, though without a budget. The initial budget of about $400 million was a part of the economic stimulus bill of February 2009. 
Other DOE Isotope Program - coordinates isotope production Federal Energy Management Program Foundation for Energy Security and Innovation - a 501(c)(3) organization dedicated to supporting DOE research Fusion Energy Sciences - a program to research nuclear fusion, with a yearly budget in 2020 of $670 million, with $250 million of that going to ITER GovEnergy - an annual event partly sponsored by the DOE Grid Deployment Office - a division dedicated to spreading adoption of grid-enhancing technologies and improving transmission permitting National Science Bowl - a high school and middle school science knowledge competition Solar Decathlon - an international collegiate competition to design and build solar-powered houses State Energy Program Weatherization Assistance Program List of secretaries of energy See also Federal Energy Regulatory Commission National Council on Electricity Policy United States federal executive departments References Further reading External links Department of Energy in the Federal Register Department of Energy on USAspending.gov Advanced Energy Initiative Twenty In Ten 1977 establishments in Washington, D.C. Government agencies established in 1977 Energy
United States Department of Energy
[ "Engineering" ]
3,185
[ "Energy organizations", "Energy ministries" ]
62,893
https://en.wikipedia.org/wiki/Dingo
The dingo (either included in the species Canis familiaris, or considered one of the following independent taxa: Canis familiaris dingo, Canis dingo, or Canis lupus dingo) is an ancient (basal) lineage of dog found in Australia. Its taxonomic classification is debated as indicated by the variety of scientific names presently applied in different publications. It is variously considered a form of domestic dog not warranting recognition as a subspecies, a subspecies of dog or wolf, or a full species in its own right. The dingo is a medium-sized canine that possesses a lean, hardy body adapted for speed, agility, and stamina. The dingo's three main coat colourations are light ginger or tan, black and tan, or creamy white. The skull is wedge-shaped and appears large in proportion to the body. The dingo is closely related to the New Guinea singing dog: their lineage split early from the lineage that led to today's domestic dogs, and can be traced back through Maritime Southeast Asia to Asia. The oldest remains of dingoes in Australia are around 3,500 years old. A dingo pack usually consists of a mated pair, their offspring from the current year, and sometimes offspring from the previous year. Etymology The name "dingo" comes from the Dharug language used by the Indigenous Australians of the Sydney area. The first British colonists to arrive in Australia in 1788 established a settlement at Port Jackson and noted "dingoes" living with Indigenous Australians. The name was first recorded in 1789 by Watkin Tench in his Narrative of the Expedition to Botany Bay: Related Dharug words include "ting-ko" meaning "bitch", and "tun-go-wo-re-gal" meaning "large dog". The dingo has different names in different indigenous Australian languages, such as boolomo, dwer-da, joogoong, kal, kurpany, maliki, mirigung, noggum, papa-inura, and wantibirri. Some authors propose that a difference existed between camp dingoes and wild dingoes as they had different names among indigenous tribes. The people of the Yarralin, Northern Territory, region frequently call those dingoes that live with them walaku, and those that live in the wilderness ngurakin. They also use the name walaku to refer to both dingoes and dogs. The colonial settlers of New South Wales wrote using the name dingo only for camp dogs. It is proposed that in New South Wales the camp dingoes only became wild after the collapse of Aboriginal society. Taxonomy Dogs associated with indigenous people were first recorded by Jan Carstenszoon in the Cape York Peninsula area in 1623. In 1699, Captain William Dampier visited the coast of what is now Western Australia and recorded that "my men saw two or three beasts like hungry wolves, lean like so many skeletons, being nothing but skin and bones". In 1788, the First Fleet arrived in Botany Bay under the command of Australia's first colonial governor, Arthur Phillip, who took ownership of a dingo and in his journal made a brief description with an illustration of the "Dog of New South Wales". In 1793, based on Phillip's brief description and illustration, the "Dog of New South Wales" was classified by Friedrich Meyer as Canis dingo. In 1999, a study of the maternal lineage through the use of mitochondrial DNA (mDNA) as a genetic marker indicates that the dingo and New Guinea singing dog developed at a time when human populations were more isolated from each other. In the third edition of Mammal Species of the World published in 2005, the mammalogist W. 
Christopher Wozencraft listed under the wolf Canis lupus its wild subspecies, and proposed two additional subspecies: "familiaris Linnaeus, 1758 [domestic dog]" and "dingo Meyer, 1793 [domestic dog]". Wozencraft included hallstromi—the New Guinea singing dog—as a taxonomic synonym for the dingo. He referred to the mDNA study as one of the guides in forming his decision. The inclusion of familiaris and dingo under a "domestic dog" clade has been noted by other mammalogists, and their classification under the wolf debated. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs (Canis familiaris), which therefore should not be assessed for the IUCN Red List. In 2020, the American Society of Mammalogists considered the dingo a synonym of the domestic dog. However, recent DNA sequencing of a 'pure' wild dingo from South Australia suggests that the dingo has a different DNA methylation pattern to the German Shepherd. In 2024, a study found that the Dingo and New Guinea singing dog show 5.5% genome introgression from the ancestor of the recently extinct Japanese wolf, with Japanese dogs showing 4% genome introgression. This introgression occurred before the ancestor of the Japanese wolf arrived in Japan. Domestic status The dingo is regarded as a feral dog because it descended from domesticated ancestors. The dingo's relationship with Indigenous Australians is one of commensalism, in which two organisms live in close association, but do not depend on each other for survival. They both hunt and sleep together. The dingo is, therefore, comfortable enough around humans to associate with them, but is still capable of living independently. Any free-ranging, unowned dog can be socialised to become an owned dog, as some dingoes do when they join human families. Although the dingo exists in the wild, it associates with humans, but has not been selectively bred unlike other domesticated animals. Therefore, its status as a domestic animal is not clear. Whether the dingo was a wild or domesticated species was not clarified from Meyer's original description, which translated from the German language reads: It is not known if it is the only dog species in New South Wales, and if it can also still be found in the wild state; however, so far it appears to have lost little of its wild condition; moreover, no divergent varieties have been discovered. History The earliest known dingo remains, found in Western Australia, date to 3,450 years ago. Based on a comparison of modern dingoes with these early remains, dingo morphology has not changed over these thousands of years. This suggests that no artificial selection has been applied over this period and that the dingo represents an early form of dog. They have lived, bred, and undergone natural selection in the wild, isolated from other dogs until the arrival of European settlers, resulting in a unique breed. In 2020, an MDNA study of ancient dog remains from the Yellow River and Yangtze River basins of southern China showed that most of the ancient dogs fell within haplogroup A1b, as do the Australian dingoes and the pre-colonial dogs of the Pacific, but in low frequency in China today. The specimen from the Tianluoshan archaeological site, Zhejiang province dates to 7,000 YBP (years before present) and is basal to the entire haplogroup A1b lineage. 
The dogs belonging to this haplogroup were once widely distributed in southern China, then dispersed through Southeast Asia into New Guinea and Oceania, but were replaced in China by dogs of other lineages 2,000 YBP. The oldest reliable date for dog remains found in mainland Southeast Asia is from Vietnam at 4,000 YBP, and in Island Southeast Asia from Timor-Leste at 3,000 YBP. The earliest dingo remains in the Torres Straits date to 2,100 YBP. In New Guinea, the earliest dog remains date to 2,500–2,300 YBP from Caution Bay near Port Moresby, but no ancient New Guinea singing dog remains have been found. The earliest dingo skeletal remains in Australia are estimated at 3,450 YBP from the Mandura Caves on the Nullarbor Plain, south-eastern Western Australia; 3,320 YBP from Woombah Midden near Woombah, New South Wales; and 3,170 YBP from Fromme's Landing on the Murray River near Mannum, South Australia. Dingo bone fragments were found in a rock shelter located at Mount Burr, South Australia, in a layer that was originally dated 7,000–8,500 YBP. Excavations later indicated that the levels had been disturbed, and the dingo remains "probably moved to an earlier level." The dating of these early Australian dingo fossils led to the widely held belief that dingoes first arrived in Australia 4,000 YBP and then took 500 years to disperse around the continent. However, the timing of these skeletal remains was based on the dating of the sediments in which they were discovered, and not the specimens themselves. In 2018, the oldest skeletal bones from the Madura Caves were directly carbon dated between 3,348 and 3,081 YBP, providing firm evidence of the earliest dingo and that dingoes arrived later than had previously been proposed. The next-most reliable timing is based on desiccated flesh dated 2,200 YBP from Thylacine Hole, 110 km west of Eucla on the Nullarbor Plain, southeastern Western Australia. When dingoes first arrived, they would have been taken up by Indigenous Australians, who then provided a network for their swift transfer around the continent. Based on the recorded distribution time for dogs across Tasmania and cats across Australia once indigenous Australians had acquired them, the dispersal of dingoes from their point of landing until they occupied continental Australia is proposed to have taken only 70 years. The red fox is estimated to have dispersed across the continent in only 60–80 years. At the end of the last glacial maximum and the associated rise in sea levels, Tasmania became separated from the Australian mainland 12,000 YBP, and New Guinea 6,500–8,500 YBP by the inundation of the Sahul Shelf. Fossil remains in Australia date to around 3,500 YBP and no dingo remains have been uncovered in Tasmania, so the dingo is estimated to have arrived in Australia at a time between 3,500 and 12,000 YBP. To reach Australia through Island Southeast Asia even at the lowest sea level of the last glacial maximum, a journey of at least over open sea between ancient Sunda and Sahul was necessary, so they must have accompanied humans on boats. Phylogeny Whole genome sequencing indicates that, while dogs are a genetically divergent subspecies of the grey wolf, the dog is not a descendant of the extant grey wolf. Rather, these are sister taxa which share a common ancestor from a ghost population of wolves that disappeared at the end of the Late Pleistocene. The dog and the dingo are not separate species. The dingo and the Basenji are basal members of the domestic dog clade. 
Mitochondrial genome sequences indicate that the dingo falls within the domestic dog clade, and that the New Guinea singing dog is genetically closer to those dingoes that live in southeastern Australia than to those that live in the northwest. The dingo and New Guinea singing dog lineage can be traced back from Island Southeast Asia to Mainland Southeast Asia. Gene flow from the genetically divergent Tibetan wolf forms 2% of the dingo's genome, which likely represents ancient admixture in eastern Eurasia. By the close of the last ice age 11,700 years ago, five ancestral dog lineages had diversified from each other, with one of these being represented today by the New Guinea singing dog. In 2020, the first whole genome sequencing of the dingo and the New Guinea singing dog was undertaken. The study indicates that the ancestral lineage of the dingo/New Guinea singing dog clade arose in southern East Asia, migrated through Island Southeast Asia 9,900 years ago, and reached Australia 8,300 years ago; however, the human population which brought them remains unknown. The dingo's genome indicates that it was once a domestic dog which commenced a process of feralisation since its arrival 8,300 years ago, with the new environment leading to changes in those genomic regions which regulate metabolism, neurodevelopment, and reproduction. A 2016 genetic study shows that the lineage of those dingoes found today in the northwestern part of the Australian continent split from the lineage of the New Guinea singing dog and southeastern dingo 8,300 years ago, followed by a split of the New Guinea singing dog lineage from the southeastern dingo lineage 7,800 years ago. The study proposes that two dingo migrations occurred when sea levels were lower and Australia and New Guinea formed one landmass named Sahul that existed until 6,500–8,000 years ago. Whole genome analysis of the dingo indicates there are three sub-populations which exist in Northeast (Tropical), Southeast (Alpine), and West/Central Australia (Desert). Morphological data show that dingo skulls from southeastern Australia (Alpine dingoes) are quite distinct from the other ecotypes, and genomic and mitochondrial DNA sequencing demonstrates that at least two dingo mtDNA haplotypes colonised Australia. In 2020, a genetic study found that the New Guinea Highland wild dogs were genetically basal to the dingo and the New Guinea singing dog, and therefore the potential originator of both. Description The dingo is a medium-sized canid with a lean, hardy body that is adapted for speed, agility, and stamina. The head is the widest part of the body, wedge-shaped, and large in proportion to the body. Captive dingoes are longer and heavier than wild dingoes, as they have access to better food and veterinary care. The average wild dingo male weighs and the female , compared with the captive male and the female . The average wild dingo male length is and the female , compared with the captive male and the female . The average wild dingo male stands at the shoulder height of and the female , compared with the captive male and the female . Dingoes rarely carry excess fat and the wild ones display exposed ribs. Dingoes from northern and northwestern Australia are often larger than those found in central and southern Australia. The dingo is similar to the New Guinea singing dog in morphology apart from the dingo's greater height at the withers. The average dingo can reach speeds of up to 60 kilometres per hour. 
Compared with the dog, the dingo is able to rotate its wrists and can turn doorknobs or raise latches in order to escape confinement. Dingo shoulder joints are unusually flexible, and they can climb fences, cliffs, trees, and rocks. These adaptations help dingoes climbing in difficult terrain, where they prefer high vantage points. A similar adaptation can be found in the Norwegian Lundehund, which was developed on isolated Norwegian islands to hunt in cliff and rocky areas. Wolves do not have this ability. Compared with the skull of the dog, the dingo possesses a longer muzzle, longer carnassial teeth, longer and more slender canine teeth, larger auditory bullae, a flatter cranium with a larger sagittal crest, and larger nuchal lines. In 2014, a study was conducted on pre-20th century dingo specimens that are unlikely to have been influenced by later hybridisation. The dingo skull was found to differ relative to the domestic dog by its larger palatal width, longer rostrum, shorter skull height, and wider sagittal crest. However, this was rebutted with the figures falling within the wider range of the domestic dog and that each dog breed differs from the others in skull measurements. Based on a comparison with the remains of a dingo found at Fromme's Landing, the dingo's skull and skeleton have not changed over the past 3,000 years. Compared to the wolf, the dingo possesses a paedomorphic cranium similar to domestic dogs. However, the dingo has a larger brain size compared to dogs of the same body weight, with the dingo being more comparable with the wolf than dogs are. In this respect, the dingo resembles two similar mesopredators, the dhole and the coyote. The eyes are triangular (or almond-shaped) and are hazel to dark in colour with dark rims. The ears are erect and occur high on the skull. Coat colour The dingo's three main coat colours are described as being light ginger (or tan), black and tan, and creamy white. The ginger colour ranges from a deep rust to a pale cream and can be found in 74% of dingoes. Often, small white markings are seen on the tip of the tail, the feet, and the chest, but with no large white patches. Some do not exhibit white tips. The black and tan dingoes possess a black coat with a tan muzzle, chest, belly, legs, and feet and can be found in 12% of dingoes. Solid white can be found in 2% of dingoes and solid black 1%. Only three genes affect coat colour in the dingo compared with nine genes in the domestic dog. The ginger colour is dominant and carries the other three main colours – black, tan, and white. White dingoes breed true, and black and tan dingoes breed true; when these cross, the result is a sandy colour. The coat is not oily, nor does it have a dog-like odour. The dingo has a single coat in the tropical north of Australia and a double thick coat in the cold mountains of the south, the undercoat being a wolf-grey colour. Patchy and brindle coat colours can be found in dingoes with no dog ancestry and these colours are less common in dingoes of mixed ancestry. Tail The dingo's tail is flattish, tapering after mid-length and does not curve over the back, but is carried low. Gait When walking, the dingo's rear foot steps in line with the front foot, and these do not possess dewclaws. Lifespan Dingoes in the wild live 3–5 years with few living past 7–8 years. Some have been recorded living up to 10 years. In captivity, they live for 14–16 years. One dingo has been recorded to live just under 20 years. 
Adaptation Hybrids, distribution and habitat The wolf-like canids are a group of large carnivores that are genetically closely related because their chromosomes number 78; therefore they can potentially interbreed to produce fertile hybrids. In the Australian wild there exist dingoes, feral dogs, and the crossings of these two, which produce dingo–dog hybrids. Most studies looking at the distribution of dingoes focus on the distribution of dingo-dog hybrids instead. Dingoes occurred throughout mainland Australia before European settlement. They are not found in the fossil record of Tasmania, so they apparently arrived in Australia after Tasmania had separated from the mainland due to rising sea levels. The introduction of agriculture reduced dingo distribution, and by the early 1900s, large barrier fences, including the Dingo Fence, excluded them from the sheep-grazing areas. Land clearance, poisoning, and trapping caused the extinction of the dingo and hybrids from most of their former range in southern Queensland, New South Wales, Victoria, and South Australia. Today, they are absent from most of New South Wales, Victoria, the southeastern third of South Australia, and the southwestern tip of Western Australia. They are sparse in the eastern half of Western Australia and the adjoining areas of the Northern Territory and South Australia. They are regarded as common across the remainder of the continent. The dingo could be considered an ecotype or an ecospecies that has adapted to Australia's unique environment. The dingo's present distribution covers a variety of habitats, including the temperate regions of eastern Australia, the alpine moorlands of the eastern highlands, the arid hot deserts of Central Australia, and the tropical forests and wetlands of Northern Australia. The occupation of, and adaptation to, these habitats may have been assisted by their relationship with indigenous Australians. Prey and diet A 20-year study of the dingo's diet was conducted across Australia by the federal and state governments. It examined a total of 13,000 stomach contents and fecal samples. For the fecal samples, the matching tracks of foxes and feral cats could be identified, allowing those samples to be excluded from the study, but distinguishing between the tracks left by dingoes and those of dingo hybrids or feral dogs was impossible. The study found that these canines prey on 177 species represented by 72.3% mammals (71 species), 18.8% birds (53 species), 3.3% vegetation (seeds), 1.8% reptiles (23 species), and 3.8% insects, fish, crabs, and frogs (28 species). The relative proportions of prey are much the same across Australia, apart from more birds being eaten in the north and south-east coastal regions, and more lizards in Central Australia. Some 80% of the diet consisted of 10 species: red kangaroo, swamp wallaby, cattle, dusky rat, magpie goose, common brushtail possum, long-haired rat, agile wallaby, European rabbit, and common wombat. Of the mammals eaten, 20% could be regarded as large. However, the relative proportions of the size of prey mammals varied across regions. In the tropical coast region of northern Australia, agile wallabies, dusky rats, and magpie geese formed 80% of the diet. In Central Australia, the rabbit has become a substitute for native mammals, and during droughts, cattle carcasses provide most of the diet. On the Barkly Tableland, no rabbits occur nor does any native species dominate the diet, except for long-haired rats that form occasional plagues. 
In the Fortescue River region, the large red kangaroo and common wallaroo dominate the diet, as few smaller mammals are found in this area. On the Nullarbor Plain, rabbits and red kangaroos dominate the diet, and twice as much rabbit is eaten as red kangaroo. In the temperate mountains of eastern Australia, swamp wallaby and red-necked wallaby dominate the diet on the lower slopes and wombat on the higher slopes. Possums are commonly eaten here when found on the ground. In coastal regions, dingoes patrol the beaches for washed-up fish, seals, penguins, and other birds. Dingoes drink about a litre of water each day in the summer and half a litre in winter. In arid regions during the winter, dingoes may live from the liquid in the bodies of their prey, as long as the number of prey is sufficient. In arid Central Australia, weaned pups draw most of their water from their food. There, regurgitation of water by the females for the pups was observed. During lactation, captive females have no higher need of water than usual, since they consume the urine and feces of the pups, thus recycling the water and keeping the den clean. Tracked dingoes in the Strzelecki Desert regularly visited water-points every 3–5 days, with two dingoes surviving 22 days without water during both winter and summer. Hunting behaviour Dingoes, dingo hybrids, and feral dogs usually attack from the rear as they pursue their prey. They kill their prey by biting the throat, which damages the trachea and the major blood vessels of the neck. The size of the hunting pack is determined by the type of prey targeted, with large packs formed to help hunt large prey. Large prey can include kangaroos, cattle, water buffalo, and feral horses. Dingoes will assess and target prey based on the prey's ability to inflict damage. Large kangaroos are the most commonly killed prey. The main tactic is to sight the kangaroo, bail it up, then kill it. Dingoes typically hunt large kangaroos by having lead dingoes chase the quarry toward the paths of their pack mates, which are skilled at cutting corners in chases. The kangaroo becomes exhausted and is then killed. This same tactic is used by wolves, African wild dogs, and hyenas. Another tactic shared with African wild dogs is a relay pursuit until the prey is exhausted. A pack of dingoes is three times as likely to bring down a kangaroo than an individual because the killing is done by those following the lead chaser, which has also become exhausted. Two patterns are seen for the final stage of the attack. An adult or juvenile kangaroo is nipped at the hamstrings of the hind legs to slow it before an attack to the throat. A small adult female or juvenile is bitten on the neck or back by dingoes running beside it. In one area of Central Australia, dingoes hunt kangaroos by chasing them into a wire fence, where they become temporarily immobilised. The largest male red kangaroos tend to ignore dingoes, even when the dingoes are hunting the younger males and females. A large eastern grey kangaroo successfully fought off an attack by a single dingo that lasted over an hour. Wallabies are hunted in a similar manner to kangaroos, the difference being that a single dingo hunts using scent rather than sight and the hunt may last several hours. Dingo packs may attack young cattle and buffalo, but never healthy, grown adults. They focus on the sick or injured young. 
The tactics include harassing a mother with young, panicking a herd to separate the adults from the young, or watching a herd and looking for any unusual behaviour that might then be exploited. One 1992 study in the Fortescue River region observed that cattle defend their calves by circling around the calves or aggressively charging dingoes. In one study of 26 approaches, 24 were by more than one dingo and only four resulted in calves being killed. Dingoes often revisited carcasses. They did not touch fresh cattle carcasses until these were largely skin and bone, and even when these were plentiful, they still preferred to hunt kangaroos. Of 68 chases of sheep, 26 sheep were seriously injured, but only eight were killed. The dingoes could outrun the sheep and the sheep were defenceless. However, the dingoes in general appeared not to be motivated to kill sheep, and in many cases just loped alongside the sheep before veering off to chase another sheep. For those that did kill and consume sheep, a large quantity of kangaroo was still in their diet, indicating once again a preference for kangaroo. Lone dingoes can run down a rabbit, but are more successful by targeting kits near rabbit warrens. Dingoes take nestling birds, in addition to birds that are moulting and therefore cannot fly. Predators often use highly intelligent hunting techniques. Dingoes on Fraser Island have been observed using waves to entrap, tire, and help drown an adult swamp wallaby and an echidna. In the coastal wetlands of northern Australia, dingoes depend on magpie geese for a large part of their diet and a lone dingo sometimes distracts these while a white-breasted sea eagle makes a kill that is too heavy for it to carry off, with the dingo then driving the sea eagle away. They also scavenge on prey dropped from the nesting platforms of sea eagles. Lone dingoes may hunt small rodents and grasshoppers in grass by using their senses of smell and hearing, then pouncing on them with their forepaws. Competitors Dingoes and their hybrids co-exist with the native quoll. They also co-occur in the same territory as the introduced European red fox and feral cat, but little is known about the relationships between these three. Dingoes and their hybrids can drive off foxes from sources of water and occasionally eat feral cats. Dingoes can be killed by feral water buffalo and cattle goring and kicking them, from snake bite, and predation on their pups (and occasionally adults) by wedge-tailed eagles. Communication Like all domestic dogs, dingoes tend towards phonetic communication. However, in contrast to domestic dogs, dingoes howl and whimper more, and bark less. Eight sound classes with 19 sound types have been identified. Barking Compared to most domestic dogs, the bark of a dingo is short and monosyllabic, and is rarely used. Barking was observed to make up only 5% of vocalisations. Dog barking has always been distinct from wolf barking. Australian dingoes bark mainly in swooshing noises or in a mixture of atonal and tonal sounds. In addition, barking is almost exclusively used for giving warnings. Warn-barking in a homotypical sequence and a kind of "warn-howling" in a heterotypical sequence have also been observed. The bark-howling starts with several barks and then fades into a rising and ebbing howl and is probably (similar to coughing) used to warn the puppies and members of the pack. 
Additionally, dingoes emit a sort of "wailing" sound, which they mostly use when approaching a watering hole, probably to warn already present dingoes. According to the present state of knowledge, getting Australian dingoes to bark more frequently by putting them in contact with other domestic dogs is not possible. However, German zoologist Alfred Brehm reported a dingo that learned the more "typical" form of barking and how to use it, while its brother did not. Whether dingoes bark or bark-howl less frequently in general is not certain. Howling Dingoes have three basic forms of howling (moans, bark-howls, and snuffs) with at least 10 variations. Usually, three kinds of howls are distinguished: long and persistent, rising and ebbing, and short and abrupt. Observations have shown that each kind of howling has several variations, though their purpose is unknown. The frequency of howling varies with the season and time of day, and is also influenced by breeding, migration, lactation, social stability, and dispersal behaviour. Howling can be more frequent in times of food shortage, because the dogs become more widely distributed within their home range. Additionally, howling seems to have a group function, and is sometimes an expression of joy (for example, greeting-howls). Overall, howling was observed less frequently in dingoes than among grey wolves. It may happen that one dog will begin to howl, and several or all other dogs will howl back and bark from time to time. In the wilderness, dingoes howl over long distances to attract other members of the pack, to find other dogs, or to keep intruders at bay. Dingoes howl in chorus with significant pitches, and with increasing number of pack members, the variability of pitches also increases. Therefore, dingoes are suspected to be able to measure the size of a pack without visual contact. Moreover, their highly variable chorus howls have been proposed to generate a confounding effect in the receivers by making pack size appear larger. Other forms Growling, making up about 65% of the vocalisations, is used in an agonistic context for dominance, and as a defensive sound. Similar to many domestic dogs, a reactive usage of defensive growling is only rarely observed. Growling very often occurs in combination with other sounds, and has been observed almost exclusively in swooshing noises (similar to barking). During observations in Germany, dingoes were heard to produce a sound that observers have called Schrappen. It was only observed in an agonistic context, mostly as a defence against obtrusive pups or for defending resources. It was described as a bite intention, during which the receiver is never touched or hurt. Only a clashing of the teeth could be heard. Aside from vocal communication, dingoes communicate, like all domestic dogs, via scent marking specific objects (for example, Spinifex) or places (such as waters, trails, and hunting grounds) using chemical signals from their urine, feces, and scent glands. Males scent mark more frequently than females, especially during the mating season. They also scent rub, whereby a dog rolls its neck, shoulders, or back on something that is usually associated with food or the scent markings of other dogs. Unlike wolves, dingoes can react to social cues and gestures from humans. Behaviour Dingoes tend to be nocturnal in warmer regions, but less so in cooler areas. Their main period of activity is around dusk and dawn, making them a crepuscular species in the colder climates. 
The periods of activity are short (often less than 1 hour) with short times of resting. Dingoes have two kinds of movement: a searching movement (apparently associated with hunting) and an exploratory movement (probably for contact and communication with other dogs). According to studies in Queensland, the wild dogs (dingo hybrids) there move freely at night through urban areas and cross streets and seem to get along quite well. Social behaviour The dingo's social behaviour is about as flexible as that of a coyote or grey wolf, which is perhaps one of the reasons the dingo was originally believed to have descended from the Indian wolf. While young males are often solitary and nomadic in nature, breeding adults often form a settled pack. However, in areas of the dingo's habitat with a widely spaced population, breeding pairs remain together, apart from others. Dingo distributions are a single dingo, 73%; two dingoes, 16%; three dingoes, 5%; four dingoes, 3%; and packs of five to seven dingoes, 3%. A dingo pack usually consists of a mated pair, their offspring from the current year, and sometimes offspring from the previous year. Where conditions are favourable among dingo packs, the pack is stable with a distinct territory and little overlap between neighbours. The size of packs often appears to correspond to the size of prey available in the pack's territory. Desert areas have smaller groups of dingoes with a more loose territorial behaviour and sharing of the water sites. The average monthly pack size was noted to be between three and 12 members. Similar to other canids, a dingo pack largely consists of a mated pair, their current year's offspring, and occasionally a previous year's offspring. Dominance hierarchies exist both between and within males and females, with males usually being more dominant than females. However, a few exceptions have been noted in captive packs. During travel, while eating prey, or when approaching a water source for the first time, the breeding male will be seen as the leader, or alpha. Subordinate dingoes approach a more dominant dog in a slightly crouched posture, ears flat, and tail down, to ensure peace in the pack. Establishment of artificial packs in captive dingoes has failed. Reproduction Dingoes breed once annually, depending on the estrous cycle of the females, which according to most sources, only come in heat once per year. Dingo females can come in heat twice per year, but can only be pregnant once a year, with the second time only seeming to be pregnant. Males are virile throughout the year in most regions, but have a lower sperm production during the summer in most cases. During studies on dingoes from the Eastern Highlands and Central Australia in captivity, no specific breeding cycle could be observed. All were potent throughout the year. The breeding was only regulated by the heat of the females. A rise in testosterone was observed in the males during the breeding season, but this was attributed to the heat of the females and copulation. In contrast to the captive dingoes, captured dingo males from Central Australia did show evidence of a male breeding cycle. Those dingoes showed no interest in females in heat (this time other domestic dogs) outside the mating season (January to July) and did not breed with them. The mating season usually occurs in Australia between March and May (according to other sources between April and June). 
During this time, dingoes may actively defend their territories using vocalisations, dominance behaviour, growling, and barking. Most females in the wild start breeding at the age of 2 years. Within packs, the alpha female tends to go into heat before subordinates and actively suppresses mating attempts by other females. Males become sexually mature between the ages of 1 and 3 years. The precise start of breeding varies depending on age, social status, geographic range, and seasonal conditions. Among dingoes in captivity, the pre-estrus was observed to last 10–12 days. However, the pre-estrus may last as long as 60 days in the wild. In general, the only dingoes in a pack that successfully breed are the alpha pair, and the other pack members help with raising the pups. Subordinates are actively prevented from breeding by the alpha pair and some subordinate females have a false pregnancy. Low-ranking or solitary dingoes can successfully breed if the pack structure breaks up. The gestation period lasts for 61–69 days and the size of the litter can range from 1 to 10 (usually 5) pups, with the number of males born tending to be higher than that of females. Pups of subordinate females usually get killed by the alpha female, which causes the population increase to be low even in good times. This behaviour possibly developed as an adaptation to the fluctuating environmental conditions in Australia. Pups are usually born between May and August (the winter period), but in tropical regions, breeding can occur at any time of the year. At the age of 3 weeks, the pups leave the den for the first time, and leave it completely at 8 weeks. Dens are mostly underground. Reports exist of dens in abandoned rabbit burrows, rock formations, under boulders in dry creeks, under large spinifex, in hollow logs, and augmented burrows of monitor lizards and wombat burrows. The pups usually stray around the den within a radius of , and are accompanied by older dogs during longer travels. The transition to consuming solid food is normally accomplished by all members of the pack during the age of 9 to 12 weeks. Apart from their own experiences, pups also learn through observation. Young dingoes usually become independent at the age of 3–6 months or they disperse at the age of 10 months, when the next mating season starts. Migration Dingoes usually remain in one area and do not undergo seasonal migrations. However, during times of famine, even in normally "safe" areas, dingoes travel into pastoral areas, where intensive, human-induced control measures are undertaken. In Western Australia in the 1970s, young dogs were found to travel for long distances when necessary. About 10% of the dogs captured—all younger than 12 months—were later recaptured far away from their first location. Among these, the average travelled distance for males was and for females . Therefore, travelling dingoes had lower chances of survival in foreign territories, and they are apparently unlikely to survive long migrations through occupied territories. The rarity of long migration routes seemed to confirm this. During investigations in the Nullarbor Plain, even longer migration routes were recorded. The longest recorded migration route of a radio-collared dingo was about . Attacks on humans Dingoes generally avoid conflict with humans, but they are large enough to be dangerous. Most attacks involve people feeding wild dingoes, particularly on K'gari (formerly Fraser Island), which is a special centre of dingo-related tourism. 
The vast majority of dingo attacks are minor in nature, but some can be major, and a few have been fatal: the death of two-month-old Azaria Chamberlain in the Northern Territory in 1980 is one of them. Many Australian national parks have signs advising visitors not to feed wildlife, partly because this practice is not healthy for the animals, and partly because it may encourage undesirable behaviour, such as snatching or biting by dingoes, kangaroos, goannas, and some birds. Impact Ecological Extinction of thylacines Some researchers propose that the dingo caused the extirpation of the thylacine, the Tasmanian devil, and the Tasmanian native hen from mainland Australia because of the correlation in space and time with the dingo's arrival. Recent studies have questioned this proposal, suggesting that climate change and increasing human populations may have been the cause. Dingoes do not seem to have had the same ecological impact that invasive red foxes have in modern times. This might be connected to the dingo's way of hunting and the size of their favoured prey, as well as to the low number of dingoes in the time before European colonisation. In 2017, a genetic study found that the population of the northwestern dingoes had commenced expanding since 4,000—6,000 years ago. This was proposed to be due either to their first arrival in Australia or to the commencement of the extinction of the thylacine, with the dingo expanding into the thylacine's former range. Interactions with humans The first British colonists who settled at Port Jackson, in 1788, recorded the dingo living with indigenous Australians, and later at Melville Island, in 1818. Furthermore, they were noted at the lower Darling and Murray rivers in 1862, indicating that the dingo was possibly semi-domesticated (or at least utilised in a "symbiotic" manner) by aboriginal Australians. When livestock farming began expanding across Australia, in the early 19th century, dingoes began preying on sheep and cattle. Numerous population-control measures have been implemented since then, including a nation-wide fencing project, with only limited success. Dingoes can be tame when they come in frequent contact with humans. Furthermore, some dingoes live with humans. Many indigenous Australians and early European settlers lived alongside dingoes. Indigenous Australians would take dingo pups from the den and tame them until sexual maturity and the dogs would leave. According to David Jenkins, a research fellow at Charles Sturt University, the breeding and reintroduction of pure dingoes is no easy option and, as of 2007, there were no studies that seriously dealt with this topic, especially in areas where dingo populations are already present. Interactions with other animals Much of the present place of wild dogs in the Australian ecosystem, especially in the urban areas, remains unknown. Although the ecological role of dingoes in Northern and Central Australia is well understood, the same does not apply to the role of wild dogs in the east of the continent. In contrast to some claims, dingoes are assumed to have a positive impact on biodiversity in areas where feral foxes are present. Dingoes are regarded as apex predators and possibly perform an ecological key function. Likely (with increasing evidence from scientific research), they control the diversity of the ecosystem by limiting the number of prey and keeping the competition in check. Wild dogs hunt feral livestock such as goats and pigs, as well as native prey and introduced animals. 
The low number of feral goats in Northern Australia is possibly caused by the presence of the dingoes, but whether they control the goats' numbers is still disputable. Studies from 1995 in the northern wet forests of Australia found the dingoes there did not reduce the number of feral pigs, but their predation only affects the pig population together with the presence of water buffaloes (which hinder the pigs' access to food). Observations concerning the mutual impact of dingoes and red fox and cat populations suggest dingoes limit the access of foxes and cats to certain resources. As a result, a disappearance of the dingoes may cause an increase of red fox and feral cat numbers, and therefore, a higher pressure on native animals. These studies found the presence of dingoes is one of the factors that keep fox numbers in an area low, and therefore reduces pressure on native animals, which then do not disappear from the area. The countrywide numbers of red foxes are especially high where dingo numbers are low, but other factors might be responsible for this, depending on the area. Evidence was found for a competition between wild dogs and red foxes in the Blue Mountains of New South Wales, since many overlaps occurred in the spectrum of preferred prey, but only evidence for local competition, not on a grand scale, was found. Also, dingoes can live with red foxes and feral cats without reducing their numbers in areas with sufficient food resources (for example, high rabbit numbers) and hiding places. Nearly nothing is known about the relationship of wild dogs and feral cats, except both mostly live in the same areas. Although wild dogs also eat cats, whether this affects the cat populations is not known. Additionally, the disappearance of dingoes might increase the prevalence of kangaroo, rabbit, and Australian brushturkey numbers. In the areas outside the Dingo Fence, the number of emus is lower than in the areas inside. However, the numbers changed depending on the habitat. Since the environment is the same on both sides of the fence, the dingo was assumed to be a strong factor for the regulation of these species. Therefore, some people demand that dingo numbers should be allowed to increase or dingoes should be reintroduced in areas with low dingo populations to lower the pressure on endangered populations of native species and to reintroduce them in certain areas. In addition, the presence of the Australian brushturkey in Queensland increased significantly after dingo baiting was conducted. The dingo's habitat covers most of Australia, but they are absent in the southeast and Tasmania, and an area in the southwest (see map). As Australia's largest extant terrestrial predators, dingoes prey on mammals up to the size of the large red kangaroo, in addition to the grey kangaroo, wombat, wallaby, quoll, possum and most other marsupials; they frequently pursue birds, lizards, fish, crabs, crayfish, eels, snakes, frogs, young crocodiles, larger insects, snails, carrion, human refuse, and sometimes fallen fruits or seeds. Dingoes can also be of potential benefit to their environment, as they will hunt Australia's many introduced and invasive species. This includes human-introduced animals such as deer and their offspring (sambar, chital, and red deer) and water buffalo, in addition to the highly invasive rabbits, red foxes, feral and domestic cats, some feral dogs, sheep, and calves. 
Rarely, a pack of dingoes will pursue the larger and more dangerous dromedary camel, feral donkey, or feral horse; unattended young animals, or sick, weak, or elderly individuals are at greatest risk. Cultural Cultural opinions about the dingo are often based on its perceived "cunning", and the idea that it is an intermediate between civilisation and wildness. Some of the early European settlers looked on dingoes as domestic dogs, while others thought they were more like wolves. Over the years, dingoes began to attack sheep, and their relationship to the Europeans changed very quickly; they were regarded as devious and cowardly, since they did not fight bravely in the eyes of the Europeans, and vanished into the bush. Additionally, they were seen as promiscuous or as devils with a venomous bite or saliva, so they could be killed unreservedly. Over the years, dingo trappers gained some prestige for their work, especially when they managed to kill hard-to-catch dingoes. Dingoes were associated with thieves, vagabonds, bushrangers, and parliamentary opponents. From the 1960s, politicians began calling their opponents "dingo", meaning they were cowardly and treacherous, and it has become a popular form of attack since then. Today, the word "dingo" still stands for "coward" and "cheat", with verb and adjective forms used, as well. The image of the dingo has ranged among some groups from the instructive to the demonic. Ceremonies (like a keen at the Cape York Peninsula in the form of howling) and dreamtime stories are connected to the dingo, which were passed down through the generations. The dingo plays a prominent role in the Dreamtime stories of indigenous Australians, but it is rarely depicted in their cave paintings when compared with the extinct thylacine. One of the tribal elders of the people of the Yarralin, Northern Territory region tells that the Dreamtime dingo is the ancestor of both dingoes and humans. The dingoes "are what we would be if we were not what we are." Similar to how Europeans acquired dingoes, the Aboriginal people of Australia acquired dogs from the immigrants very quickly. This process was so fast that Francis Barrallier (surveyor on early expeditions around the colony at Port Jackson) discovered in 1802 that five dogs of European origin were there before him. One theory holds that other domestic dogs adopt the role of the "pure" dingo. Introduced animals, such as the water buffalo and the domestic cat, have been adopted into the indigenous Aboriginal culture in the forms of rituals, traditional paintings, and dreamtime stories. Most of the published myths originate from the Western Desert and show a remarkable complexity. In some stories, dingoes are the central characters, while in others, they are only minor ones. One time, an ancestor from the Dreamtime created humans and dingoes or gave them their current shape. Stories mention creation, socially acceptable behaviour, and explanations why some things are the way they are. Myths exist about shapeshifters (human to dingo or vice versa), "dingo-people", and the creation of certain landscapes or elements of those landscapes, like waterholes or mountains. Economic Livestock farming expanded across Australia from the early 1800s, which led to conflict between the dingo and graziers. Sheep, and to a lesser extent cattle, are an easy target for dingoes. The pastoralists and the government bodies that support this industry have shot, trapped, and poisoned dingoes or destroyed dingo pups in their dens. 
After two centuries of persecution, the dingo or dingo–dog hybrids can still be found across most of the continent. Research on the real extent of the damage and the reason for this problem only started recently. Livestock can die from many causes, and when the carcass is found, determining with certainty the cause of death is often difficult. Since the outcome of an attack on livestock depends to a high degree on the behaviour and experience of the predator and the prey, only direct observation is certain to determine whether an attack was by dingoes or other domestic dogs. Even the existence of remnants of the prey in the scat of wild dogs does not prove they are pests, since wild dogs also eat carrion. The cattle industry can tolerate low to moderate, and sometimes high, numbers of wild dogs (therefore dingoes are not so easily regarded as pests in these areas). In the case of sheep and goats, a zero-tolerance attitude is common. The biggest threats are dogs that live inside or near the paddock areas. The extent of sheep loss is hard to determine due to the wide pasture lands in some parts of Australia. In 2006, cattle losses in the Northern Territory rangeland grazing areas were estimated to be up to 30%. Therefore, factors such as availability of native prey, as well as the defending behaviour and health of the cattle, play an important role in the number of losses. A study in Central Australia in 2003 confirmed that dingoes only have a low impact on cattle numbers when a sufficient supply of other prey (such as kangaroos and rabbits) is available. In some parts of Australia, the loss of calves is assumed to be minimised if horned cattle are used instead of polled. The precise economic impact is not known, and the rescue of some calves is unlikely to compensate for the necessary costs of control measures. Calves usually suffer less lethal wounds than sheep due to their size and the protection by adult cattle, so they have a higher chance of surviving an attack. As a result, the evidence of a dog attack may only be discovered after the cattle have been herded back into the enclosure, and signs such as bitten ears, tails, and other wounds are discovered. The opinions of cattle owners regarding dingoes are more variable than those of sheep owners. Some cattle owners believe that the weakened mother losing her calf is better in times of drought so that she does not have to care for her calf, too. Therefore, these owners are more hesitant to kill dingoes. The cattle industry may benefit from the predation of dingoes on rabbits, kangaroos, and rats. Furthermore, the mortality rate of calves has many possible causes, and discriminating between them is difficult. The only reliable method to assess the damage would be to document all pregnant cows, then observe their development and those of their calves. The loss of calves in observed areas where dingoes were controlled was higher than in other areas. Loss of livestock is, therefore, not necessarily caused by the occurrence of dingoes and is independent from wild dogs. One researcher has stated that for cattle stations where dingoes were controlled, kangaroos were abundant, and this affects the availability of grass. Domestic dogs are the only terrestrial predators in Australia that are big enough to kill fully grown sheep, and only a few sheep manage to recover from the severe injuries. In the case of lambs, death can have many causes apart from attacks by predators, which are blamed for the deaths because they eat from the carcasses. 
Although attacks by red foxes are possible, such attacks are more rare than previously thought. The fact that the sheep and goat industry is much more susceptible to damage caused by wild dogs than the cattle industry is mostly due to two factors – the flight behaviour of the sheep and their tendency to flock together in the face of danger, and the hunting methods of wild dogs, along with their efficient way of handling goats and sheep. Therefore, the damage to the livestock industry does not correlate to the numbers of wild dogs in an area (except that no damage occurs where no wild dogs occur). According to a report from the government of Queensland, wild dogs cost the state about $30 million annually due to livestock losses, the spread of diseases, and control measures. Losses for the livestock industry alone were estimated to be as high as $18 million. In Barcaldine, Queensland, up to one-fifth of all sheep are killed by dingoes annually, a situation which has been described as an "epidemic". According to a survey among cattle owners in 1995, performed by the Park and Wildlife Service, owners estimated their annual losses due to wild dogs (depending on the district) to be from 1.6% to 7.1%. In 2018, a study in northern South Australia indicates that fetal/calf loss averages 18.6%, with no significant reduction due to dingo baiting. The calf losses did not correlate with increased dingo activity, and the cattle diseases pestivirus and leptospirosis were a major cause. Dingoes then scavenged on the carcasses. There was also evidence of dingo predation on calves. Among the indigenous Australians, dingoes were also used as hunting aids, living hot water bottles, and camp dogs. Their scalps were used as a kind of currency, their teeth were traditionally used for decorative purposes, and their fur for traditional costumes. Sometimes "pure" dingoes are important for tourism, when they are used to attract visitors. However, this seems to be common only on Fraser Island, where the dingoes are extensively used as a symbol to enhance the attraction of the island. Tourists are drawn to the experience of personally interacting with dingoes. Pictures of dingoes appear on brochures, many websites, and postcards advertising the island. Legal status The dingo is recognised as a native animal under the laws of all Australian jurisdictions. Australia has over 500 national parks of which all but six are managed by the states and territories. , the legal status of the dingo varies between these jurisdictions and in some instances it varies between different regions of a single jurisdiction. some of these jurisdictions classify dingoes as an invasive native. Australian government: Section 528 of the Environment Protection and Biodiversity Conservation Act 1999 defines a native species as one that was present in Australia before the year 1400. The dingo is protected in all Australian government managed national parks and reserves, World Heritage Areas, and other protected areas. Australian Capital Territory: The dingo is listed as a "pest animal" outside national parks and reserves in the Pest Plants and Animals (Pest Animals) Declaration 2016 (No 1) made under the Pest Plants and Animals Act 2005, which calls for a management plan for pest animals. The Nature Conservation Act 2014 protects native animals in national parks and reserves but excludes this protection to "pest animals" declared under the Pest Plants and Animals Act 2005. 
New South Wales: The dingo falls under the definition of "wildlife" under the National Parks and Wildlife Act 1974 however it also becomes "unprotected fauna" under Schedule 11 of the act. The Wild Dog Destruction Act (1921) applies only to the western division of the state and includes the dingo in its definition of "wild dogs". The act requires landowners to destroy any wild dogs on their property and any person owning a dingo or half-bred dingo without a permit faces a fine. In other parts of the state, dingoes can be kept as pets under the Companion Animals Act 1998 as a dingo is defined under this act as a "dog". The dingo has been proposed for listing under the Threatened Species Conservation Act because it is argued that these dogs had established populations before the arrival of Europeans, but no decision has been made. Northern Territory: The dingo is a "vertebrate that is indigenous to Australia" and therefore "protected wildlife" under the Territory Parks and Wildlife Conservation Act 2014. A permit is required for all matters dealing with protected wildlife. Queensland: The dingo is listed as "least concern wildlife" in the Nature Conservation (Wildlife) Regulation 2006 under the Nature Conservation Act 1992, therefore the dingo is protected in National Parks and conservation areas. The dingo is listed as a "pest" in the Land Protection (Pest and Stock Route Management) Regulation 2003 under the Land Protection (Pest and Stock Route Management) Act 2002, which requires land owners to take reasonable steps to keep their lands free of pests. South Australia: The National Parks and Wildlife Act 1972 defines a protected animal as one that is indigenous to Australia but then lists the dingo as an "unprotected species" under Schedule 11. The purpose of the Dog Fence Act 1946 is to prevent wild dogs entering into the pastoral and agricultural areas south of the dog-proof fence. The dingo is listed as a "wild dog" under this act, and landowners are required to maintain the fence and destroy any wild dog within the vicinity of the fence by shooting, trapping or baiting. The dingo is listed as an "unprotected species" in the Natural Resources Management Act 2004, which allows landowners to lay baits "to control animals" on their land just north of the dog fence. Tasmania: Tasmania does not have a native dingo population. The dingo is listed as a "restricted animal" in the Nature Conservation Act 2002 and cannot be imported without a permit. Once imported into Tasmania, a dingo is listed as a dog under the Dog Control Act 2000. Victoria: The dingo is a "vertebrate taxon" that is "indigenous" to Australia and therefore "wildlife" under the Wildlife Act 1975, which protects wildlife. The act mandates that a permit is required to keep a dingo, and that this dingo must not be cross-bred with a dog. The act allows an order to be made to unprotect dingoes in certain areas of the state. The Order in Council made on the 28 September 2010 includes the far north-west of the state and all of the state north-east of Melbourne. It was made to protect stock on private land. The order allows dingoes to be trapped, shot or baited by any person on private land in these regions, while protecting the dingo on state-owned land. Western Australia: Dingoes are considered as "unprotected" native fauna under the Western Australian Wildlife Conservation Act. The dingo is recorded as a "declared pest" on the Western Australian Organism List. 
This list records those species that have been declared as pests under the Biosecurity and Agriculture Management Act 2007, and these are regarded as pests across all of Western Australia. Landowners must take the prescribed measures to deal with declared pests on their land. The policy of the WA government is to promote eradication of dingoes in the livestock grazing areas but leave them undisturbed in the rest of the state. Control measures Dingo attacks on livestock led to widescale efforts to repel them from areas with intensive agricultural usage, and all states and territories have enacted laws for the control of dingoes. In the early 20th century, fences were erected to keep dingoes away from areas frequented by sheep, and a tendency to routinely eradicate dingoes developed among some livestock owners. Established methods for the control of dingoes in sheep areas entailed the employment of specific workers on every property. The job of these people (who were nicknamed "doggers") was to reduce the number of dingoes by using steel traps, baits, firearms and other methods. The responsibility for the control of wild dogs lay solely in the hands of the landowners. At the same time, the government was forced to control the number of dingoes. As a result, a number of measures for the control of dingoes developed over time. It was also considered that dingoes travel over long distances to reach areas with richer prey populations, and the control methods were often concentrated along "paths" or "trails" and in areas that were far away from sheep areas. All dingoes were regarded as a potential danger and were hunted. Apart from the introduction of the poison 1080 (extensively used for 40 years and nicknamed "doggone"), the methods and strategies for controlling wild dogs have changed little over time. Information concerning cultural importance to indigenous people and the importance of dingoes and the impact of control measures on other species is also lacking in some areas. Historically, the attitudes and needs of indigenous people were not taken into account when dingoes were controlled. Other factors that might be taken into account are the genetic status (degree of interbreeding) of dingoes in these areas, ownership and land usage, as well as a reduction of killing measures to areas outside the zones. However, most control measures and the appropriate studies are there to minimise the loss of livestock and not to protect dingoes. Increasing pressure from environmentalists against the random killing of dingoes, as well as the impact on other animals, demanded that more information needed to be gathered to prove the necessity of control measures and to disprove the claim of unnecessary killings. Today, permanent population control is regarded as necessary to reduce the impact of all wild dogs and to ensure the survival of the "pure" dingo in the wild. Guardian animals To protect livestock, livestock guardian dogs (for example, Maremmas), donkeys, alpacas and llamas are used. Dingo Fence In the 1920s, the Dingo Fence was erected on the basis of the Wild Dog Act (1921) and, until 1931, thousands of miles of Dingo Fences had been erected in several areas of South Australia. In the year 1946, these efforts were directed to a single goal, and the Dingo Fence was finally completed. The fence connected with other fences in New South Wales and Queensland. 
The main responsibilities in maintaining the Dingo Fence still lies with the landowners whose properties border on the fence and who receive financial support from the government. Reward system A reward system (local, as well from the government) was active from 1846 to the end of the 20th century, but there is no evidence that – despite the billions of dollars spent – it was ever an efficient control method. Therefore, its importance declined over time. Dingo scalping commenced in 1912 with the passage of the Wild Dogs Act by the government of South Australia. In an attempt to reduce depredation on livestock, that government offered a bounty for dingo skins, and this program was later repeated in Western Australia and the Northern Territory. One writer argues that this new legislation and economic driver had significant impacts on Aboriginal society in the region. This act was followed by updates and amendments, including 1931, 1938, and 1948. Poisoning Baits with the poison 1080 are regarded as the fastest and safest method for dog control, since they are extremely susceptible. Even small amounts of poison per dog are sufficient (0.3 mg per kg). The application of aerial baiting is regulated in the Commonwealth by the Civil Aviation Regulations (1988). The assumption that the tiger quoll might be damaged by the poison led to the dwindling of areas where aerial baiting could be performed. In areas where aerial baiting is no longer possible, it is necessary to put down baits. From 2004, cyanide-ejectors and protection collars (filled with 1080 on certain spots) have been tested. In 2016, controversy surrounded a plan to inject a population of dingoes on Pelorus Island, off the coast of northern Queensland, Australia, with pills that would release a fatal dose of 1080 poison two years after the dingoes were to be intentionally released to help eradicate goats. The dingoes were dubbed 'death-row dingoes', and the plan was blocked due to concerns for a locally threatened shorebird. Neutering Owners of dingoes and other domestic dogs are sometimes asked to neuter their pets and keep them under observation to reduce the number of stray/feral dogs and prevent interbreeding with dingoes. Efficiency of measures The efficiency of control measures was questioned in the past and is often questioned today, as well as whether they stand in a good cost-benefit ratio. The premium system proved to be susceptible to deception and to be useless on a large scale, and can therefore only be used for getting rid of "problem-dogs". Animal traps are considered inhumane and inefficient on a large scale, due to the limited efficacy of baits. Based on studies, it is assumed that only young dogs that would have died anyway can be captured. Furthermore, wild dogs are capable of learning and sometimes are able to detect and avoid traps quite efficiently. In one case, a dingo bitch followed a dogger and triggered his traps one after another by carefully pushing her paw through the sand that covered the trap. Poisonous baits can be very effective when they are of good meat quality; however, they do not last long and are occasionally taken by red foxes, quolls, ants and birds. Aerial baiting can nearly eliminate whole dingo populations. Livestock guardian dogs can effectively minimise livestock losses, but are less effective on wide open areas with widely distributed livestock. 
Furthermore, they can be a danger to the livestock or be killed by control measures themselves when they are not sufficiently supervised by their owners. Fences are reliable in keeping wild dogs from entering certain areas, but they are expensive to build, need permanent maintenance, and only cause the problem to be relocated. Control measures mostly result in smaller packs and a disruption of pack structure. The measures seem to be rather detrimental to the livestock industry because the empty territories are taken over by young dogs and the predation then increases. Nonetheless, it is regarded as unlikely that the control measures could completely eradicate the dingo in Central Australia, and the elimination of all wild dogs is not considered a realistic option. It has been shown that culling a small percentage of immature dingoes on Fraser Island had little significant negative impact on the overall island population, though this is being disputed. Conservation of purebreds Until 2004, the dingo was categorised as of "least concern" on the Red List of Threatened Species. In 2008, it was recategorised as "vulnerable", following the decline in numbers to around 30% of "pure" dingoes, due to crossbreeding with domestic dogs. In 2018, the IUCN regarded the dingo as a feral dog and discarded it from the Red List. Dingoes are reasonably abundant in large parts of Australia, but there is some argument that they are endangered due to interbreeding with other dogs in many parts of their range. Dingoes receive varying levels of protection in conservation areas such as national parks and natural reserves in New South Wales, the Northern Territory and Victoria, Arnhem Land and other Aboriginal lands, UNESCO World Heritage Sites, and the whole of the Australian Capital Territory. In some states, dingoes are regarded as declared pests and landowners are allowed to control the local populations. Throughout Australia, all other wild dogs are considered pests. Fraser Island is a 1,840 square kilometre World Heritage Site located off Australia's eastern coast. The island is home to a genetically distinct population of dingoes that are free of dog introgression, estimated to number 120. These dingoes are unique because they are closely related to the southeastern dingoes but share a number of genes with the New Guinea singing dog and show some evidence of admixture with the northwestern dingoes. Because of their conservation value, in February 2013, a report on Fraser Island dingo management strategies was released, with options including ending the intimidation of dingoes, tagging practice changes and regular veterinarian checkups, as well as a permanent dingo sanctuary on the island. According to DNA examinations from 2004, the dingoes of Fraser Island are "pure", as opposed to dingo—dog hybrids. However, skull measurements from the mid-1990s had a different result. A 2013 study showed that dingoes living in the Tanami Desert are among the "purest" in Australia. Groups that have devoted themselves to the conservation of the "pure" dingo by using breeding programs include the Australian Native Dog Conservation Society and the Australian Dingo Conservation Association. Presently, the efforts of the dingo conservation groups are considered to be ineffective because most of their dogs are untested or are known to be hybrids. Dingo conservation efforts focus primarily on preventing interbreeding between dingoes and other domestic dogs in order to conserve the population of pure dingoes. 
This is extremely difficult and costly. Conservation efforts are hampered by the fact that it is not known how many pure dingoes still exist in Australia. Steps to conserve the pure dingo can only be effective when the identification of dingoes and other domestic dogs is absolutely reliable, especially in the case of living specimens. Additionally, conservation efforts are in conflict with control measures. Conservation of pure and survivable dingo populations is promising in remote areas, where contact with humans and other domestic dogs is rare. Under New South Wales state policy in parks, reserves and other areas not used by agriculture, these populations are only to be controlled when they pose a threat to the survival of other native species. The introduction of "dog-free" buffer zones around areas with pure dingoes is regarded as a realistic method to stop interbreeding. This is enforced to the extent that all wild dogs can be killed outside the conservation areas. However, studies from the year 2007 indicate that even an intensive control of core areas is probably not able to stop the process of interbreeding. According to the Dingo Discovery Sanctuary and Research Centre, many studies are finding a case for the re-introduction of the dingo into previously occupied areas in order to return some balance to badly degraded areas as a result of "unregulated and ignorant farming practices". Dingo densities have been measured at up to 3 per square kilometre (0.8/sq mi) in both the Guy Fawkes River region of New South Wales and in South Australia at the height of a rabbit plague. Hybridisation In 2023, a study of 402 wild and captive dingoes using 195,000 points across the dingo genome indicates that past studies of hybridisation were over-estimated and that pure dingoes are more common than they were originally thought to be. In 2021, DNA testing of over 5,000 wild-living canines from across Australia found that 31 were feral domestic dogs and 27 were first generation hybrids. This finding challenges the perception that dingoes are nearly extinct and have been replaced by feral domestic dogs. Coat colour cannot be used to distinguish hybrids. Dingo-like domestic dogs and dingo-hybrids can be generally distinguished by the more dog-typical kind of barking that exists among the hybrids, and differences in the breeding cycle, certain skull characteristics, and genetic analyses can be used for differentiation. Despite all the characteristics that can be used for distinguishing between dingoes and other domestic dogs, there are two problems that should not be underestimated. First, there is no real clarity regarding at what point a dog is regarded as a "pure" dingo, and, secondly, no distinguishing feature is completely reliable — it is not known which characteristics permanently remain under the conditions of natural selection. There are two main opinions regarding this process of interbreeding. The first, and likely most common, position states that the "pure" dingo should be preserved via strong controls of the wild dog populations, and only "pure" or "nearly-pure" dingoes should be protected. The second position is relatively new and is of the opinion that people must accept that the dingo has changed and that it is impossible to bring the "pure" dingo back. Conservation of these dogs should therefore be based on where and how they live, as well as their cultural and ecological role, instead of concentrating on precise definitions or concerns about "genetic purity". 
Both positions are controversially discussed. Due to this interbreeding, there is a wider range of fur colours, skull shapes and body size in the modern-day wild dog population than in the time before the arrival of the Europeans. Over the course of the last 40 years, there has been an increase of about 20% in the average wild dog body size. It is currently unknown whether, in the case of the disappearance of "pure" dingoes, remaining hybrids would alter the predation pressure on other animals. It is also unclear what kind of role these hybrids would play in the Australian ecosystems. However, it is unlikely that the dynamics of the various ecosystems will be excessively disturbed by this process. In 2011, a total of 3,941 samples were included in the first continent-wide DNA study of wild dogs. The study found that 46% were pure dingoes which exhibited no dog alleles (gene expressions). There was evidence of hybridisation in every region sampled. In Central Australia only 13% were hybrids; however, in southeastern Australia 99% were hybrids or feral dogs. Pure dingo distribution was 88% in the Northern Territory, intermediate numbers in Western Australia, South Australia and Queensland, and 1% in New South Wales and Victoria. Almost all wild dogs showed some dingo ancestry, with only 3% of dogs showing less than 80% dingo ancestry. This indicates that domestic dogs have a low survival rate in the wild or that most hybridisation is the result of roaming dogs that return to their owners. No populations of feral dogs have been found in Australia. In 2016, a three dimensional geometric morphometric analysis of the skulls of dingoes, dogs and their hybrids found that dingo-dog hybrids exhibit morphology closer to the dingo than to the parent group dog. Hybridisation did not push the unique Canis dingo cranial morphology towards the wolf phenotype, therefore hybrids cannot be distinguished from dingoes based on cranial measures. The study suggests that the wild dingo morphology is dominant when compared with the recessive dog breed morphology, and concludes that although hybridisation introduces dog DNA into the dingo population, the native cranial morphology remains resistant to change. See also Dogs portal New Guinea singing dog Domesticated plants and animals of Austronesia Indian pariah dog Free-ranging dog Carolina Dog Footnotes References Bibliography Further reading Canis lupus dingo Dog breeds originating in Australia Dog landraces Dog types Fauna naturalised in Australia Feral dogs Apex predators Landraces of Oceania Mammals described in 1793 Mammals of Queensland Mammals of South Australia Mammals of Western Australia Mammals of the Northern Territory Subspecies of Canis lupus Vulnerable fauna of Australia Controversial mammal taxa
Dingo
[ "Biology" ]
15,615
[ "Biological hypotheses", "Controversial mammal taxa", "Controversial taxa" ]
62,929
https://en.wikipedia.org/wiki/Aqua%20regia
Aqua regia (from Latin, "regal water" or "royal water") is a mixture of nitric acid and hydrochloric acid, optimally in a molar ratio of 1:3. Aqua regia is a fuming liquid. Freshly prepared aqua regia is colorless, but it turns yellow, orange or red within seconds from the formation of nitrosyl chloride and nitrogen dioxide. It was so named by alchemists because it can dissolve noble metals like gold and platinum, though not all metals. Preparation and decomposition Upon mixing of concentrated hydrochloric acid and concentrated nitric acid, chemical reactions occur. These reactions result in the volatile products nitrosyl chloride and chlorine gas: HNO3 + 3 HCl → NOCl + Cl2 + 2 H2O, as evidenced by the fuming nature and characteristic yellow color of aqua regia. As the volatile products escape from solution, aqua regia loses its potency. Nitrosyl chloride (NOCl) can further decompose into nitric oxide (NO) and elemental chlorine (Cl2): 2 NOCl → 2 NO + Cl2. This dissociation is equilibrium-limited. Therefore, in addition to nitrosyl chloride and chlorine, the fumes over aqua regia also contain nitric oxide (NO). Because nitric oxide readily reacts with atmospheric oxygen, the gases produced also contain nitrogen dioxide, NO2 (red fume): 2 NO + O2 → 2 NO2. Applications Aqua regia is primarily used to produce chloroauric acid, the electrolyte in the Wohlwill process for refining the highest-purity (99.999%) gold. Aqua regia is also used in etching and in specific analytic procedures. It is also used in some laboratories to clean glassware of organic compounds and metal particles. Many laboratories prefer it to the more traditional chromic acid bath for cleaning NMR tubes, because no traces of paramagnetic chromium can remain to spoil spectra. While chromic acid baths are discouraged because of the high toxicity of chromium and the potential for explosions, aqua regia is itself very corrosive and has been implicated in several explosions due to mishandling. Because its components react quickly, resulting in its decomposition, aqua regia quickly loses its effectiveness (yet remains a strong acid), so its components are usually only mixed immediately before use. Chemistry Dissolving gold Aqua regia dissolves gold, although neither constituent acid will do so alone. Nitric acid is a powerful oxidizer, which will dissolve a very small quantity of gold, forming gold(III) ions (Au3+). The hydrochloric acid provides a ready supply of chloride ions (Cl−), which react with the gold ions to produce tetrachloroaurate(III) anions (AuCl4^−), also in solution. The reaction with hydrochloric acid is an equilibrium reaction that favors formation of tetrachloroaurate(III) anions. This results in a removal of gold ions from solution and allows further oxidation of gold to take place. The gold dissolves to become chloroauric acid. In addition, gold may be dissolved by the chlorine present in aqua regia. Appropriate equations are: Au + 3 HNO3 + 4 HCl → HAuCl4 + 3 NO2 + 3 H2O or Au + HNO3 + 4 HCl → HAuCl4 + NO + 2 H2O. Solid tetrachloroauric acid may be isolated by evaporating the excess aqua regia, and decomposing the residual nitric acid by repeatedly heating the solution with additional hydrochloric acid. That step reduces nitric acid (see decomposition of aqua regia). If elemental gold is desired, it may be selectively reduced with reducing agents such as sulfur dioxide, hydrazine, oxalic acid, etc. The equation for the reduction of oxidized gold (Au3+) by sulfur dioxide (SO2) is the following: 2 Au3+ + 3 SO2 + 6 H2O → 2 Au + 3 SO4^2− + 12 H+.
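As a rough illustration of the gold stoichiometry above, the short Python sketch below (an illustrative addition, not part of the original article) computes the theoretical minimum mass of each acid needed to dissolve a given mass of gold. It assumes the NO2-producing equation and rounded molar masses; actual refining uses aqua regia in considerable excess, so the numbers are indicative only.

```python
# Minimal sketch: theoretical reagent minimums from
# Au + 3 HNO3 + 4 HCl -> HAuCl4 + 3 NO2 + 3 H2O.
# Molar masses are rounded; real practice uses a large excess of both acids.

MOLAR_MASS = {"Au": 196.97, "HNO3": 63.01, "HCl": 36.46}  # g/mol

def reagents_for_gold(gold_grams: float) -> dict:
    """Return the stoichiometric minimum grams of HNO3 and HCl for a gold mass."""
    mol_au = gold_grams / MOLAR_MASS["Au"]
    return {
        "HNO3_g": 3 * mol_au * MOLAR_MASS["HNO3"],  # 3 mol HNO3 per mol Au
        "HCl_g": 4 * mol_au * MOLAR_MASS["HCl"],    # 4 mol HCl per mol Au
    }

if __name__ == "__main__":
    print(reagents_for_gold(10.0))  # e.g. a 10 g gold sample
```

For a 10 g gold sample this gives roughly 9.6 g of nitric acid and 7.4 g of hydrogen chloride as the stoichiometric minimum, before accounting for acid concentration or excess.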
Dissolving platinum Similar equations can be written for platinum. As with gold, the oxidation reaction can be written with either nitric oxide or nitrogen dioxide as the nitrogen oxide product: Pt + 4 NO3− + 8 H+ → Pt4+ + 4 NO2 + 4 H2O or 3 Pt + 4 NO3− + 16 H+ → 3 Pt4+ + 4 NO + 8 H2O. The oxidized platinum ion then reacts with chloride ions, resulting in the chloroplatinate ion: Pt4+ + 6 Cl− → PtCl6^2−. Experimental evidence reveals that the reaction of platinum with aqua regia is considerably more complex. The initial reactions produce a mixture of chloroplatinous acid (H2PtCl4) and nitrosoplatinic chloride ((NO)2PtCl4): 2 Pt + 2 HNO3 + 8 HCl → (NO)2PtCl4 + H2PtCl4 + 4 H2O. The nitrosoplatinic chloride is a solid product. If full dissolution of the platinum is desired, repeated extractions of the residual solids with concentrated hydrochloric acid must be performed: (NO)2PtCl4 + 2 HCl → H2PtCl4 + 2 NOCl. The chloroplatinous acid can be oxidized to chloroplatinic acid by saturating the solution with molecular chlorine (Cl2) while heating: H2PtCl4 + Cl2 → H2PtCl6. Dissolving platinum solids in aqua regia was the mode of discovery for the densest metals, iridium and osmium, both of which are found in platinum ores and are not dissolved by aqua regia, instead collecting as insoluble metallic powder (elemental Ir, Os) on the base of the vessel. Precipitating dissolved platinum As a practical matter, when platinum group metals are purified through dissolution in aqua regia, gold (commonly associated with PGMs) is precipitated by treatment with iron(II) chloride. Platinum in the filtrate, as hexachloroplatinate(IV), is converted to ammonium hexachloroplatinate by the addition of ammonium chloride. This ammonium salt is extremely insoluble, and it can be filtered off. Ignition (strong heating) converts it to platinum metal: 3 (NH4)2PtCl6 → 3 Pt + 2 N2 + 2 NH4Cl + 16 HCl. Unprecipitated hexachloroplatinate(IV) is reduced with elemental zinc, and a similar method is suitable for small-scale recovery of platinum from laboratory residues. Reaction with tin Aqua regia reacts with tin to form tin(IV) chloride, containing tin in its highest oxidation state: Sn + 4 HNO3 + 4 HCl → SnCl4 + 4 NO2 + 4 H2O. Reaction with other substances It can react with iron pyrite to form iron(III) chloride: FeS2 + 5 HNO3 + 3 HCl → FeCl3 + 2 H2SO4 + 5 NO + 2 H2O. History Aqua regia first appeared in the De inventione veritatis ("On the Discovery of Truth") by pseudo-Geber, who produced it by adding sal ammoniac (ammonium chloride) to nitric acid. The preparation of aqua regia by directly mixing hydrochloric acid with nitric acid only became possible after the discovery in the late sixteenth century of the process by which free hydrochloric acid can be produced. The third of Basil Valentine's keys shows a dragon in the foreground and a fox eating a rooster in the background. The rooster symbolizes gold (from its association with sunrise and the sun's association with gold), and the fox represents aqua regia. The repetitive dissolving, heating, and redissolving (the rooster eating the fox eating the rooster) leads to the buildup of chlorine gas in the flask. The gold then crystallizes in the form of gold(III) chloride, whose red crystals Basil called "the rose of our masters" and "the red dragon's blood". The reaction was not reported again in the chemical literature until 1895. Antoine Lavoisier called aqua regia nitro-muriatic acid in 1789. When Germany invaded Denmark in World War II, Hungarian chemist George de Hevesy dissolved the gold Nobel Prizes of German physicists Max von Laue (1914) and James Franck (1925) in aqua regia to prevent the Nazis from confiscating them. The German government had prohibited Germans from accepting or keeping any Nobel Prize after jailed peace activist Carl von Ossietzky had received the Nobel Peace Prize in 1935. De Hevesy placed the resulting solution on a shelf in his laboratory at the Niels Bohr Institute.
It was subsequently ignored by the Nazis who thought the jar—one of perhaps hundreds on the shelving—contained common chemicals. After the war, de Hevesy returned to find the solution undisturbed and precipitated the gold out of the acid. The gold was returned to the Royal Swedish Academy of Sciences and the Nobel Foundation. They re-cast the medals and again presented them to Laue and Franck. See also Piranha solution, sometimes also used to clean glassware Notes References External links Chemistry Comes Alive! Aqua Regia Aqua Regia at The Periodic Table of Videos (University of Nottingham) Demonstration of Gold Coin Dissolving in Acid (Aqua Regia) Gold Alchemical substances Oxidizing mixtures Oxidizing acids Mineral acids
Aqua regia
[ "Chemistry" ]
1,736
[ "Acids", "Inorganic compounds", "Mineral acids", "Alchemical substances", "Oxidizing agents", "Oxidizing mixtures", "Oxidizing acids" ]
62,950
https://en.wikipedia.org/wiki/Ternary%20numeral%20system
A ternary numeral system (also called base 3 or trinary) has three as its base. Analogous to a bit, a ternary digit is a trit (trinary digit). One trit is equivalent to log2 3 (about 1.58496) bits of information. Although ternary most often refers to a system in which the three digits are all non–negative numbers; specifically , , and , the adjective also lends its name to the balanced ternary system; comprising the digits −1, 0 and +1, used in comparison logic and ternary computers. Comparison to other bases Representations of integer numbers in ternary do not get uncomfortably lengthy as quickly as in binary. For example, decimal 365 or senary corresponds to binary (nine bits) and to ternary (six digits). However, they are still far less compact than the corresponding representations in bases such as decimal – see below for a compact way to codify ternary using nonary (base 9) and septemvigesimal (base 27). {| class="wikitable" |+ Numbers from 0 to 33 − 1 in standard ternary |- align="center" ! Ternary | 0 || 1 || 2 || 10 || 11 || 12 || 20 || 21 || 22 |- align="center" ! Binary | 0 || 1 || 10 || 11 || 100 || 101 || 110 || 111 || |- align="center" ! Senary | 0 || 1 || 2 || 3 || 4 || 5 || 10 || 11 || 12 |- align="center" ! Decimal ! 0 || 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 |- |colspan=10 style="background-color:white;"| |- align="center" ! Ternary | 100 || 101 || 102 || 110 || 111 || 112 || 120 || 121 || 122 |- align="center" ! Binary | 1001 || 1010 || 1011 || 1100 || 1101 || 1110 || 1111 | || |- align="center" ! Senary | 13 || 14 || 15 || 20 || 21 || 22 || 23 || 24 || 25 |- align="center" ! Decimal ! 9 ||10 || 11 || 12|| 13 || 14 || 15 || 16 || 17 |- |colspan=10 style="background-color:white;"| |- align="center" ! Ternary | 200 || 201 || 202 || 210 || 211 || 212 || 220 || 221 || 222 |- align="center" ! Binary | || || || || | || || || |- align="center" ! Senary | 30 || 31 || 32 || 33 || 34 || 35 || 40 || 41 || 42 |- align="center" ! Decimal ! 18 || 19 || 20 || 21 || 22 || 23 || 24 || 25 || 26 |} {| class="wikitable" |+ Powers of 3 in ternary |- align="center" ! Ternary | 1 || 10 || 100 || || |- align="center" ! Binary | 1 || 11 || 1001 || || |- align="center" ! Senary | 1 || 3 || 13 || 43 || 213 |- align="center" ! Decimal | 1 || 3 || 9 || 27 || 81 |- align="center" ! Power ! || || ! || |- |colspan=10 style="background-color:white;"| |- align="center" ! Ternary | || || | || |- align="center" ! Binary | || || | || |- align="center" ! Senary | || || || || |- align="center" ! Decimal | 243 || 729 || || || |- align="center" ! Power ! || || ! || |} As for rational numbers, ternary offers a convenient way to represent as same as senary (as opposed to its cumbersome representation as an infinite string of recurring digits in decimal); but a major drawback is that, in turn, ternary does not offer a finite representation for (nor for , , etc.), because 2 is not a prime factor of the base; as with base two, one-tenth (decimal, senary ) is not representable exactly (that would need e.g. decimal); nor is one-sixth (senary , decimal ). {| class="wikitable" |+ Fractions in ternary |- align="center" ! Fraction | || || || || || || || || || || || |- align="center" ! Ternary | 0. || 0.1 || 0. || 0. || 0.0 || 0. || 0. || 0.01 || 0. || 0. || 0.0 || 0. |- align="center" ! Binary | 0.1 || 0. || 0.01 || 0. || 0.0 || 0. || 0.001 || 0. || 0.0 || 0. || 0.00 || 0. |- align="center" ! Senary | 0.3 || 0.2 || 0.13 || 0. || 0.1 || 0. || 0.043 || 0.04 || 0.0 || 0. || 0.03 || 0. |- align="center" ! Decimal ! 0.5 || 0. 
|| 0.25 || 0.2 || 0.1 || 0. || 0.125 ! 0. || 0.1 || 0. || 0.08 || 0. |} Sum of the digits in ternary as opposed to binary The value of a binary number with n bits that are all 1 is 2^n − 1. Similarly, for a number N(b, d) with base b and d digits, all of which are the maximal digit value b − 1, we can write: N(b, d) = (b − 1)b^(d−1) + (b − 1)b^(d−2) + … + (b − 1)b^0 = (b − 1)(b^(d−1) + b^(d−2) + … + b^0), and (b − 1)(b^(d−1) + b^(d−2) + … + b^0) = b^d − 1, so N(b, d) = b^d − 1. Then, for a three-digit ternary number, N(3, 3) = 3^3 − 1 = 26 = 2 × 3^2 + 2 × 3^1 + 2 × 3^0 = 18 + 6 + 2. Compact ternary representation: base 9 and 27 Nonary (base 9, each digit is two ternary digits) or septemvigesimal (base 27, each digit is three ternary digits) can be used for compact representation of ternary, similar to how octal and hexadecimal systems are used in place of binary. Practical usage In certain analog logic, the state of the circuit is often expressed in ternary. This is most commonly seen in CMOS circuits, and also in transistor–transistor logic with totem-pole output. The output is said to either be low (grounded), high, or open (high-Z). In this configuration the output of the circuit is actually not connected to any voltage reference at all. Where the signal is usually grounded to a certain reference, or at a certain voltage level, the state is said to be high impedance because it is open and serves its own reference. Thus, the actual voltage level is sometimes unpredictable. A rare "ternary point" in common use is for defensive statistics in American baseball (usually just for pitchers), to denote fractional parts of an inning. Since the team on offense is allowed three outs, each out is considered one third of a defensive inning and is denoted as .1. For example, if a player pitched all of the 4th, 5th and 6th innings, plus achieving 2 outs in the 7th inning, his innings pitched column for that game would be listed as 3.2, the equivalent of 3⅔ (which is sometimes used as an alternative by some record keepers). In this usage, only the fractional part of the number is written in ternary form. Ternary numbers can be used to convey self-similar structures like the Sierpinski triangle or the Cantor set conveniently. Additionally, it turns out that the ternary representation is useful for defining the Cantor set and related point sets, because of the way the Cantor set is constructed. The Cantor set consists of the points from 0 to 1 that have a ternary expression that does not contain any instance of the digit 1. Any terminating expansion in the ternary system is equivalent to the expression that is identical up to the term preceding the last non-zero term followed by the term one less than the last non-zero term of the first expression, followed by an infinite tail of twos. For example: 0.1020 is equivalent to 0.1012222... because the expansions are the same until the "two" of the first expression, the two was decremented in the second expansion, and trailing zeros were replaced with trailing twos in the second expression. Ternary is the integer base with the lowest radix economy, followed closely by binary and quaternary. This is due to its proximity to the mathematical constant e. It has been used for some computing systems because of this efficiency. It is also used to represent three-option trees, such as phone menu systems, which allow a simple path to any branch. A form of redundant binary representation called a binary signed-digit number system, a form of signed-digit representation, is sometimes used in low-level software and hardware to accomplish fast addition of integers because it can eliminate carries.
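As a concrete illustration of the digit manipulations described in this section, the following Python sketch (an illustrative addition, not part of the original article; the function names are arbitrary) converts non-negative integers to and from standard ternary and checks the three-digit maximum derived above.

```python
# Minimal sketch: integer <-> standard (unbalanced) ternary conversion.

def to_ternary(n: int) -> str:
    """Return the base-3 digit string of a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 3)      # peel off the least significant trit
        digits.append(str(r))
    return "".join(reversed(digits))

def from_ternary(s: str) -> int:
    """Parse a base-3 digit string back into an integer."""
    return int(s, 3)

assert to_ternary(365) == "111112"       # six ternary digits for decimal 365
assert from_ternary("222") == 3**3 - 1   # the three-digit maximum, N(3, 3) = 26
```

The same repeated-division idea extends to any base; only the divisor and digit alphabet change.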
Binary-coded ternary Simulation of ternary computers using binary computers, or interfacing between ternary and binary computers, can involve use of binary-coded ternary (BCT) numbers, with two or three bits used to encode each trit. BCT encoding is analogous to binary-coded decimal (BCD) encoding. If the trit values 0, 1 and 2 are encoded 00, 01 and 10, conversion in either direction between binary-coded ternary and binary can be done in logarithmic time. A library of C code supporting BCT arithmetic is available. Tryte Some ternary computers such as the Setun defined a tryte to be six trits or approximately 9.5 bits (holding more information than the de facto binary byte). See also Qutrit Setun, a ternary computer Ternary logic Taixuanjing References Further reading External links Ternary Arithmetic The ternary calculating machine of Thomas Fowler Ternary Base Conversionincludes fractional part, from Maths Is Fun Gideon Frieder's replacement ternary numeral system Visualization of ternary numeral system Computer arithmetic Positional numeral systems Ternary computers
Ternary numeral system
[ "Mathematics" ]
2,331
[ "Computer arithmetic", "Numeral systems", "Arithmetic", "Positional numeral systems" ]
62,983
https://en.wikipedia.org/wiki/Medicago
Medicago is a genus of flowering plants, commonly known as medick or burclover, in the legume family (Fabaceae). It contains at least 87 species and is distributed mainly around the Mediterranean Basin, extending across temperate Eurasia and sub-Saharan Africa. The best-known member of the genus is alfalfa (M. sativa), an important forage crop, and the genus name is based on the Latin name for that plant, medica, meaning "Median (grass)". Most members of the genus are low, creeping herbs, resembling clover, but with burs (hence the common name). However, alfalfa grows to a height of 1 meter, and tree medick (M. arborea) is a shrub. Members of the genus are known to produce bioactive compounds such as medicarpin (a flavonoid) and medicagenic acid (a triterpenoid saponin). Chromosome numbers in Medicago range from 2n = 14 to 48. The species Medicago truncatula is a model legume due to its relatively small stature, small genome (450–500 Mbp), short generation time (about 3 months), and ability to reproduce both by outcrossing and selfing. Comprehensive descriptions of the genus are Lesinš and Lesinš 1979 and Small and Jomphe 1989. Major collections are SARDI (Australia), USDA-GRIN (United States), ICARDA (Syria), and INRA (France). Evolution Medicago diverged from Glycine (soybean) about 53–55 million years ago (in the early Eocene), from Lotus (deervetch) 49–51 million years ago (also in the Eocene), and from Trigonella 10–22 million years ago (in the Miocene). Ecological interactions with other organisms Symbiosis with nitrogen-fixing rhizobia Béna et al. (2005) constructed a molecular phylogeny of 23 Sinorhizobium strains and tested the symbiotic ability of six strains with 35 Medicago species. Comparison of these phylogenies indicates many transitions in the compatibility of the association over evolutionary time. Furthermore, they propose that the geographical distribution of strains limits the distribution of particular Medicago species. Agricultural uses Agronomic research has been conducted on species of the Medicago genus. Other than alfalfa, several of the prostrate members of the genus (such as Medicago lupulina and Medicago truncatula) have been used as forage crops. Some species in the Medicago genus naturally develop spiny pods during the reproductive phase of growth (such as Medicago intertexta and Medicago polymorpha). Despite their high agronomic performance, these species are typically viewed as undesirable in sheep-based farming systems because the pods become lodged in wool, reducing fleece value. Breeding efforts in the 1990s yielded spineless varieties of burr medic, providing valuable production in farming systems on free-draining, alkaline soils in low-rainfall (<300 mm annual) areas. Insect herbivores Medicago species are used as food plants by the larvae of some Lepidoptera species including the common swift, flame, latticed heath, lime-speck pug, nutmeg, setaceous Hebrew character, and turnip moths, as well as case-bearers of the genus Coleophora, including C. frischella (recorded on M. sativa) and C. fuscociliella (feeds exclusively on Medicago spp.). Species This list is compiled from: Section Buceras Subsection Deflexae Medicago retrorsa (Boiss.) E. Small Subsection Erectae Medicago arenicola (Huber-Mor.) E. Small Medicago astroites (Fisch. & Mey.) Trautv. Medicago carica (Huber-Mor.) E. Small Medicago crassipes (Boiss.) E. Small Medicago fischeriana (Ser.) Trautv. Medicago halophila (Boiss.) E. Small Medicago heldreichii (Boiss.) E.
Small Medicago medicaginoides (Retz.) E. Small Medicago monantha (C. A. Meyer) Trautv. Medicago orthoceras (Kar. & Kir.) Trautv. Medicago pamphylica (Huber-Mor. & Sirjaev) E. Small Medicago persica (Boiss.) E. Small Medicago phrygia (Boiss. & Bal.) E. Small Medicago polyceratia (L.) Trautv. Medicago rigida (Boiss. & Bal.) E. Small Subsection Isthmocarpae Medicago rhytidiocarpa (Boiss. & Bal.) E. Small Medicago isthmocarpa (Boiss. & Bal.) E. Small Subsection Reflexae Medicago monspeliaca (L.) Trautv. Section Carstiensae Medicago carstiensis Wulf. Section Dendrotelis Medicago arborea L. Medicago citrina (Font Quer) Greuter Medicago strasseri Greuter, Matthas & Risse Section Geocarpa Medicago hypogaea E. Small Section Heynianae Medicago heyniana Greuter Section Hymenocarpos Medicago radiata L. Section Lunatae Medicago biflora (Griseb.) E. Small Medicago brachycarpa M. Bieb. Medicago huberi E. Small Medicago rostrata (Boiss. & Bal.) E. Small Section Lupularia Medicago lupulina L. Medicago secundiflora Durieu Section Medicago Medicago cancellata M. Bieb. Medicago daghestanica Rupr. Medicago hybrida (Pourr.) Trautv. Medicago marina L. Medicago papillosa Boiss. M. p. macrocarpa M. p. papillosa Medicago pironae Vis. Medicago prostrata Jacq. M. p. prostrata M. p. pseudorupestris Medicago rhodopea Velen. Medicago rupestris M. Bieb Medicago sativa L. (alfalfa) M. s. caerulea M. s. falcata (Medicago falcata) M. s. f. var. falcata M. s. f. var. viscosa M. s. glomerata M. s. sativa Medicago saxatilis M. Bieb Medicago suffruticosa Ramond ex DC. M. s. leiocarpa M. s. suffruticosa Section Orbiculares Medicago orbicularis (L.) Bart. Section Platycarpae Medicago archiducis-nicolai Sirjaev Medicago cretacea M. Bieb. Medicago edgeworthii Sirjaev Medicago ovalis (Boiss.) Sirjaev Medicago platycarpa (L.) Trautv. Medicago plicata (Boiss.) Sirjaev Medicago popovii (E. Kor.) Sirjaev Medicago ruthenica (L.) Ledebour Subsection Rotatae Medicago blancheana Boiss. Medicago noeana Boiss. Medicago rugosa Desr. Medicago rotata Boiss. Medicago scutellata (L.) Miller Medicago shepardii Post Section Spirocarpos Subsection Intertextae Medicago ciliaris (L.) Krocker Medicago granadensis Willd. Medicago intertexta (L.) Miller Medicago muricoleptis Tin. Subsection Leptospireae Medicago arabica (L.) Huds. Medicago coronata (L.) Bart. Medicago disciformis DC. Medicago laciniata (L.) Miller Medicago lanigera Winkl. & Fedtsch. Medicago laxispira Heyn Medicago minima (L.) Bart. Medicago polymorpha L. Medicago praecox DC. Medicago sauvagei Nègre Medicago tenoreana Ser. Subsection Pachyspireae Medicago constricta Durieu Medicago doliata Carmign. Medicago italica (Miller) Fiori Medicago lesinsii E. Small Medicago littoralis Rohde ex Lois. Medicago murex Willd. Medicago rigidula (L.) All. Medicago rigiduloides E. Small Medicago sinskiae Uljanova Medicago soleirolii Duby Medicago sphaerocarpos Bertol. Medicago syriaca E. Small Medicago truncatula Gaertn. Medicago turbinata (L.) All. Species names with uncertain taxonomic status The status of the following species is unresolved: Medicago agropyretorum Vassilcz. Medicago alatavica Vassilcz. Medicago caucasica Vassilcz. Medicago cyrenaea Maire & Weiller Medicago difalcata Sinskaya Medicago grossheimii Vassilcz. Medicago gunibica Vassilcz. Medicago hemicoerulea Sinskaya Medicago karatschaica (A. Heller) A. Heller Medicago komarovii Vassilcz. Medicago meyeri Gruner Medicago polychroa Grossh. Medicago schischkinii Sumnev. Medicago talyschensis Latsch. Medicago transoxana Vassilcz. 
Medicago tunetana (Murb.) A.W. Hill Medicago vardanis Vassilcz. Medicago virescens Grossh. Recent molecular phylogenetic analyses of Medicago indicate that the sections and subsections defined by Small & Jomphe, as outlined above, are generally polyphyletic. However, with minor revisions the sections and subsections could be rendered monophyletic. Notes References Plant models Fabaceae genera Taxa named by Carl Linnaeus
Medicago
[ "Biology" ]
2,309
[ "Model organisms", "Plant models" ]
62,996
https://en.wikipedia.org/wiki/Theophylline
Theophylline, also known as 1,3-dimethylxanthine, is a drug that inhibits phosphodiesterase and blocks adenosine receptors. It is used to treat chronic obstructive pulmonary disease (COPD) and asthma. Its pharmacology is similar to other methylxanthine drugs (e.g., theobromine and caffeine). Trace amounts of theophylline are naturally present in tea, coffee, chocolate, yerba maté, guarana, and kola nut. The name 'theophylline' derives from "Thea" (the former genus name for tea) + Ancient Greek φύλλον (phúllon, "leaf") + -ine. Medical uses The main actions of theophylline involve: relaxing bronchial smooth muscle increasing heart muscle contractility and efficiency (positive inotrope) increasing heart rate (positive chronotropic) increasing blood pressure increasing renal blood flow anti-inflammatory effects central nervous system stimulatory effect, mainly on the medullary respiratory center The main therapeutic uses of theophylline are for treating: Chronic obstructive pulmonary disease (COPD) Asthma infant apnea Blocks the action of adenosine, an inhibitory neurotransmitter that induces sleep, contracts the smooth muscles and relaxes the cardiac muscle. Treatment of post-dural puncture headache. Performance enhancement in sports Theophylline and other methylxanthines are often used for their performance-enhancing effects in sports, as these drugs increase alertness, bronchodilation, and increase the rate and force of heart contraction. There is conflicting information about the value of theophylline and other methylxanthines as prophylaxis against exercise-induced asthma. Adverse effects The use of theophylline is complicated by its interaction with various drugs and by the fact that it has a narrow therapeutic window (<20 mcg/mL). Its use must be monitored by direct measurement of serum theophylline levels to avoid toxicity. It can also cause nausea, diarrhea, increase in heart rate, abnormal heart rhythms, and CNS excitation (headaches, insomnia, irritability, dizziness and lightheadedness). Seizures can also occur in severe cases of toxicity, and are considered to be a neurological emergency. Its toxicity is increased by erythromycin, cimetidine, and fluoroquinolones, such as ciprofloxacin. Some lipid-based formulations of theophylline can result in toxic theophylline levels when taken with fatty meals, an effect called dose dumping, but this does not occur with most formulations of theophylline. Theophylline toxicity can be treated with beta blockers. In addition to seizures, tachyarrhythmias are a major concern. Theophylline should not be used in combination with the SSRI fluvoxamine. Spectroscopy UV-visible Theophylline is soluble in 0.1 N NaOH and absorbs maximally at 277 nm with an extinction coefficient of 10,200 M−1 cm−1. Proton NMR The characteristic signals, distinguishing theophylline from related methylxanthines, are approximately 3.23δ and 3.41δ, corresponding to the unique methylation possessed by theophylline. The remaining proton signal, at 8.01δ, corresponds to the proton on the imidazole ring carbon, not the proton transferred between the ring nitrogens. The proton transferred between the nitrogens is exchangeable and only exhibits a signal under certain conditions. 13C-NMR The unique methylation of theophylline corresponds to the following signals: 27.7δ and 29.9δ. The remaining signals correspond to carbons characteristic of the xanthine backbone. Natural occurrences Theophylline is naturally found in cocoa beans. Amounts as high as 3.7 mg/g have been reported in Criollo cocoa beans. 
Trace amounts of theophylline are also found in brewed tea, although brewed tea provides only about 1 mg/L, which is significantly less than a therapeutic dose. Trace amounts of theophylline are also found in guarana (Paullinia cupana) and in kola nuts. Pharmacology Pharmacodynamics Like other methylated xanthine derivatives, theophylline is both a competitive nonselective phosphodiesterase inhibitor which increases intracellular levels of cAMP and cGMP, activates PKA, inhibits TNF-alpha and inhibits leukotriene synthesis, and reduces inflammation and innate immunity nonselective adenosine receptor antagonist, antagonizing A1, A2, and A3 receptors almost equally, which explains many of its cardiac effects. Theophylline activates histone deacetylases. Pharmacokinetics Absorption When theophylline is administered intravenously, bioavailability is 100%. Distribution Theophylline is distributed in the extracellular fluid, in the placenta, in the mother's milk and in the central nervous system. The volume of distribution is 0.5 L/kg. The protein binding is 40%. Metabolism Theophylline is metabolized extensively in the liver. It undergoes N-demethylation via cytochrome P450 1A2. It is metabolized by parallel first order and Michaelis-Menten pathways. Metabolism may become saturated (non-linear), even within the therapeutic range. Small dose increases may result in disproportionately large increases in serum concentration. Methylation to caffeine is also important in the infant population. Smokers and people with hepatic (liver) impairment metabolize it differently. Cigarette and marijuana smoking induces metabolism of theophylline, increasing the drug's metabolic clearance. Excretion Theophylline is excreted unchanged in the urine (up to 10%). Clearance of the drug is increased in children (age 1 to 12), teenagers (12 to 16), adult smokers, elderly smokers, as well as in cystic fibrosis, and hyperthyroidism. Clearance of the drug is decreased in these conditions: elderly, acute congestive heart failure, cirrhosis, hypothyroidism and febrile viral illnesses. The elimination half-life varies: 30 hours for premature neonates, 24 hours for neonates, 3.5 hours for children ages 1 to 9, 8 hours for adult non-smokers, 5 hours for adult smokers, 24 hours for those with hepatic impairment, 12 hours for those with congestive heart failure NYHA class I-II, 24 hours for those with congestive heart failure NYHA class III-IV, 12 hours for the elderly. History Theophylline was first extracted from tea leaves and chemically identified around 1888 by the German biologist Albrecht Kossel. Seven years later, a chemical synthesis starting with 1,3-dimethyluric acid was described by Emil Fischer and Lorenz Ach. The Traube purine synthesis, an alternative method to synthesize theophylline, was introduced in 1900 by another German scientist, Wilhelm Traube. Theophylline's first clinical use came in 1902 as a diuretic. It took an additional 20 years until it was first reported as an asthma treatment. The drug was prescribed in a syrup up to the 1970s as Theostat 20 and Theostat 80, and by the early 1980s in a tablet form called Quibron. See also Theophylline/ephedrine References External links Adenosine receptor antagonists Bitter compounds Bronchodilators Drugs developed by Merck Histone Acetyltransferase Inhibitor Human drug metabolites Phosphodiesterase inhibitors Pro-motivational agents Wakefulness-promoting agents Xanthines
Theophylline
[ "Chemistry" ]
1,668
[ "Chemicals in medicine", "Xanthines", "Alkaloids by chemical classification", "Human drug metabolites" ]
63,011
https://en.wikipedia.org/wiki/Nocturnality
Nocturnality is a behavior in some non-human animals characterized by being active during the night and sleeping during the day. The common adjective is "nocturnal", versus diurnal meaning the opposite. Nocturnal creatures generally have highly developed senses of hearing, smell, and specially adapted eyesight. Some animals, such as cats and ferrets, have eyes that can adapt to both low-level and bright day levels of illumination (see metaturnal). Others, such as bushbabies and (some) bats, can function only at night. Many nocturnal creatures including tarsiers and some owls have large eyes in comparison with their body size to compensate for the lower light levels at night. More specifically, they have been found to have a larger cornea relative to their eye size than diurnal creatures to increase their visual sensitivity in low-light conditions. Nocturnality helps wasps, such as Apoica flavissima, avoid hunting in intense sunlight. Diurnal animals, including humans (except for night owls), squirrels and songbirds, are active during the daytime. Crepuscular species, such as rabbits, skunks, tigers and hyenas, are often erroneously referred to as nocturnal. Cathemeral species, such as fossas and lions, are active both in the day and at night. Origins While it is difficult to say which came first, nocturnality or diurnality, a hypothesis in evolutionary biology, the nocturnal bottleneck theory, postulates that in the Mesozoic, many ancestors of modern-day mammals evolved nocturnal characteristics in order to avoid contact with the numerous diurnal predators. A recent study attempts to answer the question as to why so many modern-day mammals retain these nocturnal characteristics even though they are not active at night. The leading answer is that the high visual acuity that comes with diurnal characteristics is not needed anymore due to the evolution of compensatory sensory systems, such as a heightened sense of smell and more astute auditory systems. In a recent study, recently extinct elephant birds and modern-day nocturnal kiwi bird skulls were examined to recreate their likely brain and skull formation. They indicated that olfactory bulbs were much larger in comparison to their optic lobes, indicating they both have a common ancestor that evolved to function as a nocturnal species, decreasing their eyesight in favor of a better sense of smell. The anomaly to this theory was the anthropoids, which appeared to have the most divergence from nocturnality of all organisms examined. While most mammals did not exhibit the morphological characteristics expected of a nocturnal creature, reptiles and birds fit in perfectly. A larger cornea and pupil correlated well with whether these two classes of organisms were nocturnal or not. Advantages Resource competition Being active at night is a form of niche differentiation, where a species' niche is partitioned not by the amount of resources but by the amount of time (i.e. temporal division of the ecological niche). Hawks and owls can hunt the same field or meadow for the same rodents without conflict because hawks are diurnal and owls are nocturnal. This means they are not in competition for each other's prey. Another niche in which being nocturnal lessens competition is pollination: nocturnal pollinators such as moths, beetles, thrips, and bats have a lower risk of being seen by predators, and the plants evolved temporal scent production and ambient heat to attract nocturnal pollinators. 
As with predators hunting the same prey, some plants, such as apples, can be pollinated both during the day and at night. Predation Nocturnality is a form of crypsis, an adaptation to avoid or enhance predation. Although lions are cathemeral, and may be active at any time of day or night, they prefer to hunt at night because many of their prey species (zebra, antelope, impala, wildebeest, etc.) have poor night vision. Many species of small rodents, such as the large Japanese field mouse, are active at night because most of the dozen or so birds of prey that hunt them are diurnal. There are many diurnal species that exhibit some nocturnal behaviors. For example, many seabirds and sea turtles only gather at breeding sites or colonies at night to reduce the risk of predation to themselves and/or their offspring. Nocturnal species take advantage of the night time to prey on species that are used to avoiding diurnal predators. Some nocturnal fish species will use the moonlight to prey on zooplankton species that come to the surface at night. Some species have developed unique adaptations that allow them to hunt in the dark. Bats are famous for using echolocation to hunt down their prey, using sonar sounds to capture them in the dark. Water conservation Another reason for nocturnality is avoiding the heat of the day. This is especially true in arid biomes like deserts, where nocturnal behavior prevents creatures from losing precious water during the hot, dry daytime. This is an adaptation that enhances osmoregulation. One of the reasons that (cathemeral) lions prefer to hunt at night is to conserve water. Hamilton's frog, found on Stephens and Maud islands, stays hidden for the majority of the day when temperatures are warmer and is mainly active at night. It will only come out during the day if conditions are humid and cool. Many plant species native to arid biomes have adapted so that their flowers only open at night when the sun's intense heat cannot wither and destroy their moist, delicate blossoms. These flowers are pollinated by bats, another creature of the night. Climate change and rising global temperatures have led an increasing number of diurnal species to push their activity patterns closer towards crepuscular or fully nocturnal behavior. This adaptive measure allows species to avoid the heat of the day without having to leave that particular habitat. Human disturbances The exponential increase in human expansion and technological advances in the last few centuries has had a major effect on nocturnal animals, as well as diurnal species. The causes can be traced to two distinct, sometimes overlapping areas: light pollution and spatial disturbance. Light pollution Light pollution is a major issue for nocturnal species, and the impact continues to increase as electricity reaches parts of the world that previously had no access. Species in the tropics are generally more affected by this due to the change in their relatively constant light patterns, but temperate species relying on day-night triggers for behavioral patterns are affected as well. Many diurnal species see the benefit of a "longer day", allowing for a longer hunting period, which is detrimental to their nocturnal prey trying to avoid them. Orientation Light pollution can disorient species that are used to darkness, as their dark-adapted eyes are not suited to the artificial lighting. 
Insects are the most obvious example; they are attracted by the lighting and are usually killed by either the heat or the electrical current. Some species of frogs are blinded by the quick changes in light, while nocturnal migratory birds may be disoriented, causing them to lose direction, tire out, or be captured by predators. Sea turtles are particularly affected by this, adding to a number of threats to the different endangered species. Adults are likely to stay away from artificially lit beaches that they might prefer to lay eggs on, as there is less cover against predators. Additionally, baby sea turtles that hatch from eggs on artificially lit beaches often get lost, heading towards the light sources as opposed to the ocean. Rhythmic behaviors Rhythmic behaviors are affected by light pollution in both their seasonal and daily patterns. Migrating birds or mammals, for example, might have issues with the timing of their movement. On a day-to-day basis, species can see significant changes in their internal temperatures, their general movement, feeding and body mass. These small-scale changes can eventually lead to a population decline, as well as hurting local trophic levels and interconnecting species. Some typically diurnal species have even become crepuscular or nocturnal as a result of light pollution and general human disturbance. Reproduction There have been documented effects of light pollution on reproductive cycles and factors in different species. It can affect mate choice, migration to breeding grounds, and nest site selection. In male green frogs, artificial light causes a decrease in mating calls; the frogs also keep moving around instead of waiting for a potential mate to arrive. This hurts the overall fitness of the species, which is concerning considering the overall decrease in amphibian populations. Predation Some nocturnal predator-prey relationships are interrupted by artificial lighting. Bats that are fast-moving are often at an advantage with insects being drawn to light; they are fast enough to escape any predators also attracted to the light, leaving slow-moving bats at a disadvantage. Another example is harbor seals eating juvenile salmon that moved down a river lit by nearby artificial lighting. Once the lights were turned off, predation levels decreased. Many diurnal prey species forced into being nocturnal are susceptible to nocturnal predators, and those species with poor nocturnal eyesight often bear the brunt of the cost. Spatial disturbance The increasing amount of habitat destruction worldwide as a result of human expansion has given both advantages and disadvantages to different nocturnal animals. As a result of peak human activity in the daytime, more species are likely to be active at night in order to avoid the new disturbance in their habitat. Carnivorous predators, however, are less deterred by the disturbance, feeding on human waste and keeping a relatively similar spatial habitat to the one they had before. In comparison, herbivorous prey tend to stay in areas where human disturbance is low, limiting both resources and their spatial habitat. This leads to an imbalance in favor of predators, which increase in population and come out more often at night. In captivity Zoos In zoos, nocturnal animals are usually kept in special night-illumination enclosures to invert their normal sleep-wake cycle and to keep them active during the hours when visitors will be there to see them. Pets Hedgehogs and sugar gliders are just two of the many nocturnal species kept as (exotic) pets. 
Cats have adapted to domestication so that each individual, whether stray alley cat or pampered housecat, can change their activity level at will, becoming nocturnal or diurnal in response to their environment or the routine of their owners. Cats normally demonstrate crepuscular behavior, bordering nocturnal, being most active in hunting and exploration at dusk and dawn. See also Adaptation Antipredator adaptation Competitive exclusion principle Crepuscular Crypsis Diurnality List of nocturnal animals List of nocturnal birds Niche (ecology) Niche differentiation Night owl (person) Tapetum lucidum References Antipredator adaptations Behavioral ecology Biological interactions Chronobiology Circadian rhythm Ethology Predation Sleep
Nocturnality
[ "Biology" ]
2,177
[ "Nocturnal animals", "Behavior", "Animals", "Biological interactions", "Behavioral ecology", "Biological defense mechanisms", "Behavioural sciences", "Circadian rhythm", "Antipredator adaptations", "nan", "Chronobiology", "Ethology", "Sleep" ]
63,025
https://en.wikipedia.org/wiki/Variable%20star
A variable star is a star whose brightness as seen from Earth (its apparent magnitude) changes systematically with time. This variation may be caused by a change in emitted light or by something partly blocking the light, so variable stars are classified as either: Intrinsic variables, whose luminosity actually changes periodically; for example, because the star swells and shrinks. Extrinsic variables, whose apparent changes in brightness are due to changes in the amount of their light that can reach Earth; for example, because the star has an orbiting companion that sometimes eclipses it. Many, possibly most, stars exhibit at least some oscillation in luminosity: the energy output of the Sun, for example, varies by about 0.1% over an 11-year solar cycle. Discovery An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago may be the oldest preserved historical document of the discovery of a variable star, the eclipsing binary Algol. Aboriginal Australians are also known to have observed the variability of Betelgeuse and Antares, incorporating these brightness changes into narratives that are passed down through oral tradition. Of the modern astronomers, the first variable star was identified in 1638 when Johannes Holwarda noticed that Omicron Ceti (later named Mira) pulsated in a cycle taking 11 months; the star had previously been described as a nova by David Fabricius in 1596. This discovery, combined with supernovae observed in 1572 and 1604, proved that the starry sky was not eternally invariable as Aristotle and other ancient philosophers had taught. In this way, the discovery of variable stars contributed to the astronomical revolution of the sixteenth and early seventeenth centuries. The second variable star to be described was the eclipsing variable Algol, by Geminiano Montanari in 1669; John Goodricke gave the correct explanation of its variability in 1784. Chi Cygni was identified in 1686 by G. Kirch, then R Hydrae in 1704 by G. D. Maraldi. By 1786, ten variable stars were known. John Goodricke himself discovered Delta Cephei and Beta Lyrae. Since 1850, the number of known variable stars has increased rapidly, especially after 1890 when it became possible to identify variable stars by means of photography. In 1930, astrophysicist Cecilia Payne published the book The Stars of High Luminosity, in which she made numerous observations of variable stars, paying particular attention to Cepheid variables. Her analyses and observations of variable stars, carried out with her husband, Sergei Gaposchkin, laid the basis for all subsequent work on the subject. The latest edition of the General Catalogue of Variable Stars (2008) lists more than 46,000 variable stars in the Milky Way, as well as 10,000 in other galaxies, and over 10,000 'suspected' variables. Detecting variability The most common kinds of variability involve changes in brightness, but other types of variability also occur, in particular changes in the spectrum. By combining light curve data with observed spectral changes, astronomers are often able to explain why a particular star is variable. Variable star observations Variable stars are generally analysed using photometry, spectrophotometry and spectroscopy. Measurements of their changes in brightness can be plotted to produce light curves. For regular variables, the period of variation and its amplitude can be very well established; for many variable stars, though, these quantities may vary slowly over time, or even from one period to the next. 
Peak brightnesses in the light curve are known as maxima, while troughs are known as minima. Amateur astronomers can do useful scientific study of variable stars by visually comparing the star with other stars within the same telescopic field of view of which the magnitudes are known and constant. By estimating the variable's magnitude and noting the time of observation a visual lightcurve can be constructed. The American Association of Variable Star Observers collects such observations from participants around the world and shares the data with the scientific community. From the light curve the following data are derived: are the brightness variations periodical, semiperiodical, irregular, or unique? what is the period of the brightness fluctuations? what is the shape of the light curve (symmetrical or not, angular or smoothly varying, does each cycle have only one or more than one minima, etcetera)? From the spectrum the following data are derived: what kind of star is it: what is its temperature, its luminosity class (dwarf star, giant star, supergiant, etc.)? is it a single star, or a binary? (the combined spectrum of a binary star may show elements from the spectra of each of the member stars) does the spectrum change with time? (for example, the star may turn hotter and cooler periodically) changes in brightness may depend strongly on the part of the spectrum that is observed (for example, large variations in visible light but hardly any changes in the infrared) if the wavelengths of spectral lines are shifted this points to movements (for example, a periodical swelling and shrinking of the star, or its rotation, or an expanding gas shell) (Doppler effect) strong magnetic fields on the star betray themselves in the spectrum abnormal emission or absorption lines may be indication of a hot stellar atmosphere, or gas clouds surrounding the star. In very few cases it is possible to make pictures of a stellar disk. These may show darker spots on its surface. Interpretation of observations Combining light curves with spectral data often gives a clue as to the changes that occur in a variable star. For example, evidence for a pulsating star is found in its shifting spectrum because its surface periodically moves toward and away from us, with the same frequency as its changing brightness. About two-thirds of all variable stars appear to be pulsating. In the 1930s astronomer Arthur Stanley Eddington showed that the mathematical equations that describe the interior of a star may lead to instabilities that cause a star to pulsate. The most common type of instability is related to oscillations in the degree of ionization in outer, convective layers of the star. When the star is in the swelling phase, its outer layers expand, causing them to cool. Because of the decreasing temperature the degree of ionization also decreases. This makes the gas more transparent, and thus makes it easier for the star to radiate its energy. This in turn makes the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, and radiation temporarily becomes captured in the gas. This heats the gas further, leading it to expand once again. Thus a cycle of expansion and compression (swelling and shrinking) is maintained. The pulsation of cepheids is known to be driven by oscillations in the ionization of helium (from He++ to He+ and back to He++). 
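As a minimal illustration of how a period can be estimated from such photometric measurements, the sketch below applies a Lomb–Scargle periodogram, a standard tool for unevenly sampled light curves. It is only a toy example: the synthetic 0.5-day sinusoidal signal, the noise level, and the trial period grid are all assumptions chosen for illustration, not a description of any particular survey's pipeline.

import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled light curve (assumed values): a 0.5-day sinusoid
# with 0.3 mag amplitude plus small Gaussian measurement noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 200))      # observation times in days
true_period = 0.5
mag = 12.0 + 0.3 * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 0.02, t.size)

# Trial periods from 0.1 to 10 days, expressed as angular frequencies,
# which is what scipy's lombscargle expects.
periods = np.linspace(0.1, 10.0, 5000)
ang_freqs = 2.0 * np.pi / periods

# lombscargle assumes zero-mean data, so subtract the mean magnitude first.
power = lombscargle(t, mag - mag.mean(), ang_freqs)
best_period = periods[np.argmax(power)]
print(f"Best-fit period: {best_period:.3f} days")  # close to the injected 0.5 days

In practice the periodogram would be computed with measurement uncertainties, checked against aliases introduced by the sampling pattern, and confirmed by phase-folding the data at the candidate period.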
Nomenclature In a given constellation, the first variable stars discovered were designated with letters R through Z, e.g. R Andromedae. This system of nomenclature was developed by Friedrich W. Argelander, who gave the first previously unnamed variable in a constellation the letter R, the first letter not used by Bayer. Letters RR through RZ, SS through SZ, up to ZZ are used for the next discoveries, e.g. RR Lyrae. Later discoveries used letters AA through AZ, BB through BZ, and up to QQ through QZ (with J omitted). Once those 334 combinations are exhausted, variables are numbered in order of discovery, starting with the prefixed V335 onwards. Classification Variable stars may be either intrinsic or extrinsic. Intrinsic variable stars: stars where the variability is being caused by changes in the physical properties of the stars themselves. This category can be divided into three subgroups. Pulsating variables, stars whose radius alternately expands and contracts as part of their natural evolutionary ageing processes. Eruptive variables, stars who experience eruptions on their surfaces like flares or mass ejections. Cataclysmic or explosive variables, stars that undergo a cataclysmic change in their properties like novae and supernovae. Extrinsic variable stars: stars where the variability is caused by external properties like rotation or eclipses. There are two main subgroups. Eclipsing binaries, double stars or planetary systems where, as seen from Earth's vantage point the stars occasionally eclipse one another as they orbit, or the planet eclipses its star. Rotating variables, stars whose variability is caused by phenomena related to their rotation. Examples are stars with extreme "sunspots" which affect the apparent brightness or stars that have fast rotation speeds causing them to become ellipsoidal in shape. These subgroups themselves are further divided into specific types of variable stars that are usually named after their prototype. For example, dwarf novae are designated U Geminorum stars after the first recognized star in the class, U Geminorum. Intrinsic variable stars Examples of types within these divisions are given below. Pulsating variable stars Pulsating stars swell and shrink, affecting their brightness and spectrum. Pulsations are generally split into: radial, where the entire star expands and shrinks as a whole; and non-radial, where one part of the star expands while another part shrinks. Depending on the type of pulsation and its location within the star, there is a natural or fundamental frequency which determines the period of the star. Stars may also pulsate in a harmonic or overtone which is a higher frequency, corresponding to a shorter period. Pulsating variable stars sometimes have a single well-defined period, but often they pulsate simultaneously with multiple frequencies and complex analysis is required to determine the separate interfering periods. In some cases, the pulsations do not have a defined frequency, causing a random variation, referred to as stochastic. The study of stellar interiors using their pulsations is known as asteroseismology. The expansion phase of a pulsation is caused by the blocking of the internal energy flow by material with a high opacity, but this must occur at a particular depth of the star to create visible pulsations. If the expansion occurs below a convective zone then no variation will be visible at the surface. If the expansion occurs too close to the surface the restoring force will be too weak to create a pulsation. 
The restoring force to create the contraction phase of a pulsation can be pressure if the pulsation occurs in a non-degenerate layer deep inside a star, and this is called an acoustic or pressure mode of pulsation, abbreviated to p-mode. In other cases, the restoring force is gravity and this is called a g-mode. Pulsating variable stars typically pulsate in only one of these modes. Cepheids and cepheid-like variables This group consists of several kinds of pulsating stars, all found on the instability strip, that swell and shrink very regularly owing to the star's own resonance, generally at the fundamental frequency. Generally the Eddington valve mechanism for pulsating variables is believed to account for cepheid-like pulsations. Each of the subgroups on the instability strip has a fixed relationship between period and absolute magnitude, as well as a relation between period and mean density of the star. The period-luminosity relationship was first established for Delta Cepheids by Henrietta Leavitt, and makes these high-luminosity Cepheids very useful for determining distances to galaxies within the Local Group and beyond. Edwin Hubble used this method to prove that the so-called spiral nebulae are in fact distant galaxies (a worked numerical sketch of this distance method is given below, after the Delta Scuti subsection). The Cepheids are named only for Delta Cephei, while a completely separate class of variables is named after Beta Cephei. Classical Cepheid variables Classical Cepheids (or Delta Cephei variables) are population I (young, massive, and luminous) yellow supergiants which undergo pulsations with very regular periods on the order of days to months. On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of Cepheid variables. However, the namesake for classical Cepheids is the star Delta Cephei, discovered to be variable by John Goodricke a few months later. Type II Cepheids Type II Cepheids (historically termed W Virginis stars) have extremely regular light pulsations and a luminosity relation much like the δ Cephei variables, so initially they were confused with the latter category. Type II Cepheids belong to the older Population II, unlike the type I Cepheids. Type II Cepheids have somewhat lower metallicity, much lower mass, somewhat lower luminosity, and a slightly offset period versus luminosity relationship, so it is always important to know which type of star is being observed. RR Lyrae variables These stars are somewhat similar to Cepheids, but are not as luminous and have shorter periods. They are older than type I Cepheids, belonging to Population II, but of lower mass than type II Cepheids. Due to their common occurrence in globular clusters, they are occasionally referred to as cluster Cepheids. They also have a well-established period-luminosity relationship, and so are also useful as distance indicators. These A-type stars vary by about 0.2–2 magnitudes (20% to over 500% change in luminosity) over a period of several hours to a day or more. Delta Scuti variables Delta Scuti (δ Sct) variables are similar to Cepheids but much fainter and with much shorter periods. They were once known as Dwarf Cepheids. They often show many superimposed periods, which combine to form an extremely complex light curve. The typical δ Scuti star has an amplitude of 0.003–0.9 magnitudes (0.3% to about 130% change in luminosity) and a period of 0.01–0.2 days. Their spectral type is usually between A0 and F5. 
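To make the period–luminosity distance method described above concrete, here is a rough worked sketch; the calibration coefficients are illustrative round numbers only, and interstellar extinction is ignored. Assume a period–luminosity relation of approximately the form
M_V \approx -2.8 \log_{10}(P/\mathrm{days}) - 1.4 .
A classical Cepheid with a period P = 10 days then has an absolute magnitude of roughly M_V \approx -4.2. If its mean apparent magnitude is measured as m_V = 20.0, the distance modulus is m_V - M_V \approx 24.2, and the distance follows from
m - M = 5 \log_{10}(d/\mathrm{pc}) - 5 \quad\Rightarrow\quad d = 10^{(24.2 + 5)/5}\,\mathrm{pc} \approx 7 \times 10^{5}\,\mathrm{pc} \approx 0.7\,\mathrm{Mpc},
comparable to the distances of Local Group galaxies. Modern applications use carefully calibrated, multi-band versions of the relation and correct for reddening.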
SX Phoenicis variables These stars of spectral type A2 to F5, similar to δ Scuti variables, are found mainly in globular clusters. They exhibit fluctuations in their brightness on the order of 0.7 magnitude (about 100% change in luminosity) or so every 1 to 2 hours. Rapidly oscillating Ap variables These stars, of spectral type A or occasionally F0, are a sub-class of δ Scuti variables found on the main sequence. They have extremely rapid variations with periods of a few minutes and amplitudes of a few thousandths of a magnitude. Long period variables The long period variables are cool evolved stars that pulsate with periods in the range of weeks to several years. Mira variables Mira variables are asymptotic giant branch (AGB) red giants. Over periods of many months they fade and brighten by between 2.5 and 11 magnitudes, a 6-fold to 30,000-fold change in luminosity. Mira itself, also known as Omicron Ceti (ο Cet), varies in brightness from almost 2nd magnitude to as faint as 10th magnitude with a period of roughly 332 days. The very large visual amplitudes are mainly due to the shifting of energy output between visual and infra-red as the temperature of the star changes. In a few cases, Mira variables show dramatic period changes over a period of decades, thought to be related to the thermal pulsing cycle of the most advanced AGB stars. Semiregular variables These are red giants or supergiants. Semiregular variables may show a definite period on occasion, but more often show less well-defined variations that can sometimes be resolved into multiple periods. A well-known example of a semiregular variable is Betelgeuse, which varies from about magnitudes +0.2 to +1.2 (a factor 2.5 change in luminosity). At least some of the semi-regular variables are very closely related to Mira variables, possibly the only difference being pulsating in a different harmonic. Slow irregular variables These are red giants or supergiants with little or no detectable periodicity. Some are poorly studied semiregular variables, often with multiple periods, but others may simply be chaotic. Long secondary period variables Many variable red giants and supergiants show variations over several hundred to several thousand days. The brightness may change by several magnitudes although it is often much smaller, with the more rapid primary variations superimposed. The reasons for this type of variation are not clearly understood, being variously ascribed to pulsations, binarity, and stellar rotation. Beta Cephei variables Beta Cephei (β Cep) variables (sometimes called Beta Canis Majoris variables, especially in Europe) undergo short-period pulsations on the order of 0.1–0.6 days with an amplitude of 0.01–0.3 magnitudes (1% to 30% change in luminosity). They are at their brightest during minimum contraction. Many stars of this kind exhibit multiple pulsation periods. Slowly pulsating B-type stars Slowly pulsating B (SPB) stars are hot main-sequence stars slightly less luminous than the Beta Cephei stars, with longer periods and larger amplitudes. Very rapidly pulsating hot (subdwarf B) stars The prototype of this rare class is V361 Hydrae, a 15th magnitude subdwarf B star. They pulsate with periods of a few minutes and may simultaneously pulsate with multiple periods. They have amplitudes of a few hundredths of a magnitude and are given the GCVS acronym RPHS. They are p-mode pulsators. PV Telescopii variables Stars in this class are type Bp supergiants with a period of 0.1–1 day and an amplitude of 0.1 magnitude on average. 
Their spectra are peculiar by having weak hydrogen while on the other hand carbon and helium lines are extra strong, a type of extreme helium star. RV Tauri variables These are yellow supergiant stars (actually low mass post-AGB stars at the most luminous stage of their lives) which have alternating deep and shallow minima. This double-peaked variation typically has periods of 30–100 days and amplitudes of 3–4 magnitudes. Superimposed on this variation, there may be long-term variations over periods of several years. Their spectra are of type F or G at maximum light and type K or M at minimum brightness. They lie near the instability strip, cooler than type I Cepheids more luminous than type II Cepheids. Their pulsations are caused by the same basic mechanisms related to helium opacity, but they are at a very different stage of their lives. Alpha Cygni variables Alpha Cygni (α Cyg) variables are nonradially pulsating supergiants of spectral classes Bep to AepIa. Their periods range from several days to several weeks, and their amplitudes of variation are typically of the order of 0.1 magnitudes. The light changes, which often seem irregular, are caused by the superposition of many oscillations with close periods. Deneb, in the constellation of Cygnus is the prototype of this class. Gamma Doradus variables Gamma Doradus (γ Dor) variables are non-radially pulsating main-sequence stars of spectral classes F to late A. Their periods are around one day and their amplitudes typically of the order of 0.1 magnitudes. Pulsating white dwarfs These non-radially pulsating stars have short periods of hundreds to thousands of seconds with tiny fluctuations of 0.001 to 0.2 magnitudes. Known types of pulsating white dwarf (or pre-white dwarf) include the DAV, or ZZ Ceti, stars, with hydrogen-dominated atmospheres and the spectral type DA; DBV, or V777 Her, stars, with helium-dominated atmospheres and the spectral type DB; and GW Vir stars, with atmospheres dominated by helium, carbon, and oxygen. GW Vir stars may be subdivided into DOV and PNNV stars. Solar-like oscillations The Sun oscillates with very low amplitude in a large number of modes having periods around 5 minutes. The study of these oscillations is known as helioseismology. Oscillations in the Sun are driven stochastically by convection in its outer layers. The term solar-like oscillations is used to describe oscillations in other stars that are excited in the same way and the study of these oscillations is one of the main areas of active research in the field of asteroseismology. BLAP variables A Blue Large-Amplitude Pulsator (BLAP) is a pulsating star characterized by changes of 0.2 to 0.4 magnitudes with typical periods of 20 to 40 minutes. Fast yellow pulsating supergiants A fast yellow pulsating supergiant (FYPS) is a luminous yellow supergiant with pulsations shorter than a day. They are thought to have evolved beyond a red supergiant phase, but the mechanism for the pulsations is unknown. The class was named in 2020 through analysis of TESS observations. Eruptive variable stars Eruptive variable stars show irregular or semi-regular brightness variations caused by material being lost from the star, or in some cases being accreted to it. Despite the name, these are not explosive events. Protostars Protostars are young objects that have not yet completed the process of contraction from a gas nebula to a veritable star. Most protostars exhibit irregular brightness variations. 
Herbig Ae/Be stars Variability of more massive (2–8 solar mass) Herbig Ae/Be stars is thought to be due to gas-dust clumps, orbiting in the circumstellar disks. Orion variables Orion variables are young, hot pre–main-sequence stars usually embedded in nebulosity. They have irregular periods with amplitudes of several magnitudes. A well-known subtype of Orion variables are the T Tauri variables. Variability of T Tauri stars is due to spots on the stellar surface and gas-dust clumps, orbiting in the circumstellar disks. FU Orionis variables These stars reside in reflection nebulae and show gradual increases in their luminosity in the order of 6 magnitudes followed by a lengthy phase of constant brightness. They then dim by 2 magnitudes (six times dimmer) or so over a period of many years. V1057 Cygni for example dimmed by 2.5 magnitude (ten times dimmer) during an eleven-year period. FU Orionis variables are of spectral type A through G and are possibly an evolutionary phase in the life of T Tauri stars. Giants and supergiants Large stars lose their matter relatively easily. For this reason variability due to eruptions and mass loss is fairly common among giants and supergiants. Luminous blue variables Also known as the S Doradus variables, the most luminous stars known belong to this class. Examples include the hypergiants η Carinae and P Cygni. They have permanent high mass loss, but at intervals of years internal pulsations cause the star to exceed its Eddington limit and the mass loss increases hugely. Visual brightness increases although the overall luminosity is largely unchanged. Giant eruptions observed in a few LBVs do increase the luminosity, so much so that they have been tagged supernova impostors, and may be a different type of event. Yellow hypergiants These massive evolved stars are unstable due to their high luminosity and position above the instability strip, and they exhibit slow but sometimes large photometric and spectroscopic changes due to high mass loss and occasional larger eruptions, combined with secular variation on an observable timescale. The best known example is Rho Cassiopeiae. R Coronae Borealis variables While classed as eruptive variables, these stars do not undergo periodic increases in brightness. Instead they spend most of their time at maximum brightness, but at irregular intervals they suddenly fade by 1–9 magnitudes (2.5 to 4000 times dimmer) before recovering to their initial brightness over months to years. Most are classified as yellow supergiants by luminosity, although they are actually post-AGB stars, but there are both red and blue giant R CrB stars. R Coronae Borealis (R CrB) is the prototype star. DY Persei variables are a subclass of R CrB variables that have a periodic variability in addition to their eruptions. Wolf–Rayet variables Classic population I Wolf–Rayet stars are massive hot stars that sometimes show variability, probably due to several different causes including binary interactions and rotating gas clumps around the star. They exhibit broad emission line spectra with helium, nitrogen, carbon and oxygen lines. Variations in some stars appear to be stochastic while others show multiple periods. Gamma Cassiopeiae variables Gamma Cassiopeiae (γ Cas) variables are non-supergiant fast-rotating B class emission line-type stars that fluctuate irregularly by up to 1.5 magnitudes (4 fold change in luminosity) due to the ejection of matter at their equatorial regions caused by the rapid rotational velocity. 
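For context on the Eddington limit invoked above for luminous blue variables, the classical Eddington luminosity of a star of mass M, assuming pure electron-scattering opacity in ionized hydrogen, is
L_\mathrm{Edd} = \dfrac{4\pi G M m_p c}{\sigma_T} \approx 3.2 \times 10^{4} \left(\dfrac{M}{M_\odot}\right) L_\odot ,
so a 100 M_\odot star has an Eddington limit of roughly 3 \times 10^{6} L_\odot. This is only the textbook estimate; the actual instability of luminous blue variables involves additional opacity and atmospheric effects not captured by this simple formula.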
Flare stars In main-sequence stars major eruptive variability is exceptional. It is common only among the flare stars, also known as the UV Ceti variables, very faint main-sequence stars which undergo regular flares. They increase in brightness by up to two magnitudes (six times brighter) in just a few seconds, and then fade back to normal brightness in half an hour or less. Several nearby red dwarfs are flare stars, including Proxima Centauri and Wolf 359. RS Canum Venaticorum variables These are close binary systems with highly active chromospheres, including huge sunspots and flares, believed to be enhanced by the close companion. Variability scales ranges from days, close to the orbital period and sometimes also with eclipses, to years as sunspot activity varies. Cataclysmic or explosive variable stars Supernovae Supernovae are the most dramatic type of cataclysmic variable, being some of the most energetic events in the universe. A supernova can briefly emit as much energy as an entire galaxy, brightening by more than 20 magnitudes (over one hundred million times brighter). The supernova explosion is caused by a white dwarf or a star core reaching a certain mass/density limit, the Chandrasekhar limit, causing the object to collapse in a fraction of a second. This collapse "bounces" and causes the star to explode and emit this enormous energy quantity. The outer layers of these stars are blown away at speeds of many thousands of kilometers per second. The expelled matter may form nebulae called supernova remnants. A well-known example of such a nebula is the Crab Nebula, left over from a supernova that was observed in China and elsewhere in 1054. The progenitor object may either disintegrate completely in the explosion, or, in the case of a massive star, the core can become a neutron star (generally a pulsar) or a black hole. Supernovae can result from the death of an extremely massive star, many times heavier than the Sun. At the end of the life of this massive star, a non-fusible iron core is formed from fusion ashes. This iron core is pushed towards the Chandrasekhar limit till it surpasses it and therefore collapses. One of the most studied supernovae of this type is SN 1987A in the Large Magellanic Cloud. A supernova may also result from mass transfer onto a white dwarf from a star companion in a double star system. The Chandrasekhar limit is surpassed from the infalling matter. The absolute luminosity of this latter type is related to properties of its light curve, so that these supernovae can be used to establish the distance to other galaxies. Luminous red nova Luminous red novae are stellar explosions caused by the merger of two stars. They are not related to classical novae. They have a characteristic red appearance and very slow decline following the initial outburst. Novae Novae are also the result of dramatic explosions, but unlike supernovae do not result in the destruction of the progenitor star. Also unlike supernovae, novae ignite from the sudden onset of thermonuclear fusion, which under certain high pressure conditions (degenerate matter) accelerates explosively. They form in close binary systems, one component being a white dwarf accreting matter from the other ordinary star component, and may recur over periods of decades to centuries or millennia. Novae are categorised as fast, slow or very slow, depending on the behaviour of their light curve. 
Several naked eye novae have been recorded, Nova Cygni 1975 being the brightest in the recent history, reaching 2nd magnitude. Dwarf novae Dwarf novae are double stars involving a white dwarf in which matter transfer between the component gives rise to regular outbursts. There are three types of dwarf nova: U Geminorum stars, which have outbursts lasting roughly 5–20 days followed by quiet periods of typically a few hundred days. During an outburst they brighten typically by 2–6 magnitudes. These stars are also known as SS Cygni variables after the variable in Cygnus which produces among the brightest and most frequent displays of this variable type. Z Camelopardalis stars, in which occasional plateaux of brightness called standstills are seen, part way between maximum and minimum brightness. SU Ursae Majoris stars, which undergo both frequent small outbursts, and rarer but larger superoutbursts. These binary systems usually have orbital periods of under 2.5 hours. DQ Herculis variables DQ Herculis systems are interacting binaries in which a low-mass star transfers mass to a highly magnetic white dwarf. The white dwarf spin period is significantly shorter than the binary orbital period and can sometimes be detected as a photometric periodicity. An accretion disk usually forms around the white dwarf, but its innermost regions are magnetically truncated by the white dwarf. Once captured by the white dwarf's magnetic field, the material from the inner disk travels along the magnetic field lines until it accretes. In extreme cases, the white dwarf's magnetism prevents the formation of an accretion disk. AM Herculis variables In these cataclysmic variables, the white dwarf's magnetic field is so strong that it synchronizes the white dwarf's spin period with the binary orbital period. Instead of forming an accretion disk, the accretion flow is channeled along the white dwarf's magnetic field lines until it impacts the white dwarf near a magnetic pole. Cyclotron radiation beamed from the accretion region can cause orbital variations of several magnitudes. Z Andromedae variables These symbiotic binary systems are composed of a red giant and a hot blue star enveloped in a cloud of gas and dust. They undergo nova-like outbursts with amplitudes of up to 4 magnitudes. The prototype for this class is Z Andromedae. AM CVn variables AM CVn variables are symbiotic binaries where a white dwarf is accreting helium-rich material from either another white dwarf, a helium star, or an evolved main-sequence star. They undergo complex variations, or at times no variations, with ultrashort periods. Extrinsic variable stars There are two main groups of extrinsic variables: rotating stars and eclipsing stars. Rotating variable stars Stars with sizeable sunspots may show significant variations in brightness as they rotate, and brighter areas of the surface are brought into view. Bright spots also occur at the magnetic poles of magnetic stars. Stars with ellipsoidal shapes may also show changes in brightness as they present varying areas of their surfaces to the observer. Non-spherical stars Ellipsoidal variables These are very close binaries, the components of which are non-spherical due to their tidal interaction. As the stars rotate the area of their surface presented towards the observer changes and this in turn affects their brightness as seen from Earth. Stellar spots The surface of the star is not uniformly bright, but has darker and brighter areas (like the sun's solar spots). 
The star's chromosphere too may vary in brightness. As the star rotates we observe brightness variations of a few tenths of magnitudes. FK Comae Berenices variables These stars rotate extremely rapidly (~100 km/s at the equator); hence they are ellipsoidal in shape. They are (apparently) single giant stars with spectral types G and K and show strong chromospheric emission lines. Examples are FK Com, V1794 Cygni and UZ Librae. A possible explanation for the rapid rotation of FK Comae stars is that they are the result of the merger of a (contact) binary. BY Draconis variable stars BY Draconis stars are of spectral class K or M and vary by less than 0.5 magnitudes (70% change in luminosity). Magnetic fields Alpha2 Canum Venaticorum variables Alpha2 Canum Venaticorum (α2 CVn) variables are main-sequence stars of spectral class B8–A7 that show fluctuations of 0.01 to 0.1 magnitudes (1% to 10%) due to changes in their magnetic fields. SX Arietis variables Stars in this class exhibit brightness fluctuations of some 0.1 magnitude caused by changes in their magnetic fields due to high rotation speeds. Optically variable pulsars Few pulsars have been detected in visible light. These neutron stars change in brightness as they rotate. Because of the rapid rotation, brightness variations are extremely fast, from milliseconds to a few seconds. The first and the best known example is the Crab Pulsar. Eclipsing binaries Extrinsic variables have variations in their brightness, as seen by terrestrial observers, due to some external source. One of the most common reasons for this is the presence of a binary companion star, so that the two together form a binary star. When seen from certain angles, one star may eclipse the other, causing a reduction in brightness. One of the most famous eclipsing binaries is Algol, or Beta Persei (β Per). Algol variables Algol variables undergo eclipses with one or two minima separated by periods of nearly constant light. The prototype of this class is Algol in the constellation Perseus. Double Periodic variables Double periodic variables exhibit cyclical mass exchange which causes the orbital period to vary predictably over a very long period. The best known example is V393 Scorpii. Beta Lyrae variables Beta Lyrae (β Lyr) variables are extremely close binaries, named after the star Sheliak. The light curves of this class of eclipsing variables are constantly changing, making it almost impossible to determine the exact onset and end of each eclipse. W Serpentis variables W Serpentis is the prototype of a class of semi-detached binaries including a giant or supergiant transferring material to a massive more compact star. They are characterised, and distinguished from the similar β Lyr systems, by strong UV emission from accretions hotspots on a disc of material. W Ursae Majoris variables The stars in this group show periods of less than a day. The stars are so closely situated to each other that their surfaces are almost in contact with each other. Planetary transits Stars with planets may also show brightness variations if their planets pass between Earth and the star. These variations are much smaller than those seen with stellar companions and are only detectable with extremely accurate observations. Examples include HD 209458 and GSC 02652-01324, and all of the planets and planet candidates detected by the Kepler Mission. 
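To give a sense of scale for the planetary transits just described, the fractional dip in brightness is, to first order (ignoring limb darkening), the ratio of the disk areas of the planet and the star:
\Delta F / F \approx (R_p / R_\star)^2 .
For a Jupiter-sized planet crossing a Sun-like star this is about (0.10)^2 \approx 1\%, while for an Earth-sized planet it is about (0.009)^2 \approx 0.008\%, which is why detecting the smallest transiting planets required the photometric precision of space missions such as Kepler. These are order-of-magnitude estimates, not measured values for any particular system.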
See also Guest star Irregular variable List of variable stars Stellar pulsation References Bibliography External links The American Association of Variable Star Observers GCVS Variability Types Society for Popular Astronomy – Variable Star Section Star types Concepts in astronomy
Variable star
[ "Physics", "Astronomy" ]
7,574
[ "Concepts in astronomy", "Star types", "Astronomical classification systems" ]
63,050
https://en.wikipedia.org/wiki/Eosinophilia%E2%80%93myalgia%20syndrome
Eosinophilia–myalgia syndrome is a rare, sometimes fatal neurological condition linked to the ingestion of the dietary supplement L-tryptophan. The risk of developing EMS increases with larger doses of tryptophan and increasing age. Some research suggests that certain genetic polymorphisms may be related to the development of EMS. The presence of eosinophilia is a core feature of EMS, along with unusually severe myalgia (muscle pain). Signs and symptoms The initial, acute phase of EMS, which lasts for three to six months, presents with trouble breathing and muscle problems, including soreness and spasms that may be intense. Muscle weakness is not a feature of this phase, but some people experience muscle stiffness. Additional features can include cough, fever, fatigue, joint pain, edema, and numbness or tingling, usually in the limbs, hands and feet. The chronic phase follows the acute phase. Eosinophilic fasciitis may develop, primarily in the limbs. CNS signs may appear, including numbness, increased sensation, muscle weakness, and sometimes cardiac or digestive dysfunction. Fatigue is present to some degree, while the muscle pain (which may be extremely intense) and dyspnea continue in this phase. Causes Epidemiological studies suggested that EMS was linked to specific batches of L-tryptophan supplied by a single large Japanese manufacturer, Showa Denko. It eventually became clear that recent batches of Showa Denko's L-tryptophan were contaminated by trace impurities, which were subsequently thought to be responsible for the 1989 EMS outbreak. The L-tryptophan was produced by a bacterium grown in open vats in a Showa Denko fertilizer factory. While a total of 63 trace contaminants were eventually identified, only six of them could be associated with EMS. The compound EBT (1,1'-ethylidene-bis-L-tryptophan, also known as "Peak E") was the only contaminant identifiable by initial analysis, but further analysis revealed PAA (3-(phenylamino)-L-alanine, also known as "UV-5") and peak 200 (2[3-indolyl-methyl]-L-tryptophan). Two of the remaining uncharacterized peaks associated with EMS were later determined to be 3a-hydroxy-1,2,3,3a,8,8a-hexahydropyrrolo-[2,3-b]-indole-2-carboxylic acid (peak C) and 2-(2-hydroxy indoline)-tryptophan (peak FF). These were characterized using accurate mass LC–MS, LC–MS/MS and multistage mass spectrometry (MSn). The last of the six contaminants (peak AAA/"UV-28"), described as "the contaminant most significantly associated with EMS", has been characterized as two related chain-isomers: peak AAA1 ((S)-2-amino-3-(2-((S,E)-7-methylnon-1-en-1-yl)-1H-indol-3-yl)propanoic acid, a condensation product between L-tryptophan and 7-methylnonanoic acid) and peak AAA2 ((S)-2-amino-3-(2-((E)-dec-1-en-1-yl)-1H-indol-3-yl)propanoic acid, a condensate between L-tryptophan and decanoic acid). No consistent relationship has ever been firmly established between any specific trace impurity or impurities identified in these batches and the effects of EMS. While EBT in particular has been frequently implicated as the culprit, there is no statistically significant association between EBT levels and EMS. Of the 63 trace contaminants, only the two AAA compounds displayed a statistically significant association with cases of EMS (with a p-value of 0.0014). As most research has focused on attempts to associate individual contaminants with EMS, there is a comparative lack of detailed research on other possible causal or contributing factors. 
Tryptophan itself has been implicated as a potentially major contributory factor in EMS. While critics of this theory have argued that this hypothesis fails to explain the near-absent reports of EMS prior to and following the EMS outbreak, this fails to take into account the sudden rapid increase in tryptophan's usage immediately prior to the 1989 outbreak, and ignores the strong influence of the EMS outbreak's legacy and the extended FDA ban on later usage of tryptophan. Crucially, this also ignores the existence of a number of cases of EMS that developed both prior to and after the primary epidemic, including at least one case where the tryptophan was tested and found to lack the contaminants found in the contaminated lots of Showa Denko's tryptophan, as well as cases with other supplements inducing EMS, and even a case of EMS induced by excessive dietary L-tryptophan intake via overconsumption of cashew nuts. A major Canadian analysis located a number of patients that met the CDC criteria for EMS but had never been exposed to tryptophan, which "brings causal interpretations of earlier studies into question". Other studies have highlighted numerous major flaws in many of the epidemiological studies on the association of tryptophan with EMS, which cast serious doubt on the validity of their results. As the FDA concluded, "other brands of L-tryptophan, or L-tryptophan itself, regardless of the levels or presence of impurities, could not be eliminated as causal or contributing to the development of EMS". Even animal studies have suggested that tryptophan itself "when ingested by susceptible individuals either alone or in combination with some other component in the product, results in the pathological features in EMS". At the time of the outbreak, Showa Denko had recently made alterations to its manufacturing procedures that were thought to be linked to the possible origin of the contaminants detected in the affected lots of tryptophan. A key change was the reduction of the amount of activated charcoal used to purify each batch from >20 kg to 10 kg. A portion of the contaminated batches had also bypassed another filtration step that used reverse-osmosis to remove certain impurities. Additionally, the bacterial culture used to synthesize tryptophan was a strain of Bacillus amyloliquefaciens that had been genetically engineered to increase tryptophan production. Although the prior four generations of the genetically engineered strain had been used without incident, the fifth generation used for the contaminated batches was thought to be a possible source of the impurities that were detected. This has been used to argue that the genetic engineering itself was the primary cause of the contamination, a stance that was heavily criticized for overlooking the other known non-GMO causes of contamination, as well as for its use by anti-GMO activists as a way to threaten the development of biotechnology with false information. The reduction in the amount of activated carbon used and the introduction of the fifth generation Bacillus amyloliquefaciens strain were both associated with the development of EMS, but due to the high overlap of these changes, the precise independent contribution of each change could not be determined (although the bypass of the reverse-osmosis filtration for certain lots was determined to be not significantly associated with the contaminated lots of tryptophan). 
While Showa Denko claimed a purity of 99.6%, it was noted that "the quantities of the known EMS associated contaminants, EBT and PAA, were remarkably small, of the order of 0.01%, and could easily escape detection". Regulatory response The FDA loosened its restrictions on sales and marketing of tryptophan in February 2001, but continued to limit the importation of tryptophan not intended for an exempted use until 2005. Diagnosis Treatment Treatment is withdrawal of products containing L-tryptophan and the administration of glucocorticoids. Most patients recover fully, remain stable, or show slow recovery, but the disease is fatal in up to 5% of patients. History The first case of eosinophilia–myalgia syndrome was reported to the Centers for Disease Control and Prevention (CDC) in November 1989, although some cases had occurred as early as 2–3 years before this. In total, more than 1,500 cases of EMS were reported to the CDC, as well as at least 37 EMS-associated deaths. After preliminary investigation revealed that the outbreak was linked to intake of tryptophan, the U.S. Food and Drug Administration (FDA) recalled tryptophan supplements in 1989 and banned most public sales in 1990, with other countries following suit. This FDA restriction was loosened in 2001, and fully lifted in 2005. Since the initial ban on L-tryptophan, 5-hydroxytryptophan (5-HTP), a normal metabolite of the compound in mammals, has become a popular replacement dietary supplement. See also Toxic oil syndrome References Systemic connective tissue disorders Connective tissue diseases Syndromes Drug safety Adulteration
Eosinophilia–myalgia syndrome
[ "Chemistry" ]
1,987
[ "Adulteration", "Drug safety" ]
63,064
https://en.wikipedia.org/wiki/Bengal%20cat
The Bengal cat is a breed of hybrid cat created from crossing of an Asian leopard cat (Prionailurus bengalensis) with domestic cats, especially the spotted Egyptian Mau. It is then usually bred with a breed that demonstrates a friendlier personality, because after breeding a domesticated cat with a wildcat, its friendly personality may not manifest in the kitten. The breed's name derives from the leopard cat's taxonomic name. Bengals have varying appearances. Their coats range from spots, rosettes, arrowhead markings, to marbling. History Early history The earliest mention of an Asian leopard cat × domestic cross was in 1889, when Harrison Weir wrote of them in Our Cats and All About Them. Bengals as a breed Jean Mill of California is given credit for the modern Bengal breed. She made the first known deliberate cross of an Asian leopard cat with a domestic cat (a black California tomcat). Bengals as a breed did not really begin in earnest until much later. Cat registries In 1986, the breed was accepted as a "new breed" by The International Cat Association; Bengals gained TICA championship status in 1991. The Governing Council of the Cat Fancy (GCCF) accepted Bengal cats in 1997. Fédération Internationale Féline (FIFe) in 1999 accepted the breed into their registry. Also in 1999, Bengals were accepted into the Australian Cat Federation (ACF). The Cat Fanciers' Association accepted the Bengal in CFA's "Miscellaneous" in 2016, under the restrictions that "it must be F6 or later (6 generations removed from the Asian leopard cat or non-Bengal domestic cat ancestors)". Early generations Bengal cats from the first three generations of breeding (F1, G2, and G3) are considered "foundation" or "early-generation" Bengals. The early-generation males are frequently infertile. Therefore, female early-generation Bengals are bred to fertile domestic Bengal males of later generations. Nevertheless, as the term was used incorrectly for many years, many people and breeders still refer to the cats as F2, F3, and F4, even though the term is considered incorrect. Popularity The Bengal breed was more fully developed by the 1980s. "In 1992 The International Cat Association had 125 registered Bengal Breeders." By the 2000s, Bengals had become a very popular breed. In 2019, there were nearly 2,500 Bengal breeders registered in TICA worldwide. * The 2019 number only represents the breeders who use the word "Bengal" in their cattery name. Appearance Markings Colors Bengals come in a variety of coat colors. The International Cat Association (TICA) recognizes several Bengal colors: brown spotted, seal lynx point (snow), sepia, silver, and mink spotted tabby. Spotted rosetted The Bengal cat is the only domestic breed of cat that has rosette markings. Marble Domestic cats have four distinct and heritable coat patterns – ticked, mackerel, blotched, and spotted – these are collectively referred to as tabby markings. Christopher Kaelin, a Stanford University geneticist, has conducted research that has been used to identify the spotted gene and the marble gene in domestic Bengal cats. Kaelin studied the color and pattern variations of feral cats in Northern California, and was able to identify the gene responsible for the marble pattern in Bengal cats. Legal restrictions In Australia, G5 (fifth-generation) Bengals are not restricted, but their import is complex. Bengals were regulated in the United Kingdom. In 2007, however, the Department for Environment, Food and Rural Affairs removed the previous licensing requirements. 
In the United States, legal restrictions and even bans sometimes exist at the state and municipal level. In Hawaii, Bengal cats are prohibited by law (as are all wild cat species, and all other hybrids of domestic and wild animals). In Connecticut, it is also illegal to own any generation of Bengal cat. In Alaska, Bengal cats must be four generations removed from the Asian leopard cat. A permit and registered pedigree that indicates the previous four generations are required. In California, Asian leopard cats are not specifically listed as a restricted species under Title 14, section K, of the state code of regulations. In Delaware, a permit is required to own Bengal cats. Bengals of the F1–G4 generations are also regulated in New York state, Georgia, Massachusetts, and Indiana. Various cities have imposed restrictions; in New York City, Bengals are prohibited, and there are limits on Bengal ownership in Seattle, Washington, and in Denver, Colorado. Except where noted above, Bengal cats with a generation of G5 and beyond are considered domestic, and are generally legal in the US. In New Zealand's Southland District, the Bengal cat requires a permit to own and is completely banned on any off-shore islands including Stewart Island. Health Hypertrophic cardiomyopathy (HCM) Hypertrophic cardiomyopathy (HCM) is a major concern in the Bengal cat breed. Hypertrophic cardiomyopathy is a disease in which the heart muscle (myocardium) becomes abnormally thick (hypertrophied). A thick heart muscle makes it difficult for the cat's heart to pump blood. HCM is a common genetic disease in Bengal cats and there is no genetic testing available as of 2018. In the United States, the current practice of screening for HCM involves bringing Bengal cats to a board-certified veterinary cardiologist where an echocardiogram is completed. Bengal cats that are used for breeding should be screened annually to ensure that no hypertrophic cardiomyopathy is present. As of January 2019, North Carolina State University is attempting to identify genetic markers for HCM in the Bengal cat. One study published in the Journal of Veterinary Internal Medicine has claimed the prevalence of hypertrophic cardiomyopathy in Bengal cats is 16.7% (95% CI = 13.2–46.5%). Bengal progressive retinal atrophy (PRA-b) Bengal cats are known to be affected by several genetic diseases, one of which is Bengal progressive retinal atrophy, also known as Bengal PRA or PRA-b. Anyone breeding Bengal cats should carry out this test, since it is inexpensive, noninvasive, and easy to perform. A breeder stating their cats are "veterinarian tested" should not be taken to mean that this test has been performed by a vet: it is carried out by the breeder, outside of a vet office (rarely, if ever, by a vet). The test is then sent directly to the laboratory. Erythrocyte pyruvate kinase deficiency (PK-deficiency or PK-def) PK deficiency is a common genetic disease found in Bengal cats. The test for PK deficiency is another that is administered by the breeder. Breeding Bengal cats should be tested before breeding to ensure two PK deficiency carriers are not mated. This is a test that a breeder must do on their own. A breeder uses a cotton swab to rub the inside of the cat's mouth and then mails the swab to the laboratory. Ulcerative nasal dermatitis A unique form of ulcerative dermatitis affecting the nasal planum (rhinarium or nose leather) of Bengal cats was first reported in 2004. 
The condition first presents between the ages of 4-12 months, beginning as a dry scale and progressing to crusts and fissures typical of hyperkeratosis. The exact cause remains unclear; it is considered hereditary and incurable, but can respond favorably to topical steroid treatments such as prednisolone and tacrolimus ointment. Life expectancy A UK study looking at veterinary records found the Bengal to have a life expectancy of 8.51 years compared to 11.74 years overall. Bengal blood-type The UC Davis Veterinary Genetics Laboratory has studied domestic cat blood-types. They conclude that most domestic cats fall within the AB system. The common blood-types are A and B and some cats have the rare AB blood-type. There is a lack of sufficient samples from Bengals, so the genetics of the AB blood-group in Bengal cats is not well understood. One Bengal blood-type study that took place in the U.K. tested 100 Bengal cats. They concluded that all 100 of the Bengal cats tested had type A blood. Shedding and grooming Bengals are often claimed by breeders and pet adoption agencies to be a hypoallergenic breed – one less likely to cause an allergic reaction. The Bengal cat is said to produce lower than average levels of allergens, though this has not been scientifically proven as of 2020. Cat geneticist Leslie Lyons, who runs the University of Missouri's Feline and Comparative Genetics Laboratory, discounts such claims, observing that there is no such thing as a hypoallergenic cat. Alleged hypoallergenic breeds thus may still produce a reaction among those who have severe allergies. Bengal Longhair (Cashmere Bengal) Some long-haired Bengals (more properly, semi-long-haired) have always occurred in Bengal breeding. Many different domestic cats were used to create the Bengal breed, and it is theorized that the gene for long hair came from one of these backcrossings. UC Davis has developed a genetic test for long hair so that Bengal breeders could select Bengal cats with a recessive long-hair gene for their breeding programs. Some Bengal cats used in breeding can carry a recessive gene for long-haired. When a male and female Bengal each carry a copy of the recessive long hair gene, and those two Bengals are mated with each other, they can produce long-haired Bengals. (See Cat coat genetics#Genes involved in fur length and texture.) In the past, long-haired offspring of Bengal matings were spayed or neutered until some breeders chose to develop the long-haired Bengal (which are sometimes called a Cashmere Bengal). Long-haired Bengals are starting to gain more recognition in some cat breed registries but are not widely accepted. Since 2013, they have "preliminary" breed status in the New Zealand Cat Fancy (NZCF) registry, under the breed name Cashmere Bengal. Since 2017 The International Cat Association (TICA) has accepted the Bengal Longhair in competitions. See also (mostly experimental): Bramble cat Genetta cat Highlander cat Jungle Curl Mojave Spotted Pantherette Punjabi cat Serengeti cat Toyger Hybrid cat varieties involving other species: Chausie Kellas cat Savannah cat Ocicat, a spotted breed that is not a domestic–wild hybrid List of cat breeds List of experimental cat breeds References External links Cat breeds Cat breeds originating in the United States Domestic–wild hybrid cats Intergeneric hybrids
Bengal cat
[ "Biology" ]
2,201
[ "Intergeneric hybrids", "Hybrid organisms" ]
63,072
https://en.wikipedia.org/wiki/Creative%20accounting
Creative accounting is a euphemism referring to accounting practices that may follow the letter of the rules of standard accounting practices, but deviate from the spirit of those rules with questionable accounting ethics—specifically distorting results in favor of the "preparers", or the firm that hired the accountant. They are characterized by excessive complication and the use of novel ways of characterizing income, assets, or liabilities, and the intent to influence readers towards the interpretations desired by the authors. The terms "innovative" or "aggressive" are also sometimes used. Another common synonym is "cooking the books". Creative accounting is oftentimes used in tandem with outright financial fraud (including securities fraud), and the lines between the two are blurred. Creative accounting practices have been known since ancient times and appear world-wide in various forms. The term as generally understood refers to systematic misrepresentation of the true income and assets of corporations or other organizations. "Creative accounting" has been at the root of a number of accounting scandals, and many proposals for accounting reform—usually centering on an updated analysis of capital and factors of production that would correctly reflect how value is added. Newspaper and television journalists have hypothesized that the stock market downturn of 2002 was precipitated by reports of "accounting irregularities" at Enron, Worldcom, and other firms in the United States. According to critic David Ehrenstein, the term "creative accounting" was first used in 1968 in the film The Producers by Mel Brooks, where it is also known as Hollywood accounting. Motivations behind creative accounting The underlying purpose of creative accounting is to "present [a] business in the best possible light", typically by manipulating recorded profits or costs. Company managers who participate in creative accounting can have a variety of situational motivations for doing so, including: Market and stockholder expectations of profits Personal incentives Bonus-related pay Benefits from shares and share options Job security Personal satisfaction Cover-up fraud Tax management Management buyouts Debt covenant Manager's self-interest Mergers and acquisitions Types/examples of creative accounting schemes Earnings management Creative accounting can be used to manage earnings. Earnings management occurs when managers use judgment in financial reporting and in structuring transactions to alter financial reports to either mislead some stakeholders about the underlying economic performance of a company or influence contractual outcomes that depend on reported accounting numbers. Hollywood accounting Practiced by some Hollywood film studios, creative accounting can be used to conceal earnings of a film to distort the profit participation promised to certain participants of the film's earnings. Essentially, participants in the gross revenue of the film stay unaffected but profit participants are presented with a deflated or negative number on profitability, leading to reduced or no payments to them following a film's success. Famous examples of deceiving good faith profit participants involve Darth Vader actor David Prowse (with $729M adjusted gross earnings on Return of the Jedi) and Forrest Gump novel writer Winston Groom (with $661M gross theatrical revenue)—both of whom have been paid $0 on their profit participation due to the films "being in the red". 
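As a purely hypothetical illustration of the net-profit mechanism described above (none of these figures refer to any real film), the sketch below shows how a studio that routes distribution fees, marketing and overhead through its own affiliates can report a negative net profit on a commercially successful release, so that a net-profit participant receives nothing while a gross-revenue participant is paid in full.

# Hypothetical "Hollywood accounting" example; every figure is invented for illustration.
gross_revenue = 600_000_000                    # studio's share of box-office gross
distribution_fee = 0.30 * gross_revenue        # fee charged by the studio's own distribution arm
marketing_and_overhead = 250_000_000           # prints, advertising and overhead charges
production_cost_with_interest = 220_000_000    # negative cost plus interest charged to the film

net_profit = (gross_revenue - distribution_fee
              - marketing_and_overhead - production_cost_with_interest)

gross_participant_payout = 0.01 * gross_revenue        # 1% of gross: unaffected by the bookkeeping
net_participant_payout = 0.05 * max(net_profit, 0)     # 5% of net: zero while the film is "in the red"

print(f"Reported net profit: {net_profit:,.0f}")
print(f"Gross participant receives: {gross_participant_payout:,.0f}")
print(f"Net-profit participant receives: {net_participant_payout:,.0f}")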
Tobashi schemes This form of creative accounting—now considered a criminal offense in Japan, where it originated—involves the sale, swap or other form of temporary trade of a liability of one company with another company within the holding's portfolio, often solely created to conceal losses of the first firm. These schemes were popular in the 1980s in Japan before the government instituted harsher civil laws and eventually criminalized the practice. The Enron scandal revealed that Enron had extensively made use of sub-corporations to offload debts and hide its true losses in a Tobashi fashion. Lehman Brothers' Repo 105 scheme Lehman Brothers utilized repurchase agreements to bolster profitability reports with their Repo 105 scheme under the watch of the accounting firm Ernst & Young. The scheme consisted of mis-reporting a Repo (a promise to re-buy a liability or asset after selling it) as a sale, and timing it exactly in a way that half of the transaction was completed before a profitability reporting deadline, half after—hence bolstering profitability numbers on paper. Public prosecutors in New York filed suit against EY for allowing the "accounting fraud involving the surreptitious removal of tens of billions of dollars of fixed income securities from Lehman's balance sheet in order to deceive the public about Lehman's true liquidity condition". Enron had done exactly the same about 10 years earlier; in their case, Merrill Lynch aided Enron in bolstering profitability close to earnings periods by willfully entering repurchase agreements to buy Nigerian barges from Enron, only for Enron to buy them back a few months later. The U.S. Securities and Exchange Commission (SEC) filed charges and convicted multiple Merrill Lynch executives of aiding the fraud. Currency swap concealment of Greek debt by Goldman Sachs In 2001–2002, Goldman Sachs aided the government of Greece after its admission to the Eurozone to better its deficit numbers by conducting large currency swaps. These transactions, totaling more than 2.3 billion Euros, were technically loans but concealed as currency swaps in order to circumvent Maastricht Treaty rules on member nations' deficit limits and allowed Greece to "hide" an effective 1 billion euro loan. After Goldman Sachs had engineered the financial instrument and sold it to the Greeks—simply shifting the liabilities into the future and defrauding investors and the European Union—the investment bank's president Gary Cohn pitched Athens another deal. After Greece refused the second deal, the firm sold its Greek swaps to the Greek national bank and made sure its short and long positions towards Greece were in balance—so that a potential Greek default would not affect Goldman Sachs. Parmalat's mis-accounted credit-linked notes Italian dairy giant Parmalat employed a number of creative accounting and wire fraud schemes before 2003 that led to the largest bankruptcy in European history. It sold itself credit-linked notes with the help of Merrill Lynch through a Cayman Islands special-purpose entity and over-accounted for their value on the balance sheet. It also forged a $3.9Bn check from Bank of America. The publicly listed company stated to investors that it had about $2Bn in liabilities (this figure was accepted by its auditors Deloitte and Grant Thornton International), but once audited more vigorously during the bankruptcy proceedings, it was discovered that the company's debt was actually $14.5Bn. 
This massive debt was largely caused by failed operations in Latin America and increasingly complex financial instruments used to mask debt—such as Parmalat "billing itself" through a subsidiary called Epicurum. It was also discovered that its CEO Calisto Tanzi had ordered the creation of shell accounts and diverted some 900M Euros into his private travel company. Offshoring and tax avoidance To avoid taxes on profits, multinational corporations often make use of offshore subsidiaries to employ a creative accounting technique known as "Minimum-Profit Accounting". The subsidiary is created in a tax haven—often just as a shell company—then charges large fees to the primary corporation, effectively minimizing or wholly wiping out the profit of the main corporation. Within most parts of the European Union and the United States, this practice is perfectly legal and often executed in plain sight or with explicit approval of tax regulators. Nike, Inc. famously employed offshoring by selling its Swoosh logo to a Bermuda-based special-purpose entity subsidiary for a nominal amount, and then went on to "charge itself" licensing fees that Nike Inc. had to pay to the subsidiary in order to use its own brand in Europe. The Dutch tax authorities were aware of and approved of this siphoning structure, but did not publish the private agreement they had with Nike. The licensing fees totaled $3.86Bn over the course of 3 years and were discovered due to an unrelated U.S.-based lawsuit as well as the Paradise Papers. In 2014, the Bermuda deal with Dutch authorities expired, and Nike shifted the profits to another offshore subsidiary, a Netherlands-based Limited Liability Partnership (CV, short for Commanditaire Vennootschap, generally known as a Kommanditgesellschaft). Through a Dutch tax loophole, CVs owned by individuals residing in the Netherlands are tax-free. Exploiting this structure saved Nike more than $1Bn in taxes annually and reduced its global tax rate to 13.1%; the company is currently being pursued for billions of dollars' worth of back taxes in litigation by multiple governments for this multinational tax avoidance. In popular media A number of films and documentaries center on financial scandals and securities fraud that involved creative accounting practices: Enron: The Smartest Guys in the Room Inside Job (2010 film) PBS documentary film based on The Ascent of Money Documentary based on The Commanding Heights Betting on Zero The China Hustle Dirty Money £830,000,000 – Nick Leeson and the Fall of the House of Barings, about Nick Leeson and Barings Bank The Inventor: Out for Blood in Silicon Valley Chasing Madoff, about the Madoff investment scandal The Price We Pay Fyre and Fyre Fraud, two competing documentaries about the Fyre Festival See also Corporate abuse Reverse takeover References Further reading Amat, O., & Gowthorpe, C. (2004). Creative accounting: Nature, incidence and ethical issues, Economics Working Papers 749, Department of Economics and Business, Universitat Pompeu Fabra. Oliveras, E., & Amat, O. (2003). Ethics and creative accounting: Some empirical evidence on accounting for intangibles in Spain. UPF Economics and Business Working Paper, (732). Accounting systems Euphemisms Financial controversies
Creative accounting
[ "Technology" ]
2,013
[ "Information systems", "Accounting systems" ]
63,077
https://en.wikipedia.org/wiki/Stock%20market%20bubble
A stock market bubble is a type of economic bubble taking place in stock markets when market participants drive stock prices above their value in relation to some system of stock valuation. Behavioral finance theory attributes stock market bubbles to cognitive biases that lead to groupthink and herd behavior. Bubbles occur not only in real-world markets, with their inherent uncertainty and noise, but also in highly predictable experimental markets. Other theoretical explanations of stock market bubbles have suggested that they are rational, intrinsic, and contagious. History Historically, early stock market bubbles and crashes have their roots in financial activities of the 17th-century Dutch Republic, the birthplace of the first formal (official) stock exchange and market in history. The Dutch tulip mania of the 1630s is generally considered the world's first recorded speculative bubble (or economic bubble). Examples Two famous early stock market bubbles were the Mississippi Scheme in France and the South Sea bubble in England. Both bubbles came to an abrupt end in 1720, bankrupting thousands of unfortunate investors. Those stories, and many others, are recounted in Charles Mackay's 1841 popular account, "Extraordinary Popular Delusions and the Madness of Crowds". The two most famous bubbles of the twentieth century, the bubble in American stocks in the 1920s just before the Wall Street crash of 1929 and the following Great Depression, and the Dot-com bubble of the late 1990s, were based on speculative activity surrounding the development of new technologies. The 1920s saw the widespread introduction of a range of technological innovations including radio, automobiles, aviation and the deployment of electrical power grids. The 1990s was the decade when Internet and e-commerce technologies emerged. Other stock market bubbles of note include the Encilhamento, which occurred in Brazil during the late 1880s and early 1890s, the Nifty Fifty stocks in the early 1970s, Taiwanese stocks in 1987–89 and Japanese stocks in the late 1980s. Stock market bubbles frequently produce hot markets in initial public offerings, since investment bankers and their clients see opportunities to float new stock issues at inflated prices. These hot IPO markets misallocate investment funds to areas dictated by speculative trends, rather than to enterprises generating longstanding economic value. Typically when there is an overabundance of IPOs in a bubble market, a large portion of the IPO companies fail completely, never achieve what is promised to the investors, or can even be vehicles for fraud. Whether rational or irrational Emotional and cognitive biases (see behavioral finance) seem to be the causes of bubbles, but often, when the phenomenon appears, pundits try to find a rationale, so as not to be against the crowd. Thus, sometimes, people will dismiss concerns about overpriced markets by citing a new economy where the old stock valuation rules may no longer apply. This type of thinking helps to further propagate the bubble whereby everyone is investing with the intent of finding a greater fool. Still, some analysts cite the wisdom of crowds and say that price movements really do reflect rational expectations of fundamental returns. Large traders become powerful enough to rock the boat, generating stock market bubbles. 
To sort out the competing claims between behavioral finance and efficient markets theorists, observers need to find bubbles that occur when a readily available measure of fundamental value is also observable. The bubble in closed-end country funds in the late 1980s is instructive here, as are the bubbles that occur in experimental asset markets. According to the efficient-market hypothesis, this doesn't happen, and so any data is wrong. For closed-end country funds, observers can compare the stock prices to the net asset value per share (the net value of the fund's total holdings divided by the number of shares outstanding). For experimental asset markets, observers can compare the stock prices to the expected returns from holding the stock (which the experimenter determines and communicates to the traders). In both instances, closed-end country funds and experimental markets, stock prices clearly diverge from fundamental values. Nobel laureate Dr. Vernon Smith has illustrated the closed-end country fund phenomenon with a chart showing prices and net asset values of the Spain Fund in 1989 and 1990 in his work on price bubbles. At its peak, the Spain Fund traded near $35, nearly triple its Net Asset Value of about $12 per share. At the same time the Spain Fund and other closed-end country funds were trading at very substantial premiums, the number of closed-end country funds available exploded thanks to many issuers creating new country funds and selling the IPOs at high premiums. It only took a few months for the premiums in closed-end country funds to fade back to the more typical discounts at which closed-end funds trade. Those who had bought them at premiums had run out of "greater fools". For a while, though, the supply of "greater fools" had been outstanding. Positive feedback A rising price on any share will attract the attention of investors. Not all of those investors are willing or interested in studying the intrinsics of the share, and for such people the rising price itself is reason enough to invest. In turn, the additional investment will provide buoyancy to the price, thus completing a positive feedback loop. Like all dynamic systems, financial markets operate in an ever-changing equilibrium, which translates into price volatility. However, a self-adjustment (negative feedback) takes place normally: when prices rise more people are encouraged to sell, while fewer are encouraged to buy. This puts a limit on volatility. However, once positive feedback takes over, the market, like all systems with positive feedback, enters a state of increasing disequilibrium. This can be seen in financial bubbles where asset prices rapidly spike upwards far beyond what could be considered the rational "economic value", only to fall rapidly afterwards. Effect of incentives Investment managers, such as stock mutual fund managers, are compensated and retained in part due to their performance relative to peers. Taking a conservative or contrarian position as a bubble builds results in performance unfavorable to peers. This may cause customers to go elsewhere and can affect the investment manager's own employment or compensation. The typical short-term focus of U.S. equity markets exacerbates the risk for investment managers that do not participate during the building phase of a bubble, particularly one that builds over a longer period of time. 
In attempting to maximize returns for clients and maintain their employment, they may rationally participate in a bubble they believe to be forming, as the benefits outweigh the risks of not doing so. See also Histoire des bourses de valeurs (French) 1991 Indian economic crisis 2007–2008 financial crisis Asset allocation Business cycle Collective behavior Diversification (finance) Dot-com bubble Fictitious capital Financial modeling Financial risk management Irrational exuberance Market trend Risk management Stock market crash Stock market crashes in India References External links Accounts of the South Sea Bubble, John Law and the Mississippi Company can be found in Charles Mackay's classic Extraordinary Popular Delusions and the Madness of Crowds (1843) – available from Project Gutenberg. Warning: this reference has been widely criticized by historians. Behavioral finance Economic bubbles Market trends Stock market
Stock market bubble
[ "Biology" ]
1,463
[ "Behavioral finance", "Behavior", "Human behavior" ]
63,082
https://en.wikipedia.org/wiki/Market%20trend
A market trend is a perceived tendency of the financial markets to move in a particular direction over time. Analysts classify these trends as secular for long time-frames, primary for medium time-frames, and secondary for short time-frames. Traders attempt to identify market trends using technical analysis, a framework which characterizes market trends as predictable price tendencies within the market when price reaches support and resistance levels, varying over time. A future market trend can only be determined in hindsight, since at any time prices in the future are not known. Past trends are identified by drawing lines, known as trendlines, that connect price action making higher highs and higher lows for an uptrend, or lower lows and lower highs for a downtrend. Market terminology The terms "bull market" and "bear market" describe upward and downward market trends, respectively, and can be used to describe either the market as a whole or specific sectors and securities. The terms come from London's Exchange Alley in the early 18th century, where traders who engaged in naked short selling were called "bear-skin jobbers" because they sold a bear's skin (the shares) before catching the bear. This was simplified to "bears," while traders who bought shares on credit were called "bulls." The latter term might have originated by analogy to bear-baiting and bull-baiting, two animal fighting sports of the time. Thomas Mortimer recorded both terms in his 1761 book Every Man His Own Broker. He remarked that bulls who bought in excess of present demand might be seen wandering among brokers' offices moaning for a buyer, while bears rushed about devouring any shares they could find to close their short positions. An unrelated folk etymology supposes that the terms refer to a bear clawing downward to attack and a bull bucking upward with its horns. Secular trends A secular market trend is a lasting long-term trend that lasts 5 to 25 years and consists of a series of primary trends. A secular bear market consists of smaller bull markets and larger bear markets; a secular bull market consists of larger bull markets and smaller bear markets. In a secular bull market, the prevailing trend is "bullish" or upward-moving. The United States stock market was described as being in a secular bull market from about 1983 to 2000 (or 2007), with brief upsets including Black Monday and the Stock market downturn of 2002, triggered by the crash of the dot-com bubble. Another example is the 2000s commodities boom. In a secular bear market, the prevailing trend is "bearish" or downward-moving. An example of a secular bear market occurred in gold from January 1980 to June 1999, culminating with the Brown Bottom. During this period, the market price of gold fell from a high of $850/oz ($30/g) to a low of $253/oz ($9/g). The stock market was also described as being in a secular bear market from 1929 to 1949. Primary trends A primary trend has broad support throughout the entire market, across most sectors, and lasts for a year or more. Bull market A bull market is a period of generally rising prices. The start of a bull market is marked by widespread pessimism. This point is when the "crowd" is the most "bearish". The feeling of despondency changes to hope, "optimism", and eventually euphoria as the bull runs its course. This often leads the economic cycle, for example, in a full recession, or earlier. Generally, bull markets begin when stocks rise 20% from their low and end when stocks experience a 20% drawdown. 
However, some analysts suggest a bull market cannot happen within a bear market. An analysis of Morningstar, Inc. stock market data from 1926 to 2014 revealed that, on average, a typical bull market lasted 8.5 years with a cumulative total return averaging 458%. Additionally, annualized gains for bull markets ranged from 14.9% to 34.1%. Examples India's Bombay Stock Exchange Index, BSE SENSEX, experienced a major bull market trend from April 2003 to January 2008. It increased from 2,900 points to 21,000 points, representing a more than 600% return in 5 years. Notable bull markets characterized the 1925–1929, 1953–1957, and 1993–1997 periods when the U.S. and many other stock markets experienced significant growth. While the first period ended abruptly with the start of the Great Depression, the later periods mostly ended in soft landings that eventually turned into large bear markets. (see: Recession of 1960–61 and the dot-com bubble in 2000–2001) Bear market A bear market is a general decline in the stock market over a period of time. It involves a transition from high investor optimism to widespread investor fear and pessimism. One generally accepted measure of a bear market is a price decline of 20% or more over at least a two-month period. A decline of 10% to 20% is classified as a correction. Bear markets conclude when stocks recover, reaching new highs. The bear market is then assessed retrospectively from the recent highs to the lowest closing price, and its recovery period spans from the lowest closing price to the attainment of new highs. Another commonly accepted indicator of the end of a bear market is indices gaining 20% or more from their low. From 1926 to 2014, the average duration of a bear market was 13 months, accompanied by an average cumulative loss of 30%. Annualized declines for bear markets ranged from −19.7% to −47%. Examples Some examples of a bear market include: The Wall Street Crash of 1929, which erased 89% (from 386 to 40) of the Dow Jones Industrial Average's market capitalization by July 1932, marking the start of the Great Depression. After regaining nearly 50% of its losses, a longer bear market from 1937 to 1942 occurred in which the market was again cut in half. A long-term bear market occurred from about 1973 to 1982, encompassing the 1970s energy crisis and the high unemployment of the early 1980s. A bear market occurred in India following the 1992 Indian stock market scam committed by Harshad Mehta. The Stock market downturn of 2002. As a result of the financial crisis of 2007–2008, a bear market occurred between October 2007 and March 2009. The 2015 Chinese stock market crash. In early 2020, the COVID-19 pandemic caused multiple stock market crashes, leading to bear markets across the world. In 2022, concerns over an inflation surge and potential rises in the federal funds rate caused a bear market. Market top A market top (or market high) is usually not a dramatic event. The market has simply reached the highest point that it will, for some time. This identification is retrospective, as market participants are generally unaware of it when it occurs. Thus prices subsequently fall, either slowly or more rapidly. According to William O'Neil, since the 1950s, a market top is characterized by three to five distribution days in a major stock market index occurring within a relatively short period of time. Distribution is identified as a decline in price with higher volume than the preceding session. 
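As a minimal sketch of the 20% and 10% thresholds used above for bear markets and corrections (illustrative only; the price series is invented, and real classifications also take duration into account), the following computes the largest peak-to-trough drawdown of a series of closing prices and labels it accordingly.

# Classify the largest peak-to-trough decline of a closing-price series.
# Thresholds follow the rough conventions above: >= 20% bear market, 10-20% correction.
def max_drawdown(closes):
    peak = closes[0]
    worst = 0.0
    for price in closes:
        peak = max(peak, price)
        worst = max(worst, (peak - price) / peak)
    return worst

def classify(closes):
    dd = max_drawdown(closes)
    if dd >= 0.20:
        return f"bear market ({dd:.0%} drawdown)"
    if dd >= 0.10:
        return f"correction ({dd:.0%} drawdown)"
    return f"no major decline ({dd:.0%} drawdown)"

# Hypothetical index closes, not real market data.
print(classify([100, 112, 125, 118, 104, 96, 101]))   # about 23% off the peak -> bear market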
Examples The peak of the dot-com bubble, as measured by the NASDAQ-100, occurred on March 24, 2000, when the index closed at 4,704.73. The Nasdaq peaked at 5,132.50 and the S&P 500 Index at 1525.20. The peak of the U.S. stock market before the financial crisis of 2007–2008 occurred on October 9, 2007. The S&P 500 Index closed at 1,565 and the NASDAQ at 2,861.50. Market bottom A market bottom marks a trend reversal, signifying the end of a market downturn and the commencement of an upward-moving trend (bull market). Identifying a market bottom, often referred to as 'bottom picking,' is a challenging task, as it's difficult to recognize before it passes. The upturn following a decline may be short-lived, and prices might resume their descent, resulting in a loss for the investor who purchased stocks during a misperceived or 'false' market bottom. Baron Rothschild is often quoted as advising that the best time to buy is when there is 'blood in the streets'—that is, when the markets have fallen drastically and investor sentiment is extremely negative. Examples Some more examples of market bottoms, in terms of the closing values of the Dow Jones Industrial Average (DJIA) include: The Dow Jones Industrial Average hit a bottom at 1,738.74 on October 19, 1987, following a decline from 2,722.41 on August 25, 1987. This day is commonly referred to as Black Monday (chart). A bottom of 7,286.27 was reached on the DJIA on October 9, 2002, following a decline from 11,722.98 on January 14, 2000. This decline included an intermediate bottom of 8,235.81 on September 21, 2001 (a 14% change from September 10), leading to an intermediate top of 10,635.25 on March 19, 2002 (chart). Meanwhile, the "tech-heavy" Nasdaq experienced a more precipitous fall, declining 79% from its peak of 5,132 on March 10, 2000, to its bottom of 1,108 on October 10, 2002. A bottom of 6,440.08 (DJIA) on 9 March 2009 was reached after a decline associated with the subprime mortgage crisis starting at 14164.41 on 9 October 2007 (chart). Secondary trends Secondary trends are short-term changes in price direction within a primary trend, typically lasting for a few weeks or a few months. Bear market rally Similarly, a bear market rally, sometimes referred to as a 'sucker's rally' or 'dead cat bounce', is characterized by a price increase of 5% or more before prices fall again. Bear market rallies were observed in the Dow Jones Industrial Average index after the Wall Street Crash of 1929, leading down to the market bottom in 1932, and throughout the late 1960s and early 1970s. The Japanese Nikkei 225 has had several bear-market rallies between the 1980s and 2011, while undergoing an overall long-term downward trend. Causes of market trends The price of assets, such as stocks, is determined by supply and demand. By definition, the market balances buyers and sellers, making it impossible to have 'more buyers than sellers' or vice versa, despite the common use of that expression. During a surge in demand, buyers are willing to pay higher prices, while sellers seek higher prices in return. Conversely, in a surge in supply, the dynamics are reversed. Supply and demand dynamics vary as investors attempt to reallocate their investments between asset types. For instance, investors may seek to move funds from government bonds to 'tech' stocks, but the success of this shift depends on finding buyers for the government bonds they are selling. Conversely, they might aim to move funds from 'tech' stocks to government bonds at another time. 
In each case, these actions influence the prices of both asset types. Ideally, investors aim to use market timing to buy low and sell high, but in practice, they may end up buying high and selling low. Contrarian investors and traders employ a strategy of 'fading' investors' actions—buying when others are selling and selling when others are buying. A period when most investors are selling stocks is known as distribution, while a period when most investors are buying stocks is known as accumulation. According to standard theory, a decrease in price typically leads to less supply and more demand, while an increase in price has the opposite effect. While this principle holds true for many assets, it often operates in reverse for stocks due to the common mistake made by investors—buying high in a state of euphoria and selling low in a state of fear or panic, driven by the herding instinct. In cases where an increase in price leads to an increase in demand, or a decrease in price leads to an increase in supply, the expected negative feedback loop is disrupted, resulting in price instability. This phenomenon is evident in bubbles or market crashes. Market sentiment Market sentiment is a contrarian stock market indicator. When an extremely high proportion of investors express a bearish (negative) sentiment, some analysts consider it to be a strong signal that a market bottom may be near. David Hirshleifer observes a trend phenomenon that follows a path starting with under-reaction and culminating in overreaction by investors and traders. Indicators that measure investor sentiment may include: The Investor Intelligence Sentiment Index evaluates market sentiment through the Bull-Bear spread (% of Bulls − % of Bears). A close-to-historic-low spread may signal a bottom, indicating a potential market turnaround. Conversely, an extreme high in bullish sentiment and an extreme low in bearish sentiment may suggest a market top, or that one is imminent. This contrarian measure is more reliable for coincidental timing at market lows than at market tops. The American Association of Individual Investors (AAII) sentiment indicator is often interpreted to suggest that the majority of the decline has already occurred when it gives a reading of minus 15% or below. Other sentiment indicators include the Nova-Ursa ratio, the Short Interest/Total Market Float, and the put/call ratio. See also Animal spirits Black Monday Bull-bear line Business cycle Don't fight the tape Economic Cycle Research Institute Economic expansion Herd mentality Market sentiment Michael Ewing Purves, who developed the "Wolf Market" framework Mr. Market Real estate trends Recession Trend following References External links Financial markets Financial economics Investment Behavioral finance Capitalism
Market trend
[ "Biology" ]
2,813
[ "Behavioral finance", "Behavior", "Human behavior" ]
63,091
https://en.wikipedia.org/wiki/Piet%20Hein%20%28scientist%29
Piet Hein (16 December 1905 – 17 April 1996) was a Danish polymath (mathematician, inventor, designer, writer and poet), often writing under the Old Norse pseudonym Kumbel, meaning "tombstone". His short poems, known as gruks or grooks, first started to appear in the daily newspaper Politiken shortly after the German occupation of Denmark in April 1940 under the pseudonym "Kumbel Kumbell". He also invented the Soma cube and the board game Hex. Biography Hein, a direct descendant of Piet Pieterszoon Hein, the 17th-century Dutch naval figure, was born in Copenhagen, Denmark. He studied at the Institute for Theoretical Physics (later to become the Niels Bohr Institute) of the University of Copenhagen, and at the Technical University of Denmark. Yale awarded him an honorary doctorate in 1972. He died in his home on Funen, Denmark in 1996. Resistance Piet Hein, who, in his own words, "played mental ping-pong" with Niels Bohr in the inter-War period, found himself confronted with a dilemma when the Germans occupied Denmark. He felt that he had three choices: Do nothing, flee to neutral Sweden or join the Danish resistance movement. As he explained in 1968, "Sweden was out because I am not Swedish, but Danish. I could not remain at home because, if I had, every knock at the door would have sent shivers up my spine. So, I joined the Resistance." Taking as his first weapon the instrument with which he was most familiar, the pen, he wrote and had published his first "grook". It passed the censors, who did not grasp its real meaning. The Consolation Grook reads: CONSOLATION GROOK Losing one glove is certainly painful, but nothing compared to the pain, of losing one, throwing away the other, and finding the first one again. The Danes, however, understood its importance and soon it was found as graffiti all around the country. The deeper meaning of the grook was that even if you lose your freedom ("losing one glove"), do not lose your patriotism and self-respect by collaborating with the Nazis ("throwing away the other"), because that sense of having betrayed your country will be more painful when freedom has been found again someday. One of Hein's best-known grooks is A Maxim for Vikings: A MAXIM FOR VIKINGS Here is a fact that should help you fight a bit longer: Things that don't actually kill you outright make you stronger. Recreational mathematics In 1959, city planners in Stockholm, Sweden announced a design challenge for a roundabout in their city square Sergels Torg. Piet Hein's winning proposal was based on a superellipse. He went on to use the superellipse in the design of furniture and other artifacts (a short parametric sketch of the curve appears at the end of this article). He also invented a perpetual calendar called the Astro Calendar and marketed housewares based on the superellipse and its three-dimensional analog, the superegg. He invented the Soma cube and devised the games of Hex, Tangloids, Tower, Polytaire, TacTix, Nimbi, Qrazy Qube, and Pyramystery. Hein was a close associate of Martin Gardner and his work was frequently featured in Gardner's Mathematical Games column in Scientific American. At the age of 95, Gardner wrote his autobiography and titled it Undiluted Hocus-Pocus. Both the title and the dedication of this book come from one of Hein's grooks. See also Flipism Personal Piet Hein was married four times and had five sons from his last three marriages. 
(1937) married Gunver Holck, divorced (1942) married Gerda Ruth (Nena) Cohnheim, divorced Sons: Jan Alvaro Hein, born 9 January 1943; Anders Humberto Hein, born 30 December 1943 (1947) married Anne Cathrina (Trine) Krøyer Pedersen, divorced Son: Lars Hein, born 20 May 1950 (1955) married Gerd Ericsson, who died 3 November 1968 Sons: Jotun Hein, born 19 July 1956; Hugo Piet Hein, born 16 November 1963 Bibliography Grooks – 20 volumes, originally published between 1940 and 1963, all currently out-of-print. Grooks (1966) Grooks 2 (1968) Grooks 3 (1970) Grooks 4 (1972) Grooks 5 (1973) Grooks 6 (1978) Grooks 7 (1984) The following books of grooks are available on this subpage of the website Collected Grooks I Collected Grooks II Runaway Runes: Short Grooks I Viking Vistas: Short Grooks II References Other References Gardner, Martin: Piet Hein's Superellipse. – in Gardner, Martin: Mathematical Carnival. A New Round-Up of Tantalizers and Puzzles from Scientific American. New York: Vintage, 1977, pp. 240–254. Johan Gielis: Inventing the circle. The geometry of nature. – Antwerpen : Geniaal Press, 2003. – "A Poet with a Slide Rule: Piet Hein Bestrides Art and Science," by Jim Hicks, Life Magazine, Vol. 61 No. 16, 10/14/66, pp. 55–66 "Piet Hein Biographical Details", by Nils Aas, tr. by Roger Stevenson. The Papers of the Medford Educational Institute 3. "To and by Piet Hein on the Occasion of Piet Hein's Election as the Student Organization's Twelfth Honorary Member", tr. by Roger Stevenson. The Papers of the Medford Educational Institute 2. External links , including several sample grooks Superellipse at MathWorld Grooks at My Poetic Side Hein's Grooks at Archimedes' Lab 1905 births 1996 deaths 20th-century Danish inventors 20th-century Danish poets Danish male poets Recreational mathematicians Danish furniture designers 20th-century Danish mathematicians Puzzle designers University of Copenhagen alumni Yale University alumni Designers from Copenhagen Writers from Copenhagen Danish people of Dutch descent 20th-century Danish male writers Grut Hansen family
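The superellipse mentioned in the Recreational mathematics section is the family of curves satisfying |x/a|^n + |y/b|^n = 1; n = 2 gives an ordinary ellipse, while larger exponents produce fuller, more rectangular corners. The sketch below generates points on such a curve parametrically. The exponent n = 2.5 and the 6:5 axis ratio are values commonly cited for the Sergels Torg design and are assumed here purely for illustration.

import math

# Points on a superellipse |x/a|**n + |y/b|**n = 1, generated parametrically.
# a, b and n are illustrative values (6:5 axis ratio, n = 2.5), not taken from this article.
def superellipse_points(a=6.0, b=5.0, n=2.5, steps=360):
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        c, s = math.cos(t), math.sin(t)
        x = a * math.copysign(abs(c) ** (2 / n), c)
        y = b * math.copysign(abs(s) ** (2 / n), s)
        points.append((x, y))
    return points

# Each generated point satisfies the defining equation up to rounding error.
for x, y in superellipse_points()[:3]:
    print(round(abs(x / 6.0) ** 2.5 + abs(y / 5.0) ** 2.5, 6))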
Piet Hein (scientist)
[ "Mathematics" ]
1,291
[ "Recreational mathematics", "Recreational mathematicians" ]
63,199
https://en.wikipedia.org/wiki/Spontaneous%20human%20combustion
Spontaneous human combustion (SHC) is the pseudoscientific concept of the spontaneous combustion of a living (or recently deceased) human body without an apparent external source of ignition on the body. In addition to reported cases, descriptions of the alleged phenomenon appear in literature, and both types have been observed to share common characteristics in terms of circumstances and the remains of the victim. Scientific investigations have attempted to analyze reported instances of SHC and have resulted in hypotheses regarding potential causes and mechanisms, including victim behavior and habits, alcohol consumption, and proximity to potential sources of ignition, as well as the behavior of fires that consume melted fats. Natural explanations, as well as unverified natural phenomena, have been proposed to explain reports of SHC. The current scientific consensus is that purported cases of SHC involve overlooked external sources of ignition. Overview "Spontaneous human combustion" refers to the death from a fire originating without an apparent external source of ignition: a belief that the fire starts within the body of the victim. This idea and the term "spontaneous human combustion" were both first proposed in 1746 by Paul Rolli, a Fellow of the Royal Society, in an article published in the Philosophical Transactions concerning the mysterious death of Countess Cornelia Zangheri Bandi. Writing in The British Medical Journal in 1938, coroner Gavin Thurston describes the phenomenon as having "apparently attracted the attention not only of the medical profession but of the non-medical professionals one hundred years ago" (referring to a fictional account published in 1834 in the Frederick Marryat cycle). In his 1995 book Ablaze!, Larry E. Arnold, a director of ParaScience International, wrote that there had been about 200 cited reports of spontaneous human combustion worldwide over a period of around 300 years. Characteristics The topic received coverage in the British Medical Journal in 1938. An article by L. A. Parry cited an 1823-published book Medical Jurisprudence, which stated that commonalities among recorded cases of spontaneous human combustion included the following characteristics: Alcoholism is a common theme in early SHC literary references, in part because some Victorian era physicians and writers believed spontaneous human combustion was the result of alcoholism. Scientific investigation An extensive two-and-a-half-year research project, involving 30 historical cases of alleged SHC from 1725 to 1982, was conducted by science investigator Joe Nickell and forensic analyst John F. Fischer. Their lengthy, two-part report was published in 1984 in the journal of the International Association of Arson Investigators, and incorporated into their 1988 book Secrets of the Supernatural. Nickell has written frequently on the subject, appeared on television documentaries, conducted additional research, and lectured at the New York State Academy of Fire Science at Montour Falls, New York, as a guest instructor. The Nickell and Fischer investigation, which looked at cases in the 18th, 19th and 20th centuries, showed that the burned cadavers were close to plausible sources for the ignition: candles, lamps, fireplaces, and so on. Such sources were often omitted from published accounts of these incidents, presumably to deepen the aura of mystery surrounding an apparently "spontaneous" death. 
The investigations also found that there was a correlation between alleged SHC deaths and the victim's intoxication (or other forms of incapacitation) which could conceivably have caused them to be careless and unable to respond properly to an accident. Where the destruction of the body was not particularly extensive, a primary source of combustible fuel could plausibly have been the victim's clothing or a covering such as a blanket or comforter. However, where the destruction was extensive, additional fuel sources were involved, such as chair stuffing, floor coverings, the flooring itself, and the like. The investigators described how such materials helped to retain melted fat, which caused more of the body to be burned and destroyed, yielding still more liquified fat, in a cyclic process known as the "wick effect" or the "candle effect". According to the Nickell and Fischer investigation, nearby objects often remained undamaged because fire tends to burn upwards, but burns laterally with some difficulty. The fires in question are relatively small, achieving considerable destruction by the wick effect, and nearby objects may not be close enough to catch fire themselves (much as one can closely approach a modest campfire without burning). As with other mysteries, Nickell and Fischer cautioned against a "single, simplistic explanation for all unusual burning deaths" but rather urged investigating "on an individual basis". Neurologist Steven Novella has said that skepticism about spontaneous human combustion is now bleeding over into becoming popular skepticism about spontaneous combustion. In a 2002 study, Angi M. Christensen of the University of Tennessee cremated both healthy and osteoporotic samples of human bone and compared the resulting color changes and fragmentation. The study found that osteoporotic bone samples "consistently displayed more discoloration and a greater degree of fragmentation than healthy ones." The same study found that when human tissue is burned, the resulting flame produces a small amount of heat, indicating that fire is unlikely to spread from burning tissue. Suggested explanations The scientific consensus is that incidents which might appear as spontaneous combustion did in fact have an external source of ignition, and that spontaneous human combustion without an external ignition source is extremely implausible. Pseudoscientific hypotheses have been presented which attempt to explain how SHC might occur without an external flame source. Benjamin Radford, science writer and deputy editor of the science magazine Skeptical Inquirer, casts doubt on the plausibility of spontaneous human combustion: "If SHC is a real phenomenon (and not the result of an elderly or infirm person being too close to a flame source), why doesn't it happen more often? There are 8 billion people in the world [ ⁠today in 2024⁠], and yet we don't see reports of people bursting into flame while walking down the street, attending football games, or sipping a coffee at a local Starbucks." Natural explanations Almost all postulated cases of SHC involve people with low mobility due to advanced age or obesity, along with poor health. Victims show a high likelihood of having died in their sleep, or of having been unable to move once they had caught fire. Smoking is often seen as the source of fire. Natural causes such as heart attacks may lead to the victim dying, subsequently dropping the cigarette, which after a period of smouldering can ignite the victim's clothes. 
The "wick effect" hypothesis suggests that a small external flame source, such as a burning cigarette, chars the clothing of the victim at a location, splitting the skin and releasing subcutaneous fat, which is in turn absorbed into the burned clothing, acting as a wick. This combustion can continue for as long as the fuel is available. This hypothesis has been successfully tested with pig tissue and is consistent with evidence recovered from cases of human combustion. The human body typically has enough stored energy in fat and other chemical stores to fully combust the body; even lean people have several pounds of fat in their tissues. This fat, once heated by the burning clothing, wicks into the clothing much as candle wax is drawn into a lit candle wick, providing the fuel needed to keep the wick burning. The protein in the body also burns, but provides less energy than fat, with the water in the body being the main impediment to combustion. However, slow combustion, lasting hours, gives the water time to evaporate slowly. In an enclosed area, such as a house, this moisture will recondense nearby, possibly on windows. Feet don't typically burn because they often have the least fat; hands also have little fat, but may burn if resting on the abdomen, which provides all of the necessary fat for combustion. Scalding can cause burn-like injuries, sometimes leading to death, without setting fire to clothing. Although not applicable in cases where the body is charred and burnt, this has been suggested as a cause in at least one claimed SHC-like event. Brian J. Ford has suggested that ketosis, possibly caused by alcoholism or low-carb dieting, produces acetone, which is highly flammable and could therefore lead to apparently spontaneous combustion. SHC can be confused with self-immolation as a form of suicide. In the West, self-immolation accounts for 1% of suicides, while Radford claims in developing countries the figure can be as high as 40%. Sometimes there are reasonable explanations for the deaths, but proponents ignore official autopsies and contradictory evidence in favor of anecdotal accounts and personal testimonies. Inhaling/digesting phosphorus in different forms may lead to the formation of phosphine, which can autoignite. Alternative hypotheses Larry E. Arnold in his 1995 book Ablaze! proposed a pseudoscientific new subatomic particle, which he called "pyrotron". Arnold also wrote that the flammability of a human body could be increased by certain circumstances, like increased alcohol in the blood. He further proposed that extreme stress could be the trigger that starts many combustions. This process may use no external oxygen to spread throughout the body, since it may not be an "oxidation-reduction" reaction; however, no reaction mechanism has been proposed. Researcher Joe Nickell has criticised Arnold's hypotheses as based on selective evidence and argument from ignorance. In his 1976 book Fire from Heaven, UK writer Michael Harrison suggests that SHC is connected to poltergeist activity because, he argues, "the force which activates the 'poltergeist' originates in, and is supplied by, a human being". Within the concluding summary, Harrison writes: "SHC, fatal or non-fatal, belongs to the extensive range of poltergeist phenomena." John Abrahamson suggested that ball lightning could account for spontaneous human combustion. 
"This is circumstantial only, but the charring of human limbs seen in a number of ball lightning cases are [sic] very suggestive that this mechanism may also have occurred where people have had limbs combusted," says Abrahamson. Examples On 2 July 1951, Mary Reeser, a 67-year-old woman, was found burned to death in her house after her landlady realised that the house's doorknob was unusually warm. The landlady notified the police, and upon entering the home they found Reeser's remains completely burned into ash, with only one leg remaining. The chair she was sitting in was also destroyed. Reeser took sleeping pills and was also a smoker. Despite its proliferation in popular culture, the contemporary FBI investigation ruled out the possibility of SHC. A common theory was that she was smoking a cigarette after taking sleeping pills and then fell asleep while still holding the burning cigarette, which could have ignited her gown, ultimately leading to her death. Her daughter-in-law stated, "The cigarette dropped to her lap. Her fat was the fuel that kept her burning. The floor was cement, and the chair was by itself. There was nothing around her to burn". Margaret Hogan, an 89-year-old widow who lived alone in a house on Prussia Street, Dublin, Ireland, was found burned almost to the point of complete destruction on 28 March 1970. Plastic flowers on a table in the centre of the room had been reduced to liquid and a television with a melted screen sat 12 feet from the armchair in which the ashen remains were found; otherwise, the surroundings were almost untouched. Her two feet, and both legs from below the knees, were undamaged. A small coal fire had been burning in the grate when a neighbour left the house the previous day; however, no connection between this fire and that in which Mrs. Hogan died could be found. An inquest, held on 3 April 1970, recorded death by burning, with the cause of the fire listed as "unknown". On 24 November 1979, during Thanksgiving weekend, Beatrice Oczki, a 51-year-old woman, was found charred to death in her home in the village of Bolingbrook, Illinois, United States. Henry Thomas, a 73-year-old man, was found burned to death in the living room of his council house on the Rassau estate in Ebbw Vale, South Wales, in 1980. Most of his body was incinerated, leaving only his skull and part of each leg below the knee. The feet and legs were still clothed in socks and trousers. Half of the chair in which he had been sitting was also destroyed. Police forensic officers decided that the incineration of Thomas was due to the wick effect. In December 2010, the death of Michael Faherty, a 76-year-old man in County Galway, Ireland, was recorded as "spontaneous combustion" by the coroner. The doctor, Ciaran McLoughlin, made this statement at the inquiry into the death: "This fire was thoroughly investigated and I'm left with the conclusion that this fits into the category of spontaneous human combustion, for which there is no adequate explanation." The Skeptic magazine ascribed to possible SHC the 1899 case of two children from the same family who were burned to death in different places at the same time. The evidence showed that although the coincidence seemed strange, the children both loved to play with fire and had been "whipped" for this behavior in the past. Looking at all the evidence, the coroner and jury ruled that these were both accidental deaths. 
In popular culture Charles Brockden Brown's 1798 novel Wieland: or, The Transformation: An American Tale features the death of the Wieland family's patriarch via spontaneous human combustion during prayer within his property's temple. In the novel Redburn by Herman Melville published in 1849, a sailor, Miguel Saveda, is consumed by "animal combustion" while in a drunken stupor on the return voyage from Liverpool to New York. In the novel Bleak House by Charles Dickens, the character Mr. Krook dies of spontaneous combustion at the end of Part X. Dickens researched the details of a number of contemporary accounts of spontaneous human combustion before writing that part of the novel and, after receiving criticism from a scientist friend suggesting he was perpetuating a "vulgar error", cites some of these cases in Part XI and again in the preface to the one-volume edition. The death of Mr. Krook has been described as "the most famous case in literature" of spontaneous human combustion. In the comic story "The Glenmutchkin Railway" by William Edmondstoune Aytoun, published in 1845 in Blackwood's Magazine, one of the railway directors, Sir Polloxfen Tremens, is said to have died of spontaneous combustion. In the 1984 mockumentary This Is Spın̈al Tap, about the fictional heavy metal band Spinal Tap, two of the band's former drummers are said to have died in separate on-stage spontaneous human combustion incidents. In the episode "Confidence and Paranoia" of British science fiction series Red Dwarf, a character called the Mayor of Warsaw is said to have spontaneously exploded in the 16th century and briefly appears in a vision by an unconscious Lister (the main protagonist of the series) where he explodes in front of Rimmer (his hologram bunkmate). In the beginning of the 1998 video game Parasite Eve, an entire audience in Carnegie Hall spontaneously combusts (except for Aya Brea, the protagonist of the game) during an opera presentation as the main actress Melissa Pierce starts to sing. This phenomenon is mentioned in the TV series The X-Files. In the episode "Heart Break" of the second season of the American action police procedural television series NCIS, a case is investigated where the victim at first glance seems to have been killed by spontaneous human combustion. In episode "Duty Free Rome" of the second season of the TV series Picket Fences, the town's mayor is shown to have been killed by spontaneous combustion. In the seventh season episode “Mars Attacks” of the American TV medical drama ER, a patient is treated for “spontaneous human combustion” and subsequently catches fire. The manga and anime series Fire Force (En'en no Shōbōtai) focuses on the main protagonists fighting humans who have this phenomenon. In the fourth episode of the first season of the English comedic drama series Toast of London, Toast decides to finish his book by having the main character spontaneously combust. When bringing it to his literary agent, the laziness of his ending enrages her to the point of spontaneous combustion in front of Toast. The adult animated series South Park devoted a whole episode, titled "Spontaneous Combustion", to spontaneous human combustion. In Kevin Wilson's short story "Blowing Up on the Spot" (from his collection Tunneling to the Center of the Earth), the protagonist's parents died from a "double spontaneous human combustion." In the 2020 American black comedy horror film Spontaneous, high school students at Covington High begin to inexplicably explode. 
See also Exploding animal Fan death Pyrokinesis References External links New light on human torch mystery Forteana Human Causes of death Combustion Unexplained phenomena Paranormal
Spontaneous human combustion
[ "Chemistry" ]
3,540
[ "Combustion", "Spontaneous human combustion", "Explosions", "Exploding animals" ]
63,218
https://en.wikipedia.org/wiki/Present%20value
In economics and finance, present value (PV), also known as present discounted value(PDV), is the value of an expected income stream determined as of the date of valuation. The present value is usually less than the future value because money has interest-earning potential, a characteristic referred to as the time value of money, except during times of negative interest rates, when the present value will be equal or more than the future value. Time value can be described with the simplified phrase, "A dollar today is worth more than a dollar tomorrow". Here, 'worth more' means that its value is greater than tomorrow. A dollar today is worth more than a dollar tomorrow because the dollar can be invested and earn a day's worth of interest, making the total accumulate to a value more than a dollar by tomorrow. Interest can be compared to rent. Just as rent is paid to a landlord by a tenant without the ownership of the asset being transferred, interest is paid to a lender by a borrower who gains access to the money for a time before paying it back. By letting the borrower have access to the money, the lender has sacrificed the exchange value of this money, and is compensated for it in the form of interest. The initial amount of borrowed funds (the present value) is less than the total amount of money paid to the lender. Present value calculations, and similarly future value calculations, are used to value loans, mortgages, annuities, sinking funds, perpetuities, bonds, and more. These calculations are used to make comparisons between cash flows that don’t occur at simultaneous times, since time and dates must be consistent in order to make comparisons between values. When deciding between projects in which to invest, the choice can be made by comparing respective present values of such projects by means of discounting the expected income streams at the corresponding project interest rate, or rate of return. The project with the highest present value, i.e. that is most valuable today, should be chosen. Background If offered a choice between $100 today or $100 in one year, and there is a positive real interest rate throughout the year, a rational person will choose $100 today. This is described by economists as time preference. Time preference can be measured by auctioning off a risk free security—like a US Treasury bill. If a $100 note with a zero coupon, payable in one year, sells for $80 now, then $80 is the present value of the note that will be worth $100 a year from now. This is because money can be put in a bank account or any other (safe) investment that will return interest in the future. An investor who has some money has two options: to spend it right now or to save it. But the financial compensation for saving it (and not spending it) is that the money value will accrue through the compound interest that he or she will receive from a borrower (the bank account in which he has the money deposited). Therefore, to evaluate the real value of an amount of money today after a given period of time, economic agents compound the amount of money at a given (interest) rate. Most actuarial calculations use the risk-free interest rate which corresponds to the minimum guaranteed rate provided by a bank's saving account for example, assuming no risk of default by the bank to return the money to the account holder on time. To compare the change in purchasing power, the real interest rate (nominal interest rate minus inflation rate) should be used. 
The operation of evaluating a present value into the future value is called a capitalization (how much will $100 today be worth in 5 years?). The reverse operation—evaluating the present value of a future amount of money—is called a discounting (how much will $100 received in 5 years—at a lottery for example—be worth today?). It follows that if one has to choose between receiving $100 today and $100 in one year, the rational decision is to choose the $100 today. If the money is to be received in one year and assuming the savings account interest rate is 5%, the person has to be offered at least $105 in one year so that the two options are equivalent (either receiving $100 today or receiving $105 in one year). This is because if $100 is deposited in a savings account, the value will be $105 after one year, again assuming no risk of losing the initial amount through bank default. Interest rates Interest is the additional amount of money gained between the beginning and the end of a time period. Interest represents the time value of money, and can be thought of as rent that is required of a borrower in order to use money from a lender. For example, when an individual takes out a bank loan, the individual is charged interest. Alternatively, when an individual deposits money into a bank, the money earns interest. In this case, the bank is the borrower of the funds and is responsible for crediting interest to the account holder. Similarly, when an individual invests in a company (through corporate bonds, or through stock), the company is borrowing funds, and must pay interest to the individual (in the form of coupon payments, dividends, or stock price appreciation). The interest rate is the change, expressed as a percentage, in the amount of money during one compounding period. A compounding period is the length of time that must transpire before interest is credited, or added to the total. For example, interest that is compounded annually is credited once a year, and the compounding period is one year. Interest that is compounded quarterly is credited four times a year, and the compounding period is three months. A compounding period can be any length of time, but some common periods are annually, semiannually, quarterly, monthly, daily, and even continuously. There are several types and terms associated with interest rates: Compound interest, interest that increases exponentially over subsequent periods, Simple interest, additive interest that does not increase Effective interest rate, the effective equivalent compared to multiple compound interest periods Nominal annual interest, the simple annual interest rate of multiple interest periods Discount rate, an inverse interest rate when performing calculations in reverse Continuously compounded interest, the mathematical limit of an interest rate with a period of zero time. Real interest rate, which accounts for inflation. Calculation The operation of evaluating a present sum of money some time in the future is called a capitalization (how much will 100 today be worth in five years?). The reverse operation—evaluating the present value of a future amount of money—is called discounting (how much will 100 received in five years be worth today?). Spreadsheets commonly offer functions to compute present value. In Microsoft Excel, there are present value functions for single payments - "=NPV(...)", and series of equal, periodic payments - "=PV(...)". 
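The same compounding and discounting that such spreadsheet functions perform can also be written out directly. A minimal Python sketch (the function names and the 5% one-year example are illustrative assumptions, not taken from Excel or any particular library):

```python
def future_value(present, rate, periods):
    """Capitalization: what a sum today grows to after n compounding periods."""
    return present * (1 + rate) ** periods

def present_value(future, rate, periods):
    """Discounting: what a future sum is worth today at the same rate."""
    return future / (1 + rate) ** periods

# $100 deposited at 5% effective annual interest is worth about $105 in one year ...
print(future_value(100, 0.05, 1))   # ≈ 105.0
# ... so $105 received in one year is worth about $100 today at that rate.
print(present_value(105, 0.05, 1))  # ≈ 100.0
```

Either function is simply the other applied in reverse, which is the sense in which capitalization and discounting are inverse operations.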
Programs will calculate present value flexibly for any cash flow and interest rate, or for a schedule of different interest rates at different times. Present value of a lump sum The most commonly applied model of present valuation uses compound interest. The standard formula is: PV = C / (1 + i)^n, where C is the future amount of money that must be discounted, n is the number of compounding periods between the present date and the date where the sum is worth C, and i is the interest rate for one compounding period (the end of a compounding period is when interest is applied, for example, annually, semiannually, quarterly, monthly, daily). The interest rate, i, is given as a percentage, but expressed as a decimal in this formula. Often, the factor 1 / (1 + i)^n is referred to as the present value factor. This is also found from the formula for the future value with negative time. For example, if you are to receive $1000 in five years, and the effective annual interest rate during this period is 10% (or 0.10), then the present value of this amount is PV = 1000 / (1.10)^5 = $620.92. The interpretation is that for an effective annual interest rate of 10%, an individual would be indifferent to receiving $1000 in five years, or $620.92 today. The purchasing power in today's money of an amount C of money, n years into the future, can be computed with the same formula, where in this case i is an assumed future inflation rate. Using a lower discount rate i gives cash flows in the distant future a higher present value. Net present value of a stream of cash flows A cash flow is an amount of money that is either paid out or received, differentiated by a negative or positive sign, at the end of a period. Conventionally, cash flows that are received are denoted with a positive sign (total cash has increased) and cash flows that are paid out are denoted with a negative sign (total cash has decreased). The cash flow for a period represents the net change in money of that period. Calculating the net present value, NPV, of a stream of cash flows consists of discounting each cash flow to the present, using the present value factor and the appropriate number of compounding periods, and combining these values. For example, if a stream of cash flows consists of +$100 at the end of period one, -$50 at the end of period two, and +$35 at the end of period three, and the interest rate per compounding period is 5% (0.05), then the present values of these three cash flows are 100 / (1.05)^1 = $95.24, -50 / (1.05)^2 = -$45.35, and 35 / (1.05)^3 = $30.23 respectively, and thus the net present value would be NPV = 95.24 - 45.35 + 30.23 = $80.12; see the sketch below. There are a few considerations to be made. The periods might not be consecutive. If this is the case, the exponents will change to reflect the appropriate number of periods. The interest rates per period might not be the same. The cash flow must be discounted using the interest rate for the appropriate period: if the interest rate changes, the sum must be discounted to the period where the change occurs using the second interest rate, then discounted back to the present using the first interest rate. For example, if the cash flow for period one is $100, and $200 for period two, and the interest rate for the first period is 5%, and 10% for the second, then the net present value would be NPV = 100 / 1.05 + 200 / (1.05 × 1.10) = 95.24 + 173.16 = $268.40. The interest rate must necessarily coincide with the payment period. If not, either the payment period or the interest rate must be modified. For example, if the interest rate given is the effective annual interest rate, but cash flows are received (and/or paid) quarterly, the interest rate per quarter must be computed.
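The lump-sum and cash-flow examples just worked can be reproduced in a few lines; this is a sketch only, reusing the section's own figures:

```python
def present_value(future, rate, periods):
    return future / (1 + rate) ** periods

# Lump sum: $1000 received in five years at a 10% effective annual rate.
print(round(present_value(1000, 0.10, 5), 2))   # ≈ 620.92

def npv(cash_flows, rate):
    """Net present value of cash flows received at the ends of periods 1, 2, 3, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Stream: +$100, -$50, +$35 at the ends of periods one to three, at 5% per period.
print(round(npv([100, -50, 35], 0.05), 2))      # ≈ 80.12

# Changing rates: 5% in period one, then 10% in period two.
print(round(100 / 1.05 + 200 / (1.05 * 1.10), 2))  # ≈ 268.40
```

The rounded outputs match the $620.92, $80.12 and $268.40 figures given above.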
The conversion just mentioned can be done by converting the effective annual interest rate, i, to the nominal annual interest rate compounded quarterly: i(4) = 4 × ((1 + i)^(1/4) - 1). Here, i(4) is the nominal annual interest rate, compounded quarterly, and the interest rate per quarter is i(4) / 4 = (1 + i)^(1/4) - 1. Present value of an annuity Many financial arrangements (including bonds, other loans, leases, salaries, membership dues, annuities including annuity-immediate and annuity-due, straight-line depreciation charges) stipulate structured payment schedules; payments of the same amount at regular time intervals. Such an arrangement is called an annuity. The expressions for the present value of such payments are summations of geometric series. There are two types of annuities: an annuity-immediate and annuity-due. For an annuity immediate, payments are received (or paid) at the end of each period, at times 1 through n, while for an annuity due, payments are received (or paid) at the beginning of each period, at times 0 through n - 1. This subtle difference must be accounted for when calculating the present value. An annuity due is an annuity immediate with one more interest-earning period. Thus, the two present values differ by a factor of (1 + i): PV(annuity due) = PV(annuity immediate) × (1 + i). The present value of an annuity immediate is the value at time 0 of the stream of cash flows: PV = C × (1 - (1 + i)^(-n)) / i (1), where: n = number of periods, C = amount of cash flows, i = effective periodic interest rate or rate of return. An approximation for annuity and loan calculations The above formula (1) for annuity immediate calculations offers little insight for the average user and requires the use of some form of computing machinery. There is an approximation which is less intimidating, easier to compute and offers some insight for the non-specialist. It is given by C ≈ PV × (1/n + (2/3) × i), where, as above, C is annuity payment, PV is principal, n is number of payments, starting at end of first period, and i is interest rate per period. Equivalently C is the periodic loan repayment for a loan of PV extending over n periods at interest rate, i. The formula is valid (for positive n, i) for ni ≤ 3; a complementary approximation applies for ni ≥ 3. The formula can, under some circumstances, reduce the calculation to one of mental arithmetic alone. For example, what are the (approximate) loan repayments for a loan of PV = $10,000 repaid annually for n = ten years at 15% interest (i = 0.15)? The applicable approximate formula is C ≈ 10,000*(1/10 + (2/3) 0.15) = 10,000*(0.1+0.1) = 10,000*0.2 = $2000 pa by mental arithmetic alone. The true answer is $1993, very close. The overall approximation is accurate to within ±6% (for all n≥1) for interest rates 0≤i≤0.20 and within ±10% for interest rates 0.20≤i≤0.40. It is, however, intended only for "rough" calculations. Present value of a perpetuity A perpetuity refers to periodic payments, receivable indefinitely, although few such instruments exist. The present value of a perpetuity can be calculated by taking the limit of formula (1) as n approaches infinity: PV = C / i (2). Formula (2) can also be found by subtracting from (1) the present value of a perpetuity delayed n periods, or directly by summing the present value of the payments, which form a geometric series. Again there is a distinction between a perpetuity immediate – when payments are received at the end of the period – and a perpetuity due – payment received at the beginning of a period.
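The loan-repayment example and the accuracy of the shortcut formula can be checked with a short script; a sketch using the subsection's own numbers ($10,000 repaid annually over ten years at 15%):

```python
def annuity_payment(principal, rate, periods):
    """Exact periodic payment, from formula (1) solved for C."""
    return principal * rate / (1 - (1 + rate) ** -periods)

def annuity_payment_approx(principal, rate, periods):
    """Back-of-the-envelope approximation C ≈ PV (1/n + 2i/3), intended for n*i <= 3."""
    return principal * (1 / periods + (2 / 3) * rate)

print(round(annuity_payment(10_000, 0.15, 10), 2))         # ≈ 1992.52
print(round(annuity_payment_approx(10_000, 0.15, 10), 2))  # 2000.0, the mental-arithmetic estimate
```

The exact payment of about $1,992.52 rounds to the $1,993 quoted above, while the approximation gives the $2,000 mental-arithmetic estimate.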
And similarly to annuity calculations, a perpetuity due and a perpetuity immediate differ by a factor of (1 + i): PV(perpetuity due) = PV(perpetuity immediate) × (1 + i) = C × (1 + i) / i. PV of a bond See: Bond valuation#Present value approach A corporation issues a bond, an interest earning debt security, to an investor to raise funds. The bond has a face value, F, a coupon rate, r, and a maturity date which in turn yields the number of coupon periods, n, until the debt matures and must be repaid. A bondholder will receive coupon payments semiannually (unless otherwise specified) in the amount of F × r per period, until the bond matures, at which point the bondholder will receive the final coupon payment and the face value of the bond, F. The present value of a bond is the purchase price. With i the market interest rate per period, the purchase price can be computed as: P = F × r × (1 - (1 + i)^(-n)) / i + F / (1 + i)^n. The purchase price is equal to the bond's face value if the coupon rate is equal to the current interest rate of the market, and in this case, the bond is said to be sold 'at par'. If the coupon rate is less than the market interest rate, the purchase price will be less than the bond's face value, and the bond is said to have been sold 'at a discount', or below par. Finally, if the coupon rate is greater than the market interest rate, the purchase price will be greater than the bond's face value, and the bond is said to have been sold 'at a premium', or above par. Technical details Present value is additive. The present value of a bundle of cash flows is the sum of each one's present value. See time value of money for further discussion. These calculations must be applied carefully, as there are underlying assumptions: That it is not necessary to account for price inflation, or alternatively, that the cost of inflation is incorporated into the interest rate; see Inflation-indexed bond. That the likelihood of receiving the payments is high — or, alternatively, that the default risk is incorporated into the interest rate; see Corporate bond#Risk analysis. (In fact, the present value of a cashflow at a constant interest rate is mathematically one point in the Laplace transform of that cashflow, evaluated with the transform variable (usually denoted "s") equal to the interest rate. The full Laplace transform is the curve of all present values, plotted as a function of interest rate. For discrete time, where payments are separated by large time periods, the transform reduces to a sum, but when payments are ongoing on an almost continual basis, the mathematics of continuous functions can be used as an approximation.) Variants/approaches There are mainly two flavors of Present Value. Whenever there will be uncertainties in both timing and amount of the cash flows, the expected present value approach will often be the appropriate technique. With Present Value under uncertainty, future dividends are replaced by their conditional expectation. Traditional Present Value Approach – in this approach a single set of estimated cash flows and a single interest rate (commensurate with the risk, typically a weighted average of cost components) will be used to estimate the fair value. Expected Present Value Approach – in this approach multiple cash flows scenarios with different/expected probabilities and a credit-adjusted risk free rate are used to estimate the fair value. Choice of interest rate The interest rate used is the risk-free interest rate if there are no risks involved in the project. The rate of return from the project must equal or exceed this rate of return or it would be better to invest the capital in these risk free assets.
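Returning briefly to the 'PV of a bond' subsection above, the purchase-price formula can be sketched as follows; the particular bond (face value, coupon, market yield and maturity) is an invented example, not one taken from the text:

```python
def bond_price(face, coupon_rate_per_period, market_rate_per_period, periods):
    """Price = PV of the coupon annuity plus PV of the face value repaid at maturity."""
    coupon = face * coupon_rate_per_period
    annuity_factor = (1 - (1 + market_rate_per_period) ** -periods) / market_rate_per_period
    return coupon * annuity_factor + face / (1 + market_rate_per_period) ** periods

# A $1000 face-value bond paying a 2.5% semiannual coupon for 20 periods (ten years),
# priced when the market rate is 3% per semiannual period (assumed example figures).
print(round(bond_price(1000, 0.025, 0.03, 20), 2))  # ≈ 925.61
```

Because the assumed coupon rate (2.5% per period) is below the assumed market rate (3% per period), the computed price sits below the face value, i.e. the bond sells 'at a discount', as described above.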
If there are risks involved in an investment this can be reflected through the use of a risk premium. The risk premium required can be found by comparing the project with the rate of return required from other projects with similar risks. Thus it is possible for investors to take account of any uncertainty involved in various investments. Present value method of valuation An investor, the lender of money, must decide the financial project in which to invest their money, and present value offers one method of deciding. A financial project requires an initial outlay of money, such as the price of stock or the price of a corporate bond. The project claims to return the initial outlay, as well as some surplus (for example, interest, or future cash flows). An investor can decide which project to invest in by calculating each projects’ present value (using the same interest rate for each calculation) and then comparing them. The project with the smallest present value – the least initial outlay – will be chosen because it offers the same return as the other projects for the least amount of money. Years' purchase The traditional method of valuing future income streams as a present capital sum is to multiply the average expected annual cash-flow by a multiple, known as "years' purchase". For example, in selling to a third party a property leased to a tenant under a 99-year lease at a rent of $10,000 per annum, a deal might be struck at "20 years' purchase", which would value the lease at 20 * $10,000, i.e. $200,000. This equates to a present value discounted in perpetuity at 5%. For a riskier investment the purchaser would demand to pay a lower number of years' purchase. This was the method used for example by the English crown in setting re-sale prices for manors seized at the Dissolution of the Monasteries in the early 16th century. The standard usage was 20 years' purchase. See also Capital budgeting Current yield Lifetime value Liquidation Net present value Present value interest factor References Further reading Income Mathematical finance fr:Valeur actuelle
Present value
[ "Mathematics" ]
4,070
[ "Applied mathematics", "Mathematical finance" ]
63,243
https://en.wikipedia.org/wiki/Chemical%20symbol
Chemical symbols are the abbreviations used in chemistry, mainly for chemical elements; but also for functional groups, chemical compounds, and other entities. Element symbols for chemical elements, also known as atomic symbols, normally consist of one or two letters from the Latin alphabet and are written with the first letter capitalised. History Earlier symbols for chemical elements stem from classical Latin and Greek vocabulary. For some elements, this is because the material was known in ancient times, while for others, the name is a more recent invention. For example, Pb is the symbol for lead (plumbum in Latin); Hg is the symbol for mercury (hydrargyrum in Greek); and He is the symbol for helium (a Neo-Latin name) because helium was not known in ancient Roman times. Some symbols come from other sources, like W for tungsten (Wolfram in German) which was not known in Roman times. A three-letter temporary symbol may be assigned to a newly synthesized (or not yet synthesized) element. For example, "Uno" was the temporary symbol for hassium (element 108) which had the temporary name of unniloctium, based on the digits of its atomic number. There are also some historical symbols that are no longer officially used. Extension of the symbol In addition to the letters for the element itself, additional details may be added to the symbol as superscripts or subscripts a particular isotope, ionization, or oxidation state, or other atomic detail. A few isotopes have their own specific symbols rather than just an isotopic detail added to their element symbol. Attached subscripts or superscripts specifying a nuclide or molecule have the following meanings and positions: The nucleon number (mass number) is shown in the left superscript position (e.g., 14N). This number defines the specific isotope. Various letters, such as "m" and "f" may also be used here to indicate a nuclear isomer (e.g., 99mTc). Alternately, the number here can represent a specific spin state (e.g., 1O2). These details can be omitted if not relevant in a certain context. The proton number (atomic number) may be indicated in the left subscript position (e.g., 64Gd). The atomic number is redundant to the chemical element, but is sometimes used to emphasize the change of numbers of nucleons in a nuclear reaction. If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., state of ionization Ca2+). The number of atoms of an element in a molecule or chemical compound is shown in the right subscript position (e.g., N2 or Fe2O3). If this number is one, it is normally omitted - the number one is implicitly understood if unspecified. A radical is indicated by a dot on the right side (e.g., Cl• for a neutral chlorine atom). This is often omitted unless relevant to a certain context because it is already deducible from the charge and atomic number, as generally true for nonbonded valence electrons in skeletal structures. Many functional groups also have their own chemical symbol, e.g. Ph for the phenyl group, and Me for the methyl group. A list of current, dated, as well as proposed and historical signs and symbols is included here with its signification. Also given is each element's atomic number, atomic weight, or the atomic mass of the most stable isotope, group and period numbers on the periodic table, and etymology of the symbol. 
Symbols for chemical elements Symbols and names not currently used The following is a list of symbols and names formerly used or suggested for elements, including symbols for placeholder names and names given by discredited claimants for discovery. Systematic chemical symbols These symbols are based on systematic element names, which are now replaced by trivial (non-systematic) element names and symbols. Data is given in order of: atomic number, systematic symbol, systematic name; trivial symbol, trivial name. 101: Unu, unnilunium; Md, mendelevium. 102: Unb, unnilbium; No, nobelium. 103: Unt, unniltrium; Lr, lawrencium. 104: Unq, unnilquadium; Rf, rutherfordium. 105: Unp, unnilpentium; Db, dubnium. 106: Unh, unnilhexium; Sg, seaborgium. 107: Uns, unnilseptium; Bh, bohrium. 108: Uno, unniloctium; Hs, hassium. 109: Une, unnilennium; Mt, meitnerium. 110: Uun, ununnilium; Ds, darmstadtium. 111: Uuu, unununium; Rg, roentgenium. 112: Uub, ununbium; Cn, copernicium. 113: Uut, ununtrium; Nh, nihonium. 114: Uuq, ununquadium; Fl, flerovium. 115: Uup, ununpentium; Mc, moscovium. 116: Uuh, ununhexium; Lv, livermorium. 117: Uus, ununseptium; Ts, tennessine. 118: Uuo, ununoctium; Og, oganesson. When elements beyond oganesson (starting with ununennium, Uue, element 119), are discovered; their systematic name and symbol will presumably be superseded by a trivial name and symbol. Alchemical symbols The following ideographic symbols were used in alchemy to denote elements known since ancient times. Not included in this list are spurious elements, such as the classical elements fire and water or phlogiston, and substances now known to be compounds. Many more symbols were in at least sporadic use: one early 17th-century alchemical manuscript lists 22 symbols for mercury alone. Planetary names and symbols for the metals – the seven planets and seven metals known since Classical times in Europe and the Mideast – was ubiquitous in alchemy. The association of what are anachronistically known as planetary metals started breaking down with the discovery of antimony, bismuth and zinc in the 16th century. Alchemists would typically call the metals by their planetary names, e.g. "Saturn" for lead and "Mars" for iron; compounds of tin, iron and silver continued to be called "jovial", "martial" and "lunar"; or "of Jupiter", "of Mars" and "of the moon", through the 17th century. The tradition remains today with the name of the element mercury, where chemists decided the planetary name was preferable to common names like "quicksilver", and in a few archaic terms such as lunar caustic (silver nitrate) and saturnism (lead poisoning). Daltonian symbols The following symbols were employed by John Dalton in the early 1800s as the periodic table of elements was being formulated. Not included in this list are substances now known to be compounds, such as certain rare-earth mineral blends. Modern alphabetic notation was introduced in 1814 by Jöns Jakob Berzelius; its precursor can be seen in Dalton's circled letters for the metals, especially in his augmented table from 1810. A trace of Dalton's conventions also survives in ball-and-stick models of molecules, where balls for carbon are black and for oxygen red. Symbols for named isotopes The following is a list of isotopes which have been given unique symbols. This is not a list of current systematic symbols (in the Atom form); such a list can instead be found in Template:Navbox element isotopes. 
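The systematic names and symbols listed above follow a fixed digit-to-root scheme (0 = nil, 1 = un, 2 = bi, 3 = tri, 4 = quad, 5 = pent, 6 = hex, 7 = sept, 8 = oct, 9 = enn), so they can be generated mechanically. A minimal sketch (the helper name is arbitrary; the standard elision rules are noted in the comments):

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name_and_symbol(atomic_number):
    digits = [int(d) for d in str(atomic_number)]
    name = "".join(ROOTS[d] for d in digits) + "ium"
    # Elision rules: "enn" before "nil" drops one n; "bi"/"tri" before "ium" drop one i.
    name = name.replace("nnn", "nn").replace("iium", "ium")
    symbol = "".join(ROOTS[d][0] for d in digits).capitalize()
    return name, symbol

print(systematic_name_and_symbol(118))  # ('ununoctium', 'Uuo')
print(systematic_name_and_symbol(119))  # ('ununennium', 'Uue')
print(systematic_name_and_symbol(120))  # ('unbinilium', 'Ubn')
```

Run against 101 to 118 this reproduces the systematic names and symbols in the list above, and for element 119 it yields ununennium (Uue), as mentioned.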
The symbols for isotopes of hydrogen, deuterium (D) and tritium (T), are still in use today, as is thoron (Tn) for radon-220 (though not actinon; An usually instead means a generic actinide). Heavy water and other deuterated solvents are commonly used in chemistry, and it is convenient to use a single character rather than a symbol with a subscript in these cases. The practice also continues with tritium compounds. When the name of the solvent is given, a lowercase d is sometimes used. For example, d-benzene or CD can be used instead of C[H]. The symbols for isotopes of elements other than hydrogen and radon are no longer used in the scientific community. Many of these symbols were designated during the early years of radiochemistry, and several isotopes (namely those in the decay chains of actinium, radium, and thorium) bear placeholder names using the early naming system devised by Ernest Rutherford. Other symbols In Chinese, each chemical element has a dedicated character, usually created for the purpose (see Chemical elements in East Asian languages). However, in Chinese Latin symbols are also used, especially in formulas. General: A: A deprotonated acid or an anion An: any actinide B: A base, often in the context of Lewis acid–base theory or Brønsted–Lowry acid–base theory E: any element or electrophile L: any ligand Ln: any lanthanide M: any metal Mm: mischmetal (occasionally used) Ng: any noble gas (Rg is sometimes used, but that is also used for the element roentgenium: see above) Nu: any nucleophile R: any unspecified radical (moiety) not important to the discussion St: steel (occasionally used) X: any halogen (or sometimes pseudohalogen) From organic chemistry: Ac: acetyl – (also used for the element actinium: see above) Ad: 1-adamantyl All: allyl Am: amyl (pentyl) – (also used for the element americium: see above) Ar: aryl – (also used for the element argon: see above) Bn: benzyl Bs: brosyl or (outdated) benzenesulfonyl Bu: butyl (i-, s-, or t- prefixes may be used to denote iso-, sec-, or tert- isomers, respectively) Bz: benzoyl Cp: cyclopentadienyl Cp*: pentamethylcyclopentadienyl Cy: cyclohexyl Cyp: cyclopentyl Et: ethyl Me: methyl Mes: mesityl (2,4,6-trimethylphenyl) Ms: mesyl (methylsulfonyl) Np: neopentyl – (also used for the element neptunium: see above) Ns: nosyl Pent: pentyl Ph, Φ: phenyl Pr: propyl – (i- prefix may be used to denote isopropyl. Also used for the element praseodymium: see above) R: In organic chemistry contexts, an unspecified "R" is often understood to be an alkyl group Tf: triflyl (trifluoromethanesulfonyl) Tr, Trt: trityl (triphenylmethyl) Ts, Tos: tosyl (para-toluenesulfonyl) – (Ts also used for the element tennessine: see above) Vi: vinyl Exotic atoms: Mu: muonium Pn: protonium Ps: positronium Hazard pictographs are another type of symbols used in chemistry. See also List of chemical elements naming controversies List of elements Nuclear notation Notes References Elementymology & Elements Multidict, element name etymologies. Retrieved July 15, 2005. Atomic Weights of the Elements 2001, Pure Appl. Chem. 75(8), 1107–1122, 2003. Retrieved June 30, 2005. Atomic weights of elements with atomic numbers from 1–109 taken from this source. IUPAC Standard Atomic Weights Revised (2005). WebElements Periodic Table. Retrieved June 30, 2005. Atomic weights of elements with atomic numbers 110–116 taken from this source. Leighton, Robert B. Principles of Modern Physics. New York: McGraw-Hill. 1959. Scerri, E.R. "The Periodic Table, Its Story and Its Significance". 
New York, Oxford University Press. 2007. External links Berzelius' List of Elements History of IUPAC Atomic Weight Values (1883 to 1997) Committee on Nomenclature, Terminology, and Symbols , American Chemical Society Chemistry pl:Symbol chemiczny
Chemical symbol
[ "Physics", "Mathematics" ]
2,658
[ "Symbols", "Chemical elements", "Atoms", "Matter" ]
63,249
https://en.wikipedia.org/wiki/Pilcrow
In typography, the pilcrow (¶) is a glyph used to identify a paragraph. In editorial production the pilcrow typographic character may also be known as the paragraph mark, the paragraph sign, the paragraph symbol, the paraph, and the blind P. In writing and editorial practice, authors and editors use the pilcrow glyph to indicate the start of separate paragraphs, and to identify a new paragraph within a long block of text without paragraph indentions, as in the book An Essay on Typography (1931), by Eric Gill. In the Middle Ages, the practice of rubrication (type in red-ink) used a red pilcrow to indicate the beginning of a different train of thought within the author's narrative without paragraphs. The typographic character of the pilcrow usually is drawn like a reversed lowercase letter q, reaching from the descender to the ascender height; the bowl (loop) can be filled or empty. Moreover, the pilcrow can also be drawn with the bowl extended downward, to resemble a reversed letter D. Origin and name The English word pilcrow derives from the Greek parágraphos, "written in the side" or "written in the margin". The word passed into Old French, where its form was progressively altered. The earliest English language reference to the modern pilcrow is in 1440, with the Middle English word pylcrafte. Use in Ancient Greek The first way to divide sentences into groups in Ancient Greek was the original parágraphos, which was a horizontal line in the margin to the left of the main text. As the parágraphos became more popular, the horizontal line eventually changed into the Greek letter Gamma (Γ, γ) and later into enlarged letters at the beginning of a paragraph. Use in Latin The above notation soon changed to the letter K, an abbreviation for the Latin word caput, which translates as "head", i.e. it marks the head of a new thesis. Eventually, to mark a new section, the Latin word capitulum, which translates as "little head", was used, and the letter C came to mark a new section, or chapter, in 300 BC. Use in Middle Ages In the 1100s, C had completely replaced K as the symbol for a new chapter. Rubricators eventually added one or two vertical bars to the C to stylize it (as ⸿); the "bowl" of the symbol was filled in with dark ink and eventually looked like the modern pilcrow, ¶. (Scribes would often leave space before paragraphs to allow rubricators to add a hand-drawn pilcrow in contrasting ink. With the introduction of the printing press from the late medieval period on, space before paragraphs was still left for rubricators to complete by hand. However in some circumstances, rubricators could not draw fast enough for publishers' deadlines and books would often be sold with the beginnings of the paragraphs left blank. This is how the practice of indention before paragraphs was created.) Modern use The pilcrow remains in use in modern documents in the following ways: In legal writing, it is often used whenever one cites a specific paragraph within pleadings, law review articles, statutes, or other legal documents and materials. It is also used to indicate a paragraph break within quoted text. In academic writing, it is sometimes used as an in-text referencing tool to make reference to a specific paragraph from a document that does not contain page numbers, allowing the reader to find where that particular idea or statistic was sourced. The pilcrow sign followed by a number indicates the paragraph number from the top of the page. It is rarely used when citing books or journal articles. In web publishing style guides, a pilcrow may be used to indicate an anchor link.
In proofreading, it indicates an instruction that one paragraph should be split into two or more separate paragraphs. The proofreader inserts the pilcrow at the point where a new paragraph should begin. In some high-church Anglican and Episcopal churches, it is used in the printed order of service to indicate that instructions follow; these indicate when the congregation should stand, sit, and kneel, who participates in various portions of the service, and similar information. King's College, Cambridge uses this convention in the service booklet for the Festival of Nine Lessons and Carols. This is analogous to the writing of these instructions in red in some rubrication conventions. The pilcrow is also often used in word processing and desktop publishing software: As the toolbar icon used to toggle the display of formatting marks, such as tabs and paragraph breaks. As the symbol for a paragraph break, shown when display is requested. The pilcrow may indicate a footnote in a convention that uses a set of distinct typographic symbols in turn to distinguish between footnotes on a given page; it is the sixth in a series of footnote symbols beginning with the asterisk. (The modern convention is to use numbers or letters in superscript form.) Encoding The pilcrow character was encoded in the 1984 Multinational Character Set (Digital Equipment Corporation's extension to ASCII) at 0xB6 (decimal 182), subsequently adopted by ISO/IEC 8859-1 ("ISO Latin-1", 1987) at the same code point, and thence by Unicode as . In addition, Unicode also defines , , and . The capitulum character is obsolete, being replaced by pilcrow, but is included in Unicode for backward compatibility and historic studies. The pilcrow symbol was included in the default hardware codepage 437 of IBM PCs (and all other 8-bit OEM codepages based on this) at code point 20 (0x14), which is an ASCII control character. Keyboard entry Windows: or (both on the numeric keypad) Microsoft US international keyboard layout: Classic Mac OS and macOS: Linux and ChromeOS: Linux with compose key: ChromeOS with UK-International keyboard layout: HTML: (introduced in HTML 3.2 (1997)), or Vim, in insert mode:     (upper-case i, not a digit 1 or a lower-case letter L) TeX: \P LaTeX: \P or \textpilcrow Android phones (Gboard): Apple iPhones and iPads may require the user to set up a text replacement shortcut without installing custom keyboard software. Tools may be required to easily generate a pilcrow, or other special characters. Paragraph signs in non-Latin writing systems In Sanskrit and other Indian languages, text blocks are commonly written in stanzas. Two vertical bars, ॥, called a "double daṇḍa", are the functional equivalent of a pilcrow. In Thai, the character ๏ marks the beginning of a stanza and ฯะ or ๚ะ marks the end of a stanza. In Amharic, the characters ፠ and ፨ can mark a section/paragraph. In China, the 〇, which has been used as a zero character since the 12th century, has been used to mark paragraphs in older Western-made books such as the Chinese Union Version of the Bible. References External links Punctuation Typographical symbols
Pilcrow
[ "Mathematics" ]
1,495
[ "Symbols", "Typographical symbols" ]
63,262
https://en.wikipedia.org/wiki/Financial%20economics
Financial economics is the branch of economics characterized by a "concentration on monetary activities", in which "money of one type or another is likely to appear on both sides of a trade". Its concern is thus the interrelation of financial variables, such as share prices, interest rates and exchange rates, as opposed to those concerning the real economy. It has two main areas of focus: asset pricing and corporate finance; the first being the perspective of providers of capital, i.e. investors, and the second of users of capital. It thus provides the theoretical underpinning for much of finance. The subject is concerned with "the allocation and deployment of economic resources, both spatially and across time, in an uncertain environment". It therefore centers on decision making under uncertainty in the context of the financial markets, and the resultant economic and financial models and principles, and is concerned with deriving testable or policy implications from acceptable assumptions. It thus also includes a formal study of the financial markets themselves, especially market microstructure and market regulation. It is built on the foundations of microeconomics and decision theory. Financial econometrics is the branch of financial economics that uses econometric techniques to parameterise the relationships identified. Mathematical finance is related in that it will derive and extend the mathematical or numerical models suggested by financial economics. Whereas financial economics has a primarily microeconomic focus, monetary economics is primarily macroeconomic in nature. Underlying economics Financial economics studies how rational investors would apply decision theory to investment management. The subject is thus built on the foundations of microeconomics and derives several key results for the application of decision making under uncertainty to the financial markets. The underlying economic logic yields the fundamental theorem of asset pricing, which gives the conditions for arbitrage-free asset pricing. The various "fundamental" valuation formulae result directly. Present value, expectation and utility Underlying all of financial economics are the concepts of present value and expectation. Calculating their present value, in the first formula, allows the decision maker to aggregate the cashflows (or other returns) to be produced by the asset in the future to a single value at the date in question, and to thus more readily compare two opportunities; this concept is then the starting point for financial decision making. (Note that here, "r" represents a generic (or arbitrary) discount rate applied to the cash flows, whereas in the valuation formulae, the risk-free rate is applied once these have been "adjusted" for their riskiness; see below.) An immediate extension is to combine probabilities with present value, leading to the expected value criterion which sets asset value as a function of the sizes of the expected payouts and the probabilities of their occurrence, X_s and p_s respectively. This decision method, however, fails to consider risk aversion. In other words, since individuals receive greater utility from an extra dollar when they are poor and less utility when comparatively rich, the approach is therefore to "adjust" the weight assigned to the various outcomes, i.e. "states", correspondingly: Y_s. See indifference price. (Some investors may in fact be risk seeking as opposed to risk averse, but the same logic would apply.)
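As a small numerical illustration of this weighting (the 50/50 gamble and the logarithmic utility function are assumptions chosen for the example, not taken from the text), compare a gamble with its expected value:

```python
import math

outcomes = [50.0, 150.0]        # equally likely payoffs (assumed example)
probabilities = [0.5, 0.5]

expected_value = sum(p * x for p, x in zip(probabilities, outcomes))              # 100.0

# A concave utility (here, log) weights the poor-state dollar more heavily.
expected_utility = sum(p * math.log(x) for p, x in zip(probabilities, outcomes))
certainty_equivalent = math.exp(expected_utility)                                 # ≈ 86.6

print(expected_value, round(certainty_equivalent, 1))
```

The certainty equivalent of roughly 86.6 is below the expected value of 100: a risk-averse decision maker treats the gamble as if the poorer outcome carried extra weight, which is the sense of the "adjustment" described above.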
Choice under uncertainty here may then be defined as the maximization of expected utility. More formally, the resulting expected utility hypothesis states that, if certain axioms are satisfied, the subjective value associated with a gamble by an individual is that individual's statistical expectation of the valuations of the outcomes of that gamble. The impetus for these ideas arises from various inconsistencies observed under the expected value framework, such as the St. Petersburg paradox and the Ellsberg paradox. Arbitrage-free pricing and equilibrium The concepts of arbitrage-free, "rational", pricing and equilibrium are then coupled with the above to derive various of the "classical" (or "neo-classical") financial economics models. Rational pricing is the assumption that asset prices (and hence asset pricing models) will reflect the arbitrage-free price of the asset, as any deviation from this price will be "arbitraged away". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments. Economic equilibrium is a state in which economic forces such as supply and demand are balanced, and in the absence of external influences these equilibrium values of economic variables will not change. General equilibrium deals with the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium. (This is in contrast to partial equilibrium, which only analyzes single markets.) The two concepts are linked as follows: where market prices do not allow profitable arbitrage, i.e. they comprise an arbitrage-free market, then these prices are also said to constitute an "arbitrage equilibrium". Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, then prices can be expected to change, and they are therefore not in equilibrium. An arbitrage equilibrium is thus a precondition for a general economic equilibrium. "Complete" here means that there is a price for every asset in every possible state of the world, and that the complete set of possible bets on future states-of-the-world can therefore be constructed with existing assets (assuming no friction): essentially solving simultaneously for n (risk-neutral) probabilities, given n prices. For a simplified example, consider an economy with only two possible states – up and down – with two corresponding probabilities, which in turn constitute the derived distribution, or "measure". The formal derivation will proceed by arbitrage arguments. The analysis here is often undertaken assuming a representative agent, essentially treating all market participants, "agents", as identical (or, at least, assuming that they act in such a way that the sum of their choices is equivalent to the decision of one individual) with the effect that the problems are then mathematically tractable. With this measure in place, the expected, i.e. required, return of any security (or portfolio) will then equal the risk-free return, plus an "adjustment for risk", i.e. a security-specific risk premium, compensating for the extent to which its cashflows are unpredictable. All pricing models are then essentially variants of this, given specific assumptions or conditions. This approach is consistent with the above, but with the expectation based on "the market" (i.e.
arbitrage-free, and, per the theorem, therefore in equilibrium) as opposed to individual preferences. Continuing the example, in pricing a derivative instrument, its forecasted cashflows in the above-mentioned up- and down-states and , are multiplied through by and , and are then discounted at the risk-free interest rate; per the second equation above. In pricing a "fundamental", underlying, instrument (in equilibrium), on the other hand, a risk-appropriate premium over risk-free is required in the discounting, essentially employing the first equation with and combined. This premium may be derived by the CAPM (or extensions) as will be seen under . The difference is explained as follows: By construction, the value of the derivative will (must) grow at the risk free rate, and, by arbitrage arguments, its value must then be discounted correspondingly; in the case of an option, this is achieved by "manufacturing" the instrument as a combination of the underlying and a risk free "bond"; see (and below). Where the underlying is itself being priced, such "manufacturing" is of course not possible – the instrument being "fundamental", i.e. as opposed to "derivative" – and a premium is then required for risk. (Correspondingly, mathematical finance separates into two analytic regimes: risk and portfolio management (generally) use physical (or actual or actuarial) probability, denoted by "P"; while derivatives pricing uses risk-neutral probability (or arbitrage-pricing probability), denoted by "Q". In specific applications the lower case is used, as in the above equations.) State prices With the above relationship established, the further specialized Arrow–Debreu model may be derived. This result suggests that, under certain economic conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy. The Arrow–Debreu model applies to economies with maximally complete markets, in which there exists a market for every time period and forward prices for every commodity at all time periods. A direct extension, then, is the concept of a state price security, also called an Arrow–Debreu security, a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs ("up" and "down" in the simplified example above) at a particular time in the future and pays zero numeraire in all the other states. The price of this security is the state price of this particular state of the world; also referred to as a "Risk Neutral Density". In the above example, the state prices, , would equate to the present values of and : i.e. what one would pay today, respectively, for the up- and down-state securities; the state price vector is the vector of state prices for all states. Applied to derivative valuation, the price today would simply be : the fourth formula (see above regarding the absence of a risk premium here). For a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price "density". State prices find immediate application as a conceptual tool ("contingent claim analysis"); but can also be applied to valuation problems. Given the pricing mechanism described, one can decompose the derivative value – true in fact for "every security" – as a linear combination of its state-prices; i.e. back-solve for the state-prices corresponding to observed derivative prices. 
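The simplified two-state example above can be worked numerically. The sketch below, with hypothetical prices and payoffs, backs state prices (and the corresponding risk-neutral probabilities) out of a riskless bond and a stock, and then prices a call option as a linear combination of those state prices.

```python
# Hypothetical two-state (up/down) economy: back out Arrow–Debreu state prices
# from a riskless bond and a stock, then price a call option off them.
r = 0.05                      # one-period risk-free rate (assumed)
stock_price = 100.0
stock_payoff = (120.0, 90.0)  # (up-state, down-state) payoffs, assumed

# Solve  pi_up * payoff_up + pi_down * payoff_down = price  for both assets.
# From the bond (pays 1 in both states):  pi_up + pi_down = 1/(1+r)
# From the stock:                         120*pi_up + 90*pi_down = 100
bond_price = 1.0 / (1.0 + r)
pi_up = (stock_price - stock_payoff[1] * bond_price) / (stock_payoff[0] - stock_payoff[1])
pi_down = bond_price - pi_up

# Risk-neutral probabilities are the state prices "grossed up" by the risk-free rate.
q_up, q_down = pi_up * (1 + r), pi_down * (1 + r)

# Any other claim on the same two states is now priced by linearity,
# e.g. a call option struck at 105:
call_payoff = (max(stock_payoff[0] - 105.0, 0.0), max(stock_payoff[1] - 105.0, 0.0))
call_price = pi_up * call_payoff[0] + pi_down * call_payoff[1]
print(round(pi_up, 4), round(pi_down, 4), round(q_up, 4), round(q_down, 4), round(call_price, 4))
```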
These recovered state-prices can then be used for valuation of other instruments with exposure to the underlyer, or for other decision making relating to the underlyer itself. Using the related stochastic discount factor - also called the pricing kernel - the asset price is computed by "discounting" the future cash flow by the stochastic factor , and then taking the expectation; the third equation above. Essentially, this factor divides expected utility at the relevant future period - a function of the possible asset values realized under each state - by the utility due to today's wealth, and is then also referred to as "the intertemporal marginal rate of substitution". Resultant models Applying the above economic concepts, we may then derive various economic- and financial models and principles. As above, the two usual areas of focus are Asset Pricing and Corporate Finance, the first being the perspective of providers of capital, the second of users of capital. Here, and for (almost) all other financial economics models, the questions addressed are typically framed in terms of "time, uncertainty, options, and information", as will be seen below. Time: money now is traded for money in the future. Uncertainty (or risk): The amount of money to be transferred in the future is uncertain. Options: one party to the transaction can make a decision at a later time that will affect subsequent transfers of money. Information: knowledge of the future can reduce, or possibly eliminate, the uncertainty associated with future monetary value (FMV). Applying this framework, with the above concepts, leads to the required models. This derivation begins with the assumption of "no uncertainty" and is then expanded to incorporate the other considerations. (This division sometimes denoted "deterministic" and "random", or "stochastic".) Certainty The starting point here is "Investment under certainty", and usually framed in the context of a corporation. The Fisher separation theorem, asserts that the objective of the corporation will be the maximization of its present value, regardless of the preferences of its shareholders. Related is the Modigliani–Miller theorem, which shows that, under certain conditions, the value of a firm is unaffected by how that firm is financed, and depends neither on its dividend policy nor its decision to raise capital by issuing stock or selling debt. The proof here proceeds using arbitrage arguments, and acts as a benchmark for evaluating the effects of factors outside the model that do affect value. The mechanism for determining (corporate) value is provided by John Burr Williams' The Theory of Investment Value, which proposes that the value of an asset should be calculated using "evaluation by the rule of present worth". Thus, for a common stock, the "intrinsic", long-term worth is the present value of its future net cashflows, in the form of dividends. What remains to be determined is the appropriate discount rate. Later developments show that, "rationally", i.e. in the formal sense, the appropriate discount rate here will (should) depend on the asset's riskiness relative to the overall market, as opposed to its owners' preferences; see below. Net present value (NPV) is the direct extension of these ideas typically applied to Corporate Finance decisioning. For other results, as well as specific models developed here, see the list of "Equity valuation" topics under . 
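A minimal sketch of "evaluation by the rule of present worth" for a common stock follows; the dividend forecasts, required return and terminal growth rate are illustrative assumptions.

```python
# Sketch of intrinsic value as the present value of forecast dividends, with a
# constant-growth (Gordon-style) terminal value. All inputs are hypothetical.
dividends = [2.00, 2.10, 2.25, 2.40, 2.55]   # explicit per-share forecasts
required_return = 0.09                        # discount rate (see CAPM below)
terminal_growth = 0.03                        # long-run dividend growth assumption

value = sum(d / (1 + required_return) ** t for t, d in enumerate(dividends, start=1))

# Constant-growth value of all dividends beyond the forecast horizon,
# discounted back to today:
terminal_value = dividends[-1] * (1 + terminal_growth) / (required_return - terminal_growth)
value += terminal_value / (1 + required_return) ** len(dividends)

print(round(value, 2))   # "intrinsic", long-term worth per share
```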
Bond valuation, in that cashflows (coupons and return of principal, or "Face value") are deterministic, may proceed in the same fashion. An immediate extension, Arbitrage-free bond pricing, discounts each cashflow at the market derived rate – i.e. at each coupon's corresponding zero rate, and of equivalent credit worthiness – as opposed to an overall rate. In many treatments bond valuation precedes equity valuation, under which cashflows (dividends) are not "known" per se. Williams and onward allow for forecasting as to these – based on historic ratios or published dividend policy – and cashflows are then treated as essentially deterministic; see below under . For both stocks and bonds, "under certainty, with the focus on cash flows from securities over time," valuation based on a term structure of interest rates is in fact consistent with arbitrage-free pricing. Indeed, a corollary of the above is that "the law of one price implies the existence of a discount factor"; correspondingly, as formulated, . Whereas these "certainty" results are all commonly employed under corporate finance, uncertainty is the focus of "asset pricing models" as follows. Fisher's formulation of the theory here - developing an intertemporal equilibrium model - underpins also the below applications to uncertainty; see for the development. Uncertainty For "choice under uncertainty" the twin assumptions of rationality and market efficiency, as more closely defined, lead to modern portfolio theory (MPT) with its capital asset pricing model (CAPM) – an equilibrium-based result – and to the Black–Scholes–Merton theory (BSM; often, simply Black–Scholes) for option pricing – an arbitrage-free result. As above, the (intuitive) link between these, is that the latter derivative prices are calculated such that they are arbitrage-free with respect to the more fundamental, equilibrium determined, securities prices; see . Briefly, and intuitively – and consistent with above – the relationship between rationality and efficiency is as follows. Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more "correct", i.e. efficient, prices: the efficient-market hypothesis, or EMH. Thus, if prices of financial assets are (broadly) efficient, then deviations from these (equilibrium) values could not last for long. (See earnings response coefficient.) The EMH (implicitly) assumes that average expectations constitute an "optimal forecast", i.e. prices using all available information are identical to the best guess of the future: the assumption of rational expectations. The EMH does allow that when faced with new information, some investors may overreact and some may underreact, but what is required, however, is that investors' reactions follow a normal distribution – so that the net effect on market prices cannot be reliably exploited to make an abnormal profit. In the competitive limit, then, market prices will reflect all available information and prices can only move in response to news: the random walk hypothesis. This news, of course, could be "good" or "bad", minor or, less common, major; and these moves are then, correspondingly, normally distributed; with the price therefore following a log-normal distribution. 
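Returning to bond valuation as described at the start of this passage, the arbitrage-free approach, discounting each cashflow at the zero rate for its own maturity, can be sketched as follows; the zero curve and bond terms are hypothetical.

```python
# Hypothetical example: price a 3-year 5% annual-coupon bond (face 100)
# by discounting each cashflow at the zero rate for its own maturity,
# rather than at one overall yield.
zero_rates = {1: 0.030, 2: 0.035, 3: 0.040}   # assumed zero-coupon curve
face, coupon_rate = 100.0, 0.05

cashflows = {t: face * coupon_rate for t in (1, 2, 3)}
cashflows[3] += face                           # return of principal at maturity

price = sum(cf / (1 + zero_rates[t]) ** t for t, cf in cashflows.items())
print(round(price, 4))   # arbitrage-free price off the zero curve
```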
Under these conditions, investors can then be assumed to act rationally: their investment decision must be calculated or a loss is sure to follow; correspondingly, where an arbitrage opportunity presents itself, then arbitrageurs will exploit it, reinforcing this equilibrium. Here, as under the certainty-case above, the specific assumption as to pricing is that prices are calculated as the present value of expected future dividends, as based on currently available information. What is required though, is a theory for determining the appropriate discount rate, i.e. "required return", given this uncertainty: this is provided by the MPT and its CAPM. Relatedly, rationality – in the sense of arbitrage-exploitation – gives rise to Black–Scholes; option values here ultimately consistent with the CAPM. In general, then, while portfolio theory studies how investors should balance risk and return when investing in many assets or securities, the CAPM is more focused, describing how, in equilibrium, markets set the prices of assets in relation to how risky they are. This result will be independent of the investor's level of risk aversion and assumed utility function, thus providing a readily determined discount rate for corporate finance decision makers as above, and for other investors. The argument proceeds as follows: If one can construct an efficient frontier – i.e. each combination of assets offering the best possible expected level of return for its level of risk, see diagram – then mean-variance efficient portfolios can be formed simply as a combination of holdings of the risk-free asset and the "market portfolio" (the Mutual fund separation theorem), with the combinations here plotting as the capital market line, or CML. Then, given this CML, the required return on a risky security will be independent of the investor's utility function, and solely determined by its covariance ("beta") with aggregate, i.e. market, risk. This is because investors here can then maximize utility through leverage as opposed to pricing; see Separation property (finance), and CML diagram aside. As can be seen in the formula aside, this result is consistent with the preceding, equaling the riskless return plus an adjustment for risk. A more modern, direct, derivation is as described at the bottom of this section; which can be generalized to derive other equilibrium-pricing models. Black–Scholes provides a mathematical model of a financial market containing derivative instruments, and the resultant formula for the price of European-styled options. The model is expressed as the Black–Scholes equation, a partial differential equation describing the changing price of the option over time; it is derived assuming log-normal, geometric Brownian motion (see Brownian model of financial markets). The key financial insight behind the model is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk", absenting the risk adjustment from the pricing (, the value, or price, of the option, grows at , the risk-free rate). This hedge, in turn, implies that there is only one right price – in an arbitrage-free sense – for the option. And this price is returned by the Black–Scholes option pricing formula. (The formula, and hence the price, is consistent with the equation, as the formula is the solution to the equation.) 
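Both work-horse results described here reduce to short calculations. The sketch below first computes a CAPM required return and then a Black–Scholes call price; all market inputs are hypothetical.

```python
# Hypothetical inputs throughout; illustrative only.
from math import exp, sqrt, log, erf

# 1. CAPM: required return = risk-free rate + beta * (market risk premium).
def capm_required_return(risk_free, beta, expected_market_return):
    return risk_free + beta * (expected_market_return - risk_free)

print(f"{capm_required_return(0.03, 1.2, 0.08):.2%}")   # e.g. a beta of 1.2 gives 9.00%
# beta itself can be estimated as cov(R_i, R_m) / var(R_m) from return series.

# 2. Black–Scholes price of a European call.
def norm_cdf(x):                       # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # Note: the share's expected return does not appear; only the risk-free
    # rate and the volatility enter, reflecting the hedging argument.
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(black_scholes_call(S0=100.0, K=100.0, r=0.05, sigma=0.20, T=1.0), 4))
```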
Since the formula is without reference to the share's expected return, Black–Scholes inheres risk neutrality; intuitively consistent with the "elimination of risk" here, and mathematically consistent with above. Relatedly, therefore, the pricing formula may also be derived directly via risk neutral expectation. Itô's lemma provides the underlying mathematics, and, with Itô calculus more generally, remains fundamental in quantitative finance. As implied by the Fundamental Theorem, the two major results are consistent. Here, the Black Scholes equation can alternatively be derived from the CAPM, and the price obtained from the Black–Scholes model is thus consistent with the assumptions of the CAPM. The Black–Scholes theory, although built on Arbitrage-free pricing, is therefore consistent with the equilibrium based capital asset pricing. Both models, in turn, are ultimately consistent with the Arrow–Debreu theory, and can be derived via state-pricing – essentially, by expanding the fundamental result above – further explaining, and if required demonstrating, this consistency. Here, the CAPM is derived by linking , risk aversion, to overall market return, and setting the return on security as ; see . The Black–Scholes formula is found, in the limit, by attaching a binomial probability to each of numerous possible spot-prices (i.e. states) and then rearranging for the terms corresponding to and , per the boxed description; see . Extensions More recent work further generalizes and extends these models. As regards asset pricing, developments in equilibrium-based pricing are discussed under "Portfolio theory" below, while "Derivative pricing" relates to risk-neutral, i.e. arbitrage-free, pricing. As regards the use of capital, "Corporate finance theory" relates, mainly, to the application of these models. Portfolio theory The majority of developments here relate to required return, i.e. pricing, extending the basic CAPM. Multi-factor models such as the Fama–French three-factor model and the Carhart four-factor model, propose factors other than market return as relevant in pricing. The intertemporal CAPM and consumption-based CAPM similarly extend the model. With intertemporal portfolio choice, the investor now repeatedly optimizes her portfolio; while the inclusion of consumption (in the economic sense) then incorporates all sources of wealth, and not just market-based investments, into the investor's calculation of required return. Whereas the above extend the CAPM, the single-index model is a more simple model. It assumes, only, a correlation between security and market returns, without (numerous) other economic assumptions. It is useful in that it simplifies the estimation of correlation between securities, significantly reducing the inputs for building the correlation matrix required for portfolio optimization. The arbitrage pricing theory (APT) similarly differs as regards its assumptions. APT "gives up the notion that there is one right portfolio for everyone in the world, and ...replaces it with an explanatory model of what drives asset returns." It returns the required (expected) return of a financial asset as a linear function of various macro-economic factors, and assumes that arbitrage should bring incorrectly priced assets back into line. The linear factor model structure of the APT is used as the basis for many of the commercial risk systems employed by asset managers. As regards portfolio optimization, the Black–Litterman model departs from the original Markowitz model – i.e. 
of constructing portfolios via an efficient frontier. Black–Litterman instead starts with an equilibrium assumption, and is then modified to take into account the 'views' (i.e., the specific opinions about asset returns) of the investor in question to arrive at a bespoke asset allocation. Where factors additional to volatility are considered (kurtosis, skew...) then multiple-criteria decision analysis can be applied; here deriving a Pareto efficient portfolio. The universal portfolio algorithm applies machine learning to asset selection, learning adaptively from historical data. Behavioral portfolio theory recognizes that investors have varied aims and create an investment portfolio that meets a broad range of goals. Copulas have lately been applied here; recently this is the case also for genetic algorithms and Machine learning, more generally. (Tail) risk parity focuses on allocation of risk, rather than allocation of capital. See for other techniques and objectives, and for discussion. Derivative pricing In pricing derivatives, the binomial options pricing model provides a discretized version of Black–Scholes, useful for the valuation of American styled options. Discretized models of this type are built – at least implicitly – using state-prices (as above); relatedly, a large number of researchers have used options to extract state-prices for a variety of other applications in financial economics. For path dependent derivatives, Monte Carlo methods for option pricing are employed; here the modelling is in continuous time, but similarly uses risk neutral expected value. Various other numeric techniques have also been developed. The theoretical framework too has been extended such that martingale pricing is now the standard approach. Drawing on these techniques, models for various other underlyings and applications have also been developed, all based on the same logic (using "contingent claim analysis"). Real options valuation allows that option holders can influence the option's underlying; models for employee stock option valuation explicitly assume non-rationality on the part of option holders; Credit derivatives allow that payment obligations or delivery requirements might not be honored. Exotic derivatives are now routinely valued. Multi-asset underlyers are handled via simulation or copula based analysis. Similarly, the various short-rate models allow for an extension of these techniques to fixed income- and interest rate derivatives. (The Vasicek and CIR models are equilibrium-based, while Ho–Lee and subsequent models are based on arbitrage-free pricing.) The more general HJM Framework describes the dynamics of the full forward-rate curve – as opposed to working with short rates – and is then more widely applied. The valuation of the underlying instrument – additional to its derivatives – is relatedly extended, particularly for hybrid securities, where credit risk is combined with uncertainty re future rates; see and . Following the Crash of 1987, equity options traded in American markets began to exhibit what is known as a "volatility smile"; that is, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices, and thus implied volatilities, than what is suggested by BSM. (The pattern differs across various markets.) Modelling the volatility smile is an active area of research, and developments here – as well as implications re the standard theory – are discussed in the next section. 
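Picking up the binomial options pricing model mentioned at the start of this passage, a minimal sketch of the discretized, backward-induction valuation of an American-style put is given below; the market inputs, tree parameterization and number of steps are assumptions.

```python
# Binomial (discretized Black–Scholes) sketch for an American put, where early
# exercise matters; backward induction over a recombining tree.
from math import exp, sqrt

S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.20, 1.0, 500   # hypothetical inputs
dt = T / n
u = exp(sigma * sqrt(dt))                # up factor (one common parameterization)
d = 1.0 / u                              # down factor
q = (exp(r * dt) - d) / (u - d)          # risk-neutral up probability
disc = exp(-r * dt)

# Terminal payoffs at each node j (j up-moves out of n steps).
values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

# Step backwards: at each node, the option is worth the greater of its
# continuation value and the payoff from immediate exercise.
for step in range(n - 1, -1, -1):
    for j in range(step + 1):
        continuation = disc * (q * values[j + 1] + (1 - q) * values[j])
        exercise = max(K - S0 * u**j * d**(step - j), 0.0)
        values[j] = max(continuation, exercise)

print(round(values[0], 4))   # American put; dropping `exercise` gives the European value
```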
After the 2007–2008 financial crisis, a further development: as outlined, (over the counter) derivative pricing had relied on the BSM risk neutral pricing framework, under the assumptions of funding at the risk free rate and the ability to perfectly replicate cashflows so as to fully hedge. This, in turn, is built on the assumption of a credit-risk-free environment – called into question during the crisis. Addressing this, therefore, issues such as counterparty credit risk, funding costs and costs of capital are now additionally considered when pricing, and a credit valuation adjustment, or CVA – and potentially other valuation adjustments, collectively xVA – is generally added to the risk-neutral derivative value. The standard economic arguments can be extended to incorporate these various adjustments. A related, and perhaps more fundamental change, is that discounting is now on the Overnight Index Swap (OIS) curve, as opposed to LIBOR as used previously. This is because post-crisis, the overnight rate is considered a better proxy for the "risk-free rate". (Also, practically, the interest paid on cash collateral is usually the overnight rate; OIS discounting is then, sometimes, referred to as "CSA discounting".) Swap pricing – and, therefore, yield curve construction – is further modified: previously, swaps were valued off a single "self discounting" interest rate curve; whereas post crisis, to accommodate OIS discounting, valuation is now under a "multi-curve framework" where "forecast curves" are constructed for each floating-leg LIBOR tenor, with discounting on the common OIS curve. Corporate finance theory Mirroring the above developments, corporate finance valuations and decisioning no longer need assume "certainty". Monte Carlo methods in finance allow financial analysts to construct "stochastic" or probabilistic corporate finance models, as opposed to the traditional static and deterministic models; see . Relatedly, Real Options theory allows for owner – i.e. managerial – actions that impact underlying value: by incorporating option pricing logic, these actions are then applied to a distribution of future outcomes, changing with time, which then determine the "project's" valuation today. More traditionally, decision trees – which are complementary – have been used to evaluate projects, by incorporating in the valuation (all) possible events (or states) and consequent management decisions; the correct discount rate here reflecting each decision-point's "non-diversifiable risk looking forward." Related to this, is the treatment of forecasted cashflows in equity valuation. In many cases, following Williams above, the average (or most likely) cash-flows were discounted, as opposed to a theoretically correct state-by-state treatment under uncertainty; see comments under Financial modeling § Accounting. In more modern treatments, then, it is the expected cashflows (in the mathematical sense: ) combined into an overall value per forecast period which are discounted. And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows (essentially, and combined). Other developments here include agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues interrelated with capital structure. 
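The "stochastic" corporate finance modelling described above can be sketched with a small Monte Carlo simulation; the project's cashflow drivers, their distributions and the discount rate are all hypothetical assumptions.

```python
# "Probabilistic" corporate finance sketch: NPV as a distribution rather than a
# single deterministic number. All project assumptions below are hypothetical.
import random
random.seed(7)

initial_outlay = 1_000.0
discount_rate = 0.10          # e.g. from the CAPM, as discussed above
n_scenarios = 10_000

npvs = []
for _ in range(n_scenarios):
    # Uncertain drivers, drawn from assumed distributions:
    revenue_growth = random.gauss(0.05, 0.03)
    margin = random.gauss(0.30, 0.08)
    revenue, npv = 800.0, -initial_outlay
    for year in range(1, 6):
        revenue *= (1 + revenue_growth)
        cashflow = revenue * margin
        npv += cashflow / (1 + discount_rate) ** year
    npvs.append(npv)

npvs.sort()
mean_npv = sum(npvs) / len(npvs)
p5, p95 = npvs[int(0.05 * len(npvs))], npvs[int(0.95 * len(npvs))]
prob_negative = sum(v < 0 for v in npvs) / len(npvs)
print(round(mean_npv, 1), round(p5, 1), round(p95, 1), round(prob_negative, 3))
```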
Clean surplus accounting and the related residual income valuation provide a model that returns price as a function of earnings, expected returns, and change in book value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value per Modigliani and Miller's "Irrelevance principle"; see . "Corporate finance" as a discipline more generally, per Fisher above, relates to the long term objective of maximizing the value of the firm - and its return to shareholders - and thus also incorporates the areas of capital structure and dividend policy. Extensions of the theory here then also consider these latter, as follows: (i) optimization re capitalization structure, and theories here as to corporate choices and behavior: Capital structure substitution theory, Pecking order theory, Market timing hypothesis, Trade-off theory; (ii) considerations and analysis re dividend policy, additional to - and sometimes contrasting with - Modigliani-Miller, include: the Walter model, Lintner model, Residuals theory and signaling hypothesis, as well as discussion re the observed clientele effect and dividend puzzle. As described, the typical application of real options is to capital budgeting type problems. However, here, they are also applied to problems of capital structure and dividend policy, and to the related design of corporate securities; and since stockholder and bondholders have different objective functions, in the analysis of the related agency problems. In all of these cases, state-prices can provide the market-implied information relating to the corporate, as above, which is then applied to the analysis. For example, convertible bonds can (must) be priced consistent with the (recovered) state-prices of the corporate's equity. Financial markets The discipline, as outlined, also includes a formal study of financial markets. Of interest especially are market regulation and market microstructure, and their relationship to price efficiency. Regulatory economics studies, in general, the economics of regulation. In the context of finance, it will address the impact of financial regulation on the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence and financial stability. Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance, annual reports), insider trading, and short-selling will impact price efficiency, the cost of equity, and market liquidity. Market microstructure is concerned with the details of how exchange occurs in markets (with Walrasian-, matching-, Fisher-, and Arrow-Debreu markets as prototypes), and "analyzes how specific trading mechanisms affect the price formation process", examining the ways in which the processes of a market affect determinants of transaction costs, prices, quotes, volume, and trading behavior. It has been used, for example, in providing explanations for long-standing exchange rate puzzles, and for the equity premium puzzle. In contrast to the above classical approach, models here explicitly allow for (testing the impact of) market frictions and other imperfections; see also market design. 
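Returning to residual income valuation as described at the start of this passage, a minimal sketch follows; the book value, earnings forecasts, payout ratio and cost of equity are hypothetical, and a terminal value term is omitted for brevity.

```python
# Residual income (clean surplus) valuation sketch: value = current book value
# plus the present value of expected "residual" earnings, i.e. earnings in
# excess of the equity charge. All forecasts below are hypothetical.
book_value = 50.0            # current book value per share
cost_of_equity = 0.10        # e.g. from the CAPM
forecast_earnings = [6.0, 6.5, 7.0, 7.4, 7.8]   # per-share earnings forecasts
payout_ratio = 0.40          # fraction of earnings paid out as dividends

value = book_value
b = book_value
for t, earnings in enumerate(forecast_earnings, start=1):
    residual_income = earnings - cost_of_equity * b   # excess over the equity charge
    value += residual_income / (1 + cost_of_equity) ** t
    # Clean surplus relation: book value grows by retained earnings.
    b += earnings * (1 - payout_ratio)

print(round(value, 2))   # (a terminal value term is omitted for brevity)
```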
For both regulation and microstructure, and generally, agent-based models can be developed to examine any impact due to a change in structure or policy - or to make inferences re market dynamics - by testing these in an artificial financial market, or AFM. This approach, essentially simulated trade between numerous agents, "typically uses artificial intelligence technologies [often genetic algorithms and neural nets] to represent the adaptive behaviour of market participants". These 'bottom-up' models "start from first principals of agent behavior", with participants modifying their trading strategies having learned over time, and "are able to describe macro features [i.e. stylized facts] emerging from a soup of individual interacting strategies". Agent-based models depart further from the classical approach — the representative agent, as outlined — in that they introduce heterogeneity into the environment (thereby addressing, also, the aggregation problem). Challenges and criticism As above, there is a very close link between: the random walk hypothesis, with the associated belief that price changes should follow a normal distribution, on the one hand; and market efficiency and rational expectations, on the other. Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges. Departures from normality As discussed, the assumptions that market prices follow a random walk and that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analysts and risk managers frequently modify the "standard models" (see Kurtosis risk, Skewness risk, Long tail, Model risk). In fact, Benoit Mandelbrot had discovered already in the 1960s that changes in financial prices do not follow a normal distribution, the basis for much option pricing theory, although this observation was slow to find its way into mainstream financial economics. Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of the above "classical" financial models; while jump diffusion models allow for (option) pricing incorporating "jumps" in the spot price. Risk managers, similarly, complement (or substitute) the standard value at risk models with historical simulations, mixture models, principal component analysis, extreme value theory, as well as models for volatility clustering. For further discussion see , and . Portfolio managers, likewise, have modified their optimization criteria and algorithms; see above. Closely related is the volatility smile, where, as above, implied volatility – the volatility corresponding to the BSM price – is observed to differ as a function of strike price (i.e. moneyness), true only if the price-change distribution is non-normal, unlike that assumed by BSM. The term structure of volatility describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is then a three-dimensional surface plot of volatility smile and term structure. These empirical phenomena negate the assumption of constant volatility – and log-normality – upon which Black–Scholes is built. Within institutions, the function of Black–Scholes is now, largely, to communicate prices via implied volatilities, much like bond prices are communicated via YTM; see . 
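Communicating prices via implied volatilities, as just described, amounts to a one-dimensional root-finding exercise against the Black–Scholes formula. The sketch below uses simple bisection; the quoted prices are hypothetical and are chosen only to show that different strikes can imply different volatilities (a "smile").

```python
# Implied volatility: find the sigma at which the Black–Scholes price matches a
# quoted option price (bisection; all inputs are hypothetical).
from math import exp, sqrt, log, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(quoted_price, S0, K, r, T, lo=1e-4, hi=3.0, tol=1e-8):
    # A call price is increasing in sigma, so bisection is sufficient.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S0, K, r, mid, T) > quoted_price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quotes at three strikes imply different volatilities:
for K, quote in [(80.0, 25.0), (100.0, 9.0), (120.0, 2.8)]:
    print(K, round(implied_vol(quote, S0=100.0, K=K, r=0.05, T=1.0), 4))
```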
In consequence traders (and risk managers) now, instead, use "smile-consistent" models, firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other, i.e. non-quoted, strike/maturity combinations, or of non-European derivatives, and generally for hedging purposes. The two main approaches are local volatility and stochastic volatility. The first returns the volatility which is "local" to each spot-time point of the finite difference- or simulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are first calibrated to observed prices, and are then applied to the valuation or hedging in question; the most common are Heston, SABR and CEV. This approach addresses certain problems identified with hedging under local volatility. Related to local volatility are the lattice-based implied-binomial and -trinomial trees – essentially a discretization of the approach – which are similarly, but less commonly, used for pricing; these are built on state-prices recovered from the surface. Edgeworth binomial trees allow for a specified (i.e. non-Gaussian) skew and kurtosis in the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required. Similarly purposed (and derived) closed-form models were also developed. As discussed, additional to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free-rate. And therefore, post crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value for counterparty- and funding-related risk. These xVA are additional to any smile or surface effect. This is valid as the surface is built on price data relating to fully collateralized positions, and there is therefore no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...) As mentioned at top, mathematical finance (and particularly financial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, critics of financial economics - especially vocal since the 2007–2008 financial crisis - suggest that instead, the theory needs revisiting almost entirely: "The current system, based on the idea that risk is distributed in the shape of a bell curve, is flawed... The problem is [that economists and practitioners] never abandon the bell curve. They are like medieval astronomers who believe the sun revolves around the earth and are furiously tweaking their geo-centric math in the face of contrary evidence. They will never get this right; they need their Copernicus." Departures from rationality As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. 
These assumptions are also challenged theoretically, by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents. For related criticisms re corporate finance theory vs its practice see:. Various persistent market anomalies have also been documented as consistent with and complementary to price or return distortions – e.g. size premiums – which appear to contradict the efficient-market hypothesis. Within these market anomalies, calendar effects are the most commonly referenced group. Related to these are various of the economic puzzles, concerning phenomena similarly contradicting the theory. The equity premium puzzle, as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than the risk premium rational equity investors should demand, an "abnormal return". For further context see Random walk hypothesis § A non-random walk hypothesis, and sidebar for specific instances. More generally, and, again, particularly following the 2007–2008 financial crisis, financial economics and mathematical finance have been subjected to deeper criticism; notable here is Nassim Nicholas Taleb, who claims that the prices of financial assets cannot be characterized by the simple models currently in use, rendering much of current practice at best irrelevant, and, at worst, dangerously misleading; see Black swan theory, Taleb distribution. A topic of general interest has thus been financial crises, and the failure of (financial) economics to model (and predict) these. A related problem is systemic risk: where companies hold securities in each other then this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all, a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality. Areas of research attempting to explain (or at least model) these phenomena, and crises, include noise trading, market microstructure (as above), and Heterogeneous agent models. The latter is extended to agent-based computational models, as mentioned; here price is treated as an emergent phenomenon, resulting from the interaction of the various market participants (agents). The noisy market hypothesis argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value; see Noise (economic). The adaptive market hypothesis is an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles of evolution to financial interactions. An information cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information. Copula-based modelling has similarly been applied. See also Hyman Minsky's "financial instability hypothesis", as well as George Soros' application of "reflexivity". However, studies show that despite inefficiencies, asset prices generally follow a random walk, making it difficult to consistently outperform market averages and achieve "alpha". As an explanation for these inefficiencies, institutional limits to arbitrage are sometimes referenced (as opposed to factors directly contradictory to the theory). 
The practical implication is that passive investing (e.g. via low-cost index funds) should, on average, serve better than any other active strategy. See also :Category:Finance theories :Category:Financial models Historical notes References Bibliography Financial economics Volume I ; Volume II . Asset pricing Corporate finance External links Actuarial science
Financial economics
[ "Mathematics" ]
9,070
[ "Applied mathematics", "Actuarial science" ]
63,337
https://en.wikipedia.org/wiki/Supersaturation
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent. History Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation, (the previous belief) but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that cause crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization. Occurrence and examples Solid precipitate, liquid solvent A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases with decreasing temperature; in such cases the excess of solute will rapidly separate from the solution as crystals or an amorphous powder. In a few cases the opposite effect occurs. The example of sodium sulfate in water is well-known and this was why it was used in early studies of solubility. Recrystallization is a process used to purify chemical compounds. A mixture of the impure compound and solvent is heated until the compound has dissolved. If there is some solid impurity remaining it is removed by filtration. When the temperature of the solution is subsequently lowered it briefly becomes supersaturated and then the compound crystallizes out until chemical equilibrium at the lower temperature is achieved. Impurities remain in the supernatant liquid. In some cases crystals do not form quickly and the solution remains supersaturated after cooling. This is because there is a thermodynamic barrier to the formation of a crystal in a liquid medium. Commonly this is overcome by adding a tiny crystal of the solute compound to the supersaturated solution, a process known as "seeding". Another process in common use is to rub a rod on the side of a glass vessel containing the solution to release microscopic glass particles which can act as nucleation centres. In industry, centrifugation is used to separate the crystals from the supernatant liquid. Some compounds and mixtures of compounds can form long-living supersaturated solutions. 
Carbohydrates are a class of such compounds; the thermodynamic barrier to formation of crystals is rather high because of extensive and irregular hydrogen bonding with the solvent, water. For example, although sucrose can be recrystallised easily, its hydrolysis product, known as "invert sugar" or "golden syrup", is a mixture of glucose and fructose that exists as a viscous, supersaturated, liquid. Clear honey contains carbohydrates which may crystallize over a period of weeks. Supersaturation may be encountered when attempting to crystallize a protein. Gaseous solute, liquid solvent The solubility of a gas in a liquid increases with increasing gas pressure. When the external pressure is reduced, the excess gas comes out of solution. Fizzy drinks are made by subjecting the liquid to carbon dioxide under pressure. In champagne the CO2 is produced naturally in the final stage of fermentation. When the bottle or can is opened some gas is released in the form of bubbles. Release of gas from supersaturated tissues can cause an underwater diver to suffer from decompression sickness (a.k.a. the bends) when returning to the surface. This can be fatal if the released gas obstructs critical blood supplies, causing ischaemia in vital tissues. Dissolved gases can be released during oil exploration when a strike is made. This occurs because the oil in oil-bearing rock is under considerable pressure from the overlying rock, allowing the oil to be supersaturated with respect to dissolved gases. Liquid formation from a mixture of gases A cloudburst is an extreme form of production of liquid water from a supersaturated mixture of air and water vapour in the atmosphere. Supersaturation in the vapour phase is related to the surface tension of liquids through the Kelvin equation, the Gibbs–Thomson effect and the Poynting effect. The International Association for the Properties of Water and Steam (IAPWS) provides a special equation for the Gibbs free energy in the metastable-vapor region of water in its Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam. All thermodynamic properties for the metastable-vapor region of water can be derived from this equation by means of the appropriate relations of thermodynamic properties to the Gibbs free energy. Measurement When measuring the concentration of a solute in a supersaturated gaseous or liquid mixture, the pressure inside the cuvette may be greater than the ambient pressure. When this is so, a specialized cuvette must be used. The choice of analytical technique to use will depend on the characteristics of the analyte. Applications The characteristics of supersaturation have practical applications in terms of pharmaceuticals. By creating a supersaturated solution of a certain drug, it can be ingested in liquid form. The drug can be driven into a supersaturated state through any normal mechanism and then prevented from precipitating out by adding precipitation inhibitors. Drugs in this state are referred to as "supersaturating drug delivery systems," or "SDDS." Oral consumption of a drug in this form is simple and allows for the measurement of very precise dosages. Primarily, it provides a means for drugs with very low solubility to be made into aqueous solutions. In addition, some drugs can undergo supersaturation inside the body despite being ingested in a crystalline form. This phenomenon is known as in vivo supersaturation.
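As a rough numerical illustration of the pressure dependence of gas solubility noted above under gaseous solutes, the sketch below applies Henry's-law-type proportionality; the Henry's constant used for CO2 in water is an approximate room-temperature textbook value, and the bottling pressure is an assumption.

```python
# Rough Henry's-law illustration for CO2 in water: dissolved concentration is
# proportional to the partial pressure of the gas above the liquid.
# k_H is an approximate room-temperature value; the pressures are assumptions.
k_H = 0.034            # mol / (L·atm), approximate for CO2 in water near 25 °C
p_bottled = 3.0        # atm of CO2 over the liquid in a sealed bottle (assumed)
p_open = 0.0004        # atm, approximate partial pressure of CO2 in ordinary air

c_bottled = k_H * p_bottled     # equilibrium concentration while sealed
c_open = k_H * p_open           # equilibrium concentration once opened

excess = c_bottled - c_open     # this excess leaves the opened drink supersaturated
print(round(c_bottled, 4), round(c_open, 6), round(excess, 4))  # mol per litre
```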
The identification of supersaturated solutions can be used as a tool for marine ecologists to study the activity of organisms and populations. Photosynthetic organisms release O2 gas into the water. Thus, an area of the ocean supersaturated with O2 gas can likely be determined to be rich in photosynthetic activity. Though some O2 will naturally be found in the ocean due to simple physical chemical properties, upwards of 70% of all oxygen gas found in supersaturated regions can be attributed to photosynthetic activity. Supersaturation in the vapor phase is usually present in the expansion process through steam nozzles that operate with superheated steam at the inlet, which transitions to a saturated state at the outlet. Supersaturation thus becomes an important factor to be taken into account in the design of steam turbines, as it results in an actual mass flow of steam through the nozzle that is about 1 to 3% greater than the theoretically calculated value that would be expected if the expanding steam underwent a reversible adiabatic process through equilibrium states. In these cases supersaturation occurs because the expansion develops so rapidly, and in such a short time, that the expanding vapor cannot reach its equilibrium state in the process, behaving as if it were superheated. Hence the determination of the expansion ratio, relevant to the calculation of the mass flow through the nozzle, must be done using an adiabatic index of approximately 1.3, like that of superheated steam, instead of 1.135, which is the value that would have to be used for a quasi-static adiabatic expansion in the saturated region. The study of supersaturation is also relevant to atmospheric studies. The presence of supersaturation in the atmosphere has been known since the 1940s. When water is supersaturated in the troposphere, the formation of ice lattices is frequently observed. In a state of saturation, the water particles will not form ice under tropospheric conditions. It is not enough for molecules of water to form an ice lattice at saturation pressures; they require a surface to condense onto, or conglomerations of liquid water molecules, in order to freeze. For these reasons, relative humidities over ice in the atmosphere can be found above 100%, meaning supersaturation has occurred. Supersaturation of water is actually very common in the upper troposphere, occurring between 20% and 40% of the time. This can be determined using satellite data from the Atmospheric Infrared Sounder. References Thermodynamics Atmospheric thermodynamics Underwater diving physics
Supersaturation
[ "Physics", "Chemistry", "Mathematics" ]
1,982
[ "Applied and interdisciplinary physics", "Thermodynamics", "Underwater diving physics", "Dynamical systems" ]
63,358
https://en.wikipedia.org/wiki/Mikhail%20Lavrentyev
Mikhail Alekseyevich Lavrentyev (or Lavrentiev; November 19, 1900 – October 15, 1980) was a Soviet mathematician and hydrodynamicist. Early years Lavrentyev was born in Kazan, where his father was an instructor at a college (he later became a professor at Kazan University, then Moscow University). He entered Kazan University, and, when his family moved to Moscow in 1921, he transferred to the Department of Physics and Mathematics of Moscow University. He graduated in 1922. He continued his studies at the university in 1923–26 as a graduate student of Nikolai Luzin. Although in 1936 some of Luzin's students alleged that he had plagiarized in science and indulged in anti-Sovietism, Lavrentyev did not take part in the notorious political persecution of his teacher, known as the Luzin case or Luzin affair. In fact Luzin was a friend of his father. Mid career In 1927, Lavrentyev spent half a year in France, collaborating with French mathematicians, and on his return took up a position with Moscow University. Later he became a member of the staff of the Steklov Institute. His main contributions relate to conformal mappings and partial differential equations. Mstislav Keldysh was one of his students. In 1939, Oleksandr Bogomolets, the president of the Ukrainian Academy of Sciences, asked Lavrentyev to become director of the Institute of Mathematics at Kyiv. One of Lavrentyev's scientific interests was the physics of explosive processes, in which he had become involved when doing defense work during World War II. A better understanding of the physics of explosions made it possible to use controlled explosions in construction, the best-known example being the construction of the Medeu Mudflow Control Dam outside of Almaty in Kazakhstan. In Siberia Mikhail Lavrentyev was one of the main organizers and the first Chairman of the Siberian Division of the Russian Academy of Sciences (in his time the Academy of Sciences of the USSR) from its founding in 1957 to 1975. The foundation of Siberia's "Academic Town", Akademgorodok (now a district of Novosibirsk), remains his most widely known achievement. Six months after the decision to found the Siberian Division of the USSR Academy of Sciences, Novosibirsk State University was established; the Decree of the Council of Ministers of the USSR was signed on January 9, 1958. From 1959 to 1966 he was a professor at Novosibirsk State University. Lavrentyev was also a founder of the Institute of Hydrodynamics of the Siberian Division of the Russian Academy of Sciences, which since 1980 has been named after him. Lavrentyev was awarded the honorary title of Hero of Socialist Labour, the Lenin Prize and two Stalin Prizes, and the Lomonosov Gold Medal. He was elected a member of several world-renowned academies, and an honorary citizen of Novosibirsk. His son, Mikhail M. Lavrentyev (1930–2010), also became a mathematician and was a member of the leadership of Akademgorodok. Eponyms Street in Kazan Street in Dolgoprudny Academician Lavrentyev Avenue in Novosibirsk Lavrentyev Institute of Hydrodynamics in Novosibirsk SESC affiliated with NSU Novosibirsk Lavrentyev Lyceum 130 RV Akademik Lavrentyev Aiguilles in Altai and Pamir References External links Mikhail Lavrentiev's biography, at the site of Lavrentiev Hydrodynamics Institute. 1900 births 1980 deaths Mathematicians from Kazan Academic staff of Bauman Moscow State Technical University Academic staff of the D.
Mendeleev University of Chemical Technology of Russia Academic staff of the Moscow Institute of Physics and Technology Academic staff of Moscow State University Academic staff of Novosibirsk State University Academic staff of the Steklov Institute of Mathematics Academic staff of the Taras Shevchenko National University of Kyiv Central Aerohydrodynamic Institute employees Foreign members of the Bulgarian Academy of Sciences Full Members of the USSR Academy of Sciences Members of the German Academy of Sciences at Berlin Members of the German National Academy of Sciences Leopoldina Members of the National Academy of Sciences of Ukraine Moscow State University alumni NASU Institute of Mathematics Candidates of the Central Committee of the 22nd Congress of the Communist Party of the Soviet Union Candidates of the Central Committee of the 23rd Congress of the Communist Party of the Soviet Union Candidates of the Central Committee of the 24th Congress of the Communist Party of the Soviet Union Second convocation members of the Verkhovna Rada of the Ukrainian Soviet Socialist Republic Fifth convocation members of the Supreme Soviet of the Soviet Union Sixth convocation members of the Supreme Soviet of the Soviet Union Seventh convocation members of the Supreme Soviet of the Soviet Union Eighth convocation members of the Supreme Soviet of the Soviet Union Ninth convocation members of the Supreme Soviet of the Soviet Union Heroes of Socialist Labour Recipients of the Stalin Prize Recipients of the Lenin Prize Commanders of the Legion of Honour Recipients of the Lomonosov Gold Medal Recipients of the Order of Lenin Recipients of the Order of the October Revolution Recipients of the Order of the Red Banner of Labour Soviet mathematicians Soviet physicists Lavrentyev Russian scientists
Mikhail Lavrentyev
[ "Technology" ]
1,095
[ "Science and technology awards", "Recipients of the Lomonosov Gold Medal" ]
63,397
https://en.wikipedia.org/wiki/Mean%20time%20between%20failures
Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a mechanical or electronic system during normal system operation. MTBF can be calculated as the arithmetic mean (average) time between failures of a system. The term is used for repairable systems, while mean time to failure (MTTF) denotes the expected time to failure for a non-repairable system. The definition of MTBF depends on the definition of what is considered a failure. For complex, repairable systems, failures are considered to be those out-of-design conditions which place the system out of service and into a state for repair. Failures which occur that can be left or maintained in an unrepaired condition, and do not place the system out of service, are not considered failures under this definition. In addition, units that are taken down for routine scheduled maintenance or inventory control are not considered within the definition of failure. The higher the MTBF, the longer a system is likely to work before failing. Overview Mean time between failures (MTBF) describes the expected time between two failures for a repairable system. For example, three identical systems starting to function properly at time 0 are working until all of them fail. The first system fails after 100 hours, the second after 120 hours and the third after 130 hours. The MTBF of the systems is the average of the three failure times, which is 116.667 hours. If the systems were non-repairable, then their MTTF would be 116.667 hours. In general, MTBF is the "up-time" between two failure states of a repairable system during operation. For each observation, the "down time" is the instant at which the system went down, which comes after (i.e. is greater than) the instant at which it went up, the "up time". The difference ("down time" minus "up time") is the amount of time it was operating between these two events. In terms of these observations, the MTBF of a component is the sum of the lengths of the operational periods divided by the number of observed failures: $\text{MTBF} = \frac{\sum(\text{start of downtime} - \text{start of uptime})}{\text{number of failures}}$. In a similar manner, mean down time (MDT) can be defined as $\text{MDT} = \frac{\sum(\text{start of uptime} - \text{start of downtime})}{\text{number of failures}}$. Mathematical description The MTBF is the expected value of the random variable $T$ indicating the time until failure. Thus, it can be written as $\text{MTBF} = E[T] = \int_0^{\infty} t f(t)\,dt$, where $f(t)$ is the probability density function of $T$. Equivalently, the MTBF can be expressed in terms of the reliability function $R(t)$ as $\text{MTBF} = \int_0^{\infty} R(t)\,dt$. The MTBF and $T$ have units of time (e.g., hours). Any practically-relevant calculation of the MTBF assumes that the system is working within its "useful life period", which is characterized by a relatively constant failure rate (the middle part of the "bathtub curve") when only random failures are occurring. In other words, it is assumed that the system has survived initial setup stresses and has not yet approached its expected end of life, both of which often increase the failure rate. Assuming a constant failure rate implies that $T$ has an exponential distribution with parameter $\lambda$. Since the MTBF is the expected value of $T$, it is given by the reciprocal of the failure rate of the system, $\text{MTBF} = \frac{1}{\lambda}$. Once the MTBF of a system is known, and assuming a constant failure rate, the probability that any one particular system will be operational for a given duration can be inferred from the reliability function of the exponential distribution, $R(t) = e^{-\lambda t} = e^{-t/\text{MTBF}}$. In particular, the probability that a particular system will survive to its MTBF is $R(\text{MTBF}) = e^{-1} \approx 0.37$, or about 37% (i.e., it will fail earlier with probability 63%). 
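As a rough illustration of the definitions above, here is a minimal Python sketch; the observation times are hypothetical (chosen to reproduce the 116.667-hour example), and the reliability calculation assumes the constant-failure-rate model just described.

```python
import math

# Hypothetical up/down observations for one repairable system, in hours:
# (instant the system came up, instant it next went down)
periods = [(0, 100), (105, 225), (230, 360)]

uptime = sum(down - up for up, down in periods)  # total operating time
failures = len(periods)                          # each period ends in a failure
mtbf = uptime / failures                         # 350 / 3 = 116.667 hours

# Under the constant-failure-rate (exponential) assumption:
lam = 1.0 / mtbf                                 # failure rate per hour

def reliability(t):
    """Probability that the system is still working after t hours."""
    return math.exp(-lam * t)

print(round(mtbf, 3))                # 116.667
print(round(reliability(mtbf), 3))   # 0.368, i.e. about a 37% chance of surviving to the MTBF
```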
Application The MTBF value can be used as a system reliability parameter or to compare different systems or designs. This value should only be understood conditionally as the “mean lifetime” (an average value), and not as a quantitative identity between working and failed units. Since MTBF can be expressed as “average life (expectancy)”, many engineers assume that 50% of items will have failed by time t = MTBF. This inaccuracy can lead to bad design decisions. Furthermore, probabilistic failure prediction based on MTBF implies the total absence of systematic failures (i.e., a constant failure rate with only intrinsic, random failures), which is not easy to verify. Assuming no systematic errors, the probability that the system survives for a duration T is calculated as $e^{-T/\text{MTBF}}$. Hence the probability that a system fails during a duration T is given by $1 - e^{-T/\text{MTBF}}$. MTBF value prediction is an important element in the development of products. Reliability engineers and design engineers often use reliability software to calculate a product's MTBF according to various methods and standards (MIL-HDBK-217F, Telcordia SR332, Siemens SN 29500, FIDES, UTE 80-810 (RDF2000), etc.). The Mil-HDBK-217 reliability calculator manual in combination with RelCalc software (or other comparable tool) enables MTBF reliability rates to be predicted based on design. A concept which is closely related to MTBF, and is important in the computations involving MTBF, is the mean down time (MDT). MDT can be defined as the mean time for which the system is down after a failure. Usually, MDT is considered different from MTTR (Mean Time To Repair); in particular, MDT usually includes organizational and logistical factors (such as business days or waiting for components to arrive) while MTTR is usually understood as more narrow and more technical. Application of MTBF in manufacturing MTBF serves as a crucial metric for managing machinery and equipment reliability. Its application is particularly significant in the context of total productive maintenance (TPM), a comprehensive maintenance strategy aimed at maximizing equipment effectiveness. MTBF provides a quantitative measure of the time elapsed between failures of a system during normal operation, offering insights into the reliability and performance of manufacturing equipment. By integrating MTBF with TPM principles, manufacturers can achieve a more proactive maintenance approach. This synergy allows for the identification of patterns and potential failures before they occur, enabling preventive maintenance and reducing unplanned downtime. As a result, MTBF becomes a key performance indicator (KPI) within TPM, guiding decisions on maintenance schedules, spare parts inventory, and ultimately, optimizing the lifespan and efficiency of machinery. This strategic use of MTBF within TPM frameworks enhances overall production efficiency, reduces costs associated with breakdowns, and contributes to the continuous improvement of manufacturing processes. MTBF and MDT for networks of components Two components (for instance hard drives, servers, etc.) may be arranged in a network, in series or in parallel. The terminology is here used by close analogy to electrical circuits, but has a slightly different meaning. We say that the two components are in series if the failure of either causes the failure of the network, and that they are in parallel if only the failure of both causes the network to fail. 
The MTBF of the resulting two-component network with repairable components can be computed according to the following formulae, assuming that the MTBF of both individual components is known: $\text{mtbf}(c_1; c_2) = \left[\frac{1}{\text{mtbf}(c_1)} + \frac{1}{\text{mtbf}(c_2)}\right]^{-1}$, where $c_1; c_2$ is the network in which the components are arranged in series. For the network containing parallel repairable components, to find out the MTBF of the whole system, in addition to component MTBFs, it is also necessary to know their respective MDTs. Then, assuming that MDTs are negligible compared to MTBFs (which usually holds in practice), the MTBF for the parallel system consisting of two parallel repairable components can be written as follows: $\text{mtbf}(c_1 \parallel c_2) = \left[\frac{1}{\text{mtbf}(c_1)} \times \text{PF}(c_2, \text{mdt}(c_1)) + \frac{1}{\text{mtbf}(c_2)} \times \text{PF}(c_1, \text{mdt}(c_2))\right]^{-1}$, where $c_1 \parallel c_2$ is the network in which the components are arranged in parallel, and $\text{PF}(c, t)$ is the probability of failure of component $c$ during "vulnerability window" $t$. Intuitively, both these formulae can be explained from the point of view of failure probabilities. First of all, let's note that the probability of a system failing within a certain timeframe is the inverse of its MTBF. Then, when considering series of components, failure of any component leads to the failure of the whole system, so (assuming that failure probabilities are small, which is usually the case) the probability of the failure of the whole system within a given interval can be approximated as a sum of failure probabilities of the components. With parallel components the situation is a bit more complicated: the whole system will fail if and only if, after one of the components fails, the other component fails while the first component is being repaired; this is where MDT comes into play: the faster the first component is repaired, the smaller the "vulnerability window" for the other component to fail. Using similar logic, MDT for a system of two serial components can be calculated as $\text{mdt}(c_1; c_2) = \frac{\text{mtbf}(c_2) \times \text{mdt}(c_1) + \text{mtbf}(c_1) \times \text{mdt}(c_2)}{\text{mtbf}(c_1) + \text{mtbf}(c_2)}$, and for a system of two parallel components MDT can be calculated as $\text{mdt}(c_1 \parallel c_2) = \frac{\text{mdt}(c_1) \times \text{mdt}(c_2)}{\text{mdt}(c_1) + \text{mdt}(c_2)}$. Through successive application of these four formulae, the MTBF and MDT of any network of repairable components can be computed, provided that the MTBF and MDT are known for each component. In a special but all-important case of several serial components, MTBF calculation can be easily generalised into $\text{mtbf}(c_1; \dots; c_n) = \left[\sum_{k=1}^{n} \frac{1}{\text{mtbf}(c_k)}\right]^{-1}$, which can be shown by induction, and likewise $\text{mdt}(c_1 \parallel \dots \parallel c_n) = \left[\sum_{k=1}^{n} \frac{1}{\text{mdt}(c_k)}\right]^{-1}$, since the formula for the mdt of two components in parallel is identical to that of the mtbf for two components in series. Variations of MTBF There are many variations of MTBF, such as mean time between system aborts (MTBSA), mean time between critical failures (MTBCF) or mean time between unscheduled removal (MTBUR). Such nomenclature is used when it is desirable to differentiate among types of failures, such as critical and non-critical failures. For example, in an automobile, the failure of the FM radio does not prevent the primary operation of the vehicle. It is recommended to use Mean time to failure (MTTF) instead of MTBF in cases where a system is replaced after a failure ("non-repairable system"), since MTBF denotes time between failures in a system which can be repaired. MTTFd is an extension of MTTF, and is only concerned with failures which would result in a dangerous condition. It can be calculated as follows: $\text{MTTF}_d = \frac{B_{10d}}{0.1 \times n_{op}}$, where $B_{10}$ is the number of operations that a device will operate before 10% of a sample of those devices fail and $n_{op}$ is the number of operations. $B_{10d}$ is the same calculation, but where 10% of the sample would fail to danger. $n_{op}$ is the number of operations/cycles in one year.
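As a rough illustration, the Python sketch below applies the two-component series and parallel formulas above; the component MTBF and MDT values are made up, and the parallel case uses the approximation PF(c, t) ≈ t / mtbf(c), which relies on MDTs being negligible compared to MTBFs.

```python
def mtbf_series(m1, m2):
    # Failure rates add for components in series: 1/mtbf = 1/m1 + 1/m2
    return 1.0 / (1.0 / m1 + 1.0 / m2)

def mdt_series(m1, d1, m2, d2):
    # Down time of the series network, weighted by how often each component fails
    return (m2 * d1 + m1 * d2) / (m1 + m2)

def mtbf_parallel(m1, d1, m2, d2):
    # The network fails only if one component fails inside the other's
    # "vulnerability window" (its MDT); with PF(c, t) ~= t / mtbf(c)
    # this simplifies to m1 * m2 / (d1 + d2).
    return (m1 * m2) / (d1 + d2)

def mdt_parallel(d1, d2):
    return (d1 * d2) / (d1 + d2)

# Hypothetical components (hours): MTBF and MDT of each
m1, d1 = 10_000, 8
m2, d2 = 20_000, 12

print(round(mtbf_series(m1, m2)))            # 6667
print(round(mdt_series(m1, d1, m2, d2), 2))  # 9.33
print(round(mtbf_parallel(m1, d1, m2, d2)))  # 10000000
print(round(mdt_parallel(d1, d2), 2))        # 4.8
```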
MTBF considering censoring In fact, an MTBF computed only from observed failures, while some systems are still operating and have not yet failed, underestimates the MTBF by failing to include in the computations the partial lifetimes of the systems that have not yet failed. With such lifetimes, all we know is that the time to failure exceeds the time they've been running. This is called censoring. In fact, with a parametric model of the lifetime, the likelihood for the experience on any given day is as follows: $L = \prod_i \lambda(u_i)^{\delta_i} S(u_i)$, where $u_i$ is the failure time for failures and the censoring time for units that have not yet failed, $\delta_i = 1$ for failures and 0 for censoring times, $S(u_i)$ is the probability that the lifetime exceeds $u_i$, called the survival function, and $\lambda(u) = f(u)/S(u)$ is called the hazard function, the instantaneous force of mortality (where $f(u)$ is the probability density function of the distribution). For a constant exponential distribution, the hazard, $\lambda$, is constant. In this case, the MTBF is $\text{MTBF} = 1/\hat{\lambda} = \left(\sum_i u_i\right)/k$, where $\hat{\lambda}$ is the maximum likelihood estimate of $\lambda$, maximizing the likelihood given above, and $k$ is the number of uncensored observations. We see that the difference between the MTBF considering only failures and the MTBF including censored observations is that the censoring times add to the numerator but not the denominator in computing the MTBF. See also References External links Engineering failures Survival analysis Reliability indices
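As an illustration of the censored estimate described above (the data are made up), the following Python sketch compares the naive failures-only MTBF with the estimate that also counts the running time of units that have not yet failed.

```python
# Each observation: (hours observed, failed?) — failed=False means the unit
# was still running when observation stopped (a censored lifetime).
observations = [(400, True), (150, True), (320, True), (500, False), (500, False)]

failure_times = [t for t, failed in observations if failed]
k = len(failure_times)                               # number of uncensored observations

mtbf_failures_only = sum(failure_times) / k          # ignores the survivors
mtbf_censored = sum(t for t, _ in observations) / k  # 1 / lambda-hat

print(mtbf_failures_only)  # 290.0  (biased low)
print(mtbf_censored)       # ~623.33 (censored times added to the numerator only)
```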
Mean time between failures
[ "Technology", "Engineering" ]
2,455
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
63,452
https://en.wikipedia.org/wiki/Heuristic
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Context Gigerenzer & Gaissmaier (2011) state that sub-sets of strategy include heuristics, regression analysis, and Bayesian inference. Heuristics are strategies based on rules to generate optimal decisions, like the anchoring effect and utility maximization problem. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. When an individual applies a heuristic in practice, it generally performs as expected. However it can alternatively create systematic errors. The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems. In mathematics, some common heuristics involve the use of visual representations, additional assumptions, forward/backward reasoning and simplification. Dual process theory concerns embodied heuristics. Heuristic rigour models Lakatosian heuristics is based on the key term: Justification (epistemology). One-reason decisions One-reason decisions are algorithms that are made of three rules: search rules, confirmation rules (stopping), and decision rules Hiatus heuristic: a "recency-of-last-purchase rule" Take-the-first heuristic Recognition-based decisions A class whose function is to determine and filter out superfluous things. Tracking heuristics Tracking heuristics is a class of heuristics. Trade-off Tallying heuristic Equality heuristic Social heuristics Epistemic heuristics Behavioral economics Others Minimalist heuristic Meta-heuristic Optimality History George Polya studied and published on heuristics in 1945. Polya (1945) cites Pappus of Alexandria as having written a text that Polya dubs Heuristic. Pappus' heuristic problem-solving methods consist of analysis and synthesis. Notable Figures George Polya Herbert A. Simon Daniel Kahneman Amos Tversky Gerd Gigerenzer Judea Pearl Robin Dunbar David Perkins Page Herbert Spencer Charles Alexander McMurry Frank Morton McMurry Lawrence Zalcman Imre Lakatos William C. Wimsatt Alan Hodgkin Andrew Huxley Works Meno How to solve it Mathematics and Plausible Reasoning Contemporary The study of heuristics in human decision-making was developed in the 1970s and the 1980s, by the psychologists Amos Tversky and Daniel Kahneman, although the concept had been originally introduced by the Nobel laureate Herbert A. Simon. Simon's original primary object of research was problem solving that showed that we operate within what he calls bounded rationality. He coined the term satisficing, which denotes a situation in which people seek solutions, or accept choices or judgements, that are "good enough" for their purposes although they could be optimised. 
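To make the idea of satisficing concrete, here is a small illustrative Python sketch; the scoring values and the aspiration threshold are purely hypothetical.

```python
def satisfice(options, score, aspiration):
    """Return the first option that is 'good enough', instead of searching
    for the optimum (Simon's satisficing idea as a simple decision rule)."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing met the aspiration level

# Hypothetical example: apartments scored out of 10; stop at the first
# one rated at least 7 rather than inspecting every listing.
ratings = {"flat A": 5, "flat B": 8, "flat C": 9}
choice = satisfice(list(ratings), lambda name: ratings[name], aspiration=7)
print(choice)  # "flat B" — good enough, even though "flat C" scores higher
```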
Rudolf Groner analysed the history of heuristics from its roots in ancient Greece up to contemporary work in cognitive psychology and artificial intelligence, proposing a cognitive style "heuristic versus algorithmic thinking", which can be assessed by means of a validated questionnaire. Adaptive toolbox The adaptive toolbox contains strategies for fabricating heuristic devices. The core mental capacities are recall (memory), frequency, object permanence, and imitation. Gerd Gigerenzer and his research group argued that models of heuristics need to be formal to allow for predictions of behavior that can be tested. They study the fast and frugal heuristics in the "adaptive toolbox" of individuals or institutions, and the ecological rationality of these heuristics; that is, the conditions under which a given heuristic is likely to be successful. The descriptive study of the "adaptive toolbox" is done by observation and experiment, while the prescriptive study of ecological rationality requires mathematical analysis and computer simulation. Heuristics – such as the recognition heuristic, the take-the-best heuristic and fast-and-frugal trees – have been shown to be effective in predictions, particularly in situations of uncertainty. It is often said that heuristics trade accuracy for effort but this is only the case in situations of risk. Risk refers to situations where all possible actions, their outcomes and probabilities are known. In the absence of this information, that is under uncertainty, heuristics can achieve higher accuracy with lower effort. This finding, known as a less-is-more effect, would not have been found without formal models. The valuable insight of this program is that heuristics are effective not despite their simplicity – but because of it. Furthermore, Gigerenzer and Wolfgang Gaissmaier found that both individuals and organisations rely on heuristics in an adaptive way. Cognitive-experiential self-theory Heuristics, through greater refinement and research, have begun to be applied to other theories, or be explained by them. For example, the cognitive-experiential self-theory (CEST) is also an adaptive view of heuristic processing. CEST breaks down two systems that process information. At some times, roughly speaking, individuals consider issues rationally, systematically, logically, deliberately, effortfully, and verbally. On other occasions, individuals consider issues intuitively, effortlessly, globally, and emotionally. From this perspective, heuristics are part of a larger experiential processing system that is often adaptive, but vulnerable to error in situations that require logical analysis. Attribute substitution In 2002, Daniel Kahneman and Shane Frederick proposed that cognitive heuristics work by a process called attribute substitution, which happens without conscious awareness. According to this theory, when somebody makes a judgement (of a "target attribute") that is computationally complex, a more easily calculated "heuristic attribute" is substituted. In effect, a cognitively difficult problem is dealt with by answering a rather simpler problem, without being aware of this happening. This theory explains cases where judgements fail to show regression toward the mean. Heuristics can be considered to reduce the complexity of clinical judgments in health care. Academic disciplines Psychology In psychology, heuristics are simple, efficient rules, either learned or inculcated by evolutionary processes. 
These psychological heuristics have been proposed to explain how people make decisions, come to judgements, and solve problems. These rules typically come into play when people face complex problems or incomplete information. Researchers employ various methods to test whether people use these rules. The rules have been shown to work well under most circumstances, but in certain cases can lead to systematic errors or cognitive biases. Philosophy A heuristic device is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y. A good example is a model that, as it is never identical with what it models, is a heuristic device to enable understanding of what it models. Stories, metaphors, etc., can also be termed heuristic in this sense. A classic example is the notion of utopia as described in Plato's best-known work, The Republic. This means that the "ideal city" as depicted in The Republic is not given as something to be pursued, or to present an orientation-point for development. Rather, it shows how things would have to be connected, and how one thing would lead to another (often with highly problematic results), if one opted for certain principles and carried them through rigorously. Heuristic is also often used as a noun to describe a rule of thumb, procedure, or method. Philosophers of science have emphasised the importance of heuristics in creative thought and the construction of scientific theories. Seminal works include Karl Popper's The Logic of Scientific Discovery and others by Imre Lakatos, Lindley Darden, and William C. Wimsatt. Law In legal theory, especially in the theory of law and economics, heuristics are used in the law when case-by-case analysis would be impractical, insofar as "practicality" is defined by the interests of a governing body. The present securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects. For instance, in all states in the United States the legal drinking age for unsupervised persons is 21 years, because it is argued that people need to be mature enough to make decisions involving the risks of alcohol consumption. However, assuming people mature at different rates, the specific age of 21 would be too late for some and too early for others. In this case, the somewhat arbitrary delineation is used because it is impossible or impractical to tell whether an individual is sufficiently mature for society to trust them with that kind of responsibility. Some proposed changes, however, have included the completion of an alcohol education course rather than the attainment of 21 years of age as the criterion for legal alcohol possession. This would put youth alcohol policy more on a case-by-case basis and less on a heuristic one, since the completion of such a course would presumably be voluntary and not uniform across the population. The same reasoning applies to patent law. Patents are justified on the grounds that inventors must be protected so they have incentive to invent. It is therefore argued that it is in society's best interest that inventors receive a temporary government-granted monopoly on their idea, so that they can recoup investment costs and make economic profit for a limited period. 
In the United States, the length of this temporary monopoly is 20 years from the date the patent application was filed, though the monopoly does not actually begin until the application has matured into a patent. However, like the drinking age problem above, the specific length of time would need to be different for every product to be efficient. A 20-year term is used because it is difficult to tell what the number should be for any individual patent. More recently, some, including University of North Dakota law professor Eric E. Johnson, have argued that patents in different kinds of industries – such as software patents – should be protected for different lengths of time. Artificial intelligence The bias–variance tradeoff gives insight into describing the less-is-more strategy. A heuristic can be used in artificial intelligence systems while searching a solution space. The heuristic is derived by using some function that is put into the system by the designer, or by adjusting the weight of branches based on how likely each branch is to lead to a goal node. Behavioural economics Heuristics refers to the cognitive shortcuts that individuals use to simplify decision-making processes in economic situations. Behavioral economics is a field that integrates insights from psychology and economics to better understand how people make decisions. Anchoring and adjustment is one of the most extensively researched heuristics in behavioural economics. Anchoring is the tendency of people to make future judgements or conclusions based too heavily on the original information supplied to them. This initial knowledge functions as an anchor, and it can influence future judgements even if the anchor is entirely unrelated to the decisions at hand. Adjustment, on the other hand, is the process through which individuals make gradual changes to their initial judgements or conclusions. Anchoring and adjustment has been observed in a wide range of decision-making contexts, including financial decision-making, consumer behavior, and negotiation. Researchers have identified a number of strategies that can be used to mitigate the effects of anchoring and adjustment, including providing multiple anchors, encouraging individuals to generate alternative anchors, and providing cognitive prompts to encourage more deliberative decision-making. Other heuristics studied in behavioral economics include the representativeness heuristic, which refers to the tendency of individuals to categorize objects or events based on how similar they are to typical examples, and the availability heuristic, which refers to the tendency of individuals to judge the likelihood of an event based on how easily it comes to mind. Stereotyping Stereotyping is a type of heuristic that people use to form opinions or make judgements about things they have never seen or experienced. They work as a mental shortcut to assess everything from the social status of a person (based on their actions), to classifying a plant as a tree based on it being tall, having a trunk, and that it has leaves (even though the person making the evaluation might never have seen that particular type of tree before). Stereotypes, as first described by journalist Walter Lippmann in his book Public Opinion (1922), are the pictures we have in our heads that are built around experiences as well as what we are told about the world. See also References Further reading How To Solve It: Modern Heuristics, Zbigniew Michalewicz and David B. Fogel, Springer Verlag, 2000. 
The Problem of Thinking Too Much, 11 December 2002, Persi Diaconis Adages Biological rules Ecogeographic rules Heuristics Problem solving methods Rules of thumb
Heuristic
[ "Biology" ]
2,810
[ "Biological rules", "Ecogeographic rules", "nan" ]
63,501
https://en.wikipedia.org/wiki/Paragliding
Paragliding is the recreational and competitive adventure sport of flying paragliders: lightweight, free-flying, foot-launched glider aircraft with no rigid primary structure. The pilot sits in a harness or in a cocoon-like 'pod' suspended below a fabric wing. Wing shape is maintained by the suspension lines, the pressure of air entering vents in the front of the wing, and the aerodynamic forces of the air flowing over the outside. Despite not using an engine, paraglider flights can last many hours and cover many hundreds of kilometres, though flights of one to five hours and covering some tens of kilometres are more the norm. By skillful exploitation of sources of lift, the pilot may gain height, often climbing to altitudes of a few thousand metres. History In 1966, Canadian Domina Jalbert was granted a patent for a multi-cell wing type aerial device—"a wing having a flexible canopy constituting an upper skin and with a plurality of longitudinally extending ribs forming in effect a wing corresponding to an airplane wing airfoil... More particularly the invention contemplates the provision of a wing of rectangular or other shape having a canopy or top skin and a lower spaced apart bottom skin", a governable gliding parachute with multi-cells and controls for glide. In 1954, Walter Neumark predicted (in an article in Flight magazine) a time when a glider pilot would be "able to launch himself by running over the edge of a cliff or down a slope ... whether on a rock-climbing holiday in Skye or skiing in the Alps." In 1961, the French engineer Pierre Lemongine produced improved parachute designs that led to the Para-Commander (PC). The Para-Commander had cutouts at the rear and sides that enabled it to be towed into the air and steered, leading to parasailing/parascending. Domina Jalbert invented the parafoil, which had sectioned cells in an aerofoil shape, an open leading edge, a closed trailing edge, and was inflated by passage through the air. He filed the design as US Patent 3131894 on January 10, 1963. About that time, David Barish was developing the sail wing (single-surface wing) for recovery of NASA space capsules—"slope soaring was a way of testing out... the Sail Wing." After tests on Hunter Mountain, New York, in September 1965, he went on to promote slope soaring as a summer activity for ski resorts. Author Walter Neumark wrote Operating Procedures for Ascending Parachutes, and in 1973 he and a group of enthusiasts with a passion for tow-launching PCs and ram-air parachutes broke away from the British Parachute Association to form the British Association of Parascending Clubs (which later became the British Hang Gliding and Paragliding Association). In 1997, Neumark was awarded the Gold Medal of the Royal Aero Club of the UK. Authors Patrick Gilligan (Canada) and Bertrand Dubuis (Switzerland) wrote the first flight manual, The Paragliding Manual, in 1985, coining the word paragliding. These developments were combined in June 1978 by three friends, Jean-Claude Bétemps, André Bohn and Gérard Bosson, from Mieussy, Haute-Savoie, France. After inspiration from an article on slope soaring in the Parachute Manual magazine by parachutist and publisher Dan Poynter, they calculated that on a suitable slope, a "square" ram-air parachute could be inflated by running down the slope; Bétemps launched from Pointe du Pertuiset, Mieussy, and flew 100 m. Bohn followed him and glided down to the football pitch in the valley 1000 metres below. Parapente (pente being French for 'slope') was born. 
From the 1980s, equipment has continued to improve, and the number of paragliding pilots and established sites has continued to increase. The first (unofficial) Paragliding World Championship was held in Verbier, Switzerland, in 1987, though the first officially sanctioned FAI World Paragliding Championship was held in Kössen, Austria, in 1989. Europe has seen the greatest growth in paragliding, with France alone registering in 2011 over 25,000 active pilots. Starting in 2022, feasibility studies of paragliding from above 8,000 metres have been in progress in the Everest region of Nepal; this would effectively be paragliding from the highest starting altitude on the planet. Equipment Wing The paraglider wing or canopy is usually what is known in engineering as a ram-air airfoil. Such wings comprise two layers of fabric that are connected to internal supporting material in such a way as to form a row of cells. By leaving most of the cells open only at the leading edge, incoming air keeps the wing inflated, thus maintaining its shape. When inflated, the wing's cross-section has the typical teardrop aerofoil shape. Modern paraglider wings are made of high-performance non-porous materials such as ripstop nylon. In most modern paragliders (from the 1990s onwards), some of the cells of the leading edge are closed to form a cleaner aerodynamic profile. Holes in the internal ribs allow a free flow of air from the open cells to these closed cells to inflate them, and also to the wingtips, which are also closed. Almost all modern paragliders follow a sharknose design of the leading edge, by which the inflation opening is not at the front of the wing, but slightly backwards on the underside of the wing, and following a concave shape. This design, resembling the nose of a shark, increases wing stability and stall resistance. In modern paragliders, semi-flexible rods made out of plastic or nitinol are used to give extra stability to the profile of the wing. In high-performance paragliders, these rods extend through most of the length of the upper wing. The pilot is supported underneath the wing by a network of suspension lines. These start with two sets of risers made of short lengths of strong webbing. Each set is attached to the harness by a carabiner, one on each side of the pilot, and each riser of a set is generally attached to lines from only one row of its side of the wing. At the end of each riser of the set, there is a small delta maillon with a number (2–5) of lines attached, forming a fan. These are typically long, with the end attached to 2–4 further lines of around m, which are again joined to a group of smaller, thinner lines. In some cases this is repeated for a fourth cascade. The top of each line is attached to small fabric loops sewn into the structure of the wing, which are generally arranged in rows running span-wise (i.e., side to side). The row of lines nearest the front is known as the A lines, the next row back the B lines, and so on. A typical wing will have A, B, C and D lines, but recently, there has been a tendency to reduce the rows of lines to three, or even two (and experimentally to one), to reduce drag. Paraglider lines are usually made from UHMW polythene or aramid. Although they look rather slender, these materials are strong and subject to load testing requirements. For example, a single 0.66 mm-diameter line (about the thinnest used) can have a breaking strength of . Paraglider wings typically have an area of with a span of and weigh . 
Combined weight of wing, harness, reserve, instruments, helmet, etc. is around . Ultralight Hike & Fly kits can be lighter than . The glide ratio of paragliders ranges from 9.3 for recreational wings to about 11.3 for modern competition models, reaching in some cases up to 13. For comparison, a typical skydiving parachute will achieve about 3:1 glide. A hang glider ranges from 9.5 for recreational wings to about 16.5 for modern competition models. An idling (gliding) Cessna 152 light aircraft will achieve 9:1. Some sailplanes can achieve a glide ratio of up to 72:1. The speed range of paragliders is typically , from stall speed to maximum speed. Achieving maximum speed requires the use of speedbar, or trimmers. Without these, and without applying brakes, a paraglider is at its trim speed, which is typically and often at the best glide ratio, too. High-performance paragliders meant for competitions may achieve faster accelerated flight, as do speedwings, due to their small size and different profile. For storage and carrying, the wing is usually folded into a stuffsack (bag), which can then be stowed in a large backpack along with the harness. Some modern harnesses include the ability to turn the harness inside out such that it becomes a backpack, saving weight and space. Paragliders are unique among human-carrying aircraft in being easily portable. The complete equipment packs into a rucksack and can be carried easily on the pilot's back, in a car, or on public transport. In comparison with other air sports, this substantially simplifies travel to a suitable takeoff spot, the selection of a landing place and return travel. Tandem paragliders, designed to carry the pilot and one passenger, are larger but otherwise similar. They usually fly faster with higher trim speeds, are more resistant to collapse, and have a slightly higher sink rate compared to solo paragliders. Harness The pilot is loosely and comfortably buckled into a harness, which offers support in both the standing and sitting positions. Most harnesses have protectors made out of foam or other materials underneath the seat and behind the back to reduce the impact on failed launches or landings. Modern harnesses are designed to be as comfortable as a lounge chair in the sitting or reclining position. Many harnesses even have an adjustable lumbar support. A reserve parachute is also typically connected to a paragliding harness. Harnesses also vary according to the need of the pilot, and thereby come in a range of designs, mostly: open harnesses, ranging from training harness for beginners to all-round harnesses pod harnesses for long-distance cross-country flights competition harnesses, which are pod harnesses with the capacity to carry two reserve parachutes acro harnesses, a type of open harness, designed for acrobatic paragliding, with the capacity for two or three reserve parachutes hike&fly harnesses, which are designed to be lightweight and compact when folded away for hiking harnesses for tandem pilots and passengers kids tandem harnesses are also now available with special child-proof locks Harnesses have a substantial influence on the flying characteristics; for instance, acro harnesses lead to more agile handling, which is desirable for flying acrobatics, but may be unsuitable for beginners or XC pilots looking for more stability in flight. While pod harnesses offer more stability and aerodynamic properties, they increase the risk of riser twist, and are hence not suitable for beginners. 
The standard harness is an open harness, which features a sitting, slightly reclined body position. Instruments in paragliding Most pilots use variometers, radios, and, increasingly, GNSS units when they are flying. Variometer The main purpose of a variometer is in helping a pilot find and stay in the "core" of a thermal to maximise height gain and, conversely, to indicate when a pilot is in sinking air and needs to find rising air. Humans can sense the acceleration when they first hit a thermal, but cannot detect the difference between constant rising air and constant sinking air. Modern variometers are capable of detecting rates of climb or sink of 1 cm per second. A variometer indicates climb rate (or sink-rate) with short audio signals (beeps, which increase in pitch and tempo during ascent, and a droning sound, which gets deeper as the rate of descent increases) and/or a visual display. It also shows altitude: either above takeoff, above sea level, or (at higher altitudes) flight level. Radio Radio communications are used in training, to communicate with other pilots, and to report where and when they intend to land. These radios normally operate on a range of frequencies in different countries—some authorised, some illegal but tolerated locally. Some local authorities (e.g., flight clubs) offer periodic automated weather updates on these frequencies. In rare cases, pilots use radios to talk to airport control towers or air traffic controllers. Many pilots carry a cell phone so they can call for pickup should they land away from their intended point of destination. GNSS GNSS is a necessary accessory when flying competitions, where it has to be demonstrated that way-points have been correctly passed. The recorded GNSS track of a flight can be used to analyze flying technique or can be shared with other pilots. GNSS is also used to determine drift due to the prevailing wind when flying at altitude, providing position information to allow restricted airspace to be avoided and identifying one's location for retrieval teams after landing out in unfamiliar territory. GNSS is integrated with some models of variometer. This is not only more convenient, but also allows for a three-dimensional record of the flight. The flight track can be used as proof for record claims, replacing the old method of photo documentation. Increasingly, smart phones are used as the primary means of navigation and flight logging, with several applications available to assist in air navigation. They are also used to co-ordinate tasks in competitive paragliding and facilitate retrieval of pilots returning to their point of launch. External variometers are typically used to assist in accurate altitude information. Ground handling Paraglider ground handling, also known as kiting, is the practice of handling the paraglider on land. The primary purpose of ground handling is to practice the skills necessary for launching and landing. However, ground handling could be considered a fun and challenging sport in and of itself. Ground handling is considered an essential part of most paragliding wing management training. It needs to be remembered that in any sort of stumble or tumble, the head is at risk and a helmet is therefore always advisable. It is highly recommended that low hour pilots, ground-handling, should be wearing a formal harness with leg and waist straps firmly fitted and fastened. Since 2015 the standard harness has become an inflatable type. 
This forms a protective cushion when, during flight, air is forced through a check valve and retained in a chamber behind and under the pilot. In ground-handling practice the amount of air passing through the check valve may be very slight. In an accident where the pilot has been lifted and dumped while facing downwind, the protection offered by an inflatable harness is likely to be minimal. The old fashioned foam type of harness has a special value in that sort of situation. Location The ideal launch training site for novices with standard wings has the following characteristics: Measured steady wind strength: 1 m/s to 4 m/s (3.6–14 km/h: 1.9-7.7 knots: 2.2-8.9 mph) The even, flat, surface should slope slightly downwards (2 or 3 degrees) from down-wind to up-wind (providing a small vertical lift component). The site must be isolated from uninvolved visitors. Free of obstructions that might create a trip or snag hazard. Soft surface, such as grass or sand, to reduce damage to the handler and wing in case of falls. Novices should wear a harness and helmet and be accompanied by an appropriate adult. As pilots progress, they may challenge themselves by kiting over and around obstacles, in strong or turbulent wind, and on greater slopes. Flying Launching As with all aircraft, launching and landing are done into wind. The wing is placed into an airstream, either by running or being pulled, or an existing wind. The wing moves up over the pilot into a position in which it can carry the passenger. The pilot is then lifted from the ground and, after a safety period, can sit down into his harness. Unlike skydivers, paragliders, like hang gliders, do not jump at any time during this process. There are two launching techniques used on higher ground and one assisted launch technique used in flatland areas: Forward launch In low winds, the wing is inflated with a forward launch, where the pilot runs forward with the wing behind so that the air pressure generated by the forward movement inflates the wing. It is often easier, because the pilot only has to run forward, but the pilot cannot see his wing until it is above him, where he has to check it in a very short time for correct inflation and untangled lines before the launch. Reverse launch In higher winds, a reverse launch is used, with the pilot facing the wing to bring it up into a flying position, then turning around under the wing and running to complete the launch. Reverse launches have a number of advantages over a forward launch. It is more straightforward to inspect the wing and check if the lines are free as it leaves the ground. In the presence of wind, the pilot can be tugged toward the wing, and facing the wing makes it easier to resist this force and safer in case the pilot slips (as opposed to being dragged backwards). However, the movement pattern is more complex than forward launch, and the pilot has to hold the brakes in a correct way and turn to the correct side so he does not tangle the lines. These launches are normally attempted with a reasonable wind speed, making the ground speed required to pressurise the wing much lower. The launch is initiated by the hands raising the leading edge with the As. As it rises the wing is controlled more by centring the feet than by use of the brakes or Cs. With mid level wings (EN C and D) the wing may try to "overshoot" the pilot as it nears the top. This is checked with Cs or brakes. The wing becomes increasingly sensitive to the Cs and brakes as its internal air pressure rises. 
This is usually felt from increasing lift of the wing applying harness pressure to the seat of the pants. That pressure indicates that the wing is likely to remain stable when the pilot pirouettes to face the wind. The next step in the launch is to bring the wing into the lift zone. There are two techniques for accomplishing this depending on wind conditions. In light wind this is usually done after turning to the front, steering with the feet towards the low wing tip, and applying light brakes in a natural sense to keep the wing horizontal. In stronger wind conditions it is often found to be easier to remain facing downwind while moving slowly and steadily backwards into the wind. Knees bent to load the wing, foot adjustments to remain central and minimum use of Cs or Brakes to keep the wing horizontal. Pirouette when the feet are close to lifting. This option has two distinct advantages. a) The pilot can see the wing centre marker (an aid to centring the feet) and, if necessary, b) the pilot can move briskly towards the wing to assist with an emergency deflation. With either method it is essential to check "traffic" across the launch face before committing to flight. The A's and C's technique described above is well suited to low-hours pilots, on standard wings, in wind strengths up to 10 knots. It is particularly recommended for kiting. As wind speed increases (above ten knots), especially on steep ridges, the use of the C's introduces the potential to be lifted before the wing is overhead due to the increased angle of attack. That type of premature lift often results in the pilot's weight swinging downwind rapidly, resulting in a frontal tuck (due to excess A line loads). In that situation the pilot commonly drops vertically and injuries are not uncommon. In ridge soaring situations above ten knots it is almost always better to lift the wing with A's only and use the brakes to stop any potential overshoot. The brakes do not usually increase the angle of attack as much as C's. As wind strength increases it becomes more important than ever for the pilot to keep the wing loaded by bending the knees and pushing the shoulders forward. Most pilots will find that when their hands are vertically under the brake line pulleys they are able to reduce trailing edge drag to the absolute minimum. That is not so easy for most, when the arms are thrust rearwards. Towed launch In flatter countryside, pilots can also be launched with a tow. Once at full height (towing can launch pilots up to altitude), the pilot pulls a release cord, and the towline falls away. This requires separate training, as flying on a winch has quite different characteristics from free flying. There are two major ways to tow: pay-in and pay-out towing. Pay-in towing involves a stationary winch that winds in the towline and thereby pulls the pilot in the air. The distance between winch and pilot at the start is around or more. Pay-out towing involves a moving object, like a car or a boat, that pays out line slower than the speed of the object, thereby pulling the pilot up in the air. In both cases, it is very important to have a gauge indicating line tension to avoid pulling the pilot out of the air. Another form of towing is static line towing. This involves a moving object, like a car or a boat, attached to a paraglider or hang glider with a fixed-length line. 
This can be very dangerous, because now the forces on the line have to be controlled by the moving object itself, which is almost impossible to do, unless stretchy rope and a pressure/tension meter (dynamometer) is used. Static line towing with stretchy rope and a load cell as a tension meter has been used in Poland, Ukraine, Russia, and other Eastern European countries for over 20 years (under the name Malinka) with about the same safety record as other forms of towing. One more form of towing is hand towing. This is where 1−3 people pull a paraglider using a tow rope of up to . The stronger the wind, the fewer people are needed for a successful hand tow. Tows up to have been accomplished, allowing the pilot to get into a lift band of a nearby ridge or row of buildings and ridge-soar in the lift the same way as with a regular foot launch. Landing Landing a paraglider, as with all unpowered aircraft which cannot abort a landing, involves some specific techniques and traffic patterns. Paragliding pilots most commonly lose their height by flying a figure 8 over a landing zone until they reach the correct height, then line up into the wind and give the glider full speed. Once the correct height (about a metre above ground) is achieved the pilot will stall the glider in order to land. Traffic pattern Unlike during launch, where coordination between multiple pilots is straightforward, landing involves more planning, because more than one pilot might have to land at the same time. Therefore, a specific traffic pattern has been established. Pilots line up into a position above the airfield and to the side of the landing area, which is dependent on the wind direction, where they can lose height (if necessary) by flying circles. From this position, they follow the legs of a flightpath in a rectangular pattern to the landing zone: downwind leg, base leg, and final approach. This allows for synchronization between multiple pilots and reduces the risk of collisions, because a pilot can anticipate what other pilots around him are going to do next. Techniques Landing involves lining up for an approach into wind and, just before touching down, flaring the wing to minimise vertical and/or horizontal speed. This consists of gently going from 0% brake at around two metres to 100% brake when touching down on the ground. During the approach descent, at around four metres before touching ground, some momentary braking (50% for around two seconds) can be applied then released, thus using forward pendular momentum to gain speed for flaring more effectively and approaching the ground with minimal vertical speed. In light winds, some minor running is common. In moderate to medium headwinds, the landings can be without forward speed, or even going backwards with respect to the ground in strong winds. Landing with winds which force the pilot backwards is particularly hazardous as there is a potential to tumble and be dragged. While the wing is vertically above the pilot there is potential for a reduced-risk deflation. This involves taking the leading edge lines (As) in each hand at the maillon/riser junction and applying the pilot's full weight with a deep knee bend action. In almost every case the wing's leading edge will fly forward a little and then tuck. It is then likely to collapse and descend upwind of the pilot. On the ground it will be restrained by the pilot's legs. Landing in winds which are too strong for the wing is to be avoided wherever possible. 
During approach to the intended landing site this potential problem is often obvious and there may be opportunities to extend the flight to find a more sheltered landing area. On every landing it is desirable to have the wing remain flyable with a small amount of forward momentum. This makes deflation much more controllable. While the midsection lines (Bs) are vertical there is much less chance of the wing moving downwind fast. The common deflation cue comes from a vigorous tug on the rear risers' lines (Cs or Ds). Promptly rotate to face down wind, maintain pressure on the rear risers and take brisk steps towards the wing as it falls. With practice there is potential for precision enabling safe trouble-free landing. For strong winds during the landing approach, flapping the wing (symmetrical pulsing of brakes) is a common option on final. It reduces the wing's lift performance. The descent rate is increased by the alternate application and release of the brakes about once per second. (The amount of brake applied in each cycle being variable but about 25%.) The system depends on the pilot's wing familiarity. The wing must not become stalled. This should be established with gentle applications in flight, at a safe height, in good conditions and with an observer providing feedback. As a rule the manufacturer has set the safe-brake-travel-range based on average body proportions for pilots in the approved weight range. Making changes to that setting should be undertaken in small increments, with tell-tale marks showing the variations and a test flight to confirm the desired effect. Shortening the brake lines can produce the problematic effect of making the wing sluggish. Lengthening brakes excessively can make it hard to bring the wing to a safe touchdown speed. Alternative approach techniques for landing in strong winds include the use of a speed bar and big ears. A speed bar increases wing penetration and adds a small increase in the vertical descent rate. This makes it easier to adjust descent rates during a formal circuit. In an extreme situation it might be advisable to stand on the speed bar, after shifting out of the harness, and stay on it till touchdown and deflation. Big ears are commonly applied during circuit height management. The vertical descent speed is increased and that advantage can be used to bring the glider to an appropriate circuit joining height. Most manufacturers change the operation technique for big ears in advanced models. It is common for Big Ears in C-rated gliders to remain folded in after the control line is released. In those cases the wing can be landed with reasonable safety with big ears deployed. In those wing types it usually takes two or three symmetrical pumps with brakes, over a second or two, to re-inflate the tips. In lower rated wings the Big Ears need the line to remain held to hold the ears in. While they are held in, the wing tends to respond to weight shift slightly better (due to reduced effective area) on the roll axis. They auto re-inflate when the line is released. In general those wings are better suited to the situation where the ears are pulled in simply to get rid of excess height. Full-wing flight should then be resumed during base leg or several seconds before touch down. Wing familiarity is a key ingredient in applying these controls. Pilots should practise in medium conditions in a safe area, at a safe height and with options for landing. 
Control Brakes: controls held in each of the pilot's hands connect to the trailing edge of the left and right sides of the wing. These controls are called brakes and provide the primary and most general means of control in a paraglider. The brakes are used to adjust speed, to steer (in addition to weight shift), and to flare (during landing). Weight shift: in addition to manipulating the brakes, a paraglider pilot must also lean in order to steer properly. Such weight shifting can also be used for more limited steering when brake use is unavailable, such as when under "big ears" (see below). More advanced control techniques may also involve weight shifting. Speed bar: a kind of foot control called the speed bar (also accelerator) attaches to the paragliding harness and connects to the leading edge of the paraglider wing, usually through a system of at least two pulleys (see animation in margin). This control is used to increase speed and does so by decreasing the wing's angle of attack. This control is necessary because the brakes can only slow the wing from what is called trim speed (no brakes applied). The accelerator is needed to go faster than this. More advanced means of control can be obtained by manipulating the paraglider's risers or lines directly. Most commonly, the lines connecting to the outermost points of the wing's leading edge can be used to induce the wingtips to fold under. The technique, known as "big ears", is used to increase the rate of descent (see picture and the full description below). The risers connecting to the rear of the wing can also be manipulated for steering if the brakes have been severed or are otherwise unavailable. For ground-handling purposes, a direct manipulation of these lines can be more effective and offer more control than the brakes. The effect of sudden wind blasts can be countered by directly pulling on the risers and making the wing unflyable, thereby avoiding falls or unintentional takeoffs. Fast descents Problems with getting down can occur when the lift situation is very good or when the weather changes unexpectedly. There are three possibilities for rapidly reducing altitude in such situations, each of which has benefits and issues to be aware of. The "big ears" manoeuvre induces descent rates of 2.5 to 3.5 m/s, 4–6 m/s with additional speed bar. It is the most controllable of the techniques and the easiest for beginners to learn. The B-line stall induces descent rates of 6–10 m/s. It increases loading on parts of the wing (the pilot's weight is mostly on the B-lines, instead of spread across all the lines). Finally, a spiral dive offers the fastest rate of descent, at 7–25 m/s. It places greater loads on the wing than other techniques do and requires the highest level of skill from the pilot to execute safely. Big ears Pulling on the outer A-lines during non-accelerated, normal flight folds the wing tips inwards, which substantially reduces the glide angle with only a small decrease in forward speed. As the effective wing area is reduced, the wing loading is increased, and it becomes more stable. However, the angle of attack is increased, and the craft is closer to stall speed, but this can be ameliorated by applying the speed bar, which also increases the descent rate. When the lines are released, the wing re-inflates. If necessary, a short pumping on the brakes helps reestablish normal flight. Compared to the other techniques, with big ears, the wing still glides forward, which enables the pilot to leave an area of danger. 
Even landing this way is possible, e.g., if the pilot has to counter an updraft on a slope. B-line stall In a B-line stall, the second set of risers from the leading-edge/front (the B-lines) are pulled down independently of the other risers, with the specific lines used to initiate a stall. This puts a spanwise crease in the wing, thereby separating the airflow from the upper surface of the wing. It dramatically reduces the lift produced by the canopy and thus induces a higher rate of descent. This can be a strenuous manoeuvre, because these B-lines have to be held in this position, and the tension of the wing puts an upwards force on these lines. The release of these lines has to be handled carefully so as not to let the wing shoot forward too fast, which the pilot could then fall into. This is less popular now as it induces high loads on the internal structure of the wing. Spiral dive The spiral dive is the most rapid form of controlled fast descent; an aggressive spiral dive can achieve a sink rate of 25 m/s. This manoeuvre halts forward progress and brings the flier almost straight down. The pilot pulls the brakes on one side and shifts his weight onto that side to induce a sharp turn. The flight path then begins to resemble a corkscrew. After a specific downward speed is reached, the wing points directly to the ground. When the pilot reaches his desired height, he ends this manoeuvre by slowly releasing the inner brake, shifting his weight to the outer side and braking on this side. The release of the inner brake has to be handled carefully to end the spiral dive gently in a few turns. If done too fast, the wing translates the turning into a dangerous upward and pendular motion. Spiral dives put a strong G-force on the wing and glider and must be done carefully and skilfully. The G-forces involved can induce blackouts, and the rotation can produce disorientation. Some high-end gliders have what is called a "stable spiral problem". After inducing a spiral and without further pilot input, some wings do not automatically return to normal flight and stay inside their spiral. Serious injury and fatal accidents have occurred when pilots could not exit this manoeuvre and spiralled into the ground. The rate of rotation in a spiral dive can be reduced by using a drogue chute, deployed just before the spiral is induced. This reduces the G forces experienced. Soaring Soaring flight is achieved by using wind directed upwards by a fixed object such as a dune or ridge. In slope soaring, pilots fly along the length of a slope feature in the landscape, relying on the lift provided by the air, which is forced up as it passes over the slope. Slope soaring is highly dependent on a steady wind within a defined range (the suitable range depends on the performance of the wing and the skill of the pilot). Too little wind, and insufficient lift is available to stay airborne (pilots end up scratching along the slope). With more wind, gliders can fly well above and forward of the slope, but too much wind, and there is a risk of being blown back over the slope. A particular form of ridge soaring is called condo soaring, where pilots soar a row of buildings that form an artificial ridge. This form of soaring is particularly used in flat lands where there are no natural ridges, but there are plenty of man-made building ridges. Thermal flying When the sun warms the ground, the ground will radiate some of its heat to a thin layer of air situated just above it. 
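To put rough numbers on the G-forces mentioned in the spiral dive description above, the load factor in a steady, coordinated turn can be estimated from airspeed and turn radius. The sketch below is illustrative only: the airspeeds and turn radii are assumed values rather than figures for any particular wing, and a real spiral dive is neither steady nor perfectly coordinated.

```python
import math

def load_factor_g(airspeed_ms: float, turn_radius_m: float) -> float:
    """Approximate load factor (in units of g) for a steady, coordinated turn.

    The wing must supply a centripetal acceleration of v^2 / r horizontally
    while still supporting 1 g vertically, so the total load is the vector
    sum of the two components.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    centripetal_in_g = airspeed_ms ** 2 / (turn_radius_m * g)
    return math.hypot(1.0, centripetal_in_g)

# Assumed example figures: a tight spiral of roughly 12 m radius
print(round(load_factor_g(18.0, 12.0), 1))  # ~2.9 g at 18 m/s (~65 km/h)
print(round(load_factor_g(25.0, 12.0), 1))  # ~5.4 g at 25 m/s (~90 km/h)
```

Even moderate airspeeds in a tight spiral therefore correspond to several g, which is consistent with the blackout and disorientation risks noted above.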
Air has very poor thermal conductivity and most of the heat transfer in it will be convective - forming rising columns of hot air, called thermals. If the terrain is not uniform, it will warm some features more than others (such as rock faces or large buildings) and these thermals will tend to always form at the same spot, otherwise they will be more random. Sometimes these may be a simple rising column of air; more often, they are blown sideways in the wind and will break off from the source, with a new thermal forming later. Once a pilot finds a thermal, he begins to fly in a circle, trying to centre the circle on the strongest part of the thermal (the "core"), where the air is rising the fastest. Most pilots use a vario-altimeter ("vario"), which indicates climb rate with beeps and/or a visual display, to help core in on a thermal. Often there is strong sink surrounding thermals, and there is also strong turbulence resulting in wing collapses as a pilot tries to enter a strong thermal. Good thermal flying is a skill that takes time to learn, but a good pilot can often core a thermal all the way to cloud base. Cross-country flying Once the skills of using thermals to gain altitude have been mastered, pilots can glide from one thermal to the next to go cross country. Having gained altitude in a thermal, a pilot glides down to the next available thermal. Potential thermals can be identified by land features that typically generate thermals or by cumulus clouds, which mark the top of a rising column of warm, humid air as it reaches the dew point and condenses to form a cloud. Cross-country pilots also need an intimate familiarity with air law, flying regulations, aviation maps indicating restricted airspace, etc. In-flight wing deflation (collapse) Since the shape of the wing (airfoil) is formed by the moving air entering and inflating the wing, in turbulent air, part or all of the wing can deflate (collapse). Piloting techniques referred to as active flying will greatly reduce the frequency and severity of deflations or collapses. On modern recreational wings, such deflations will normally recover without pilot intervention. In the event of a severe deflation, correct pilot input will speed recovery from a deflation, but incorrect pilot input may slow the return of the glider to normal flight, so pilot training and practice in correct response to deflations are necessary. For the rare occasions when it is not possible to recover from a deflation (or from other threatening situations such as a spin), most pilots carry a reserve (rescue, emergency) parachute (or even two); however, most pilots never have cause to "throw" their reserve. Should a wing deflation occur at low altitude, i.e., shortly after takeoff or just before landing, the wing (paraglider) may not recover its correct structure rapidly enough to prevent an accident, with the pilot often not having enough altitude remaining to deploy a reserve parachute [with the minimum altitude for this being approximately , but typical deployment to stabilization periods using up of altitude] successfully. Different packing methods of the reserve parachute affect its deploying time. Low-altitude wing failure can result in serious injury or death due to the subsequent velocity of a ground impact whereas a higher altitude failure may allow more time to regain some degree of control in the descent rate and, critically, deploy the reserve if needed. 
In-flight wing deflation and other hazards are minimized by flying a suitable glider and choosing appropriate weather conditions and locations for the pilot's skill and experience level. As a competitive sport There are various disciplines of competitive paragliding: Cross-country flying is the classical form of paragliding competitions with championships in club, regional, national and international levels (see PWC). Aerobatic competitions demand the participants to perform certain manoeuvres. Competitions are held for individual pilots as well as for pairs that show synchronous performances. This form is the most spectacular for spectators on the ground to watch. Hike & Fly competitions, in which a certain route has to be flown or hiked only over several days: Red Bull X-Alps—the unofficial world championship in this category of competition—first launched in 2003 and has since taken place every other year. Since 2012, the similar X-Pyr cross-Pyrenees competition has taken place in the even years. Many other Hike and Fly competitions occur today worldwide. In addition to these organized events it is also possible to participate in various online contests that require participants to upload flight track data to dedicated websites like OLC. Safety Paragliding, like any adventure sport, is a potentially dangerous activity. In the United States, for example, in 2010 (the last year for which details are available), one paraglider pilot died. This is an equivalent rate of one in 5,000 pilots. In 2019, YouTube personality Grant Thompson of The King Of Random died in a paraglider accident. Over the years 1994−2010, an average of seven in every 10,000 active paraglider pilots have been fatally injured, though with a marked improvement in recent years. In France (with over 25,000 registered fliers), two of every 10,000 pilots were fatally injured in 2011 (a rate that is not atypical of the years 2007−2011), although around six of every 1,000 pilots were seriously injured (more than two-day hospital stay). The potential for injury can be significantly reduced by training and risk management. The use of proper equipment such as a wing designed for the pilot's size and skill level, as well as a helmet, a reserve parachute, and a cushioned harness also minimize risk. Pilot safety is influenced by an understanding of the site conditions such as air turbulence (rotors), strong thermals, gusty wind, and ground obstacles such as power lines. Sufficient pilot training in wing control and emergency manoeuvres from competent instructors can minimize accidents. Many paragliding accidents are the result of a combination of pilot error and poor flying conditions. SIV, short for Simulation d'Incident en Vol (simulation of incident in flight) instruction offers training in managing and preventing unstable and potentially dangerous situations such as collapses, full stalls, and cravattes. These courses are typically led by a specially trained instructor over large bodies of water, with the student usually being instructed via radio. Students will be taught how to induce dangerous situations, and thus learn how to both avoid and remedy them once induced. This course is recommended to pilots who are looking to move to more high performance and less stable wings, which is a natural progression for most pilots. In some countries a SIV course is a basic requirement of initial pilot training. In the event of an unrecoverable manoeuvre resulting in water landing, a rescue boat is typically dispatched to collect the pilot. 
Other added safety features may include buoyancy aids or secondary reserve parachutes. These courses are not considered essential for novice-level flying. Fitness and age Paragliding in ordinary circumstances is not especially demanding in terms of strength. It sometimes requires the pilot to walk with equipment to and from a launch site, and this occasionally calls for assistance from a friend or colleague. Age is more significant in people past their fifties. This especially relates to those with artificial joints. An unexpected or heavy landing can put enormous pressure on the bones which serve as anchors for hip and knee joints. Due to increasing loss of bone density in senior pilots, there is an increased risk that during a bad landing a bone may shatter and this considerably complicates moving to an appropriate treatment centre. Currently surgeons often rate these prosthetic joints as being suitable only for smooth, steady workloads. But even for those with ordinary knees and hips there is often a stiffness in walking and running which has a negative effect on launching. Pilots who recognise this minor debility usually avoid strong wind launches, which may require the pilot to move briskly towards the wing during inflation. There are pilots still flying while in their nineties but these are exceptional and they may very well depend on specific assistance. It is important for individuals to consult a doctor following any serious health event. It is especially important to carry, in one's flight pack, an up-to-date list of details relating to medications and major health issues. Instruction Most popular paragliding regions have a number of schools, generally registered with and/or organized by national associations. Certification systems vary widely between countries, though around 10 days of instruction to basic certification is standard. There are several key components to a paragliding pilot certification instruction program. Initial training for beginning pilots usually begins with some amount of ground school to discuss the basics, including elementary theories of flight as well as basic structure and operation of the paraglider. Students then learn how to control the glider on the ground, practising take-offs and controlling the wing 'overhead'. Low, gentle hills are next where students get their first short flights, flying at very low altitudes, to get used to the handling of the wing over varied terrain. Special winches can be used to tow the glider to low altitude in areas that have no hills readily available. As their skills progress, students move on to steeper/higher hills (or higher winch tows), making longer flights, and learning to turn the glider, control the glider's speed, then moving on to 360° turns, spot landings, 'big ears' (used to increase the rate of descent for the paraglider), and other more advanced techniques. Training instructions are often provided to the student via radio, particularly during the first flights. A third key component to a complete paragliding instructional program provides substantial background in the key areas of meteorology, aviation law, and general flight area etiquette. To give prospective pilots a chance to determine if they would like to proceed with a full pilot training program, most schools offer tandem flights, in which an experienced instructor pilots the paraglider with the prospective pilot as a passenger. 
Schools often offer pilots' families and friends the opportunity to fly tandem, and sometimes sell tandem pleasure flights at holiday resorts. Most recognised courses lead to a national licence and an internationally recognised International Pilot Proficiency Information/Identification card. The IPPI specifies five stages of paragliding proficiency, from the entry-level ParaPro 1 to the most advanced stage 5. Attaining a level of ParaPro 3 typically allows the pilot to fly solo or without instructor supervision. World records FAI (Fédération Aéronautique Internationale) world records: Free Distance (previously titled "Straight Distance" prior to May 2020) – : Sebastien Kayrouz (USA), Del Rio, Texas (USA) – Claude, Texas (USA) – 20 June 2021 flying an Ozone Enzo 3 Straight distance (female) – : Yael Margelisch (Switzerland); Caicó (Brazil) – 12 October 2019 flying an Ozone Enzo 3 Straight distance to declared goal – : Sebastien Kayrouz (USA), Del Rio, Texas (USA) – Claude, Texas (USA) – 20 June 2021 flying an Ozone Enzo 3 Straight distance to a declared goal (female) – 457.3 km (284.15 mi): Marcella Pomarico Uchoa (BRA) – 14 Oct 2022 flying an Ozone Enzo 3 (location Caicó (Brazil)) Gain of height – : Antoine Girard (France); Aconcagua (Argentina); 15 February 2019, flying an Ozone LM6 Others: Highest flight – : Ewa Wisnierska; between Barraba and Niagra (Australia). Related activities Sky diving Parachutes bear the closest resemblance to paragliders, but the sports are very different. Whereas with sky diving the parachute is a tool to safely return to earth after free fall, the paraglider allows longer flights and the use of thermals. Hang gliding Hang gliding is a close cousin, and hang glider and paraglider launches are often found in proximity to one another. Despite the considerable difference in equipment, the two activities offer similar pleasures, and some pilots are involved in both sports. Powered hang glider Foot-launched powered hang gliders are powered by an engine and propeller in pusher configuration. An ordinary hang glider is used for its wing and control frame, and the pilot can foot-launch from a hill or from flat ground. Powered paragliding Powered paragliding is the flying of paragliders with a small engine known as a paramotor attached. Powered paragliding is known as paramotoring and requires extra training alongside regular paragliding training. It is often recommended to become competent in paragliding prior to learning to paramotor in order to know fully what one is doing. Speed flying Speed flying, or speed riding, is the separate sport of flying paragliders of a reduced size. These wings have increased speed, though they are not normally capable of soaring flight. The sport involves taking off on skis or on foot and swooping rapidly down in close proximity to a slope, even periodically touching it if skis are used. These smaller wings are also sometimes used where wind speeds are too high for a full-sized paraglider, although this is invariably at coastal sites where the wind is laminar and not subject to as much mechanical turbulence as inland sites. Gliding Just like sailplanes and hang gliders, paragliders use thermals to extend their time in the air. Sailplanes' airspeed, glide ratio and flight distances are superior to those achieved by paragliders. Paragliders, on the other hand, are also able to exploit thermals that are too small (because of the sailplane's much larger turning radius) or too weak for gliding. Paragliding can be of local importance as a commercial activity. 
Paid accompanied tandem flights are available in many mountainous regions, both in the winter and in the summer. In addition, there are many schools offering courses and guides who lead groups of more experienced pilots exploring an area. Finally, there are the manufacturers and the associated repair and after-sales services. Paraglider-like wings also find other uses, for example, in ship propulsion and wind energy exploitation, and are related to some forms of power kite. Kite skiing uses equipment similar to paragliding sails. National organizations United States Hang Gliding and Paragliding Association (USHPA) – United States British Hang Gliding and Paragliding Association (BHPA) – United Kingdom Flyability – BHPA associated charity for disabled paragliding and hang gliding (FFVL) – France Association of Paragliding Pilots and Instructors (APPI) Hang Gliding and Paragliding Association of Canada (HPAC) – Canada (FAVL) – Argentina (DHV) (Eng. German hang gliding association) – Germany Sports Aviation Federation of Australia (SAFA), previously known as the Hang Gliding Federation of Australia (HGFA) – Australia Swiss Hang Gliding Association (SHV/FSVL) – Switzerland Georgian Paragliding Federation (GPF) – Georgia South African Hang Gliding and Paragliding Association - SAHPA – South Africa Asociacion Uruguaya de Parapentes - ASUP – Uruguay Paragliding Federation of Ukraine (PFU) - Ukraine The Hungarian Free Flying Association (HFFA) - Hungary Notes References Further reading Les visiteurs du ciel – Guide de l'air pour l'homme volant. Hubert Aupetit. External links Adventure travel Aircraft configurations Gliding technology Individual sports Articles containing video clips
Paragliding
[ "Engineering" ]
10,546
[ "Aircraft configurations", "Aerospace engineering" ]
63,525
https://en.wikipedia.org/wiki/Tiltrotor
A tiltrotor is an aircraft that generates lift and propulsion by way of one or more powered rotors (sometimes called proprotors) mounted on rotating shafts or nacelles usually at the ends of a fixed wing. Almost all tiltrotors use a transverse rotor design, with a few exceptions that use other multirotor layouts. Tiltrotor design combines the VTOL capability of a helicopter with the speed and range of a conventional fixed-wing aircraft. For vertical flight, the rotors are angled so the plane of rotation is horizontal, generating lift the way a normal helicopter rotor does. As the aircraft gains speed, the rotors are progressively tilted forward, with the plane of rotation eventually becoming vertical. In this mode the rotors provide thrust as a propeller, and the airfoil of the fixed wings takes over providing the lift via the forward motion of the entire aircraft. Since the rotors can be configured to be more efficient for propulsion (e.g. with root-tip twist) and it avoids a helicopter's issues of retreating blade stall, the tiltrotor can achieve higher cruise speeds and takeoff weights than helicopters. A tiltrotor aircraft differs from a tiltwing in that only the rotor pivots rather than the entire wing. This method trades off efficiency in vertical flight for efficiency in STOL/STOVL operations. History The first work in the direction of a tilt-rotor (French "Convertible") seems to have originated ca. 1902 by the French-Swiss brothers Henri and Armand Dufaux, for which they got a patent in February 1904, and made their work public in April 1905. Concrete ideas of constructing vertical take-off and landing (VTOL) aircraft using helicopter-like rotors were pushed further in the 1930s. The first design resembling modern tiltrotors was patented by George Lehberger in May 1930, but he did not further develop the concept. In World War II, Weserflug in Germany came up with the concept of their P.1003/1 around 1938, which was tilting to the top with part of the wings but not the full wings, so it may be in between tilt-rotor and tilt-planes. Shortly after a German prototype, the Focke-Achgelis Fa 269, was developed starting in 1942, which was tilting to the ground, but never flew. Platt and LePage patented the PL-16, the first American tiltrotor aircraft. However, the company shut down in August 1946 due to lack of capital. Two prototypes which made it to flight were the one-seat Transcendental Model 1-G and two seat Transcendental Model 2, each powered by a single reciprocating engine. Development started on the Model 1-G in 1947, though it did not fly until 1954. The Model 1-G flew for about a year until a crash in Chesapeake Bay on July 20, 1955, destroying the prototype aircraft but not seriously injuring the pilot. The Model 2 was developed and flew shortly afterwards, but the US Air Force withdrew funding in favor of the Bell XV-3 and it did not fly much beyond hover tests. The Transcendental 1-G is the first tiltrotor aircraft to have flown and accomplished most of a helicopter to aircraft transition in flight (to within 10 degrees of true horizontal aircraft flight). Built in 1953, the experimental Bell XV-3 flew until 1966, proving the fundamental soundness of the tiltrotor concept and gathering data about technical improvements needed for future designs. A related technology development is the tiltwing. Although two designs, the Canadair CL-84 Dynavert and the LTV XC-142, were technical successes, neither entered production due to other issues. 
Tiltrotors generally have better hover efficiency than tiltwings, but less than helicopters. In 1968, Westland Aircraft displayed their own designs—a small experimental craft (We 01C) and a 68-seater transport We 028—at the SBAC Farnborough Airshow. In 1972, with funding from NASA and the U.S. Army, Bell Helicopter Textron started development of the XV-15, a twin-engine tiltrotor research aircraft. Two aircraft were built to prove the tiltrotor design and explore the operational flight envelope for military and civil applications. In 1981, using experience gained from the XV-3 and XV-15, Bell and Boeing Helicopters began developing the V-22 Osprey, a twin-turboshaft military tiltrotor aircraft for the U.S. Air Force and the U.S. Marine Corps. Bell teamed with Boeing in developing a commercial tiltrotor, but Boeing left the program in 1998 and Agusta joined to create the Bell/Agusta BA609. This aircraft was redesignated as the AW609 following the transfer of full ownership to AgustaWestland in 2011. Bell has also developed a tiltrotor unmanned aerial vehicle (UAV), the TR918 Eagle Eye. Russia has had a few tiltrotor projects, mostly unmanned, such as the Mil Mi-30, and has started another in 2015. Around 2005–2010, Bell and Boeing teamed up again to perform a conceptual study of a larger Quad TiltRotor (QTR) for the US Army's Joint Heavy Lift (JHL) program. The QTR is a larger, four-rotor version of the V-22 with two tandem sets of fixed wings and four tilting rotors. In January 2013, the FAA defined US tiltrotor noise rules to comply with ICAO rules. A noise certification will cost $588,000, the same as for a large helicopter. AgustaWestland says it free-flew a manned electric tiltrotor called Project Zero in 2013, with its rotors inside the wingspan. In 2013, Bell Helicopter CEO John Garrison responded to Boeing's taking a different airframe partner for the US Army's future lift requirements by indicating that Bell would take the lead itself in developing the Bell V-280 Valor, with Lockheed Martin. In 2014, the Clean Sky 2 program (by the European Union and industry) awarded AgustaWestland and its partners $328 million to develop a "next-generation civil tiltrotor" design for the offshore market, with Critical Design Review near the end of 2016. The goals are tilting wing sections, an 11-metric-ton maximum takeoff weight, seating for 19 to 22 passengers, first flight in 2021, a cruise speed of 300 knots, a top speed of 330 knots, a ceiling of 25,000 feet, and a range of 500 nautical miles. Technical considerations Controls In vertical flight, the tiltrotor uses controls very similar to a twin or tandem-rotor helicopter. Yaw is controlled by tilting its rotors in opposite directions. Roll is provided through differential power or thrust. Pitch is provided through rotor-blade cyclic or nacelle tilt. Vertical motion is controlled with conventional rotor blade pitch and either a conventional helicopter collective control lever (as in the Bell/Agusta BA609) or a unique control similar to a fixed-wing engine control called a thrust control lever (TCL) (as in the Bell-Boeing V-22 Osprey). Speed and payload issues The tiltrotor's advantage is significantly greater speed than a helicopter. In a helicopter the maximum forward speed is defined by the turn speed of the rotor; at some point the helicopter will be moving forward at the same speed as the spinning of the backwards-moving side of the rotor, so that side of the rotor sees zero or negative airspeed, and begins to stall. 
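The retreating-blade limit just described can be made concrete with a small calculation. In the sketch below, the tip speed, forward speeds and stall threshold are assumed, illustrative values chosen only to show the trend, not data for any particular helicopter.

```python
# Rough sketch of why forward speed erodes lift on the retreating blade.
# All numbers are illustrative assumptions, not figures for a real rotor.

TIP_SPEED = 210.0            # m/s, assumed rotor tip speed
MIN_USEFUL_AIRSPEED = 25.0   # m/s, assumed airspeed below which a blade element stalls

def stalled_span_fraction(forward_speed_ms: float) -> float:
    """Fraction of the retreating blade whose local airspeed is too low.

    A blade element at span fraction s moves at TIP_SPEED * s due to rotation;
    on the retreating side the aircraft's forward speed subtracts from this,
    so the element sees TIP_SPEED * s - forward_speed.  That falls below the
    stall threshold for all s < (forward_speed + threshold) / TIP_SPEED.
    """
    return min((forward_speed_ms + MIN_USEFUL_AIRSPEED) / TIP_SPEED, 1.0)

for forward in (40.0, 80.0, 120.0):   # roughly 78, 155 and 233 knots
    print(f"{forward:5.0f} m/s forward: inner {stalled_span_fraction(forward):.0%} "
          "of the retreating blade below the stall threshold")
```

As forward speed rises, an ever larger inner portion of the retreating blade sees too little airflow to lift, while the advancing blade sees correspondingly more; this growing asymmetry is what caps a conventional helicopter's forward speed, and it does not apply to a tiltrotor once its rotors act as propellers.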
This limits modern helicopters to cruise speeds of about 150 knots / 277 km/h. However, with the tiltrotor this problem is avoided, because the proprotors are perpendicular to the motion in the high-speed portions of the flight regime (and thus not subject to this reverse flow condition), so the tiltrotor has relatively high maximum speed—over 300 knots / 560 km/h has been demonstrated in the two types of tiltrotors flown so far, and cruise speeds of 250 knots / 460 km/h are achieved. This speed is achieved somewhat at the expense of payload. As a result of this reduced payload, some estimate that a tiltrotor does not exceed the transport efficiency (speed times payload) of a helicopter, while others conclude the opposite. Additionally, the tiltrotor propulsion system is more complex than a conventional helicopter due to the large, articulated nacelles and the added wing; however, the improved cruise efficiency and speed improvement over helicopters is significant in certain uses. Speed and, more importantly, the benefit to overall response time is the principal virtue sought by the military forces that are using the tiltrotor. Tiltrotors are inherently less noisy in forward flight (airplane mode) than helicopters. This, combined with their increased speed, is expected to improve their utility in populated areas for commercial uses and reduce the threat of detection for military uses. Tiltrotors, however, are typically as loud as equally sized helicopters in hovering flight. Noise simulations for a 90-passenger tiltrotor indicate lower cruise noise inside the cabin than a Bombardier Dash 8 airplane, although low-frequency vibrations may be higher. Tiltrotors also provide substantially greater cruise altitude capability than helicopters. Tiltrotors can easily reach 6,000 m / 20,000 ft or more whereas helicopters typically do not exceed 3,000 m / 10,000 ft altitude. This feature will mean that some uses that have been commonly considered only for fixed-wing aircraft can now be supported with tiltrotors without need of a runway. A drawback however is that a tiltrotor suffers considerably reduced payload when taking off from high altitude. Mono tiltrotor A mono tiltrotor aircraft uses a tiltable rotating propeller, or coaxial proprotor, for lift and propulsion. For vertical flight the proprotor is angled to direct its thrust downwards, providing lift. In this mode of operation the craft is essentially identical to a helicopter. As the craft gains speed, the coaxial proprotor is slowly tilted forward, with the blades eventually becoming perpendicular to the ground. In this mode the wing provides the lift, and the wing's greater efficiency helps the tiltrotor achieve its high speed. In this mode, the craft is essentially a turboprop aircraft. A mono tiltrotor aircraft is different from a conventional tiltrotor in which the proprotors are mounted to the wing tips, in that the coaxial proprotor is mounted to the aircraft's fuselage. As a result of this structural efficiency, a mono tiltrotor exceeds the transport efficiency (speed times payload) of both a helicopter and a conventional tiltrotor. One design study concluded that if the mono tiltrotor could be technically realized, it would be half the size, one-third the weight, and nearly twice as fast as a helicopter. In vertical flight, the mono tiltrotor uses controls very similar to a coaxial helicopter, such as the Kamov Ka-50. 
Yaw is controlled, for instance, by increasing the lift on the upper proprotor while decreasing the lift on the lower proprotor. Roll and pitch are provided through rotor cyclic. Vertical motion is controlled with conventional rotor blade pitch. List of tiltrotor aircraft AgustaWestland AW609 AgustaWestland Project Zero American Dynamics AD-150 Bell XV-3 Bell XV-15 Bell Eagle Eye Bell V-280 Valor Bell Boeing V-22 Osprey Curtiss-Wright X-19 Focke-Achgelis Fa 269 IAI Panther Transcendental Model 1-G See also Pitch drop-back Tiltjet Tiltwing Tailsitter PTOL VTOL Thrust vectoring References External links Aircraft configurations
Tiltrotor
[ "Engineering" ]
2,416
[ "Aircraft configurations", "Aerospace engineering" ]
63,528
https://en.wikipedia.org/wiki/Inflorescence
In botany, an inflorescence is a group or cluster of flowers arranged on a plant's stem that is composed of a main branch or a system of branches. An inflorescence is categorized on the basis of the arrangement of flowers on a main axis (peduncle) and by the timing of its flowering (determinate and indeterminate). Morphologically, an inflorescence is the modified part of the shoot of seed plants where flowers are formed on the axis of a plant. The modifications can involve the length and the nature of the internodes and the phyllotaxis, as well as variations in the proportions, compressions, swellings, adnations, connations and reduction of main and secondary axes. One can also define an inflorescence as the reproductive portion of a plant that bears a cluster of flowers in a specific pattern. General characteristics Inflorescences are described by many different characteristics including how the flowers are arranged on the peduncle, the blooming order of the flowers, and how different clusters of flowers are grouped within it. These terms are general representations as plants in nature can have a combination of types. Because flowers facilitate plant reproduction, inflorescence characteristics are largely a result of natural selection. The stem holding the whole inflorescence is called a peduncle. The main axis (also referred to as major stem) above the peduncle bearing the flowers or secondary branches is called the rachis. The stalk of each flower in the inflorescence is called a pedicel. A flower that is not part of an inflorescence is called a solitary flower and its stalk is also referred to as a peduncle. Any flower in an inflorescence may be referred to as a floret, especially when the individual flowers are particularly small and borne in a tight cluster, such as in a pseudanthium. The fruiting stage of an inflorescence is known as an infructescence. Inflorescences may be simple (single) or complex (panicle). The rachis may be one of several types, including single, composite, umbel, spike or raceme. In some species the flowers develop directly from the main stem or woody trunk, rather than from the plant's main shoot. This is called cauliflory and is found across a number of plant families. An extreme version of this is flagelliflory where long, whip-like branches grow from the main trunk to the ground and even below it. Inflorescences form directly on these branches. Terminal flower Plant organs can grow according to two different schemes, namely monopodial or racemose and sympodial or cymose. In inflorescences these two different growth patterns are called indeterminate and determinate respectively, and indicate whether a terminal flower is formed and where flowering starts within the inflorescence. Indeterminate inflorescence: Monopodial (racemose) growth. The terminal bud keeps growing and forming lateral flowers. A terminal flower is never formed. Determinate inflorescence: Sympodial (cymose) growth. The terminal bud forms a terminal flower and then dies out. Other flowers then grow from lateral buds. Indeterminate and determinate inflorescences are sometimes referred to as open and closed inflorescences respectively. The indeterminate patterning of flowers is derived from determinate flowers. It is suggested that indeterminate flowers have a common mechanism that prevents terminal flower growth. Based on phylogenetic analyses, this mechanism arose independently multiple times in different species. 
In an indeterminate inflorescence there is no true terminal flower and the stem usually has a rudimentary end. In many cases the last true flower formed by the terminal bud (subterminal flower) straightens up, appearing to be a terminal flower. Often a vestige of the terminal bud may be noticed higher on the stem. In determinate inflorescences the terminal flower is usually the first to mature (precursive development), while the others tend to mature starting from the base of the stem. This pattern is called acropetal maturation. When flowers start to mature from the top of the stem, maturation is basipetal, whereas when the central flowers mature first, maturation is divergent. Phyllotaxis As with leaves, flowers can be arranged on the stem according to many different patterns. See 'Phyllotaxis' for in-depth descriptions. Similarly, the arrangement of a leaf in the bud is called ptyxis. When a single or a cluster of flower(s) is located at the axil of a bract, the location of the bract in relation to the stem holding the flower(s) is indicated by the use of different terms and may be a useful diagnostic indicator. Typical placements of bracts include: Some plants have bracts that subtend the inflorescence, where the flowers are on branched stalks; the bracts are not connected to the stalks holding the flowers, but are adnate or attached to the main stem (Adnate describes the fusing together of different unrelated parts. When the parts fused together are the same, they are connately joined.) Other plants have the bracts subtend the pedicel or peduncle of single flowers. Metatopic placements of bracts include: When the bract is attached to the stem holding the flower (the pedicel or peduncle), it is said to be recaulescent; sometimes these bracts or bracteoles are highly modified and appear to be appendages of the flower calyx. Recaulescence is the fusion of the subtending leaf with the stem holding the bud or the bud itself, thus the leaf or bract is adnate to the stem of the flower. When the formation of the bud is shifted up the stem distinctly above the subtending leaf, it is described as concaulescent. Organization There is no general consensus in defining the different inflorescences. The following is based on Focko Weberling's Morphologie der Blüten und der Blütenstände (Stuttgart, 1981). The main groups of inflorescences are distinguished by branching. Within these groups, the most important characteristics are the intersection of the axes and different variations of the model. They may contain many flowers (pluriflor) or a few (pauciflor). Inflorescences can be simple or compound. Simple inflorescences Indeterminate or racemose Indeterminate simple inflorescences are generally called racemose. The main kind of racemose inflorescence is the raceme (from classical Latin racemus, 'cluster of grapes'). The other kinds of racemose inflorescence can all be derived from this one by dilation, compression, swelling or reduction of the different axes. Some passage forms between the obvious ones are commonly admitted. A raceme is an unbranched, indeterminate inflorescence with pedicellate (having short floral stalks) flowers along the axis. A spike is a type of raceme with flowers that do not have a pedicel. A racemose corymb is an unbranched, indeterminate inflorescence that is flat-topped or convex due to its outer pedicels being progressively longer than the inner ones. An umbel is a type of raceme with a short axis and multiple floral pedicels of equal length that appear to arise from a common point. 
It is characteristic of Umbelliferae. A spadix is a spike of flowers densely arranged around a central axis, enclosed or accompanied by a highly specialised bract called a spathe. It is characteristic of the family Araceae. A flower head or capitulum is a very contracted raceme in which the single sessile flowers are borne on an enlarged stem. It is characteristic of Dipsacaceae. A catkin or ament is a scaly, generally drooping spike or raceme. Cymose or other complex inflorescences that are superficially similar are also generally called thus. Determinate or cymose Determinate simple inflorescences are generally called cymose. The main kind of cymose inflorescence is the cyme (from the Latin cyma in the sense 'cabbage sprout', from Greek kuma 'anything swollen'). Cymes are further divided according to this scheme: Only one secondary axis: monochasium Secondary buds always develop on the same side of the stem: helicoid cyme or bostryx The successive pedicels are aligned on the same plane: drepanium Secondary buds develop alternately on the stem: scorpioid cyme The successive pedicels are arranged in a sort of spiral: cincinnus (characteristic of the Boraginaceae and Commelinaceae) The successive pedicels follow a zig-zag path on the same plane: rhipidium (many Iridaceae) Two secondary axes: dichasial cyme Secondary axis still dichasial: dichasium (characteristic of Caryophyllaceae) Secondary axes monochasial: double scorpioid cyme or double helicoid cyme More than two secondary axes: pleiochasium A cyme can also be so compressed that it looks like an umbel. Strictly speaking this kind of inflorescence could be called an umbelliform cyme, although it is normally called simply 'umbel'. Another kind of definite simple inflorescence is the raceme-like cyme or botryoid; that is, like a raceme but with a terminal flower, and it is usually improperly called a 'raceme'. A reduced raceme or cyme that grows in the axil of a bract is called a fascicle. A verticillaster is a fascicle with the structure of a dichasium; it is common among the Lamiaceae. Many verticillasters with reduced bracts can form a spicate (spike-like) inflorescence that is commonly called a spike. Compound inflorescences Simple inflorescences are the basis for compound inflorescences or synflorescences. The single flowers are there replaced by a simple inflorescence, which can be either a racemose or a cymose one. Compound inflorescences are composed of branched stems and can involve complicated arrangements that are difficult to trace back to the main branch. A kind of compound inflorescence is the double inflorescence, in which the basic structure is repeated in the place of single florets. For example, a double raceme is a raceme in which the single flowers are replaced by other simple racemes; the same structure can be repeated to form triple or more complex structures. Compound raceme inflorescences can either end with a final raceme (homoeothetic), or not (heterothetic). A compound raceme is often called a panicle. This definition is very different from that given by Weberling. Compound umbels are umbels in which the single flowers are replaced by many smaller umbels called umbellets. The stem attaching the side umbellets to the main stem is called a ray. The most common kind of definite compound inflorescence is the panicle (of Weberling, or 'panicle-like cyme'). A panicle is a definite inflorescence that is increasingly strongly and irregularly branched from the top to the bottom and where each branching has a terminal flower. 
The so-called cymose corymb is similar to a racemose corymb but has a panicle-like structure. Another type of panicle is the anthela. An anthela is a cymose corymb with the lateral flowers higher than the central ones. A raceme in which the single flowers are replaced by cymes is called an (indefinite) thyrse. The secondary cymes can be of any of the different types of dichasia and monochasia. A botryoid in which the single flowers are replaced by cymes is a definite thyrse or thyrsoid. Thyrses are often confusingly called panicles. Other combinations are possible. For example, heads or umbels may be arranged in a corymb or a panicle. Other The family Asteraceae is characterised by a highly specialised head technically called a calathid (but usually referred to as 'capitulum' or 'head'). The family Poaceae has a peculiar inflorescence of small spikes (spikelets) organised in panicles or spikes that are usually simply and improperly referred to as spike and panicle. The genus Ficus (Moraceae) has an inflorescence called a hypanthodium, which bears numerous flowers on the inside of a convex or involuted compound receptacle. The genus Euphorbia has cyathia (sing. cyathium), usually organised in umbels. Some species have inflorescences reduced to composite flowers or pseudanthia, in which case it is difficult to differentiate between inflorescences and single flowers. Development and patterning Development Genetic basis Genes that shape inflorescence development have been studied at great length in Arabidopsis. LEAFY (LFY) is a gene that promotes floral meristem identity, regulating inflorescence development in Arabidopsis. Any alterations in the timing of LFY expression can cause the formation of different inflorescences in the plant. Genes similar in function to LFY include APETALA1 (AP1). Mutations in LFY, AP1, and similar promoting genes can cause conversion of flowers into shoots. In contrast to LEAFY, genes like terminal flower (TFL) support the activity of an inhibitor that prevents flowers from growing on the inflorescence apex (flower primordium initiation), maintaining inflorescence meristem identity. Both types of genes help shape flower development in accordance with the ABC model of flower development. Studies have recently been conducted or are ongoing for homologs of these genes in other flower species. Environmental influences Inflorescence-feeding insect herbivores shape inflorescences by reducing lifetime fitness (how much flowering occurs), seed production by the inflorescences, and plant density, among other traits. In the absence of these herbivores, inflorescences usually produce more flower heads and seeds. Temperature can also variably shape inflorescence development. High temperatures can impair the proper development of flower buds or delay bud development in certain species, while in others an increase in temperature can hasten inflorescence development. Meristems and inflorescence architecture The shift from the vegetative to reproductive phase of a flower involves the development of an inflorescence meristem that generates floral meristems. Plant inflorescence architecture depends on which meristems become flowers and which become shoots. Consequently, genes that regulate floral meristem identity play major roles in determining inflorescence architecture because their expression domain will direct where the plant's flowers are formed. 
On a larger scale, inflorescence architecture affects quality and quantity of offspring from selfing and outcrossing, as the architecture can influence pollination success. For example, Asclepias inflorescences have been shown to have an upper size limit, shaped by self-pollination levels due to crosses between inflorescences on the same plant or between flowers on the same inflorescence. In Aesculus sylvatica, it has been shown that the most common inflorescence sizes are correlated with the highest fruit production as well. References Bibliography Focko Weberling: Morphologie der Blüten und der Blütenstände; Zweiter Teil. Verlag Eugen Ulmer, Stuttgart 1981 Wilhelm Troll: Die Infloreszenzen; Erster Band. Gustav Fischer Verlag, Stuttgart 1964 Wilhelm Troll: Die Infloreszenzen; Zweiter Band, Erster Teil. Gustav Fischer Verlag, Stuttgart 1969 Wilhelm Troll: Praktische Einführung in die Pflanzenmorphologie. Gustav Fischer Verlag, Jena 1957 Bernhard Kausmann: Pflanzenanatomie. Gustav Fischer Verlag, Jena 1963 Walter S. Judd, Christopher S. Campbell, Elizabeth A. Kellogg, Peter F. Stevens, Michael J. Donoghue: Plant Systematics: A Phylogenetic Approach, Sinauer Associates Inc. 2007 Stevens, P. F. (2001 onwards). Angiosperm Phylogeny Website . Version 7, May 2006 [and more or less continuously updated since]. Strasburger, Noll, Schenck, Schimper: Lehrbuch der Botanik für Hochschulen. 4. Auflage, Gustav Fischer, Jena 1900, p. 459 R J Ferry. Inflorescences and Their Names. The McAllen International Orchid Society Journal.Vol. 12(6), pp. 4-11 June 2011 External links Inflorescence Plant morphology
Inflorescence
[ "Biology" ]
3,556
[ "Plant morphology", "Plants" ]
63,541
https://en.wikipedia.org/wiki/Glutamic%20acid
Glutamic acid (symbol Glu or E; the anionic form is known as glutamate) is an α-amino acid that is used by almost all living beings in the biosynthesis of proteins. It is a non-essential nutrient for humans, meaning that the human body can synthesize enough for its use. It is also the most abundant excitatory neurotransmitter in the vertebrate nervous system. It serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABAergic neurons. Its molecular formula is C5H9NO4. Glutamic acid exists in two optically isomeric forms; the dextrorotatory L-form is usually obtained by hydrolysis of gluten or from the waste waters of beet-sugar manufacture or by fermentation. Its molecular structure could be idealized as HOOC−CH(NH2)−(CH2)2−COOH, with two carboxyl groups −COOH and one amino group −NH2. However, in the solid state and mildly acidic water solutions, the molecule assumes an electrically neutral zwitterion structure −OOC−CH(NH3+)−(CH2)2−COOH. It is encoded by the codons GAA or GAG. The acid can lose one proton from its second carboxyl group to form the conjugate base, the singly-negative anion glutamate −OOC−CH(NH3+)−(CH2)2−COO−. This form of the compound is prevalent in neutral solutions. The glutamate neurotransmitter plays the principal role in neural activation. This anion creates the savory umami flavor of foods and is found in glutamate flavorings such as MSG. In Europe, it is classified as food additive E620. In highly alkaline solutions the doubly negative anion −OOC−CH(NH2)−(CH2)2−COO− prevails. The radical corresponding to glutamate is called glutamyl. The one-letter symbol E for glutamate was assigned as the letter following D for aspartate, as glutamate is larger by one methylene –CH2– group. Chemistry Ionization When glutamic acid is dissolved in water, the amino group (−NH2) may gain a proton (H+), and/or the carboxyl groups may lose protons, depending on the acidity of the medium. In sufficiently acidic environments, both carboxyl groups are protonated and the molecule becomes a cation with a single positive charge, HOOC−CH(NH3+)−(CH2)2−COOH. At pH values between about 2.5 and 4.1, the carboxylic acid closer to the amine generally loses a proton, and the acid becomes the neutral zwitterion −OOC−CH(NH3+)−(CH2)2−COOH. This is also the form of the compound in the crystalline solid state. The change in protonation state is gradual; the two forms are in equal concentrations at pH 2.10. At even higher pH, the other carboxylic acid group loses its proton and the acid exists almost entirely as the glutamate anion −OOC−CH(NH3+)−(CH2)2−COO−, with a single negative charge overall. The change in protonation state occurs at pH 4.07. This form with both carboxylates lacking protons is dominant in the physiological pH range (7.35–7.45). At even higher pH, the amino group loses the extra proton, and the prevalent species is the doubly-negative anion −OOC−CH(NH2)−(CH2)2−COO−. The change in protonation state occurs at pH 9.47. Optical isomerism Glutamic acid is chiral; two mirror-image enantiomers exist: the dextrorotatory L-(+) form and the levorotatory D-(−) form. The L-form is more widely occurring in nature, but the D-form occurs in some special contexts, such as the bacterial capsule and cell walls of the bacteria (which produce it from the L-form with the enzyme glutamate racemase) and the liver of mammals. History Although they occur naturally in many foods, the flavor contributions made by glutamic acid and other amino acids were only scientifically identified early in the 20th century. 
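As a small worked illustration of the ionization behaviour described above, the Henderson–Hasselbalch relation can be applied to the three pKa values quoted (2.10, 4.07 and 9.47). The sketch below treats the three ionizable groups independently, which is a common simplification rather than an exact treatment, and the chosen pH values are just examples.

```python
def fraction_deprotonated(pH: float, pKa: float) -> float:
    """Fraction of one acidic group that has lost its proton.

    From the Henderson-Hasselbalch equation, pH = pKa + log10([A-]/[HA]),
    the deprotonated fraction is 1 / (1 + 10**(pKa - pH)).
    """
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

PKA_ALPHA_COOH = 2.10  # carboxyl group nearer the amine
PKA_SIDE_COOH = 4.07   # side-chain carboxyl group
PKA_AMMONIUM = 9.47    # -NH3+ group

for pH in (1.0, 3.0, 7.4, 11.0):
    charge = (-fraction_deprotonated(pH, PKA_ALPHA_COOH)
              - fraction_deprotonated(pH, PKA_SIDE_COOH)
              + 1.0 - fraction_deprotonated(pH, PKA_AMMONIUM))
    print(f"pH {pH:4.1f}: average net charge = {charge:+.2f}")
```

The output (about +0.93, +0.03, −1.01 and −1.97 respectively) reproduces the picture in the text: a singly positive cation in strongly acidic solution, the roughly neutral zwitterion near pH 3, the singly negative glutamate anion at physiological pH, and an increasingly doubly negative anion in strongly alkaline solution.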
The substance was discovered and identified in the year 1866 by the German chemist Karl Heinrich Ritthausen, who treated wheat gluten (for which it was named) with sulfuric acid. In 1908, Japanese researcher Kikunae Ikeda of the Tokyo Imperial University identified brown crystals left behind after the evaporation of a large amount of kombu broth as glutamic acid. These crystals, when tasted, reproduced the novel flavor he detected in many foods, most especially in seaweed. Professor Ikeda termed this flavor umami. He then patented a method of mass-producing a crystalline salt of glutamic acid, monosodium glutamate. Synthesis Biosynthesis Industrial synthesis Glutamic acid is produced on the largest scale of any amino acid, with an estimated annual production of about 1.5 million tons in 2006. Chemical synthesis was supplanted by the aerobic fermentation of sugars and ammonia in the 1950s, with the organism Corynebacterium glutamicum (also known as Brevibacterium flavum) being the most widely used for production. Isolation and purification can be achieved by concentration and crystallization; it is also widely available as its hydrochloride salt. Function and uses Metabolism Glutamate is a key compound in cellular metabolism. In humans, dietary proteins are broken down by digestion into amino acids, which serve as metabolic fuel for other functional roles in the body. A key process in amino acid degradation is transamination, in which the amino group of an amino acid is transferred to an α-ketoacid, typically catalysed by a transaminase. The reaction can be generalised as such: R1-amino acid + R2-α-ketoacid ⇌ R1-α-ketoacid + R2-amino acid A very common α-keto acid is α-ketoglutarate, an intermediate in the citric acid cycle. Transamination of α-ketoglutarate gives glutamate. The resulting α-ketoacid product is often a useful one as well, which can contribute as fuel or as a substrate for further metabolism processes. Examples are as follows: Alanine + α-ketoglutarate ⇌ pyruvate + glutamate Aspartate + α-ketoglutarate ⇌ oxaloacetate + glutamate Both pyruvate and oxaloacetate are key components of cellular metabolism, contributing as substrates or intermediates in fundamental processes such as glycolysis, gluconeogenesis, and the citric acid cycle. Glutamate also plays an important role in the body's disposal of excess or waste nitrogen. Glutamate undergoes deamination, an oxidative reaction catalysed by glutamate dehydrogenase, as follows: glutamate + H2O + NADP+ → α-ketoglutarate + NADPH + NH3 + H+ Ammonia (as ammonium) is then excreted predominantly as urea, synthesised in the liver. Transamination can thus be linked to deamination, effectively allowing nitrogen from the amine groups of amino acids to be removed, via glutamate as an intermediate, and finally excreted from the body in the form of urea. Glutamate is also a neurotransmitter (see below), which makes it one of the most abundant molecules in the brain. Malignant brain tumors known as glioma or glioblastoma exploit this phenomenon by using glutamate as an energy source, especially when these tumors become more dependent on glutamate due to mutations in the gene IDH1. Neurotransmitter Glutamate is the most abundant excitatory neurotransmitter in the vertebrate nervous system. At chemical synapses, glutamate is stored in vesicles. Nerve impulses trigger the release of glutamate from the presynaptic cell. Glutamate acts on ionotropic and metabotropic (G-protein coupled) receptors. 
In the opposing postsynaptic cell, glutamate receptors, such as the NMDA receptor or the AMPA receptor, bind glutamate and are activated. Because of its role in synaptic plasticity, glutamate is involved in cognitive functions such as learning and memory in the brain. The form of plasticity known as long-term potentiation takes place at glutamatergic synapses in the hippocampus, neocortex, and other parts of the brain. Glutamate works not only as a point-to-point transmitter, but also through spill-over synaptic crosstalk between synapses in which summation of glutamate released from a neighboring synapse creates extrasynaptic signaling/volume transmission. In addition, glutamate plays important roles in the regulation of growth cones and synaptogenesis during brain development as originally described by Mark Mattson. Brain nonsynaptic glutamatergic signaling circuits Extracellular glutamate in Drosophila brains has been found to regulate postsynaptic glutamate receptor clustering, via a process involving receptor desensitization. A gene expressed in glial cells actively transports glutamate into the extracellular space, while, in the nucleus accumbens-stimulating group II metabotropic glutamate receptors, this gene was found to reduce extracellular glutamate levels. This raises the possibility that this extracellular glutamate plays an "endocrine-like" role as part of a larger homeostatic system. GABA precursor Glutamate also serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABA-ergic neurons. This reaction is catalyzed by glutamate decarboxylase (GAD). GABA-ergic neurons are identified (for research purposes) by revealing its activity (with the autoradiography and immunohistochemistry methods) which is most abundant in the cerebellum and pancreas. Stiff person syndrome is a neurologic disorder caused by anti-GAD antibodies, leading to a decrease in GABA synthesis and, therefore, impaired motor function such as muscle stiffness and spasm. Since the pancreas has abundant GAD, a direct immunological destruction occurs in the pancreas and the patients will have diabetes mellitus. Flavor enhancer Glutamic acid, being a constituent of protein, is present in foods that contain protein, but it can only be tasted when it is present in an unbound form. Significant amounts of free glutamic acid are present in a wide variety of foods, including cheeses and soy sauce, and glutamic acid is responsible for umami, one of the five basic tastes of the human sense of taste. Glutamic acid often is used as a food additive and flavor enhancer in the form of its sodium salt, known as monosodium glutamate (MSG). Nutrient All meats, poultry, fish, eggs, dairy products, and kombu are excellent sources of glutamic acid. Some protein-rich plant foods also serve as sources. 30% to 35% of gluten (much of the protein in wheat) is glutamic acid. Ninety-five percent of the dietary glutamate is metabolized by intestinal cells in a first pass. Plant growth Auxigro is a plant growth preparation that contains 30% glutamic acid. NMR spectroscopy In recent years, there has been much research into the use of residual dipolar coupling (RDC) in nuclear magnetic resonance spectroscopy (NMR). A glutamic acid derivative, poly-γ-benzyl-L-glutamate (PBLG), is often used as an alignment medium to control the scale of the dipolar interactions observed. 
Role of glutamate in aging Pharmacology The drug phencyclidine (more commonly known as PCP or 'Angel Dust') antagonizes glutamic acid non-competitively at the NMDA receptor. For the same reasons, dextromethorphan and ketamine also have strong dissociative and hallucinogenic effects. Acute infusion of the drug eglumetad (also known as eglumegad or LY354740), an agonist of the metabotropic glutamate receptors 2 and 3, resulted in a marked diminution of yohimbine-induced stress response in bonnet macaques (Macaca radiata); chronic oral administration of eglumetad in those animals led to markedly reduced baseline cortisol levels (approximately 50 percent) in comparison to untreated control subjects. Eglumetad has also been demonstrated to act on the metabotropic glutamate receptor 3 (GRM3) of human adrenocortical cells, downregulating aldosterone synthase, CYP11B1, and the production of adrenal steroids (i.e. aldosterone and cortisol). Glutamate does not easily pass the blood-brain barrier, but, instead, is transported by a high-affinity transport system. It can also be converted into glutamine. Glutamate toxicity can be reduced by antioxidants. The psychoactive principle of cannabis, tetrahydrocannabinol (THC), the non-psychoactive principle cannabidiol (CBD), and other cannabinoids have been found to block glutamate neurotoxicity with similar potency, acting as potent antioxidants. See also References Further reading External links Glutamic acid MS Spectrum Amino acids Proteinogenic amino acids Glucogenic amino acids Excitatory amino acids Flavor enhancers Umami enhancers Glutamates Glutamic acids Excitatory amino acid receptor agonists Glycine receptor agonists Peripherally selective drugs Chelating agents Glutamate (neurotransmitter) E-number additives
Glutamic acid
[ "Chemistry" ]
3,087
[ "Amino acids", "Biomolecules by chemical classification", "Chelating agents", "Process chemicals" ]
63,547
https://en.wikipedia.org/wiki/Pancreatitis
Pancreatitis is a condition characterized by inflammation of the pancreas. The pancreas is a large organ behind the stomach that produces digestive enzymes and a number of hormones. There are two main types: acute pancreatitis, and chronic pancreatitis. Signs and symptoms of pancreatitis include pain in the upper abdomen, nausea and vomiting. The pain often goes into the back and is usually severe. In acute pancreatitis, a fever may occur; symptoms typically resolve in a few days. In chronic pancreatitis, weight loss, fatty stool, and diarrhea may occur. Complications may include infection, bleeding, diabetes mellitus, or problems with other organs. The two most common causes of acute pancreatitis are a gallstone blocking the common bile duct after the pancreatic duct has joined; and heavy alcohol use. Other causes include direct trauma, certain medications, infections such as mumps, and tumors. Chronic pancreatitis may develop as a result of acute pancreatitis. It is most commonly due to many years of heavy alcohol use. Other causes include high levels of blood fats, high blood calcium, some medications, and certain genetic disorders, such as cystic fibrosis, among others. Smoking increases the risk of both acute and chronic pancreatitis. Diagnosis of acute pancreatitis is based on a threefold increase in the blood of either amylase or lipase. In chronic pancreatitis, these tests may be normal. Medical imaging such as ultrasound and CT scan may also be useful. Acute pancreatitis is usually treated with intravenous fluids, pain medication, and sometimes antibiotics. Typically eating and drinking are disallowed, and a nasogastric tube is placed in the stomach. A procedure known as an endoscopic retrograde cholangiopancreatography (ERCP) may be done to examine the distal common bile duct and remove a gallstone if present. In those with gallstones the gallbladder is often also removed. In chronic pancreatitis, in addition to the above, temporary feeding through a nasogastric tube may be used to provide adequate nutrition. Long-term dietary changes and pancreatic enzyme replacement may be required. Occasionally, surgery is done to remove parts of the pancreas. Globally, in 2015 about 8.9 million cases of pancreatitis occurred. This resulted in 132,700 deaths, up from 83,000 deaths in 1990. Acute pancreatitis occurs in about 30 per 100,000 people a year. New cases of chronic pancreatitis develop in about 8 per 100,000 people a year and currently affect about 50 per 100,000 people in the United States. It is more common in men than women. Often chronic pancreatitis starts between the ages of 30 and 40 and is rare in children. Acute pancreatitis was first described on autopsy in 1882 while chronic pancreatitis was first described in 1946. Signs and symptoms The most common symptoms of pancreatitis are severe upper abdominal or left upper quadrant burning pain radiating to the back, nausea, and vomiting that is worse with eating. The physical examination will vary depending on severity and presence of internal bleeding. Blood pressure may be elevated by pain or decreased by dehydration or bleeding. Heart and respiratory rates are often elevated. The abdomen is usually tender but to a lesser degree than the pain itself. As is common in abdominal disease, bowel sounds may be reduced from reflex bowel paralysis. Fever or jaundice may be present. Chronic pancreatitis can lead to diabetes or pancreatic cancer. Unexplained weight loss may occur from a lack of pancreatic enzymes hindering digestion. 
Complications Early complications include shock, infection, systemic inflammatory response syndrome, low blood calcium, high blood glucose, and dehydration. Blood loss, dehydration, and fluid leaking into the abdominal cavity (ascites) can lead to kidney failure. Respiratory complications are often severe. Pleural effusion is usually present. Shallow breathing from pain can lead to lung collapse. Pancreatic enzymes may attack the lungs, causing inflammation. Severe inflammation can lead to intra-abdominal hypertension and abdominal compartment syndrome, further impairing renal and respiratory function and potentially requiring management with an open abdomen to relieve the pressure. Late complications include recurrent pancreatitis and the development of pancreatic pseudocysts—collections of pancreatic secretions that have been walled off by scar tissue. These may cause pain, become infected, rupture and bleed, block the bile duct and cause jaundice, or migrate around the abdomen. Acute necrotizing pancreatitis can lead to a pancreatic abscess, a collection of pus caused by necrosis, liquefaction, and infection. This happens in approximately 3% of cases or almost 60% of cases involving more than two pseudocysts and gas in the pancreas. Causes About 80 percent of pancreatitis cases are caused by gallstones or alcohol. Choledocholithiasis (gallstones in the bile duct) are the single most common cause of acute pancreatitis, and alcoholism is the single most common cause of chronic pancreatitis. Serum triglyceride levels greater than 1000 mg/dL (11.29 mmol/L, i.e. hyperlipidemia) is another cause. The mnemonic "GET SMASHED" is often used to help clinicians and medical students remember the common causes of pancreatitis: Gallstones, Ethanol, Trauma, Steroids, Mumps, Autoimmune, Scorpion sting, Hyperlipidemia, hypothermia or hyperparathyroidism, ERCP, Drugs (commonly azathioprine, valproic acid, liraglutide). Medications There are seven classes of medications associated with acute pancreatitis: statins, ACE inhibitors, oral contraceptives/hormone replacement therapy (HRT), diuretics, antiretroviral therapy, valproic acid, and oral hypoglycemic agents. Mechanisms of these drugs causing pancreatitis are not known exactly, but it is possible that statins have direct toxic effect on the pancreas or through the long-term accumulation of toxic metabolites. Meanwhile, ACE inhibitors cause angioedema of the pancreas through the accumulation of bradykinin. Birth control pills and HRT cause arterial thrombosis of the pancreas through the accumulation of fat (hypertriglyceridemia). Diuretics such as furosemide have a direct toxic effect on the pancreas. Meanwhile, thiazide diuretics cause hypertriglyceridemia and hypercalcemia, where the latter is the risk factor for pancreatic stones. HIV infection itself can cause a person to be more likely to get pancreatitis. Meanwhile, antiretroviral drugs may cause metabolic disturbances such as hyperglycemia and hypercholesterolemia, which predisposes to pancreatitis. Valproic acid may have direct toxic effect on the pancreas. Various oral hypoglycemic agents are associated with pancreatitis including metformin, but glucagon-like peptide-1 mimetics such as exenatide are more strongly associated with pancreatitis by promoting inflammation in combination with a high-fat diet. Atypical antipsychotics such as clozapine, risperidone, and olanzapine can also cause pancreatitis. 
Infection A number of infectious agents have been recognized as causes of pancreatitis including: Viruses Coxsackie virus Cytomegalovirus Hepatitis B Herpes simplex virus Mumps Varicella-zoster virus Bacteria Legionella Leptospira Mycoplasma Salmonella Fungi Aspergillus Parasites Ascaris Cryptosporidium Toxoplasma Other Other common causes include trauma, autoimmune disease, high blood calcium, hypothermia, and endoscopic retrograde cholangiopancreatography (ERCP). Pancreas divisum is a common congenital malformation of the pancreas that may underlie some recurrent cases. Diabetes mellitus type 2 is associated with a 2.8-fold higher risk. Less common causes include pancreatic cancer, pancreatic duct stones, vasculitis (inflammation of the small blood vessels in the pancreas), and porphyria—particularly acute intermittent porphyria and erythropoietic protoporphyria. There is an inherited form that results in the activation of trypsinogen within the pancreas, leading to autodigestion. Involved genes may include trypsin 1, which codes for trypsinogen, SPINK1, which codes for a trypsin inhibitor, or cystic fibrosis transmembrane conductance regulator. Diagnosis The differential diagnosis for pancreatitis includes but is not limited to cholecystitis, choledocholithiasis, perforated peptic ulcer, bowel infarction, small bowel obstruction, hepatitis, and mesenteric ischemia. Diagnosis requires 2 of the 3 following criteria: Characteristic acute onset of epigastric or vague abdominal pain that may radiate to the back (see signs and symptoms above) Serum amylase or lipase levels ≥ 3 times the upper limit of normal An imaging study with characteristic changes. CT, MRI, abdominal ultrasound or endoscopic ultrasound can be used for diagnosis. Amylase and lipase are 2 enzymes produced by the pancreas. Elevations in lipase are generally considered a better indicator for pancreatitis as it has greater specificity and has a longer half life. However, both enzymes can be elevated in other disease states. In chronic pancreatitis, the fecal pancreatic elastase-1 (FPE-1) test is a marker of exocrine pancreatic function. Additional tests that may be useful in evaluating chronic pancreatitis include hemoglobin A1C, immunoglobulin G4, rheumatoid factor, and anti-nuclear antibody. For imaging, abdominal ultrasound is convenient, simple, non-invasive, and inexpensive. It is more sensitive and specific for pancreatitis from gallstones than other imaging modalities. However, in 25–35% of patients the view of the pancreas can be obstructed by bowel gas making it difficult to evaluate. A contrast-enhanced CT scan is usually performed more than 48 hours after the onset of pain to evaluate for pancreatic necrosis and extrapancreatic fluid as well as predict the severity of the disease. CT scanning earlier can be falsely reassuring. ERCP or an endoscopic ultrasound can also be used if a biliary cause for pancreatitis is suspected. Treatment The treatment of pancreatitis is supportive and depends on severity. Morphine generally is suitable for pain control. There are no clinical studies to suggest that morphine can aggravate or cause pancreatitis or cholecystitis. The treatment for acute pancreatitis will depend on whether the diagnosis is for the mild form of the condition, which causes no complications, or the severe form, which can cause serious complications. Mild acute pancreatitis The treatment of mild acute pancreatitis is successfully carried out by admission to a general hospital ward. 
Traditionally, people were not allowed to eat until the inflammation resolved but more recent evidence suggests early feeding is safe and improves outcomes, and may result in an ability to leave the hospital sooner. Due to inflammation occurring in pancreatitis, proinflammatory cytokines secreted into the bloodstream can cause inflammation throughout the body, including the lungs and can manifest as ARDS. Because pancreatitis can cause lung injury and affect normal lung function, supplemental oxygen is occasionally delivered through breathing tubes that are connected via the nose (e.g., nasal cannulae) or via a mask. The tubes can then be removed after a few days once it is clear that the condition is improving. Dehydration may result during an episode of acute pancreatitis, so fluids will be provided intravenously. Opioids may be used for the pain. When the pancreatitis is due to gallstones, early gallbladder removal also appears to improve outcomes. Severe acute pancreatitis Severe pancreatitis can cause organ failure, necrosis, infected necrosis, pseudocyst, and abscess. If diagnosed with severe acute pancreatitis, people will need to be admitted to a high-dependency unit or intensive care unit. It is likely that the levels of fluids inside the body will have dropped significantly as it diverts bodily fluids and nutrients in an attempt to repair the pancreas. The drop in fluid levels can lead to a reduction in the volume of blood within the body, which is known as hypovolemic shock. Hypovolemic shock can be life-threatening as it can very quickly starve the body of the oxygen-rich blood that it needs to survive. To avoid going into hypovolemic shock, fluids will be administered intravenously. Oxygen will be supplied through tubes attached to the nose and ventilation equipment may be used to assist with breathing. Feeding tubes may be used to provide nutrients, combined with appropriate analgesia. As with mild pancreatitis, it will be necessary to treat the underlying cause—gallstones, discontinuing medications, cessation of alcohol, etc. If the cause is gallstones, it is likely that an ERCP procedure or removal of the gallbladder will be recommended. The gallbladder should be removed during the same hospital admission or within two weeks of pancreatitis onset so as to limit the risk of recurrent pancreatitis. If the cause of pancreatitis is alcohol, cessation of alcohol consumption and treatment for alcohol dependency may improve pancreatitis. Even if the underlying cause is not related to alcohol consumption, doctors recommend avoiding it for at least six months as this can cause further damage to the pancreas during the recovery process. Oral intake, especially fats, is generally restricted initially but early enteral feeding within 48 hours has been shown to improve clinical outcomes. Fluids and electrolytes are replaced intravenously. Nutritional support is initiated via tube feeding to surpass the portion of the digestive tract most affected by secreted pancreatic enzymes if there is no improvement in the first 72–96 hours of treatment. Prognosis Severe acute pancreatitis has mortality rates around 2–9%, higher where necrosis of the pancreas has occurred. Several scoring systems are used to predict the severity of an attack of pancreatitis. They each combine demographic and laboratory data to estimate severity or probability of death. Examples include APACHE II, Ranson, BISAP, and Glasgow. 
The Modified Glasgow criteria suggest that a case be considered severe if at least three of the following are true: Age > 55 years Blood levels: PO2 oxygen < 60 mmHg or 7.9 kPa White blood cells > 15,000/μL Calcium < 2 mmol/L Blood urea nitrogen > 16 mmol/L Lactate dehydrogenase (LDH) > 600 IU/L Aspartate transaminase (AST) > 200 IU/L Albumin < 32 g/L Glucose > 10 mmol/L This can be remembered using the mnemonic PANCREAS: PO2 oxygen < 60 mmHg or 7.9 kPa Age > 55 Neutrophilia white blood cells > 15,000/μL Calcium < 2 mmol/L Renal function (BUN) > 16 mmol/L Enzymes lactate dehydrogenase (LDH) > 600 IU/L aspartate transaminase (AST) > 200 IU/L Albumin < 32 g/L Sugar glucose > 10 mmol/L The BISAP score (blood urea nitrogen level >25 mg/dL (8.9 mmol/L), impaired mental status, systemic inflammatory response syndrome, age over 60 years, pleural effusion) has been validated as similar to other prognostic scoring systems. Epidemiology Globally, the incidence of acute pancreatitis is 5 to 35 cases per 100,000 people. The incidence of chronic pancreatitis is 4–8 per 100,000 with a prevalence of 26–42 cases per 100,000. In 2013 pancreatitis resulted in 123,000 deaths, up from 83,000 deaths in 1990. Costs In adults in the United Kingdom, the estimated average total direct and indirect costs of chronic pancreatitis are roughly £79,000 per person on an annual basis. Acute recurrent pancreatitis and chronic pancreatitis occur infrequently in children, but are associated with high healthcare costs due to substantial disease burden. Globally, the estimated average total cost of treatment for children with these conditions is approximately $40,500/person/year. Other animals Fatty foods may cause canine pancreatitis in dogs. See also Exocrine pancreatic insufficiency Chronic pancreatitis References External links GeneReviews/NCBI/NIH/UW entry on PRSS1-Related Hereditary Pancreatitis Abdominal pain Herpes simplex virus–associated diseases Inflammations Metabolic disorders Pancreas disorders Wikipedia emergency medicine articles ready to translate
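The Modified Glasgow assessment above is, computationally, just a count of how many thresholds are exceeded, with three or more indicating a severe case. The following is a minimal illustrative sketch in Python; the function name, argument names, and example values are assumptions for illustration and do not come from any clinical library, and it follows the PANCREAS grouping above in counting the two enzyme thresholds (LDH, AST) as a single criterion, a detail that varies between sources.

```python
# Minimal sketch of the Modified Glasgow (Imrie) severity count described above.
# All names and example values are illustrative; inputs are assumed to be in the
# same units as the criteria in the text.

def modified_glasgow(age_years, pao2_mmhg, wbc_per_ul, calcium_mmol_l,
                     urea_mmol_l, ldh_iu_l, ast_iu_l, albumin_g_l,
                     glucose_mmol_l):
    """Return (score, severe); severe means three or more criteria are met."""
    criteria = [
        pao2_mmhg < 60,                      # P: arterial oxygen
        age_years > 55,                      # A: age
        wbc_per_ul > 15_000,                 # N: neutrophilia (white cells)
        calcium_mmol_l < 2.0,                # C: calcium
        urea_mmol_l > 16,                    # R: renal function (urea/BUN)
        ldh_iu_l > 600 or ast_iu_l > 200,    # E: enzymes (counted once here)
        albumin_g_l < 32,                    # A: albumin
        glucose_mmol_l > 10,                 # S: sugar (glucose)
    ]
    score = sum(criteria)
    return score, score >= 3


if __name__ == "__main__":
    # Hypothetical patient values, for illustration only.
    score, severe = modified_glasgow(age_years=62, pao2_mmhg=55,
                                     wbc_per_ul=17_000, calcium_mmol_l=1.9,
                                     urea_mmol_l=18, ldh_iu_l=650,
                                     ast_iu_l=150, albumin_g_l=30,
                                     glucose_mmol_l=12)
    print(score, "severe" if severe else "not severe")
```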
Pancreatitis
[ "Chemistry" ]
3,691
[ "Metabolic disorders", "Metabolism" ]
63,550
https://en.wikipedia.org/wiki/Arginine
Arginine is the amino acid with the formula (H2N)(HN)CN(H)(CH2)3CH(NH2)CO2H. The molecule features a guanidino group appended to a standard amino acid framework. At physiological pH, the carboxylic acid is deprotonated (−CO2−) and both the amino and guanidino groups are protonated, resulting in a cation. Only the -arginine (symbol Arg or R) enantiomer is found naturally. Arg residues are common components of proteins. It is encoded by the codons CGU, CGC, CGA, CGG, AGA, and AGG. The guanidine group in arginine is the precursor for the biosynthesis of nitric oxide. Like all amino acids, it is a white, water-soluble solid. The one-letter symbol R was assigned to arginine for its phonetic similarity. History Arginine was first isolated in 1886 from yellow lupin seedlings by the German chemist Ernst Schulze and his assistant Ernst Steiger. He named it from the Greek árgyros (ἄργυρος) meaning "silver" due to the silver-white appearance of arginine nitrate crystals. In 1897, Schulze and Ernst Winterstein (1865–1949) determined the structure of arginine. Schulze and Winterstein synthesized arginine from ornithine and cyanamide in 1899, but some doubts about arginine's structure lingered until Sørensen's synthesis of 1910. Sources Production It is traditionally obtained by hydrolysis of various cheap sources of protein, such as gelatin. It is obtained commercially by fermentation. In this way, 25-35 g/liter can be produced, using glucose as a carbon source. Dietary sources Arginine is classified as a semiessential or conditionally essential amino acid, depending on the developmental stage and health status of the individual. Preterm infants are unable to synthesize arginine internally, making the amino acid nutritionally essential for them. Most healthy people do not need to supplement with arginine because it is a component of all protein-containing foods and can be synthesized in the body from glutamine via citrulline. Additional, dietary arginine is necessary for otherwise healthy individuals temporarily under physiological stress, for example during recovery from burns, injury or sepsis, or if either of the major sites of arginine biosynthesis, the small intestine and kidneys, have reduced function, because the small bowel does the first step of the synthesizing process and the kidneys do the second. Arginine is an essential amino acid for birds, as they do not have a urea cycle. For some carnivores, for example cats, dogs and ferrets, arginine is essential, because after a meal, their highly efficient protein catabolism produces large quantities of ammonia which need to be processed through the urea cycle, and if not enough arginine is present, the resulting ammonia toxicity can be lethal. This is not a problem in practice, because meat contains sufficient arginine to avoid this situation. Animal sources of arginine include meat, dairy products, and eggs, and plant sources include seeds of all types, for example grains, beans, and nuts. Biosynthesis Arginine is synthesized from citrulline in the urea cycle by the sequential action of the cytosolic enzymes argininosuccinate synthetase and argininosuccinate lyase. This is an energetically costly process, because for each molecule of argininosuccinate that is synthesized, one molecule of adenosine triphosphate (ATP) is hydrolyzed to adenosine monophosphate (AMP), consuming two ATP equivalents. The pathways linking arginine, glutamine, and proline are bidirectional. 
Thus, the net use or production of these amino acids is highly dependent on cell type and developmental stage. Arginine is made by the body as follows. The epithelial cells of the small intestine produce citrulline, primarily from glutamine and glutamate, which is secreted into the bloodstream which carries it to the proximal tubule cells of the kidney, which extract the citrulline and convert it to arginine, which is returned to the blood. This means that impaired small bowel or renal function can reduce arginine synthesis and thus create a dietary requirement for arginine. For such a person, arginine would become "essential". Synthesis of arginine from citrulline also occurs at a low level in many other cells, and cellular capacity for arginine synthesis can be markedly increased under circumstances that increase the production of inducible nitric oxide synthase (NOS). This allows citrulline, a byproduct of the NOS-catalyzed production of nitric oxide, to be recycled to arginine in a pathway known as the citrulline to nitric oxide (citrulline-NO) or arginine-citrulline pathway. This is demonstrated by the fact that, in many cell types, nitric oxide synthesis can be supported to some extent by citrulline, and not just by arginine. This recycling is not quantitative, however, because citrulline accumulates in nitric oxide producing cells along with nitrate and nitrite, the stable end-products of nitric oxide breakdown. Function Arginine plays an important role in cell division, wound healing, removing ammonia from the body, immune function, and the release of hormones. It is a precursor for the synthesis of nitric oxide (NO), making it important in the regulation of blood pressure. Arginine is necessary for T-cells to function in the body, and can lead to their deregulation if depleted. Proteins Arginine's side chain is amphipathic, because at physiological pH it contains a positively charged guanidinium group, which is highly polar, at the end of a hydrophobic aliphatic hydrocarbon chain. Because globular proteins have hydrophobic interiors and hydrophilic surfaces, arginine is typically found on the outside of the protein, where the hydrophilic head group can interact with the polar environment, for example taking part in hydrogen bonding and salt bridges. For this reason, it is frequently found at the interface between two proteins. The aliphatic part of the side chain sometimes remains below the surface of the protein. Arginine residues in proteins can be deiminated by PAD enzymes to form citrulline, in a post-translational modification process called citrullination.This is important in fetal development, is part of the normal immune process, as well as the control of gene expression, but is also significant in autoimmune diseases. Another post-translational modification of arginine involves methylation by protein methyltransferases. Precursor Arginine is the immediate precursor of nitric oxide, an important signaling molecule which can act as a second messenger, as well as an intercellular messenger which regulates vasodilation, and also has functions in the immune system's reaction to infection. Arginine is also a precursor for urea, ornithine, and agmatine; is necessary for the synthesis of creatine; and can also be used for the synthesis of polyamines (mainly through ornithine and to a lesser degree through agmatine, citrulline, and glutamate). 
The presence of asymmetric dimethylarginine (ADMA), a close relative, inhibits the nitric oxide reaction; therefore, ADMA is considered a marker for vascular disease, just as L-arginine is considered a sign of a healthy endothelium. Structure The amino acid side-chain of arginine consists of a 3-carbon aliphatic straight chain, the distal end of which is capped by a guanidinium group, which has a pKa of 13.8, and is therefore always protonated and positively charged at physiological pH. Because of the conjugation between the double bond and the nitrogen lone pairs, the positive charge is delocalized, enabling the formation of multiple hydrogen bonds. Research Growth hormone Intravenously administered arginine is used in growth hormone stimulation tests because it stimulates the secretion of growth hormone. A review of clinical trials concluded that oral arginine increases growth hormone, but decreases growth hormone secretion, which is normally associated with exercising. However, a more recent trial reported that although oral arginine increased plasma levels of L-arginine it did not cause an increase in growth hormone. Herpes-Simplex Virus (Cold sores) Research from 1964 into amino acid requirements of herpes simplex virus in human cells indicated that "...the lack of arginine or histidine, and possibly the presence of lysine, would interfere markedly with virus synthesis", but concludes that "no ready explanation is available for any of these observations". Further reviews conclude that "lysine's efficacy for herpes labialis may lie more in prevention than treatment." and that "the use of lysine for decreasing the severity or duration of outbreaks" is not supported, while further research is needed. A 2017 study concludes that "clinicians could consider advising patients that there is a theoretical role of lysine supplementation in the prevention of herpes simplex sores but the research evidence is insufficient to back this. Patients with cardiovascular or gallbladder disease should be cautioned and warned of the theoretical risks." High blood pressure A meta-analysis showed that L-arginine reduces blood pressure with pooled estimates of 5.4 mmHg for systolic blood pressure and 2.7 mmHg for diastolic blood pressure. Supplementation with -arginine reduces diastolic blood pressure and lengthens pregnancy for women with gestational hypertension, including women with high blood pressure as part of pre-eclampsia. It did not lower systolic blood pressure or improve weight at birth. Schizophrenia Both liquid chromatography and liquid chromatography/mass spectrometric assays have found that brain tissue of deceased people with schizophrenia shows altered arginine metabolism. Assays also confirmed significantly reduced levels of γ-aminobutyric acid (GABA), but increased agmatine concentration and glutamate/GABA ratio in the schizophrenia cases. Regression analysis indicated positive correlations between arginase activity and the age of disease onset and between L-ornithine level and the duration of illness. Moreover, cluster analyses revealed that L-arginine and its main metabolites L-citrulline, L-ornithine and agmatine formed distinct groups, which were altered in the schizophrenia group. 
Despite this, the biological basis of schizophrenia is still poorly understood; a number of factors, such as dopamine hyperfunction, glutamatergic hypofunction, GABAergic deficits, cholinergic system dysfunction, stress vulnerability and neurodevelopmental disruption, have been linked to the aetiology and/or pathophysiology of the disease. Raynaud's phenomenon Oral L-arginine has been shown to reverse digital necrosis in Raynaud syndrome. Safety and potential drug interactions L-arginine is recognized as safe (GRAS-status) at intakes of up to 20 grams per day. L-arginine is found in many foods, such as fish, poultry, and dairy products, and is used as a dietary supplement. It may interact with various prescription drugs and herbal supplements. See also Arginine glutamate AAKG Canavanine and canaline are toxic analogs of arginine and ornithine. References Sources External links NIST Chemistry Webbook L-arginine, Mayo Clinic Proteinogenic amino acids Glucogenic amino acids Alpha-Amino acids Basic amino acids Guanidines Urea cycle
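The statement above that the guanidinium group (pKa about 13.8) is always protonated at physiological pH can be made quantitative with a Henderson–Hasselbalch estimate; this worked figure is an illustration added here, not a value from the article's sources.

```latex
% Illustrative Henderson-Hasselbalch estimate for arginine's guanidinium group,
% taking pKa = 13.8 (as quoted above) and physiological pH = 7.4 (assumed):
\[
  \frac{[\text{protonated}]}{[\text{deprotonated}]}
    = 10^{\,\mathrm{p}K_a - \mathrm{pH}}
    = 10^{\,13.8 - 7.4}
    = 10^{6.4} \approx 2.5 \times 10^{6}
\]
% Fewer than one side chain in a million is deprotonated, which is why arginine
% is treated as carrying a fixed positive charge in folded proteins.
```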
Arginine
[ "Chemistry" ]
2,555
[ "Guanidines", "Functional groups" ]
63,551
https://en.wikipedia.org/wiki/Serine
Serine (symbol Ser or S) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), a carboxyl group (which is in the deprotonated −COO− form under biological conditions), and a side chain consisting of a hydroxymethyl group, classifying it as a polar amino acid. It can be synthesized in the human body under normal physiological circumstances, making it a nonessential amino acid. It is encoded by the codons UCU, UCC, UCA, UCG, AGU and AGC. Occurrence This compound is one of the proteinogenic amino acids. Only the L-stereoisomer appears naturally in proteins. It is not essential to the human diet, since it is synthesized in the body from other metabolites, including glycine. Serine was first obtained from silk protein, a particularly rich source, in 1865 by Emil Cramer. Its name is derived from the Latin for silk, sericum. Serine's structure was established in 1902. Biosynthesis The biosynthesis of serine starts with the oxidation of 3-phosphoglycerate (an intermediate from glycolysis) to 3-phosphohydroxypyruvate and NADH by phosphoglycerate dehydrogenase. Reductive amination (transamination) of this ketone by phosphoserine transaminase yields 3-phosphoserine (O-phosphoserine) which is hydrolyzed to serine by phosphoserine phosphatase. In bacteria such as E. coli these enzymes are encoded by the genes serA (EC 1.1.1.95), serC (EC 2.6.1.52), and serB (EC 3.1.3.3). Serine hydroxymethyltransferase (SHMT) also catalyzes the biosynthesis of glycine (retro-aldol cleavage) from serine, transferring the resulting formaldehyde synthon to 5,6,7,8-tetrahydrofolate. However, that reaction is reversible, and will convert excess glycine to serine. SHMT is a pyridoxal phosphate (PLP) dependent enzyme. Synthesis and reactions Industrially, L-serine is produced from glycine and methanol catalyzed by hydroxymethyltransferase. Racemic serine can be prepared in the laboratory from methyl acrylate in several steps. Hydrogenation of serine gives the diol serinol. Biological function Metabolic Serine is important in metabolism in that it participates in the biosynthesis of purines and pyrimidines. It is the precursor to several amino acids including glycine and cysteine, as well as tryptophan in bacteria. It is also the precursor to numerous other metabolites, including sphingolipids and folate, which is the principal donor of one-carbon fragments in biosynthesis. Signaling D-Serine, synthesized in neurons by serine racemase from L-serine (its enantiomer), serves as a neuromodulator by coactivating NMDA receptors, making them able to open if they then also bind glutamate. D-serine is a potent agonist at the glycine site (NR1) of canonical diheteromeric NMDA receptors. For the receptor to open, glutamate and either glycine or D-serine must bind to it; in addition a pore blocker must not be bound (e.g. Mg2+ or Zn2+). Some research has shown that D-serine is a more potent agonist at the NMDAR glycine site than glycine itself. However, D-serine has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors through the glycine binding site on the GluN3 subunit. Ligands D-serine was thought to exist only in bacteria until relatively recently; it was the second D amino acid discovered to naturally exist in humans, present as a signaling molecule in the brain, soon after the discovery of D-aspartate. 
Had D amino acids been discovered in humans sooner, the glycine site on the NMDA receptor might instead be named the D-serine site. Apart from central nervous system, D-serine plays a signaling role in peripheral tissues and organs such as cartilage, kidney, and corpus cavernosum. Gustatory sensation Pure D-serine is an off-white crystalline powder with a very faint musty aroma. D-Serine is sweet with an additional minor sour taste at medium and high concentrations. Clinical significance Serine deficiency disorders are rare defects in the biosynthesis of the amino acid L-serine. At present three disorders have been reported: 3-phosphoglycerate dehydrogenase deficiency 3-phosphoserine phosphatase deficiency Phosphoserine aminotransferase deficiency These enzyme defects lead to severe neurological symptoms such as congenital microcephaly and severe psychomotor retardation and in addition, in patients with 3-phosphoglycerate dehydrogenase deficiency to intractable seizures. These symptoms respond to a variable degree to treatment with L-serine, sometimes combined with glycine. Response to treatment is variable and the long-term and functional outcome is unknown. To provide a basis for improving the understanding of the epidemiology, genotype/phenotype correlation and outcome of these diseases their impact on the quality of life of patients, as well as for evaluating diagnostic and therapeutic strategies a patient registry was established by the noncommercial International Working Group on Neurotransmitter Related Disorders (iNTD). Besides disruption of serine biosynthesis, its transport may also become disrupted. One example is spastic tetraplegia, thin corpus callosum, and progressive microcephaly, a disease caused by mutations that affect the function of the neutral amino acid transporter A. Research for therapeutic use The classification of L-serine as a non-essential amino acid has come to be considered as conditional, since vertebrates such as humans cannot always synthesize optimal quantities over entire lifespans. Safety of L-serine has been demonstrated in an FDA-approved human phase I clinical trial with Amyotrophic Lateral Sclerosis, ALS, patients (ClinicalTrials.gov identifier: NCT01835782), but treatment of ALS symptoms has yet to be shown. A 2011 meta-analysis found adjunctive sarcosine to have a medium effect size for negative and total symptoms of schizophrenia. There also is evidence that L‐serine could acquire a therapeutic role in diabetes. D-Serine is being studied in rodents as a potential treatment for schizophrenia. D-Serine also has been described as a potential biomarker for early Alzheimer's disease (AD) diagnosis, due to a relatively high concentration of it in the cerebrospinal fluid of probable AD patients. D-serine, which is made in the brain, has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors mitigating neuron loss in an animal model of temporal lobe epilepsy. D-Serine has been theorized as a potential treatment for sensorineural hearing disorders such as hearing loss and tinnitus. See also Isoserine Homoserine (isothreonine) Serine octamer cluster References External links Serine MS Spectrum Alpha-Amino acids Proteinogenic amino acids Glucogenic amino acids NMDA receptor agonists Glycine receptor agonists Aldols Amino alcohols Inhibitory amino acids
Serine
[ "Chemistry" ]
1,679
[ "Organic compounds", "Amino alcohols" ]
63,552
https://en.wikipedia.org/wiki/Amylase
An amylase () is an enzyme that catalyses the hydrolysis of starch (Latin ) into sugars. Amylase is present in the saliva of humans and some other mammals, where it begins the chemical process of digestion. Foods that contain large amounts of starch but little sugar, such as rice and potatoes, may acquire a slightly sweet taste as they are chewed because amylase degrades some of their starch into sugar. The pancreas and salivary gland make amylase (alpha amylase) to hydrolyse dietary starch into disaccharides and trisaccharides which are converted by other enzymes to glucose to supply the body with energy. Plants and some bacteria also produce amylase. Specific amylase proteins are designated by different Greek letters. All amylases are glycoside hydrolases and act on α-1,4-glycosidic bonds. Classification α-Amylase The α-amylases () (CAS 9014–71–5) (alternative names: 1,4-α-D-glucan glucanohydrolase; glycogenase) are calcium metalloenzymes. By acting at random locations along the starch chain, α-amylase breaks down long-chain saccharides, ultimately yielding either maltotriose and maltose from amylose, or maltose, glucose and "limit dextrin" from amylopectin. They belong to glycoside hydrolase family 13 (https://www.cazypedia.org/index.php/Glycoside_Hydrolase_Family_13). Because it can act anywhere on the substrate, α-amylase tends to be faster-acting than β-amylase. In animals, it is a major digestive enzyme, and its optimum pH is 6.7–7.0. In human physiology, both the salivary and pancreatic amylases are α-amylases. The α-amylase form is also found in plants, fungi (ascomycetes and basidiomycetes) and bacteria (Bacillus). β-Amylase Another form of amylase, β-amylase () (alternative names: 1,4-α-D-glucan maltohydrolase; glycogenase; saccharogen amylase) is also synthesized by bacteria, fungi, and plants. Working from the non-reducing end, β-amylase catalyzes the hydrolysis of the second α-1,4 glycosidic bond, cleaving off two glucose units (maltose) at a time. During the ripening of fruit, β-amylase breaks starch into maltose, resulting in the sweet flavor of ripe fruit. They belong to glycoside hydrolase family 14. Both α-amylase and β-amylase are present in seeds; β-amylase is present in an inactive form prior to germination, whereas α-amylase and proteases appear once germination has begun. Many microbes also produce amylase to degrade extracellular starches. Animal tissues do not contain β-amylase, although it may be present in microorganisms contained within the digestive tract. The optimum pH for β-amylase is 4.0–5.0. γ-Amylase γ-Amylase () (alternative names: Glucan 1,4-a-glucosidase; amyloglucosidase; exo-1,4-α-glucosidase; glucoamylase; lysosomal α-glucosidase; 1,4-α-D-glucan glucohydrolase) will cleave α(1–6) glycosidic linkages, as well as the last α-1,4 glycosidic bond at the nonreducing end of amylose and amylopectin, yielding glucose. The γ-amylase has the most acidic optimum pH of all amylases because it is most active around pH 3. They belong to a variety of different glycoside hydrolase families, such as glycoside hydrolase family 15 in fungi, glycoside hydrolase family 31 of human MGAM, and glycoside hydrolase family 97 of bacterial forms. Uses Fermentation α- and β-amylases are important in brewing beer and liquor made from sugars derived from starch. In fermentation, yeast ingests sugars and excretes ethanol. In beer and some liquors, the sugars present at the beginning of fermentation have been produced by "mashing" grains or other starch sources (such as potatoes). 
In traditional beer brewing, malted barley is mixed with hot water to create a "mash", which is held at a given temperature to allow the amylases in the malted grain to convert the barley's starch into sugars. Different temperatures optimize the activity of alpha or beta amylase, resulting in different mixtures of fermentable and unfermentable sugars. In selecting mash temperature and grain-to-water ratio, a brewer can change the alcohol content, mouthfeel, aroma, and flavor of the finished beer. In some historic methods of producing alcoholic beverages, the conversion of starch to sugar starts with the brewer chewing grain to mix it with saliva. This practice continues to be practiced in home production of some traditional drinks, such as chhaang in the Himalayas, chicha in the Andes and kasiri in Brazil and Suriname. Flour additive Amylases are used in breadmaking and to break down complex sugars, such as starch (found in flour), into simple sugars. Yeast then feeds on these simple sugars and converts it into the waste products of ethanol and carbon dioxide. This imparts flavour and causes the bread to rise. While amylases are found naturally in yeast cells, it takes time for the yeast to produce enough of these enzymes to break down significant quantities of starch in the bread. This is the reason for long fermented doughs such as sourdough. Modern breadmaking techniques have included amylases (often in the form of malted barley) into bread improver, thereby making the process faster and more practical for commercial use. α-Amylase is often listed as an ingredient on commercially package-milled flour. Bakers with long exposure to amylase-enriched flour are at risk of developing dermatitis or asthma. Molecular biology In molecular biology, the presence of amylase can serve as an additional method of selecting for successful integration of a reporter construct in addition to antibiotic resistance. As reporter genes are flanked by homologous regions of the structural gene for amylase, successful integration will disrupt the amylase gene and prevent starch degradation, which is easily detectable through iodine staining. Medical uses Amylase also has medical applications in the use of pancreatic enzyme replacement therapy (PERT). It is one of the components in Sollpura (liprotamase) to help in the breakdown of saccharides into simple sugars. Other uses An inhibitor of alpha-amylase, called phaseolamin, has been tested as a potential diet aid. When used as a food additive, amylase has E number E1100, and may be derived from pig pancreas or mold fungi. Bacilliary amylase is also used in clothing and dishwasher detergents to dissolve starches from fabrics and dishes. Factory workers who work with amylase for any of the above uses are at increased risk of occupational asthma. Five to nine percent of bakers have a positive skin test, and a fourth to a third of bakers with breathing problems are hypersensitive to amylase. Hyperamylasemia Blood serum amylase may be measured for purposes of medical diagnosis. A higher than normal concentration may reflect any of several medical conditions, including acute inflammation of the pancreas (which may be measured concurrently with the more specific lipase), perforated peptic ulcer, torsion of an ovarian cyst, strangulation, ileus, mesenteric ischemia, macroamylasemia and mumps. Amylase may be measured in other body fluids, including urine and peritoneal fluid. A January 2007 study from Washington University in St. 
Louis suggests that saliva tests of the enzyme could be used to indicate sleep deficits, as the enzyme increases its activity in correlation with the length of time a subject has been deprived of sleep. History In 1831, Erhard Friedrich Leuchs (1800–1837) described the hydrolysis of starch by saliva, due to the presence of an enzyme in saliva, "ptyalin", an amylase. It was named after the Ancient Greek word for saliva. The modern history of enzymes began in 1833, when French chemists Anselme Payen and Jean-François Persoz isolated an amylase complex from germinating barley and named it "diastase". It is from this term that all subsequent enzyme names tend to end in the suffix -ase. In 1862, a Russian biochemist (1838–1923) separated pancreatic amylase from trypsin. Evolution Salivary amylase Saccharides are a food source rich in energy. Large polymers such as starch are partially hydrolyzed in the mouth by the enzyme amylase before being cleaved further into sugars. Many mammals have seen great expansions in the copy number of the amylase gene. These duplications allow for the pancreatic amylase AMY2 to re-target to the salivary glands, allowing animals to detect starch by taste and to digest starch more efficiently and in higher quantities. This has happened independently in mice, rats, dogs, pigs, and most importantly, humans after the agricultural revolution. Following the agricultural revolution 12,000 years ago, the human diet began to shift more to plant and animal domestication in place of hunting and gathering. Starch has become a staple of the human diet. Despite the obvious benefits, early humans did not possess salivary amylase, a trend that is also seen in evolutionary relatives of the human, such as chimpanzees and bonobos, who possess either one or no copies of the gene responsible for producing salivary amylase. Like in other mammals, the pancreatic alpha-amylase AMY2 was duplicated multiple times. One event allowed it to evolve salivary specificity, leading to the production of amylase in the saliva (named in humans as AMY1). The 1p21.1 region of human chromosome 1 contains many copies of these genes, variously named AMY1A, AMY1B, AMY1C, AMY2A, AMY2B, and so on. However, not all humans possess the same number of copies of the AMY1 gene. Populations known to rely more on saccharides have a higher number of AMY1 copies than human populations that, by comparison, consume little starch. The number of AMY1 gene copies in humans can range from six copies in agricultural groups such as European-American and Japanese (two high starch populations) to only two to three copies in hunter-gatherer societies such as the Biaka, Datog, and Yakuts. The correlation that exists between starch consumption and the number of AMY1 copies specific to a population suggests that more AMY1 copies in high starch populations have been selected for by natural selection and are considered the favorable phenotype for those individuals. Therefore, it is most likely that the benefit of an individual possessing more copies of AMY1 in a high starch population increases fitness and produces healthier, fitter offspring. This fact is especially apparent when comparing geographically close populations with different eating habits that possess a different number of copies of the AMY1 gene. Such is the case for some Asian populations that have been shown to possess few AMY1 copies relative to some agricultural populations in Asia. 
This offers strong evidence that natural selection has acted on this gene as opposed to the possibility that the gene has spread through genetic drift. Variations of amylase copy number in dogs mirrors that of human populations, suggesting they acquired the extra copies as they followed humans around. Unlike humans whose amylase levels depend on starch content in diet, wild animals eating a broad range of foods tend to have more copies of amylase. This may have to do with mainly detection of starch as opposed to digestion. References Chemical pathology EC 3.2.1 Enzymes Food additives Saliva Enzymes of known structure
Amylase
[ "Chemistry", "Biology" ]
2,635
[ "Biochemistry", "Chemical pathology", "Excretion", "Saliva" ]
63,577
https://en.wikipedia.org/wiki/Cashew
Cashew is the common name of a tropical evergreen tree Anacardium occidentale, in the family Anacardiaceae. It is native to South America and is the source of the cashew nut and the cashew apple, an accessory fruit. The tree can grow as tall as , but the dwarf cultivars, growing up to , prove more profitable, with earlier maturity and greater yields. The cashew nut is edible and is eaten on its own as a snack, used in recipes, or processed into cashew cheese or cashew butter. The nut is often simply called a 'cashew'. In 2019, four million tonnes of cashew nuts were produced globally, with Ivory Coast and India the leading producers. As well as the nut and fruit, the plant has several other uses. The shell of the cashew seed yields derivatives that can be used in many applications including lubricants, waterproofing, paints, and, starting in World War II, arms production. The cashew apple is a light reddish to yellow fruit, whose pulp and juice can be processed into a sweet, astringent fruit drink or fermented and distilled into liquor. Description The cashew tree is large and evergreen, growing to tall, with a short, often irregularly shaped trunk. The leaves are spirally arranged, leathery textured, elliptic to obovate, long and broad, with smooth margins. The flowers are produced in a panicle or corymb up to long; each flower is small, pale green at first, then turning reddish, with five slender, acute petals long. The largest cashew tree in the world covers an area around and is located in Natal, Brazil. The fruit of the cashew tree is an accessory fruit (sometimes called a pseudocarp or false fruit). What appears to be the fruit is an oval or pear-shaped structure, a hypocarpium, that develops from the pedicel and the receptacle of the cashew flower. Called the cashew apple, better known in Central America as , it ripens into a yellow or red structure about long. The true fruit of the cashew tree is a kidney-shaped or boxing glove-shaped drupe that grows at the end of the cashew apple. The drupe first develops on the tree and then the pedicel expands to become the cashew apple. The drupe becomes the true fruit, a single shell-encased seed, which is often considered a nut in the culinary sense. The seed is surrounded by a double shell that contains an allergenic phenolic resin, anacardic acid—which is a potent skin irritant chemically related to the better-known and also toxic allergenic oil urushiol, which is found in the related poison ivy and lacquer tree. Etymology The English name derives from the Portuguese name for the fruit of the cashew tree: (), also known as , which itself is from the Tupi word , literally meaning "nut that produces itself". The generic name Anacardium is composed of the Greek prefix ana- (), the Greek cardia (), and the Neo-Latin suffix . It possibly refers to the heart shape of the fruit, to "the top of the fruit stem" or to the seed. The word anacardium was earlier used to refer to Semecarpus anacardium (the marking nut tree) before Carl Linnaeus transferred it to the cashew; both plants are in the same family. The epithet occidentale derives from the Western (or Occidental) world. The plant has diverse common names in various languages among its wide distribution range, including (French) with the fruit referred to as , (), or (Portuguese). Distribution and habitat The species is native to tropical South America and later was distributed around the world in the 1500s by Portuguese explorers. 
Portuguese colonists in Brazil began exporting cashew nuts as early as the 1550s. The Portuguese took it to Goa, formerly Estado da Índia Portuguesa in India, between 1560 and 1565. From there, it spread throughout Southeast Asia and eventually Africa. Cultivation The cashew tree is cultivated in the tropics between 25°N and 25°S, and is well-adapted to hot lowland areas with a pronounced dry season, where the mango and tamarind trees also thrive. The traditional cashew tree is tall (up to ) and takes three years from planting before it starts production, and eight years before economic harvests can begin. More recent breeds, such as the dwarf cashew trees, are up to tall and start producing after the first year, with economic yields after three years. The cashew nut yields for the traditional tree are about , in contrast to over a ton per hectare for the dwarf variety. Grafting and other modern tree management technologies are used to further improve and sustain cashew nut yields in commercial orchards. Production In 2021, global production of cashew nuts (as the kernel) was 3.7 million tonnes, led by Ivory Coast and India with a combined 43% of the world total (table). Trade The top ten exporters of cashew nuts (in-shell; HS Code 080131) in value (USD) in 2021 were Ghana, Tanzania, Guinea-Bissau, Nigeria, Ivory Coast, Burkina Faso, Senegal, Indonesia, United Arab Emirates (UAE), and Guinea. From 2017 to 2021, the top ten exporters of cashew nuts (shelled; HS Code 080132) were Vietnam, India, the Netherlands, Germany, Brazil, Ivory Coast, Nigeria, Indonesia, Burkina Faso, and the United States. In 2014, the rapid growth of cashew cultivation in the Ivory Coast made this country the top African exporter. Fluctuations in world market prices, poor working conditions, and low pay for local harvesting have caused discontent in the cashew nut industry. Almost all cashews produced in Africa between 2000 and 2019 were exported as raw nuts which are much less profitable than shelled nuts. One of the goals of the African Cashew Alliance is to promote Africa's cashew processing capabilities to improve the profitability of Africa's cashew industry. In 2011, Human Rights Watch reported that forced labour was used for cashew processing in Vietnam. Around 40,000 current or former drug users were forced to remove shells from "blood cashews" or perform other work and often beaten at more than 100 rehabilitation centers. Toxicity Some people are allergic to cashews, but they are a less frequent allergen than other tree nuts or peanuts. For up to 6% of children and 3% of adults, consuming cashews may cause allergic reactions, ranging from mild discomfort to life-threatening anaphylaxis. These allergies are triggered by the proteins found in tree nuts, and cooking often does not remove or change these proteins. Reactions to cashew and tree nuts can also occur as a consequence of hidden nut ingredients or traces of nuts that may inadvertently be introduced during food processing, handling, or manufacturing. The shell of the cashew nut contains oil compounds that can cause contact dermatitis similar to poison ivy, primarily resulting from the phenolic lipids, anacardic acid, and cardanol. Because it can cause dermatitis, cashews are typically not sold in the shell to consumers. Readily and inexpensively extracted from the waste shells, cardanol is under research for its potential applications in nanomaterials and biotechnology. 
Uses Nutrition Raw cashew nuts are 5% water, 30% carbohydrates, 44% fat, and 18% protein (table). In a 100-gram reference amount, raw cashews provide 553 kilocalories, 67% of the Daily Value (DV) in total fats, 36% DV of protein, 13% DV of dietary fiber and 11% DV of carbohydrates. Cashew nuts are rich sources (20% or more of the DV) of dietary minerals, including particularly copper, manganese, phosphorus, and magnesium (79–110% DV), and of thiamin, vitamin B6 and vitamin K (32–37% DV). Iron, potassium, zinc, and selenium are present in significant content (14–61% DV) (table). Cashews (100 g, raw) contain of beta-sitosterol. Nut and shell Culinary uses for cashew seeds in snacking and cooking are similar to those for all tree seeds called nuts. Cashews are commonly used in South Asian cuisine, whole for garnishing sweets or curries, or ground into a paste that forms a base of sauces for curries (e.g., korma), or some sweets (e.g., kaju barfi). It is also used in powdered form in the preparation of several Indian sweets and desserts. In Goan cuisine, both roasted and raw kernels of Goa Kaju are used whole for making curries and sweets. Cashews are also used in Thai and Chinese cuisines, generally in whole form. In the Philippines, cashew is a known product of Antipolo and is eaten with suman. The province of Pampanga also has a sweet dessert called turrones de casuy, which is cashew marzipan wrapped in white wafers. In Indonesia, roasted and salted cashews are called kacang mete or kacang mede, while the cashew apple is called jambu monyet ( 'monkey rose apple'). In the 21st century, cashew cultivation increased in several African countries to meet the manufacturing demands for cashew milk, a plant milk alternative to dairy milk. In Mozambique, bolo polana is a cake prepared using powdered cashews and mashed potatoes as the main ingredients. This dessert is common in South Africa. Husk The cashew nut kernel has a slight curvature and two cotyledons, each representing around 20–25% of the weight of the nut. It is encased in a reddish-brown membrane called a husk, which accounts for approximately 5% of the total nut. Cashew nut husk is used in emerging industrial applications, such as an adsorbent, composites, biopolymers, dyes and enzyme synthesis. Apple The mature cashew apple can be eaten fresh, cooked in curries, or fermented into vinegar, citric acid or an alcoholic drink. It is also used to make preserves, chutneys, and jams in some countries, such as India and Brazil. In many countries, particularly in South America, the cashew apple is used to flavor drinks, both alcoholic and nonalcoholic. In Brazil, cashew fruit juice and fruit pulp are used in the production of sweets, and juice mixed with alcoholic beverages such as cachaça, and as flour, milk, or cheese. In Panama, the cashew fruit is cooked with water and sugar for a prolonged time to make a sweet, brown, paste-like dessert called ( being a Spanish name for cashew). Cashew nuts are more widely traded than cashew apples, because the fruit, unlike the nut, is easily bruised and has a very limited shelf life. Cashew apple juice, however, may be used for manufacturing blended juices. When the apple is consumed, its astringency is sometimes removed by steaming the fruit for five minutes before washing it in cold water. Steeping the fruit in boiling salt water for five minutes also reduces the astringency. 
In Cambodia, where the plant is usually grown as an ornamental rather than an economic tree, the fruit is a delicacy and is eaten with salt. Alcohol In the Indian state of Goa, the ripened cashew apples are mashed, and the juice, called "neero", is extracted and kept for fermentation for a few days. This fermented juice then undergoes a double distillation process. The resulting beverage is called feni or fenny. Feni is about 40–42% alcohol (80–84 proof). The single-distilled version is called urrak, which is about 15% alcohol (30 proof). In Tanzania, the cashew apple (bibo in Swahili) is dried and reconstituted with water and fermented, then distilled to make a strong liquor called gongo. Nut oil Cashew nut oil is a dark yellow oil derived from pressing the cashew nuts (typically from lower-value broken chunks created accidentally during processing) and is used for cooking or as a salad dressing. The highest quality oil is produced from a single cold pressing. Shell oil Cashew nutshell liquid (CNSL) or cashew shell oil (CAS registry number 8007-24-7) is a natural resin with a yellowish sheen found in the honeycomb structure of the cashew nutshell, and is a byproduct of processing cashew nuts. As it is a strong irritant, it should not be confused with edible cashew nut oil. It is dangerous to handle in small-scale processing of the shells, but is itself a raw material with multiple uses. It is used in tropical folk medicine and for anti-termite treatment of timber. Its composition varies depending on how it is processed. Cold, solvent-extracted CNSL is mostly composed of anacardic acids (70%), cardol (18%) and cardanol (5%). Heating CNSL decarboxylates the anacardic acids, producing a technical grade of CNSL that is rich in cardanol. Distillation of this material gives distilled, technical CNSL containing 78% cardanol and 8% cardol (cardol has one more hydroxyl group than cardanol). This process also reduces the degree of thermal polymerization of the unsaturated alkyl-phenols present in CNSL. Anacardic acid is also used in the chemical industry for the production of cardanol, which is used for resins, coatings, and frictional materials. These substances are skin allergens, like lacquer and the oils of poison ivy, and they present a danger during manual cashew processing. This natural oil phenol has interesting chemical structural features that can be modified to create a wide spectrum of biobased monomers. These capitalize on the chemically versatile construct, which contains three functional groups: the aromatic ring, the hydroxyl group, and the double bonds in the flanking alkyl chain. These include polyols, which have recently seen increased demand for their biobased origin and key chemical attributes such as high reactivity, range of functionalities, reduction in blowing agents, and naturally occurring fire retardant properties in the field of rigid polyurethanes, aided by their inherent phenolic structure and larger number of reactive units per unit mass. CNSL may be used as a resin for carbon composite products. CNSL-based novolac is another versatile industrial monomer deriving from cardanol typically used as a reticulating agent (hardener) for epoxy matrices in composite applications providing good thermal and mechanical properties to the final composite material. Animal feed Discarded cashew nuts unfit for human consumption, alongside the residues of oil extraction from cashew kernels, can be fed to livestock. Animals can also eat the leaves of cashew trees. 
Other uses As well as the nut and fruit, the plant has several other uses. In Cambodia, the bark gives a yellow dye, the timber is used in boat-making, and for house-boards, and the wood makes excellent charcoal. The shells yield a black oil used as a preservative and water-proofing agent in varnishes, cement, and as a lubricant or timber seal. Timber is used to manufacture furniture, boats, packing crates, and charcoal. Its juice turns black on exposure to air, providing an indelible ink. See also List of culinary nuts Semecarpus anacardium (the Oriental Anacardium), a native of India and closely related to the cashew References External links Anacardium Crops originating from South America Drupes Edible nuts and seeds Flora of Southern America Fruit trees Medicinal plants of South America Nut oils Plants described in 1753 Resins Tropical agriculture
Cashew
[ "Physics" ]
3,404
[ "Amorphous solids", "Unsolved problems in physics", "Resins" ]
63,610
https://en.wikipedia.org/wiki/Peafowl
Peafowl is a common name for two bird species of the genus Pavo and one species of the closely related genus Afropavo within the tribe Pavonini of the family Phasianidae (the pheasants and their allies). Male peafowl are referred to as peacocks, and female peafowl are referred to as peahens. The two Asiatic species are the blue or Indian peafowl originally from the Indian subcontinent, and the green peafowl from Southeast Asia. The Congo peafowl, native only to the Congo Basin, is not a true peafowl. Male peafowl are known for their piercing calls and their extravagant plumage. The latter is especially prominent in the Asiatic species, which have an eye-spotted "tail" or "train" of covert feathers, which they display as part of a courtship ritual. The functions of the elaborate iridescent coloration and large "train" of peacocks have been the subject of extensive scientific debate. Charles Darwin suggested that they served to attract females, and the showy features of the males had evolved by sexual selection. More recently, Amotz Zahavi proposed in his handicap principle that these features acted as honest signals of the males' fitness, since less-fit males would be disadvantaged by the difficulty of surviving with such large and conspicuous structures. Description The Indian peacock (Pavo cristatus) has iridescent blue and green plumage, mostly metal-like blue and green. In both species, females are a little smaller than males in terms of weight and wingspan, but males are significantly longer due to the "tail", also known as a "train". The peacock train consists not of tail quill feathers but highly elongated upper tail coverts. These feathers are marked with eyespots, best seen when a peacock fans his tail. All species have a crest atop the head. The Indian peahen has a mixture of dull grey, brown, and green in her plumage. The female also displays her plumage to ward off female competition or signal danger to her young. Male green peafowls (Pavo muticus) have green and bronze or gold plumage, and black wings with a sheen of blue. Unlike Indian peafowl, the green peahen is similar to the male, but has shorter upper tail coverts, a more coppery neck, and overall less iridescence. Both males and females have spurs. The Congo peacock (Afropavo congensis) male does not display his covert feathers, but uses his actual tail feathers during courtship displays. These feathers are much shorter than those of the Indian and green species, and the ocelli are much less pronounced. Females of the Indian and African species are dull grey and/or brown. Chicks of both sexes in all the species are cryptically colored. They vary between yellow and tawny, usually with patches of darker brown or light tan and "dirty white" ivory. Mature peahens have been recorded as suddenly growing typically male peacock plumage and making male calls. Research has suggested that changes in mature birds are due to a lack of estrogen from old or damaged ovaries, and that male plumage and calls are the default unless hormonally suppressed. Iridescence and structural coloration As with many birds, vibrant iridescent plumage colors are not primarily pigments, but structural coloration. Optical interference Bragg reflections, based on regular, periodic nanostructures of the barbules (fiber-like components) of the feathers, produce the peacock's colors. 2D photonic-crystal structures within the layers of the barbules cause the coloration of their feathers. Slight changes to the spacing of the barbules result in different colors. 
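The link between barbule spacing and reflected colour can be illustrated with a rough numerical sketch (not taken from the article): treating the nanostructure as an idealized periodic stack and applying the first-order Bragg condition at normal incidence, the reflection peak falls near 2 × n_eff × d. The real barbule is a two-dimensional lattice of melanin rods, so this is only a first-order approximation, and the spacings and effective refractive index below are illustrative assumptions rather than measured peacock data.

```python
# Idealized sketch: first-order Bragg reflection peak at normal incidence
# for a periodic structure, lambda_peak ~ 2 * n_eff * d.
# The spacings and effective index are assumed values for illustration only.
def bragg_peak_nm(spacing_nm, n_eff=1.5, order=1):
    """Approximate reflection peak (nm) of an idealized periodic stack."""
    return 2 * n_eff * spacing_nm / order

for d in (140, 150, 165):  # assumed lattice spacings in nanometres
    print(f"spacing {d} nm -> peak ~ {bragg_peak_nm(d):.0f} nm")
# spacing 140 nm -> peak ~ 420 nm (violet-blue)
# spacing 150 nm -> peak ~ 450 nm (blue)
# spacing 165 nm -> peak ~ 495 nm (blue-green)
```

The point of the sketch is only that a shift of a few tens of nanometres in the periodicity is enough to move the reflection peak across the visible spectrum, which is why slightly different barbule spacings give different feather colours.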
Brown feathers are a mixture of red and blue: one color is created by the periodic structure and the other is created by a Fabry–Pérot interference peak from reflections from the outer and inner boundaries. Color derived from physical structure rather than pigment can vary with viewing angle, causing iridescence. Courtship Most commonly, during a courtship display, the visiting female peahen will stop directly in front of the male peacock, thus providing her with the ability to assess the male at 90° to the surface of the feather. Then, the male will turn and display his feathers about 45° to the right of the sun's azimuth which allows the sunlight to accentuate the iridescence of his train. If the female chooses to interact with the male, he will then turn to face her and shiver his train so as to begin the mating process. Evolution Sexual selection Charles Darwin suggested in The Descent of Man and Selection in Relation to Sex that peafowl plumage may have evolved through sexual selection: Aposematism and natural selection It has been suggested that a peacock's train, loud call, and fearless behavior have been formed by natural selection (with or without sexual selection too), and served as an aposematic display to intimidate predators and rivals. This hypothesis is designed to explain Takahashi's observations that in Japan, neither reproductive success nor physical condition correlate with the train's length, symmetry or number of eyespots. Female choice Multiple hypotheses involving female choice have been posited. One hypothesis is that females choose mates with good genes. Males with more exaggerated secondary sexual characteristics, such as bigger, brighter peacock trains, tend to have better genes in the peahen's eyes. These better genes directly benefit her offspring, as well as her fitness and reproductive success. Runaway selection is another hypothesis. In runaway sexual selection, linked genes in males and females code for sexually dimorphic traits in males, and preference for those traits in females. The close spatial association of alleles for loci involved in the train in males, and for preference for more exuberant trains in females, on the chromosome (linkage disequilibrium) causes a positive feedback loop that exaggerates both the male traits and the female preferences. Another hypothesis is sensory bias, in which females have a preference for a trait in a non-mating context that becomes transferred to mating, such as Merle Jacobs' food-courtship hypothesis, which suggests that peahens are attracted to peacocks for the resemblance of their eye spots to blue berries. Multiple causalities for the evolution of female choice are also possible. The peacock's train and iridescent plumage are perhaps the best-known examples of traits believed to have arisen through sexual selection, though with some controversy. Male peafowl erect their trains to form a shimmering fan in their display for females. Marion Petrie tested whether or not these displays signalled a male's genetic quality by studying a feral population of peafowl in Whipsnade Wildlife Park in southern England. The number of eyespots in the train predicted a male's mating success. She was able to manipulate this success by cutting the eyespots off some of the males' tails: females lost interest in pruned males and became attracted to untrimmed ones. Males with fewer eyespots, thus having lower mating success, suffered from greater predation. 
She allowed females to mate with males with differing numbers of eyespots, and reared the offspring in a communal incubator to control for differences in maternal care. Chicks fathered by more ornamented males weighed more than those fathered by less ornamented males, an attribute generally associated with better survival rate in birds. These chicks were released into the park and recaptured one year later. Those with heavily ornamented feathers were better able to avoid predators and survive in natural conditions. Thus, Petrie's work shows correlations between tail ornamentation, mating success, and increased survival ability in both the ornamented males and their offspring. Furthermore, peafowl and their sexual characteristics have been used in the discussion of the causes for sexual traits. Amotz Zahavi used the excessive tail plumes of male peafowls as evidence for his "handicap principle". Since these trains are likely to be deleterious to an individual's survival (as their brilliance makes them more visible to predators and their length hinders escape from danger), Zahavi argued that only the fittest males could survive the handicap of a large train. Thus, a brilliant train serves as an honest indicator for females that these highly ornamented males are good at surviving for other reasons, so are preferable mates. This theory may be contrasted with Ronald Fisher's hypothesis that male sexual traits are the result of initially arbitrary aesthetic selection by females. In contrast to Petrie's findings, a seven-year Japanese study of free-ranging peafowl concluded that female peafowl do not select mates solely on the basis of their trains. Mariko Takahashi found no evidence that peahens preferred peacocks with more elaborate trains (such as with more eyespots), a more symmetrical arrangement, or a greater length. Takahashi determined that the peacock's train was not the universal target of female mate choice, showed little variance across male populations, and did not correlate with male physiological condition. Adeline Loyau and her colleagues responded that alternative and possibly central explanations for these results had been overlooked. They concluded that female choice might indeed vary in different ecological conditions. Plumage colours as attractants A peacock's copulation success rate depends on the colours of his eyespots (ocelli) and the angle at which they are displayed. The angle at which the ocelli are displayed during courtship is more important in a peahen's choice of males than train size or number of ocelli. Peahens pay careful attention to the different parts of a peacock's train during his display. The lower train is usually evaluated during close-up courtship, while the upper train is more of a long-distance attraction signal. Actions such as train rattling and wing shaking also kept the peahens' attention. Redundant signal hypothesis Although an intricate display catches a peahen's attention, the redundant signal hypothesis also plays a crucial role in keeping this attention on the peacock's display. The redundant signal hypothesis explains that whilst each signal that a male projects is about the same quality, the addition of multiple signals enhances the reliability of that mate. This idea also suggests that the success of multiple signalling is not only due to the repetitiveness of the signal, but also of multiple receivers of the signal. In the peacock species, males congregate a communal display during breeding season and the peahens observe. 
Peacocks first defend their territory through intra-sexual behaviour, defending their areas from intruders. They fight for areas within the congregation to display a strong front for the peahens. Central positions are usually taken by older, dominant males, which influences mating success. Certain morphological and behavioural traits come into play during inter- and intra-sexual selection, which include train length for territory acquisition and visual and vocal displays involved in mate choice by peahens. Behaviour Peafowl are forest birds that nest on the ground, but roost in trees. They are terrestrial feeders. All species of peafowl are believed to be polygamous. In common with other members of the Galliformes, the males possess metatarsal spurs or "thorns" on their legs used during intraspecific territorial fights with some other members of their kind. In courtship, vocalisation is a primary way for peacocks to attract peahens. Some studies suggest that the intricacy of the "song" produced by displaying peacocks is impressive to peafowl. Singing in peacocks usually occurs just before, just after, or sometimes during copulation. Diet Peafowl are omnivores and mostly eat plants, flower petals, seed heads, insects and other arthropods, reptiles, and amphibians. Wild peafowl look for their food by scratching around in leaf litter either early in the morning or at dusk. They retreat to the shade and security of the woods for the hottest portion of the day. These birds are not picky and will eat almost anything they can fit in their beak and digest. They actively hunt insects like ants, crickets and termites; millipedes; and other arthropods and small mammals. Indian peafowl also eat small snakes. Domesticated peafowl may also eat bread and cracked grain such as oats and corn, cheese, cooked rice and sometimes cat food. It has been noticed by keepers that peafowl enjoy protein-rich food including larvae that infest granaries, different kinds of meat and fruit, as well as vegetables including dark leafy greens, broccoli, carrots, beans, beets, and peas. Cultural significance Indian peafowl The peafowl is native to India and significant in its culture. In Hinduism, the Indian peacock is the mount of the god of war, Kartikeya, and the warrior goddess Kaumari, and is also depicted around the goddess Santoshi. During a war with Asuras, Kartikeya split the demon king Surapadman in half. Out of respect for his adversary's prowess in battle, the god converted the two halves into an integral part of himself. One half became a peacock serving as his mount, and the other a rooster adorning his flag. The peacock displays the divine shape of Omkara when it spreads its magnificent plumes into a full-blown circular form. In the Tantric traditions of Hinduism the goddess Tvarita is depicted with peacock feathers. A peacock feather also adorns the crest of the god Krishna. Chandragupta Maurya, the founder of the Mauryan Empire, was born an orphan and raised by a family farming peacocks. According to the Buddhist tradition, the ancestors of the Maurya kings had settled in a region where peacocks (mora in Pali) were abundant. Therefore, they came to be known as "Moriyas", literally, "belonging to the place of peacocks". According to another Buddhist account, these ancestors built a city called Moriya-nagara ("Moriya-city"), which was so called because it was built with the "bricks coloured like peacocks' necks". 
After conquering the Nanda Empire and defeating the Seleucid Empire, the Chandragupta dynasty reigned uncontested during its time. Its royal emblem remained the peacock until Emperor Ashoka changed it to a lion, as seen in the Lion Capital of Ashoka, as well as in his edicts. The peacock continued to represent elegance and royalty in India during medieval times; for instance, the Mughal seat of power was called the Peacock Throne. The peacock is represented in both the Burmese and Sinhalese zodiacs. To the Sinhalese people, the peacock is the third animal of the zodiac of Sri Lanka. Peacocks (often a symbol of pride and vanity) were believed to deliberately consume poisonous substances in order to become immune to them, as well as to make the colours of their resplendent plumage all the more vibrant – seeing as so many poisonous flora and fauna are so colourful due to aposematism, this idea appears to have merit. The Buddhist deity Mahamayuri is depicted seated on a peacock. Peacocks are seen supporting the throne of Amitabha, the ruby red sunset coloured archetypal Buddha of Infinite Light. India adopted the peacock as its national bird in 1963 and it is one of the national symbols of India. Middle East Yazidism Tawûsî Melek (lit. 'Peacock Angel'), one of the central figures of the Yazidi religion, is symbolized with a peacock. In Yazidi creation stories, before the creation of this world, God created seven Divine Beings, of whom Tawûsî Melek was appointed as the leader. God assigned all of the world's affairs to these seven Divine Beings, also often referred to as the Seven Angels or heft sirr ("the Seven Mysteries"). In Yazidism, the peacock is believed to represent the diversity of the world, and the colourfulness of the peacock's feathers is considered to represent all the colours of nature. The feathers of the peacock also symbolize sun rays, from which come light, luminosity and brightness. The peacock opening the feathers of its tail in a circular shape symbolizes the sunrise. Consequently, due to its holiness, Yazidis are not allowed to hunt and eat the peacock, ill-treat it or utter bad words about it. Images of the peacock are also found drawn around the sanctuary of Lalish and on other Yazidi shrines and holy sites, homes, as well as religious, social, cultural and academic centres. Mandaeism In The Baptism of Hibil Ziwa, the Mandaean uthra and emanation Yushamin is described as a peacock. Ancient Greece Ancient Greeks believed that the flesh of peafowl did not decay after death, so it became a symbol of immortality. In Hellenistic imagery, the Greek goddess Hera's chariot was pulled by peacocks, birds not known to Greeks before the conquests of Alexander. Alexander's tutor, Aristotle, refers to it as "the Persian bird". When Alexander saw the birds in India, he was so amazed at their beauty that he threatened the severest penalties for any man who slew one. Claudius Aelianus writes that there were peacocks in India, larger than anywhere else. One myth states that Hera's servant, the hundred-eyed Argus Panoptes, was instructed to guard the woman-turned-cow, Io. Hera had transformed Io into a cow after learning of Zeus's interest in her. Zeus had the messenger of the gods, Hermes, kill Argus through eternal sleep and free Io. According to Ovid, to commemorate her faithful watchman, Hera had the hundred eyes of Argus preserved forever, in the peacock's tail. 
Christianity The symbolism was adopted by early Christianity, thus many early Christian paintings and mosaics show the peacock. The peacock is still used in the Easter season, especially in the east. The "eyes" in the peacock's tail feathers can symbolise the all-seeing Christian God, the Church, or angelic wisdom. The emblem of a pair of peacocks drinking from a vase is used as a symbol of the eucharist and the resurrection, as it represents the Christian believer drinking from the waters of eternal life. The peacock can also symbolise the cosmos if one interprets its tail with its many "eyes" as the vault of heaven dotted by the sun, moon, and stars. Due to the adoption by Augustine of the ancient idea that the peacock's flesh did not decay, the bird was again associated with immortality. In Christian iconography, two peacocks are often depicted either side of the Tree of Life. The symbolic association of peacock feathers with the wings of angels led to the belief that the waving of such liturgical fans resulted in an automated emission of prayers. This affinity between peacocks' and angels' feathers was also expressed in other artistic media, including paintings of angels with peacock feather wings Judaism Among Ashkenazi Jews, the golden peacock is a symbol for joy and creativity, with quills from the bird's feathers being a metaphor for a writer's inspiration. Renaissance The peacock motif was revived in the Renaissance iconography that unified Hera and Juno, and on which European painters focused. Contemporary In 1956, John J. Graham created an abstraction of an 11-feathered peacock logo for American broadcaster NBC. This brightly hued peacock was adopted due to the increase in colour programming. NBC's first colour broadcasts showed only a still frame of the colourful peacock. The emblem made its first on-air appearance on 22 May 1956. The current, six-feathered logo debuted on 12 May 1986. Breeding and colour variations Hybrids between Indian peafowl and Green peafowl are called Spaldings, after the first person to successfully hybridise them, Keith Spalding. Spaldings with a high-green phenotype do much better in cold temperatures than the cold-intolerant green peafowl while still looking like their green parents. Plumage varies between individual spaldings, with some looking far more like green peafowl and some looking far more like blue peafowl, though most visually carry traits of both. In addition to the wild-type "blue" colouration, several hundred variations in colour and pattern are recognised as separate morphs of the Indian Blue among peafowl breeders. Pattern variations include solid-wing/black shoulder (the black and brown stripes on the wing are instead one solid colour), pied, white-eye (the ocelli in a male's eye feathers have white spots instead of black), and silver pied (a mostly white bird with small patches of colour). Colour variations include white, purple, Buford bronze, opal, midnight, charcoal, jade, and taupe, as well as the sex-linked colours purple, cameo, peach, and Sonja's Violeta. Additional colour and pattern variations are first approved by the United Peafowl Association to become officially recognised as a morph among breeders. Alternately-coloured peafowl are born differently coloured than wild-type peafowl, and though each colour is recognisable at hatch, their peachick plumage does not necessarily match their adult plumage. Occasionally, peafowl appear with white plumage. 
Although albino peafowl do exist, this is quite rare, and almost all white peafowl are not albinos; they have a genetic condition called leucism, which causes pigment cells to fail to migrate from the neural crest during development. Leucistic peafowl can produce pigment but not deposit the pigment to their feathers, resulting in a blue-grey eye colour and the complete lack of colouration in their plumage. Pied peafowl are affected by partial leucism, where only some pigment cells fail to migrate, resulting in birds that have colour but also have patches absent of all colour; they, too, have blue-grey eyes. By contrast, true albino peafowl would have a complete lack of melanin, resulting in irises that look red or pink. Leucistic peachicks are born yellow and become fully white as they mature. The black-shouldered or Japanned mutation was initially considered as a subspecies of the Indian peafowl (P. c. nigripennis) (or even a separate species (P. nigripennis)) and was a topic of some interest during Darwin's time. Others had doubts about its taxonomic status, but the English naturalist and biologist Charles Darwin (1809–1882) presented firm evidence for it being a variety under domestication, which treatment is now well established and accepted. It being a colour variation rather than a wild species was important for Darwin to prove, as otherwise it could undermine his theory of slow modification by natural selection in the wild. It is, however, only a case of genetic variation within the population. In this mutation, the adult male is melanistic with black wings. Gastronomy In ancient Rome, peafowl were served as a delicacy. The dish was introduced there in approximately 35 B.C. The poet Horace ridiculed the eating of peafowl, saying they tasted like chicken. Peafowl eggs were also valued. Gaius Petronius in his Satyricon also mocked the ostentation and snobbery of eating peafowl and their eggs. During the Medieval period, various types of fowl were consumed as food, with the poorer populations (such as serfs) consuming more common birds, such as chicken. However, the more wealthy gentry were privileged to eat less usual foods, such as swan, and even peafowl were consumed. On a king's table, a peacock would be for ostentatious display as much as for culinary consumption. From the 1864 The English and Australian Cookery Book, regarding occasions and preparation of the bird: Instead of plucking this bird, take off the skin with the greatest care, so that the feathers do not get detached or broken. Stuff it with what you like, as truffles, mushrooms, livers of fowls, bacon, salt, spice, thyme, crumbs of bread, and a bay-leaf. Wrap the claws and head in several folds of cloth, and envelope the body in buttered paper. The head and claws, which project at the two ends, must be basted with water during the cooking, to preserve them, and especially the tuft. Before taking it off the spit, brown the bird by removing the paper. Garnish with lemon and flowers. If to come on the table cold, place the bird in a wooden trencher, in the middle of which is fixed a wooden skewer, which should penetrate the body of the bird, to keep it upright. Arrange the claws and feathers in a natural manner, and the tail like a fan, supported with wire. No ordinary cook can place a peacock on the table properly. This ceremony was reserved, in the times of chivalry, for the lady most distinguished for her beauty. 
She carried it, amidst inspiring music, and placed it, at the commencement of the banquet, before the master of the house. At a nuptial feast, the peacock was served by the maid of honour, and placed before the bride for her to consume. References Further reading External links Birds of Asia Bird common names Articles containing video clips Paraphyletic groups
Peafowl
[ "Biology" ]
5,254
[ "Phylogenetics", "Paraphyletic groups" ]
63,690
https://en.wikipedia.org/wiki/Palant%C3%ADr
A palantír (; ) is one of several indestructible crystal balls from J. R. R. Tolkien's epic-fantasy novel The Lord of the Rings. The word comes from Quenya 'far', and 'watch over'. The palantírs were used for communication and to see events in other parts of Arda, or in the past. The palantírs were made by the Elves of Valinor in the First Age, as told in The Silmarillion. By the time of The Lord of the Rings at the end of the Third Age, a few palantírs remained in use. They are used in some climactic scenes by major characters: Sauron, Saruman, Denethor the Steward of Gondor, and two members of the Company of the Ring: Aragorn and Pippin. A major theme of palantír usage is that while the stones show real objects or events, those using the stones had to "possess great strength of will and of mind" to direct the stone's gaze to its full capability. The stones were an unreliable guide to action, since what was not shown could be more important than what was selectively presented. A risk lay in the fact that users with sufficient power could choose what to show and what to conceal to other stones: in The Lord of the Rings, a palantír has fallen into the Enemy's hands, making the usefulness of all other existing stones questionable. Commentators such as the Tolkien scholar Paul Kocher note the hand of providence in their usage, while Joseph Pearce compares Sauron's use of the stones to broadcast wartime propaganda. Tom Shippey suggests that the message is that "speculation", looking into any sort of magic mirror (Latin: speculum) or stone to see the future, rather than trusting in providence, leads to error. Fictional artifact Origins In Tolkien's fantasy The Lord of the Rings, the palantírs were made by the Elves of Valinor in the Uttermost West, by the Noldor, apparently by Fëanor himself from silima, "that which shines". The number that he made is not stated, but there were at least eight of them. Seven of the stones given to Amandil of Númenor during the Second Age were saved by his son Elendil; he took them with him to Middle-earth, while at least the Master-stone remained behind. Four were taken to Gondor, while three stayed in Arnor. Originally, the stones of Arnor were at Elostirion in the Tower Hills, Amon Sul (Weathertop), and Annuminas: the Elostirion stone, Elendil's own, looked only Westwards from Middle-earth across the ocean to the Master-stone at the Tower of Avallonë upon Eressëa, an island off Valinor. The stones of Gondor were in Orthanc, Minas Tirith, Osgiliath, and Minas Ithil. By the time of The Lord of the Rings, the stone of Orthanc was in the hands of the wizard Saruman, while the stone of Minas Ithil, (by then Minas Morgul, the city of the Nazgûl), had been taken by the dark lord Sauron. That of Minas Tirith remained in the hands of the Steward of Gondor, Denethor. The stone of Osgiliath had been lost in the Anduin when the city was sacked. Gandalf names two of these as the and the . Characteristics A single palantír enabled its user to see places far off, or events in the past. A person could look into a palantír to communicate with anyone looking into another palantír. They could then see "visions of the things in the mind" of the person looking into the other stone. The stones were made of a dark crystal, indestructible by any normal means, except perhaps the fire of Orodruin. They ranged in size from a diameter of about a foot (30 cm) to much larger stones that could not be lifted by one person. 
The Stone of Osgiliath had power over other stones including the ability to eavesdrop. The minor stones required one to move around them, thereby changing the viewpoint of its vision, whereas the major stones could be turned on their axis. A wielder of great power such as Sauron could dominate a weaker user through the stone, which was the experience of Pippin Took and Saruman. Even one as powerful as Sauron could not make the palantírs "lie", or create false images; the most he could do was to selectively display truthful images to create a false impression in the viewer's mind. In The Lord of the Rings, four such uses of the stones are described, and in each case, a true image is shown, but the viewer draws a false conclusion from the facts. This applies to Sauron when he sees Pippin in Saruman's stone and assumes that Pippin has the One Ring, and that Saruman has therefore captured it. Denethor, too, is deceived through his use of a palantír, this time by Sauron, who drives Denethor to suicide by truthfully showing him the Black Fleet approaching Gondor, without telling him that the ships are crewed by Aragorn's troops, coming to Gondor's rescue. Shippey suggests that this consistent pattern is Tolkien's way of telling the reader that one should not "speculate" – the word meaning both to try to double-guess the future, and to look into a mirror (Latin: speculum 'glass or mirror') or crystal ball – but should trust in one's luck and make one's own mind up, courageously facing one's duty in each situation. The English literature scholar Paul Kocher similarly noted the hand of providence: Wormtongue's throwing of the stone providentially leads to Pippin's foolish look into the stone, which deceives Sauron; and it allows Aragorn to claim the stone and use it to deceive Sauron further. This leads him to assume that Aragorn has the One Ring. That in turn provokes Sauron into a whole series of what turn out to be disastrous actions: a premature attack on Minas Tirith; a rushed exit of the army of Minas Morgul, thus letting the hobbits through the pass of Cirith Ungol with the One Ring, and so on until the quest to destroy the ring succeeds against all odds. The Tolkien scholar Jane Chance writes that Saruman's sin, in Christian terms, is to seek Godlike knowledge by gazing in a short-sighted way into the Orthanc palantír in the hope of rivalling Sauron. She quotes Tolkien's description in The Two Towers, which states that Saruman explored "all those arts and subtle devices, for which he forsook his former wisdom". She explains that he is in this way giving up actual wisdom for "mere knowledge", imagining the arts were his own but in fact coming from Sauron. This prideful self-aggrandisement leads to his fall. She notes that it is ironic in this context that palantír means "far-sighted". Joseph Pearce compares Sauron's use of the seeing stones to "broadcast propaganda and sow the seeds of despair among his enemies" with the communications technologies used to spread propaganda in the Second World War and then the Cold War, when Tolkien was writing. In film A palantír appears in the film director Peter Jackson's The Lord of the Rings films. The Tolkien critic Allison Harl compares Jackson to Saruman, and his camera to a palantír, writing that "Jackson chooses to look through the perilous lens, putting his camera to use to exert control over the [original Tolkien] text." 
Harl cites Laura Mulvey's essay "Visual Pleasure and the Narrative Cinema", which describes "scopophilia", the voyeuristic pleasure of looking, based on Sigmund Freud's writings on sexuality. Harl gives as an example the sequence in The Two Towers where Jackson's camera "like the Evil Eye of Sauron" travels towards Saruman's tower, Isengard, and "zooms into the dangerous palantír", in her opinion giving the cinema viewer "an omniscient and privileged perspective" consisting of a Sauron-like power to observe the whole of Middle-earth. The sequence ends fittingly, in her opinion, with Mordor and the Eye of Sauron, bringing the viewer, like Saruman, to meet the Enemy's gaze. As a consequence of Jackson's exclusion of The Scouring of the Shire, Saruman is killed by Wormtongue much earlier (at the beginning of the extended edition of The Lord of the Rings: The Return of the King), while Gandalf acquires the Orthanc palantír after Pippin retrieves it from Saruman's corpse, instead of having Wormtongue throw it from a window of the tower. Further, Sauron uses the Palantír to show Aragorn a dying Arwen (a scene from the future) in the hope of weakening his resolve. Influence The software data-collection company Palantir Technologies was named by its founder, Peter Thiel, after Tolkien's seeing stones. An astronomical telescope at the Lowell Observatory, using a main mirror with spherical curvature, has the acronym PALANTIR. This stands for Precision Array of Large-Aperture New Telescopes for Image Reconstruction, and is meant to reference the "far-seeing stones in [The] Lord of the Rings". See also Alatyr (mythology) References Primary Secondary Sources Middle-earth objects Fictional balls Magic items Fictional elements introduced in 1954
Palantír
[ "Physics" ]
2,084
[ "Magic items", "Physical objects", "Matter" ]
63,692
https://en.wikipedia.org/wiki/%CE%91-Methyltryptamine
α-Methyltryptamine (αMT, AMT) is a psychedelic, stimulant, and entactogen drug of the tryptamine family. It was originally developed as an antidepressant at Upjohn in the 1960s, and was used briefly as an antidepressant in the Soviet Union under the brand name Indopan or Indopane before being discontinued. Side effects of αMT include agitation, restlessness, confusion, lethargy, pupil dilation, jaw clenching, and rapid heart rate, among others. αMT acts as a releasing agent of serotonin, norepinephrine, and dopamine, as a serotonin receptor agonist, and as a weak monoamine oxidase inhibitor. αMT is a substituted tryptamine and is closely related to α-ethyltryptamine (αET) and other α-alkylated tryptamines. αMT appears to have first been described by at least 1929. It started being more studied in the late 1950s and was briefly used as an antidepressant in the Soviet Union in the 1960s. The drug started being used recreationally in the 1960s, with use increasing in the 1990s, and cases of death have been reported. αMT is a controlled substance in various countries, including the United States. Medical uses Under the brand name Indopan or Indopane, αMT at doses of 5 to 10 mg was used for an antidepressant effect. Effects With 20 to 30 mg, euphoria, empathy, and psychedelic effects become apparent and can last as long as 12 hours. A dose exceeding 40 mg is generally considered strong. In rare cases or extreme doses, the duration of effects might exceed 24 hours. Users report that αMT in freebase form is smoked, with doses between 2 and 5 mg. Side effects Neurologic side effects of αMT include agitation, restlessness, confusion, and lethargy. Physical manifestations include vomiting, mydriasis (pupillary dilation), jaw clenching, tachycardia, salivation, diaphoresis (sweating), and elevations in blood pressure, temperature, and respiratory rate. Side effects self-reported by recreational users include anxiety, muscle tension, jaw tightness, pupil dilation, tachycardia, headaches, nausea, and vomiting, as well as psychedelic effects including visual hallucinations and an altered state of mind. αMT is capable of causing life-threatening side effects including hyperthermia, hypertension, and tachycardia. Fatalities have been reported in association with high doses or concomitant use of other drugs. Fatalities verified with toxicology and autopsy include those of a 22-year-old man in Miami-Dade county and a British teenager, both of whom died after consuming 1 g of αMT. Pharmacology Pharmacodynamics αMT acts as a relatively balanced reuptake inhibitor and releasing agent of the main three monoamines: serotonin, norepinephrine, and dopamine, and as a non-selective serotonin receptor agonist. Monoamine oxidase inhibition αMT has been shown to be a reversible inhibitor of the enzyme monoamine oxidase (MAO) in vitro and in vivo. In rats, the potency of αMT as an MAO-A inhibitor in the brain was approximately equal to that of harmaline at equimolar doses. Dextroamphetamine did not enhance the 5-hydroxytryptophan-induced rise of serotonin at any level. The of αMT for inhibition of MAO-A has been found to be 380 nM. This is similar to that of agents like para-methoxyamphetamine (PMA) and 4-methylthioamphetamine (4-MTA). Serotonergic neurotoxicity A close analogue of αMT, α-ethyltryptamine (αET), is known to be a serotonergic neurotoxin similarly to MDMA and para-chloroamphetamine (PCA). 
Pharmacokinetics 2-Oxo-αMT, 6-hydroxy-αMT, 7-hydroxy-αMT, and 1′-hydroxy-αMT were detected as metabolites of αMT in male Wistar rats. Chemistry αMT is a synthetic substituted tryptamine with a methyl substituent at the alpha carbon. This alpha substitution makes it a relatively poor substrate for monoamine oxidase A, thereby prolonging αMT's half-life, allowing it to reach the brain and enter the central nervous system. Its chemical relation to tryptamine is analogous to that of amphetamine to phenethylamine, amphetamine being α-methylphenethylamine. αMT is closely related to the neurotransmitter serotonin (5-hydroxytryptamine), which partially explains its mechanism of action. Many analogues of αMT are known, including α-ethyltryptamine (αET), 4-methyl-αMT, 5-chloro-αMT (PAL-542), 5-fluoro-αMT (PAL-544), 5-fluoro-αET (PAL-545), 5-methoxy-αMT (5-MeO-αMT), α,N-dimethyltryptamine (α,N-DMT; N-methyl-αMT), α,N,N-trimethyltryptamine (α,N,N-TMT; N-dimethyl-αMT), α-methylserotonin (α-methyl-5-HT; 5-hydroxy-αMT), and indolylpropylaminopentane (IPAP; α,N-dipropyltryptamine or α,N-DPT), among others. Another analogue of αMT is the β-keto and N-methylated derivative BK-NM-AMT. α-Methyltryptophan, a prodrug of α-methylserotonin, also metabolizes into αMT, but only in small amounts. Synthesis The synthesis of αMT can be accomplished through several different routes, the two most widely known being the nitroaldol condensation between indole-3-carboxaldehyde and nitroethane under ammonium acetate catalysis, which produces 1-(3-indolyl)-2-nitropropene-1; the product can subsequently be reduced using a reducing agent such as lithium aluminum hydride. The alternative synthesis is the condensation between indole-3-acetone and hydroxylamine, followed by reduction of the obtained ketoxime with lithium aluminum hydride. History αMT has been said to have been first synthesized in 1947, alongside α-ethyltryptamine (αET). However, other sources suggest that αMT was first described in the scientific literature by at least 1929. It was specifically described as an antagonist of ergotamine at this time. αMT started to be more intensively studied, along with αET, in the late 1950s and early 1960s. It was researched by Upjohn (code name U-14,164E) and Sandoz (code name IT-290) as a possible pharmaceutical drug and was simultaneously marketed in the Soviet Union as an antidepressant under the brand name Indopan or Indopane in the 1960s. However, the drug was used clinically for only a short period of time before being withdrawn. αMT started being used as a recreational drug in the 1960s and use as a designer drug increased in the 1990s. It became a controlled substance in the United States in 2003. Society and culture Names αMT never received a formal generic name. In the scientific literature, it has been referred to as α-methyltryptamine or alpha-methyltryptamine (abbreviated as α-MT, αMT, or AMT). αMT has also been referred to by developmental code names including IT-290 (Sandoz), NSC-97069, PAL-17, Ro 3-0926, and U-14,164E (Upjohn). In the Soviet Union, the drug was merely referred to by its brand name Indopan or Indopane. Other synonyms of αMT include 3-(2-aminopropyl)indole and 3-IT. (+)-αMT has been referred to by the code name IT-403. Legality Australia The 5-methoxy analogue, 5-MeO-αMT, is schedule 9 in Australia and αMT would be controlled as an analogue of this. Austria αMT is placed under Austrian law (NPSG) Group 6. Canada Canada has no mention of αMT in the Controlled Drugs and Substances Act. 
China As of October 2015 αMT is a controlled substance in China. Denmark In Denmark (2010), the Danish Minister for the Interior and Health placed αMT to their lists of controlled substances (List B). Finland AMT, alfa-methyltryptamine, is a controlled drug in Finland. Germany αMT is listed under the Narcotics Act in schedule 1 (narcotics not eligible for trade and medical prescriptions) in Germany. Hungary αMT was controlled on the Schedule C list in Hungary in 2013. Lithuania In Lithuania (2012), αMT is controlled as a tryptamine derivative put under control in the 1st list of Narcotic Drugs and Psychotropic Substances which use is prohibited for medical purposes. Slovakia αMT was placed in 2013 on the List of Hazardous Substances in Annex, § 2 in Slovakia. Slovenia αMT appeared on the Decree on Classification of Illicit Drugs in Slovenia (2013). Spain αMT is legal in Spain. Sweden Sveriges riksdags health ministry Statens folkhälsoinstitut classified αMT as "health hazard" under the act Lagen om förbud mot vissa hälsofarliga varor (translated Act on the Prohibition of Certain Goods Dangerous to Health) as of Mar 1, 2005, in their regulation SFS 2005:26 listed as alfa-metyltryptamin (AMT), making it illegal to sell or possess. United Kingdom αMT was made illegal in the United Kingdom as of 7 January 2015, along with 5-MeO-DALT. This was following the events of 10 June 2014 when the Advisory Council on the Misuse of Drugs recommended that αMT be scheduled as a class A drug by updating the blanket ban clause on tryptamines. United States The Drug Enforcement Administration (DEA) placed αMT temporarily in schedule I of the Controlled Substances Act (CSA) on April 4, 2003, pursuant to the temporary scheduling provisions of the CSA (68 FR16427). On September 29, 2004, αMT was permanently controlled as a schedule I substance under the CSA (69FR 58050). Research Besides depression, αMT has been studied in people with schizophrenia and other conditions. See also List of Russian drugs Notes References External links TiHKAL entry αMT Entry in TiHKAL • info Erowid page on αMT Lycaeum page on αMT 5-HT2A agonists Alpha-Alkyltryptamines Antidepressants Entactogens and empathogens Designer drugs Monoamine oxidase inhibitors Psychedelic tryptamines Russian drugs Serotonin-norepinephrine-dopamine releasing agents Serotonin receptor agonists Stimulants Withdrawn drugs
Α-Methyltryptamine
[ "Chemistry" ]
2,435
[ "Drug safety", "Withdrawn drugs" ]
63,763
https://en.wikipedia.org/wiki/Solved%20game
A solved game is a game whose outcome (win, lose or draw) can be correctly predicted from any position, assuming that both players play perfectly. This concept is usually applied to abstract strategy games, and especially to games with full information and no element of chance; solving such a game may use combinatorial game theory or computer assistance. Overview A two-player game can be solved on several levels: Ultra-weak solution Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides . This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any details of the perfect play. Weak solution Provide an algorithm that secures a win for one player, or a draw for either, against any possible play by the opponent, from the beginning of the game. Strong solution Provide an algorithm that can produce perfect play for both players from any position, even if imperfect play has already occurred on one or both sides. Despite their name, many game theorists believe that "ultra-weak" proofs are the deepest, most interesting and valuable. "Ultra-weak" proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized. By contrast, "strong" proofs often proceed by brute force—using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs are not as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win. Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively nothing more. As a simple example of a strong solution, the game of tic-tac-toe is easily solvable as a draw for both players with perfect play (a result manually determinable). Games like nim also admit a rigorous analysis using combinatorial game theory. Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g., Maharajah and the Sepoys). An ultra-weak solution (e.g., Chomp or Hex on a sufficiently large board) generally does not affect playability. Perfect play In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Perfect play for a game is known when the game is solved. Based on the rules of a game, every possible final position can be evaluated (as a win, loss or draw). By backward reasoning, one can recursively evaluate a non-final position as identical to the position that is one move away and best valued for the player whose move it is. 
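This backward evaluation can be written out as a short program. The sketch below is an illustration rather than a reference implementation from the article: the board encoding and function names are arbitrary choices. It applies a negamax recursion to tic-tac-toe, the "trivially strongly solvable" example mentioned above, and confirms that the initial position evaluates to a draw.

```python
# Illustrative sketch: strongly solving tic-tac-toe by exhaustive backward
# evaluation (negamax). Values are from the viewpoint of the player to move:
# +1 = forced win, 0 = draw, -1 = forced loss.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Perfect-play value of the position for the player about to move."""
    if winner(board) is not None:
        return -1                      # the previous move completed a line
    if ' ' not in board:
        return 0                       # full board, no winner: draw
    other = 'O' if to_move == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (to_move,) + board[i + 1:]
            best = max(best, -value(child, other))   # negamax step
    return best

print(value((' ',) * 9, 'X'))  # prints 0: tic-tac-toe is a draw with perfect play
```

Each position is assigned the best value its mover can reach among the successor positions, which is exactly the recursive evaluation described here; the cached table produced by the search also yields a perfect move in every reachable position, which is what a strong solution provides.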
Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position would be a transition between positions that are equally evaluated. As an example, a perfect player in a drawn position would always get a draw or win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result. Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for rock paper scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome. Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this. Solved games Awari (a game of the Mancala family) The variant of Oware allowing game ending "grand slams" was strongly solved by Henri Bal and John Romein at the Vrije Universiteit in Amsterdam, Netherlands (2002). Either player can force the game into a draw. Chopsticks Strongly solved. If two players both play perfectly, the game will go on indefinitely. Connect Four Solved first by James D. Allen on October 1, 1988, and independently by Victor Allis on October 16, 1988. The first player can force a win. Strongly solved by John Tromp's 8-ply database (Feb 4, 1995). Weakly solved for all boardsizes where width+height is at most 15 (as well as 8×8 in late 2015) (Feb 18, 2006). Solved for all boardsizes where width+height equals 16 on May 22, 2024. Free gomoku Solved by Victor Allis (1993). The first player can force a win without opening rules. Ghost Solved by Alan Frank using the Official Scrabble Players Dictionary in 1987. Hexapawn 3×3 variant solved as a win for black, several other larger variants also solved. Kalah Most variants solved by Geoffrey Irving, Jeroen Donkers and Jos Uiterwijk (2000) except Kalah (6/6). The (6/6) variant was solved by Anders Carstensen (2011). Strong first-player advantage was proven in most cases. L game Easily solvable. Either player can force the game into a draw. Maharajah and the Sepoys This asymmetrical game is a win for the sepoys player with correct play. Nim Strongly solved. Nine men's morris Solved by Ralph Gasser (1993). Either player can force the game into a draw. Order and Chaos Order (First player) wins. Ohvalhu Weakly solved by humans, but proven by computers. (Dakon is, however, not identical to Ohvalhu, the game which actually had been observed by de Voogt) Pangki Strongly solved by Jason Doucette (2001). The game is a draw. There are only two unique first moves if you discard mirrored positions. One forces the draw, and the other gives the opponent a forced win in 15 moves. Pentago Strongly solved by Geoffrey Irving with use of a supercomputer at NERSC. The first player wins. Quarto Solved by Luc Goossens (1998). Two perfect players will always draw. 
Renju-like game without opening rules involved Claimed to be solved by János Wagner and István Virág (2001). A first-player win. Teeko Solved by Guy Steele (1998). Depending on the variant either a first-player win or a draw. Three men's morris Trivially solvable. Either player can force the game into a draw. Three musketeers Strongly solved by Johannes Laire in 2009, and weakly solved by Ali Elabridi in 2017. It is a win for the blue pieces (Cardinal Richelieu's men, or, the enemy). Tic-tac-toe Trivially strongly solvable because of the small game tree. The game is a draw if no mistakes are made, with no mistake possible on the opening move. Wythoff's game Strongly solved by W. A. Wythoff in 1907. Weak-solves English draughts (checkers) This 8×8 variant of draughts was weakly solved on April 29, 2007, by the team of Jonathan Schaeffer. From the standard starting position, both players can guarantee a draw with perfect play. Checkers has a search space of 5×1020 possible game positions. The number of calculations involved was 1014, which were done over a period of 18 years. The process involved from 200 desktop computers at its peak down to around 50. Fanorona Weakly solved by Maarten Schadd. The game is a draw. Losing chess Weakly solved in 2016 as a win for White beginning with 1. e3. Othello (Reversi) Weakly solved in 2023 by Hiroki Takizawa, a researcher at Preferred Networks. From the standard starting position on an 8×8 board, a perfect play by both players will result in a draw. Othello is the largest game solved to date, with a search space of 1028 possible game positions. Pentominoes Weakly solved by H. K. Orman. It is a win for the first player. Qubic Weakly solved by Oren Patashnik (1980) and Victor Allis. The first player wins. Sim Weakly solved: win for the second player. Lambs and tigers Weakly solved by Yew Jin Lim (2007). The game is a draw. Partially solved games Chess Fully solving chess remains elusive, and it is speculated that the complexity of the game may preclude it ever being solved. Through retrograde computer analysis, endgame tablebases (strong solutions) have been found for all three- to seven-piece endgames, counting the two kings as pieces. Some variants of chess on a smaller board with reduced numbers of pieces have been solved. Some other popular variants have also been solved; for example, a weak solution to Maharajah and the Sepoys is an easily memorable series of moves that guarantees victory to the "sepoys" player. Go The 5×5 board was weakly solved for all opening moves in 2002. The 7×7 board was weakly solved in 2015. Humans usually play on a 19×19 board, which is over 145 orders of magnitude more complex than 7×7. Hex A strategy-stealing argument (as used by John Nash) shows that all square board sizes cannot be lost by the first player. Combined with a proof of the impossibility of a draw, this shows that the game is a first player win (so it is ultra-weak solved). On particular board sizes, more is known: it is strongly solved by several computers for board sizes up to 6×6. Weak solutions are known for board sizes 7×7 (using a swapping strategy), 8×8, and 9×9; in the 8×8 case, a weak solution is known for all opening moves. Strongly solving Hex on an N×N board is unlikely as the problem has been shown to be PSPACE-complete. If Hex is played on an N×(N + 1) board then the player who has the shorter distance to connect can always win by a simple pairing strategy, even with the disadvantage of playing second. 
International draughts All endgame positions with two through seven pieces were solved, as well as positions with 4×4 and 5×3 pieces where each side had one king or fewer, positions with five men versus four men, positions with five men versus three men and one king, and positions with four men and one king versus four men. The endgame positions were solved in 2007 by Ed Gilbert of the United States. Computer analysis showed that it was highly likely to end in a draw if both players played perfectly. m,n,k-game It is trivial to show that the second player can never win; see strategy-stealing argument. Almost all cases have been solved weakly for k ≤ 4. Some results are known for k = 5. The games are drawn for k ≥ 8. See also Computer chess Computer Go Computer Othello Game complexity God's algorithm Zermelo's theorem (game theory) References Further reading Allis, Beating the World Champion? The state-of-the-art in computer game playing. in New Approaches to Board Games Research. External links Computational Complexity of Games and Puzzles by David Eppstein. GamesCrafters solving two-person games with perfect information and no chance Mathematical games Abstract strategy games Combinatorial game theory
Solved game
[ "Mathematics" ]
2,610
[ "Mathematical games", "Recreational mathematics", "Combinatorics", "Game theory", "Combinatorial game theory" ]
63,778
https://en.wikipedia.org/wiki/Uncertainty
Uncertainty or incertitude refers to situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. Uncertainty arises in partially observable or stochastic environments, as well as due to ignorance, indolence, or both. It arises in any number of fields, including insurance, philosophy, physics, statistics, economics, finance, medicine, psychology, sociology, engineering, metrology, meteorology, ecology and information science. Concepts Although the terms are used in various ways among the general public, many specialists in decision theory, statistics and other quantitative fields have defined uncertainty, risk, and their measurement as: Uncertainty The lack of certainty, a state of limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome. Measurement Uncertainty can be measured through a set of possible states or outcomes where probabilities are assigned to each possible state or outcome – this also includes the application of a probability density function to continuous variables. Second-order uncertainty In statistics and economics, second-order uncertainty is represented in probability density functions over (first-order) probabilities. Opinions in subjective logic carry this type of uncertainty. Risk Risk is a state of uncertainty, where some possible outcomes have an undesired effect or significant loss. Measurement of risk includes a set of measured uncertainties, where some possible outcomes are losses, and the magnitudes of those losses. This also includes loss functions over continuous variables. Uncertainty versus variability There is a difference between uncertainty and variability. Uncertainty is quantified by a probability distribution which depends upon knowledge about the likelihood of what the single, true value of the uncertain quantity is. Variability is quantified by a distribution of frequencies of multiple instances of the quantity, derived from observed data. Knightian uncertainty In economics, in 1921 Frank Knight distinguished uncertainty from risk with uncertainty being lack of knowledge which is immeasurable and impossible to calculate. Because of the absence of clearly defined statistics in most economic decisions where people face uncertainty, he believed that we cannot measure probabilities in such cases; this is now referred to as Knightian uncertainty. Knight pointed out that the unfavorable outcome of known risks can be insured during the decision-making process because it has a clearly defined expected probability distribution. Unknown risks have no known expected probability distribution, which can lead to extremely risky company decisions. Other taxonomies of uncertainties and decisions include a broader sense of uncertainty and how it should be approached from an ethics perspective: Risk and uncertainty For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty. If probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk since there is a 10% chance of rain, and rain would be undesirable. 
Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has been quantified (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc. Some may represent the risk in this example as the "expected opportunity loss" (EOL) or the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, then clearly the EOL alone is not the perceived value of avoiding the risk. Quantitative uses of the terms uncertainty and risk are fairly consistent among fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk. For example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc. Vagueness is a form of uncertainty where the analyst is unable to clearly differentiate between two different classes, such as 'person of average height' and 'tall person'. This form of vagueness can be modelled by some variation on Zadeh's fuzzy logic or subjective logic. Ambiguity is a form of uncertainty where even the possible outcomes have unclear meanings and interpretations. The statement "He returns from the bank" is ambiguous because its interpretation depends on whether the word 'bank' is meant as "the side of a river" or "a financial institution". Ambiguity typically arises in situations where multiple analysts or observers have different interpretations of the same statements. At the subatomic level, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg uncertainty principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows. Radical uncertainty The term 'radical uncertainty' was popularised by John Kay and Mervyn King in their book Radical Uncertainty: Decision-Making for an Unknowable Future, published in March 2020. It is distinct from Knightian uncertainty, by whether or not it is 'resolvable'. If uncertainty arises from a lack of knowledge, and that lack of knowledge is resolvable by acquiring knowledge (such as by primary or secondary research) then it is not radical uncertainty. Only when there are no means available to acquire the knowledge which would resolve the uncertainty, is it considered 'radical'. In measurements The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) published by ISO. 
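Returning briefly to the outdoor-event example above, the following is a minimal, illustrative Python sketch of the expected-opportunity-loss arithmetic described there. The probability and loss figures are the ones from that example; the insurer's 25% mark-up is a purely hypothetical assumption used only to show how an EOL floor differs from a premium.

```python
# Figures from the outdoor-event example above.
p_rain = 0.10           # probability of the undesired outcome
loss_if_rain = 100_000  # loss in dollars if it rains

# Expected opportunity loss (EOL): chance of the loss times its size.
eol = p_rain * loss_if_rain
print(f"Expected opportunity loss: ${eol:,.0f}")   # $10,000

# An insurer would treat the EOL as a floor, then add operating costs and
# profit; the 25% mark-up below is a purely hypothetical illustration.
markup = 0.25
premium = eol * (1 + markup)
print(f"Illustrative premium: ${premium:,.0f}")
```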
A derived work of the GUM is, for example, the National Institute of Standards and Technology (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/Citac publication "Quantifying Uncertainty in Analytical Measurement". The uncertainty of the result of a measurement generally consists of several components. The components are regarded as random variables, and may be grouped into two categories according to the method used to estimate their numerical values:
Type A, those evaluated by statistical methods
Type B, those evaluated by other means, e.g., by assigning a probability distribution
By propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation.
In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:
measured value ± uncertainty
measured value with the upper and lower uncertainties given separately (a superscript + value and a subscript − value)
measured value (uncertainty)
In the last notation, parentheses are the concise notation for the ± notation. For example, applying a measured length of 10.5 meters in a scientific or engineering application, it could be written 10.5 m or 10.50 m, by convention meaning accurate to within one tenth of a meter, or one hundredth. The precision is symmetric around the last digit. In this case it's half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it is understood that 10.5 means 10.5 ± 0.05, and 10.50 means 10.50 ± 0.005, also written 10.5(5) and 10.50(5) respectively. But if the accuracy is within two tenths, the uncertainty is ± one tenth, and it is required to be explicit: 10.5 ± 0.1 and 10.50 ± 0.01, or 10.5(1) and 10.50(1). The numbers in parentheses apply to the numeral left of themselves, and are not part of that number, but part of a notation of uncertainty. They apply to the least significant digits. For instance, 1.00794(7) stands for 1.00794 ± 0.00007, while 1.00794(72) stands for 1.00794 ± 0.00072. This concise notation is used for example by IUPAC in stating the atomic mass of elements. The middle notation is used when the error is not symmetrical about the value – for example, a value reported as 3.4 with an uncertainty of +0.3/−0.2. This can occur when using a logarithmic scale, for example.
Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements. This procedure neglects systematic errors, however. When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. A minimal numerical sketch of this repeated-measurement procedure is given below.
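The sketch below is a minimal Python illustration, not taken from the GUM or the NIST note, of the procedure just described: a Type A uncertainty estimated statistically from repeated readings, a hypothetical Type B calibration component, the two combined as the square root of the summed variances, and the resulting one-, two- and three-sigma intervals under the normality assumption. All numerical values (the readings and the 0.01 m Type B figure) are invented for illustration.

```python
import math
import statistics

# Hypothetical repeated readings of a length, in metres (illustrative values only).
readings = [10.48, 10.51, 10.52, 10.49, 10.50, 10.53, 10.47, 10.50]

n = len(readings)
mean = statistics.mean(readings)

# Type A evaluation: standard deviation of the repeated observations,
# and the standard error of the mean (std dev / sqrt(n)).
std_dev = statistics.stdev(readings)   # uncertainty of a single reading
std_error = std_dev / math.sqrt(n)     # uncertainty of the mean

# Type B evaluation: an assumed instrument calibration uncertainty,
# taken here (hypothetically) as a standard uncertainty of 0.01 m.
u_type_b = 0.01

# Combined standard uncertainty: propagate the variances and take the square root.
u_combined = math.sqrt(std_error**2 + u_type_b**2)

print(f"mean                                 = {mean:.3f} m")
print(f"single-reading uncertainty (1 sigma) = {std_dev:.3f} m")
print(f"standard error of the mean           = {std_error:.3f} m")
print(f"combined standard uncertainty        = {u_combined:.3f} m")

# Approximate coverage intervals under the normality assumption discussed above.
for k, label in [(1, "68.3%"), (2, "95.4%"), (3, "99.7%")]:
    print(f"{label} interval: {mean:.3f} ± {k * u_combined:.3f} m")
```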
In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals. In this context, uncertainty depends on both the accuracy and precision of the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty is. Precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision. In the media Uncertainty in science, and science in general, may be interpreted differently in the public sphere than in the scientific community. This is due in part to the diversity of the public audience, and the tendency for scientists to misunderstand lay audiences and therefore not communicate ideas clearly and effectively. One example is explained by the information deficit model. Also, in the public realm, there are often many scientific voices giving input on a single topic. For example, depending on how an issue is reported in the public sphere, discrepancies between outcomes of multiple scientific studies due to methodological differences could be interpreted by the public as a lack of consensus in a situation where a consensus does in fact exist. This interpretation may have even been intentionally promoted, as scientific uncertainty may be managed to reach certain goals. For example, climate change deniers took the advice of Frank Luntz to frame global warming as an issue of scientific uncertainty, which was a precursor to the conflict frame used by journalists when reporting the issue. "Indeterminacy can be loosely said to apply to situations in which not all the parameters of the system and their interactions are fully known, whereas ignorance refers to situations in which it is not known what is not known." These unknowns, indeterminacy and ignorance, that exist in science are often "transformed" into uncertainty when reported to the public in order to make issues more manageable, since scientific indeterminacy and ignorance are difficult concepts for scientists to convey without losing credibility. Conversely, uncertainty is often interpreted by the public as ignorance. The transformation of indeterminacy and ignorance into uncertainty may be related to the public's misinterpretation of uncertainty as ignorance. Journalists may inflate uncertainty (making the science seem more uncertain than it really is) or downplay uncertainty (making the science seem more certain than it really is). One way that journalists inflate uncertainty is by describing new research that contradicts past research without providing context for the change. Journalists may give scientists with minority views equal weight as scientists with majority views, without adequately describing or explaining the state of scientific consensus on the issue. In the same vein, journalists may give non-scientists the same amount of attention and importance as scientists. Journalists may downplay uncertainty by eliminating "scientists' carefully chosen tentative wording, and by losing these caveats the information is skewed and presented as more certain and conclusive than it really is". 
Also, stories with a single source or without any context of previous research mean that the subject at hand is presented as more definitive and certain than it is in reality. There is often a "product over process" approach to science journalism that aids, too, in the downplaying of uncertainty. Finally, and most notably for this investigation, when science is framed by journalists as a triumphant quest, uncertainty is erroneously framed as "reducible and resolvable". Some media routines and organizational factors affect the overstatement of uncertainty; other media routines and organizational factors help inflate the certainty of an issue. Because the general public (in the United States) generally trusts scientists, when science stories are covered without alarm-raising cues from special interest organizations (religious groups, environmental organizations, political factions, etc.) they are often covered in a business related sense, in an economic-development frame or a social progress frame. The nature of these frames is to downplay or eliminate uncertainty, so when economic and scientific promise are focused on early in the issue cycle, as has happened with coverage of plant biotechnology and nanotechnology in the United States, the matter in question seems more definitive and certain. Sometimes, stockholders, owners, or advertising will pressure a media organization to promote the business aspects of a scientific issue, and therefore any uncertainty claims which may compromise the business interests are downplayed or eliminated. Applications Uncertainty is designed into games, most notably in gambling, where chance is central to play. In scientific modelling, in which the prediction of future events should be understood to have a range of expected values In computer science, and in particular data management, uncertain data is commonplace and can be modeled and stored within an uncertain database In optimization, uncertainty permits one to describe situations where the user does not have full control on the outcome of the optimization procedure, see scenario optimization and stochastic optimization. In weather forecasting, it is now commonplace to include data on the degree of uncertainty in a weather forecast. Uncertainty or error is used in science and engineering notation. Numerical values should only have to be expressed in those digits that are physically meaningful, which are referred to as significant figures. Uncertainty is involved in every measurement, such as measuring a distance, a temperature, etc., the degree depending upon the instrument or technique used to make the measurement. Similarly, uncertainty is propagated through calculations so that the calculated value has some degree of uncertainty depending upon the uncertainties of the measured values and the equation used in the calculation. In physics, the Heisenberg uncertainty principle forms the basis of modern quantum mechanics. In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error. In daily life, measurement uncertainty is often implicit ("He is 6 feet tall" give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary. The expected measurement uncertainty of many measuring instruments (scales, oscilloscopes, force gages, rulers, thermometers, etc.) is often stated in the manufacturers' specifications. 
In engineering, uncertainty can be used in the context of validation and verification of material modeling.
Uncertainty has been a common theme in art, both as a thematic device (see, for example, the indecision of Hamlet), and as a quandary for the artist (such as Martin Creed's difficulty with deciding what artworks to make).
Uncertainty is an important factor in economics. According to economist Frank Knight, it is different from risk, where there is a specific probability assigned to each outcome (as when flipping a fair coin). Knightian uncertainty involves a situation that has unknown probabilities. Investing in financial markets such as the stock market involves Knightian uncertainty when the probability of a rare but catastrophic event is unknown.
Philosophy
In Western philosophy the first philosopher to embrace uncertainty was Pyrrho resulting in the Hellenistic philosophies of Pyrrhonism and Academic Skepticism, the first schools of philosophical skepticism. Aporia and acatalepsy represent key concepts in ancient Greek philosophy regarding uncertainty. William MacAskill, a philosopher at Oxford University, has also discussed the concept of Moral Uncertainty. Moral Uncertainty is "uncertainty about how to act given lack of certainty in any one moral theory, as well as the study of how we ought to act given this uncertainty."
Artificial intelligence
See also
Certainty
Dempster–Shafer theory
Further research is needed
Fuzzy set theory
Game theory
Information entropy
Interval finite element
Keynes' Treatise on Probability
Measurement uncertainty
Morphological analysis (problem-solving)
Propagation of uncertainty
Randomness
Schrödinger's cat
Scientific consensus
Statistical mechanics
Subjective logic
Uncertainty quantification
Uncertainty tolerance
Volatility, uncertainty, complexity and ambiguity
References
Further reading
"Treading Thin Air: Geoff Mann on Uncertainty and Climate Change", London Review of Books, vol. 45, no. 17 (7 September 2023), pp. 17–19. "[W]e are in desperate need of a politics that looks [the] catastrophic uncertainty [of global warming and climate change] square in the face. That would mean taking much bigger and more transformative steps: all but eliminating fossil fuels... and prioritizing democratic institutions over markets. The burden of this effort must fall almost entirely on the richest people and richest parts of the world, because it is they who continue to gamble with everyone else's fate." (p. 19.)
External links
Measurement Uncertainties in Science and Technology, Springer 2005
Proposal for a New Error Calculus
Estimation of Measurement Uncertainties — an Alternative to the ISO Guide
Bibliography of Papers Regarding Measurement Uncertainty
Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results
Strategic Engineering: Designing Systems and Products under Uncertainty (MIT Research Group)
Understanding Uncertainty site from Cambridge's Winton programme
Cognition
Concepts in epistemology
Doubt
Experimental physics
Measurement
Probability interpretations
Prospect theory
Economics of uncertainty
Uncertainty
[ "Physics", "Mathematics" ]
3,918
[ "Physical quantities", "Quantity", "Probability interpretations", "Measurement", "Size", "Experimental physics" ]
63,792
https://en.wikipedia.org/wiki/Courtly%20love
Courtly love ( ; ) was a medieval European literary conception of love that emphasized nobility and chivalry. Medieval literature is filled with examples of knights setting out on adventures and performing various deeds or services for ladies because of their "courtly love". This kind of love was originally a literary fiction created for the entertainment of the nobility, but as time passed, these ideas about love spread to popular culture and attracted a larger literate audience. In the high Middle Ages, a "game of love" developed around these ideas as a set of social practices. "Loving nobly" was considered to be an enriching and improving practice. Courtly love began in the ducal and princely courts of Aquitaine, Provence, Champagne, ducal Burgundy and the Norman Kingdom of Sicily at the end of the eleventh century. In essence, courtly love was an experience between erotic desire and spiritual attainment, "a love at once illicit and morally elevating, passionate and disciplined, humiliating and exalting, human and transcendent". The topic was prominent with both musicians and poets, being frequently used by troubadours, and . The topic was also popular with major writers, including Dante, Petrarch and Geoffrey Chaucer. Origin of term Contemporary usage The term "courtly love" appears in only one extant source: Provençal in a late 12th-century poem by Peire d'Alvernhe. It is associated with the Provençal term ("fine love") which appears frequently in poetry, as well as its German translation . Provençal also uses the terms , . Modern usage The modern use of the term "courtly love" comes from Gaston Paris. He used the term ("courtly love") in a 1883 article discussing the relationship between Lancelot and Guinevere in Chrétien de Troyes's Lancelot, the Knight of the Cart ( 1181). In his article, Paris outlined four principal characteristics of : The love is illegitimate, furtive (ie. adulterous). The male lover is in an inferior position and the woman in an elevated one. The man does quests, tests, or trials in the woman's name. There is an art to it, it has rules, in the same vein as chivalry or courtesy. Paris used it as a descriptive phrase, not a technical term, and used it interchangeably with the phrase . Nonetheless, other scholars began using it as a technical term after him. In 1896, Lewis Freeman Mott applied the term "courtly love" to Dante Alighieri's love for Beatrice in La Vita Nuova (1294). The two relationships are very different — Lancelot and Guinevere are secret adulterous lovers, while Dante and Beatrice had no actual romantic relationship and only met twice in their whole lives. Nonetheless, the manner in which the two men describe their devotion to and quasi-religious adoration of their ladies is similar. In 1936, C. S. Lewis wrote The Allegory of Love which popularized the term "courtly love". He defined it as a "love of a highly specialized sort, whose characteristics may be enumerated as Humility, Courtesy, Adultery, and the Religion of Love". In 1964, Mosché Lazar differentiated three separate categories within "courtly love." Criticism Scholars debate whether "courtly love" constitutes a coherent idea. D. W. Robertson Jr. said, "the connotations of the term courtly love are so vague and flexible that its utility for purposes of definition has become questionable." John C. Moore called it "a term used for a number of different, in some cases contradictory, conceptions" and called it "a mischievous term which should be abandoned". 
Roger Boase admitted the term "has been subjected to a bewildering variety of uses and definitions", but nonetheless defended the concept of courtly love as real and useful. E. Talbot Donaldson criticized its usage as a technical term as an anachronism or neologism. Richard Trachsler says that "the concept of courtly literature is linked to the idea of the existence of courtly texts, texts produced and read by men and women sharing some kind of elaborate culture they all have in common". He argues that many of the texts that scholars claim to be courtly also include "uncourtly" texts, and argues that there is no clear way to determine "where courtliness ends and uncourtliness starts" because readers would enjoy texts which were supposed to be entirely courtly without realizing they were also enjoying texts which were uncourtly. This presents a clear problem in the understanding of courtliness. History The practice of courtly love developed in the castle life of four regions: Aquitaine, Provence, Champagne and ducal Burgundy, from around the time of the First Crusade (1099). Eleanor of Aquitaine (1124–1204) brought ideals of courtly love from Aquitaine first to the court of France, then to England (she became queen-consort in each of these two realms in succession). Her daughter Marie, Countess of Champagne (1145–1198) brought courtly behavior to the Count of Champagne's court. Courtly love found expression in the lyric poems written by troubadours, such as William IX, Duke of Aquitaine (1071–1126), one of the first troubadour poets. Poets adopted the terminology of feudalism, declaring themselves the vassal of the lady. The troubadour's model of the ideal lady was the wife of his employer or lord, a lady of higher status, usually the rich and powerful female head of the castle. When her husband was away on Crusade or elsewhere she dominated the household and cultural affairs; sometimes this was the case even when the husband was at home. The poet gave voice to the aspirations of the courtier class, for only those who were noble could engage in courtly love. This new kind of love saw nobility not based on wealth and family history, but on character and actions; such as devotion, piety, gallantry, thus appealing to poorer knights who saw an avenue for advancement. By the late 12th century Andreas Capellanus' highly influential work De amore had codified the rules of courtly love. De amore lists such rules as: "Marriage is no real excuse for not loving." "He who is not jealous cannot love." "No one can be bound by a double love." "When made public love rarely endures." Much of its structure and its sentiments derived from Ovid's Ars amatoria. Andalusian and Islamic influence One theory holds that courtly love in Southern France was influenced by Arabic poetry in Al-Andalus. In contemporary Andalusian writing, Ṭawq al-Ḥamāmah (The Ring of the Dove) by Ibn Hazm is a treatise on love which emphasizes restraint and chastity. Tarjumān al-Ashwāq (The Translator of Desires) by Ibn Arabi is a collection of love poetry. Outside of Al-Andalus, Kitab al-Zahra (Book of the Flower) by Ibn Dawud and Risala fi'l-Ishq (Treatise of Love) by Ibn Sina are roughly contemporary treaties on love. Ibn Arabi and Ibn Sina both weave together themes of sensual love with divine love. According to Gustave E. von Grunebaum, notions of "love for love's sake" and "exaltation of the beloved lady" can be traced back to Arabic literature of the 9th and 10th centuries. 
The ennobling power of love is overtly discussed in Risala fi'l-Ishq. According to an argument outlined by María Rosa Menocal in The Arabic Role in Medieval Literary History (1987), in 11th-century Spain, a group of wandering poets appeared who would go from court to court, and sometimes travel to Christian courts in southern France, a situation closely mirroring what would happen in southern France about a century later. Contacts between these Spanish poets and the French troubadours were frequent. The metrical forms used by the Spanish poets resembled those later used by the troubadours. Analysis The historic analysis of courtly love varies between different schools of historians. That sort of history which views the early Middle Ages dominated by a prudish and patriarchal theocracy views courtly love as a "humanist" reaction to the puritanical views of the Catholic Church. Scholars who endorse this view value courtly love for its exaltation of femininity as an ennobling, spiritual, and moral force, in contrast to the ironclad chauvinism of the first and second estates. The condemnation of courtly love in the beginning of the 13th century by the church as heretical, is seen by these scholars as the Church's attempt to put down this "sexual rebellion". However, other scholars note that courtly love was certainly tied to the Church's effort to civilize the crude Germanic feudal codes in the late 11th century. It has also been suggested that the prevalence of arranged marriages required other outlets for the expression of more personal occurrences of romantic love, and thus it was not in reaction to the prudery or patriarchy of the Church but to the nuptial customs of the era that courtly love arose. In the Germanic cultural world, a special form of courtly love can be found, namely . At times, the lady could be a , a far-away princess, and some tales told of men who had fallen in love with women whom they had never seen, merely on hearing their perfection described, but normally she was not so distant. As the etiquette of courtly love became more complicated, the knight might wear the colors of his lady: where blue or black were sometimes the colors of faithfulness, green could be a sign of unfaithfulness. Salvation, previously found in the hands of the priesthood, now came from the hands of one's lady. In some cases, there were also women troubadours who expressed the same sentiment for men. Literary convention The literary convention of courtly love can be found in most of the major authors of the Middle Ages, such as Geoffrey Chaucer, John Gower, Dante, Marie de France, Chretien de Troyes, Gottfried von Strassburg and Thomas Malory. The medieval genres in which courtly love conventions can be found include the lyric, the romance and the allegory. Lyric Courtly love was born in the lyric, first appearing with Provençal poets in the 11th century, including itinerant and courtly minstrels such as the French troubadours and trouvères, as well as the writers of lays. Texts about courtly love, including lays, were often set to music by troubadours or minstrels. According to scholar Ardis Butterfield, courtly love is "the air which many genres of troubadour song breathe". Not much is known about how, when, where, and for whom these pieces were performed, but we can infer that the pieces were performed at court by troubadours, trouvères, or the courtiers themselves. 
This can be inferred because people at court were encouraged or expected to be "courtly" and be proficient in many different areas, including music. Several troubadours became extremely wealthy playing the fiddle and singing their songs about courtly love for a courtly audience. It is difficult to know how and when these songs were performed because most of the information on these topics is provided in the music itself. One lay, the "Lay of Lecheor", says that after a lay was composed, "Then the lay was preserved / Until it was known everywhere / For those who were skilled musicians / On viol, harp and rote / Carried it forth from that region…" Scholars have to then decide whether to take this description as truth or fiction. Period examples of performance practice, of which there are few, show a quiet scene with a household servant performing for the king or lord and a few other people, usually unaccompanied. According to scholar Christopher Page, whether or not a piece was accompanied depended on the availability of instruments and people to accompany—in a courtly setting. For troubadours or minstrels, pieces were often accompanied by fiddle, also called a vielle, or a harp. Courtly musicians also played the vielle and the harp, as well as different types of viols and flutes. This French tradition spread later to the German Minnesänger, such as Walther von der Vogelweide and Wolfram von Eschenbach. It also influenced the Sicilian School of Italian vernacular poetry, as well as Petrarch and Dante. Romance The vernacular poetry of the , or courtly romances, included many examples of courtly love. Some of them are set within the cycle of poems celebrating King Arthur's court. This was a literature of leisure, directed to a largely female audience for the first time in European history. Allegory Allegory is common in the romantic literature of the Middle Ages, and it was often used to interpret what was already written. There is a strong connection between religious imagery and human sexual love in medieval writings. The tradition of medieval allegory began in part with the interpretation of the Song of Songs in the Bible. Some medieval writers thought that the book should be taken literally as an erotic text; others believed that the Song of Songs was a metaphor for the relationship between Christ and the church and that the book could not even exist without that as its metaphorical meaning. Still others claimed that the book was written literally about sex but that this meaning must be "superseded by meanings related to Christ, to the church and to the individual Christian soul". Marie de France's lai "Eliduc" toys with the idea that human romantic love is a symbol for God's love when two people love each other so fully and completely that they leave each other for God, separating and moving to different religious environments. Furthermore, the main character's first wife leaves her husband and becomes a nun so that he can marry his new lover. Allegorical treatment of courtly love is also found in the Roman de la Rose by Guillaume de Lorris and Jean de Meun. In it, a man becomes enamored with an individual rose on a rosebush, attempting to pick it and finally succeeding. The rose represents the female body, but the romance also contains lengthy digressive "discussions on free will versus determinism as well as on optics and the influence of heavenly bodies on human behavior". Courtly love in troubadour poetry is associated with the word . comes from the Latin phrase "my lord", . 
The part is alternatively interpreted as coming from or , though the meaning is unchanged regardless. Troubadours beginning with Guilhem de Poitou would address the lady as , flattering her by addressing her as his lord and also serving as an ambiguous code-name. These points of multiple meaning and ambiguity facilitated a "coquetry of class", allowing the male troubadours to use the images of women as a means to gain social status with other men, but simultaneously, Bogin suggests, voiced deeper longings for the audience: "In this way, the sexual expressed the social and the social the sexual; and in the poetry of courtly love the static hierarchy of feudalism was uprooted and transformed to express a world of motion and transformation." Later influence Through such routes as Capellanus's record of the Courts of Love and the later works of Petrarchism (as well as the continuing influence of Ovid), the themes of courtly love were not confined to the medieval, but appear both in serious and comic forms in early modern Europe. Shakespeare's Romeo and Juliet, for example, shows Romeo attempting to love Rosaline in an almost contrived courtly fashion while Mercutio mocks him for it; and both in his plays and his sonnets the writer can be seen appropriating the conventions of courtly love for his own ends. Paul Gallico's 1939 novel The Adventures of Hiram Holliday depicts a Romantic modern American consciously seeking to model himself on the ideal medieval knight. Among other things, when finding himself in Austria in the aftermath of the Anschluss, he saves a Habsburg princess who is threatened by the Nazis, acts towards her in strict accordance with the maxims of courtly love and finally wins her after fighting a duel with her aristocratic betrothed. Points of controversy Sexuality A point of ongoing controversy about courtly love is to what extent it was sexual. All courtly love was erotic to some degree, and not purely platonic—the troubadours speak of the physical beauty of their ladies and the feelings and desires the ladies arouse in them. However, it is unclear what a poet should do: live a life of perpetual desire channeling his energies to higher ends, or physically consummate. Scholars have seen it both ways. Denis de Rougemont said that the troubadours were influenced by Cathar doctrines which rejected the pleasures of the flesh and that they were metaphorically addressing the spirit and soul of their ladies. Rougemont also said that courtly love subscribed to the code of chivalry, and therefore a knight's loyalty was always to his King before his mistress. Edmund Reiss claimed it was also a spiritual love, but a love that had more in common with Christian love, or caritas. On the other hand, scholars such as Mosché Lazar claim it was adulterous sexual love with physical possession of the lady the desired end. Many scholars identify courtly love as the "pure love" described in 1184 by Capellanus in De amore: On the other hand, continual references to beds and sleeping in the lover's arms in medieval sources such as the troubador and romances such as Chrétien's Lancelot imply at least in some cases a context of actual sexual intercourse. Within the corpus of troubadour poems there is a wide range of attitudes, even across the works of individual poets. Some poems are physically sensual, even bawdily imagining nude embraces, while others are highly spiritual and border on the platonic. 
Real-world practice A continued point of controversy is whether courtly love was purely literary or was actually practiced in real life. There are no historical records that offer evidence of its presence in reality. Historian John F. Benton found no documentary evidence in law codes, court cases, chronicles or other historical documents. However, the existence of the non-fiction genre of courtesy books is perhaps evidence for its practice. For example, according to Christine de Pizan's courtesy book Book of the Three Virtues (c. 1405), which expresses disapproval of courtly love, the convention was being used to justify and cover up illicit love affairs. Philip le Bon, in his Feast of the Pheasant in 1454, relied on parables drawn from courtly love to incite his nobles to swear to participate in an anticipated crusade, while well into the 15th century numerous actual political and social conventions were largely based on the formulas dictated by the "rules" of courtly love. Courts of love A point of controversy was the existence of "courts of love", first mentioned by Andreas Capellanus. These were supposed courts made up of tribunals staffed by 10 to 70 women who would hear a case of love and rule on it based on the rules of love. In the 19th century, historians took the existence of these courts as fact, but later historians such as Benton noted "none of the abundant letters, chronicles, songs and pious dedications" suggest they ever existed outside of the poetic literature. Likewise, feminist historian Emily James Putnam wrote in 1910 that, secrecy being "among the lover's first duties" in the ideology of courtly love, it is "manifestly absurd to suppose that a sentiment which depended on concealment for its existence should be amenable to public inquiry". According to Diane Bornstein, one way to reconcile the differences between the references to courts of love in the literature, and the lack of documentary evidence in real life, is that they were like literary salons or social gatherings, where people read poems, debated questions of love, and played word games of flirtation. Courtly love as a response to religion Theologians of the time emphasized love as more of a spiritual rather than sexual connection. There is a possibility that writings about courtly love were made as a response to the theological ideas about love. Many scholars believe that Andreas Capellanus' work De amore was a satire poking fun at doctors and theologians. In that work, Capellanus is supposedly writing to a young man named Walter, and he spends the first two books telling him how to achieve love and setting forth the rules of love. However, in the third book he tells Walter that the only way to live his life correctly is to shun love in favor of God. This sudden change is what has sparked the interest of many scholars, leading some to regard the first two books as satirizing courtly love and only the third book as expressing Capellanus' actual beliefs. Stages (Adapted from Barbara W. 
Tuchman)
Attraction to the lady, usually via eyes/glance
Worship of the lady from afar
Declaration of passionate devotion
Virtuous rejection by the lady
Renewed wooing with oaths of virtue and eternal fealty
Moans of approaching death from unsatisfied desire (and other physical manifestations of lovesickness)
Heroic deeds of valor which win the lady's heart
Consummation of the secret love
Endless adventures and subterfuges avoiding detection
See also
Cicisbeo
Domnei
Dulcinea
Notes
References
Sources
Scholarship
Medieval sources
Further reading
Heterosexuality
Love
Interpersonal relationships
Cultural history of Europe
Courtly love
[ "Biology" ]
4,541
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
63,793
https://en.wikipedia.org/wiki/Meteoroid
A meteoroid ( ) is a small rocky or metallic body in outer space. Meteoroids are distinguished as objects significantly smaller than asteroids, ranging in size from grains to objects up to a meter wide. Objects smaller than meteoroids are classified as micrometeoroids or space dust. Many are fragments from comets or asteroids, whereas others are collision impact debris ejected from bodies such as the Moon or Mars. The visible passage of a meteoroid, comet, or asteroid entering Earth's atmosphere is called a meteor, and a series of many meteors appearing seconds or minutes apart and appearing to originate from the same fixed point in the sky is called a meteor shower. An estimated 25 million meteoroids, micrometeoroids and other space debris enter Earth's atmosphere each day, which results in an estimated 15,000 tonnes of that material entering the atmosphere each year. A meteorite is the remains of a meteoroid that has survived the ablation of its surface material during its passage through the atmosphere as a meteor and has impacted the ground. Meteoroids In 1961, the International Astronomical Union (IAU) defined a meteoroid as "a solid object moving in interplanetary space, of a size considerably smaller than an asteroid and considerably larger than an atom". In 1995, Beech and Steel, writing in the Quarterly Journal of the Royal Astronomical Society, proposed a new definition where a meteoroid would be between 100 μm and across. In 2010, following the discovery of asteroids below 10 m in size, Rubin and Grossman proposed a revision of the previous definition of meteoroid to objects between and in diameter in order to maintain the distinction. According to Rubin and Grossman, the minimum size of an asteroid is given by what can be discovered from Earth-bound telescopes, so the distinction between meteoroid and asteroid is fuzzy. Some of the smallest asteroids discovered (based on absolute magnitude H) are with H = 33.2 and with H = 32.1 both with an estimated size of . In April 2017, the IAU adopted an official revision of its definition, limiting size to between and one meter in diameter, but allowing for a deviation for any object causing a meteor. Objects smaller than meteoroids are classified as micrometeoroids and interplanetary dust. The Minor Planet Center does not use the term "meteoroid". Composition Almost all meteoroids contain extraterrestrial nickel and iron. They have three main classifications: iron, stone, and stony-iron. Some stone meteoroids contain grain-like inclusions known as chondrules and are called chondrites. Stony meteoroids without these features are called "achondrites", which are typically formed from extraterrestrial igneous activity; they contain little or no extraterrestrial iron. The composition of meteoroids can be inferred as they pass through Earth's atmosphere from their trajectories and the light spectra of the resulting meteor. Their effects on radio signals also give information, especially useful for daytime meteors, which are otherwise very difficult to observe. From these trajectory measurements, meteoroids have been found to have many different orbits, some clustering in streams (see meteor showers) often associated with a parent comet, others apparently sporadic. Debris from meteoroid streams may eventually be scattered into other orbits. 
The light spectra, combined with trajectory and light curve measurements, have yielded various compositions and densities, ranging from fragile snowball-like objects with density about a quarter that of ice, to nickel-iron rich dense rocks. The study of meteorites also gives insights into the composition of non-ephemeral meteoroids. In the Solar System Most meteoroids come from the asteroid belt, having been perturbed by the gravitational influences of planets, but others are particles from comets, giving rise to meteor showers. Some meteoroids are fragments from bodies such as Mars or the Moon, that have been thrown into space by an impact. Meteoroids travel around the Sun in a variety of orbits and at various velocities. The fastest move at about through space in the vicinity of Earth's orbit. This is escape velocity from the Sun, equal to the square root of two times Earth's speed, and is the upper speed limit of objects in the vicinity of Earth, unless they come from interstellar space. Earth travels at about , so when meteoroids meet the atmosphere head-on (which only occurs when meteors are in a retrograde orbit such as the Leonids, which are associated with the retrograde comet 55P/Tempel–Tuttle) the combined speed may reach about (see Specific energy#Astrodynamics). Meteoroids moving through Earth's orbital space average about , but due to Earth's gravity meteors such as the Phoenicids can make atmospheric entry at as slow as about 11 km/s. On January 17, 2013, at 05:21 PST, a one-meter-sized comet from the Oort cloud entered Earth atmosphere over California and Nevada. The object had a retrograde orbit with perihelion at 0.98 ± 0.03 AU. It approached from the direction of the constellation Virgo (which was in the south about 50° above the horizon at the time), and collided head-on with Earth's atmosphere at vaporising more than above ground over a period of several seconds. Collision with Earth's atmosphere When meteoroids intersect with Earth's atmosphere at night, they are likely to become visible as meteors. If meteoroids survive the entry through the atmosphere and reach Earth's surface, they are called meteorites. Meteorites are transformed in structure and chemistry by the heat of entry and force of impact. A noted asteroid, , was observed in space on a collision course with Earth on 6 October 2008 and entered Earth's atmosphere the next day, striking a remote area of northern Sudan. It was the first time that a meteoroid had been observed in space and tracked prior to impacting Earth. NASA has produced a map showing the most notable asteroid collisions with Earth and its atmosphere from 1994 to 2013 from data gathered by U.S. government sensors (see below). Meteorites A meteorite is a portion of a meteoroid or asteroid that survives its passage through the atmosphere and hits the ground without being destroyed. Meteorites are sometimes, but not always, found in association with hypervelocity impact craters; during energetic collisions, the entire impactor may be vaporized, leaving no meteorites. Geologists use the term, "bolide", in a different sense from astronomers to indicate a very large impactor. For example, the USGS uses the term to mean a generic large crater-forming projectile in a manner "to imply that we do not know the precise nature of the impacting body ... whether it is a rocky or metallic asteroid, or an icy comet for example". Meteoroids also hit other bodies in the Solar System. 
On such stony bodies as the Moon or Mars that have little or no atmosphere, they leave enduring craters.
Impact craters
Meteoroid collisions with solid Solar System objects, including the Moon, Mercury, Callisto, Ganymede, and most small moons and asteroids, create impact craters, which are the dominant geographic features of many of those objects. On other planets and moons with active surface geological processes, such as Earth, Venus, Mars, Europa, Io, and Titan, visible impact craters may become eroded, buried, or transformed by tectonics over time. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth. Molten terrestrial material ejected from a meteorite impact crater can cool and solidify into an object known as a tektite. These are often mistaken for meteorites. Terrestrial rock, sometimes with pieces of the original meteorite, created or modified by an impact of a meteorite is called impactite.
Gallery of meteorites
See also
Glossary of meteoritics
Relating to meteoroids
Interplanetary dust
Micrometeoroid
Near-Earth object
Relating to meteorites
Baetyl – Sacred stones possibly originating as meteorites
Impact crater
Impact event
Meteorite
Micrometeorite
Tektite
References
External links
A History of Meteors and Other Atmospheric Phenomena
American Meteor Society
British Astronomical Society meteor page
International Meteor Organization
Live Meteor Scanner
Meteoroids Page at NASA's Solar System Exploration
Meteor shower predictions
Meteor Showers and Viewing Tips
Society for Popular Astronomy – Meteor Section
Earth Impact Effects Program – Estimates crater size and other effects of a specified body colliding with Earth.
Articles containing video clips
Solar System
Meteoroid
[ "Astronomy" ]
1,765
[ "Outer space", "Solar System" ]
63,794
https://en.wikipedia.org/wiki/Impact%20event
An impact event is a collision between astronomical objects causing measurable effects. Impact events have been found to regularly occur in planetary systems, though the most frequent involve asteroids, comets or meteoroids and have minimal effect. When large objects impact terrestrial planets such as the Earth, there can be significant physical and biospheric consequences, as the impacting body is usually traveling at several kilometres a second (a minimum of for an Earth impacting body), though atmospheres mitigate many surface impacts through atmospheric entry. Impact craters and structures are dominant landforms on many of the Solar System's solid objects and present the strongest empirical evidence for their frequency and scale. Impact events appear to have played a significant role in the evolution of the Solar System since its formation. Major impact events have significantly shaped Earth's history, and have been implicated in the formation of the Earth–Moon system. Impact events also appear to have played a significant role in the evolutionary history of life. Impacts may have helped deliver the building blocks for life (the panspermia theory relies on this premise). Impacts have been suggested as the origin of water on Earth. They have also been implicated in several mass extinctions. The prehistoric Chicxulub impact, 66 million years ago, is believed to not only be the cause of the Cretaceous–Paleogene extinction event but acceleration of the evolution of mammals, leading to their dominance and, in turn, setting in place conditions for the eventual rise of humans. Throughout recorded history, hundreds of Earth impacts (and exploding bolides) have been reported, with some occurrences causing deaths, injuries, property damage, or other significant localised consequences. One of the best-known recorded events in modern times was the Tunguska event, which occurred in Siberia, Russia, in 1908. The 2013 Chelyabinsk meteor event is the only known such incident in modern times to result in numerous injuries. Its meteor is the largest recorded object to have encountered the Earth since the Tunguska event. The Comet Shoemaker–Levy 9 impact provided the first direct observation of an extraterrestrial collision of Solar System objects, when the comet broke apart and collided with Jupiter in July 1994. An extrasolar impact was observed in 2013, when a massive terrestrial planet impact was detected around the star ID8 in the star cluster NGC 2547 by NASA's Spitzer Space Telescope and confirmed by ground observations. Impact events have been a plot and background element in science fiction. In April 2018, the B612 Foundation reported: "It's 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent certain when." Also in 2018, physicist Stephen Hawking considered in his final book Brief Answers to the Big Questions that an asteroid collision was the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. On 26 September 2022, the Double Asteroid Redirection Test demonstrated the deflection of an asteroid. 
It was the first such experiment to be carried out by humankind and was considered to be highly successful. The orbital period of the target body was changed by 32 minutes. The criterion for success was a change of more than 73 seconds. Impacts and the Earth Major impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the evolutionary history of life, the origin of water on Earth, and several mass extinctions. Impact structures are the result of impact events on solid objects and, as the dominant landforms on many of the System's solid objects, present the most solid evidence of prehistoric events. Notable impact events include the hypothesized Late Heavy Bombardment, which would have occurred early in the history of the Earth–Moon system, and the confirmed Chicxulub impact 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event. Frequency and risk Small objects frequently collide with Earth. There is an inverse relationship between the size of the object and the frequency of such events. The lunar cratering record shows that the frequency of impacts decreases as approximately the cube of the resulting crater's diameter, which is on average proportional to the diameter of the impactor. Asteroids with a diameter strike Earth every 500,000 years on average. Large collisions – with objects – happen approximately once every twenty million years. The last known impact of an object of or more in diameter was at the Cretaceous–Paleogene extinction event 66 million years ago. The energy released by an impactor depends on diameter, density, velocity, and angle. The diameter of most near-Earth asteroids that have not been studied by radar or infrared can generally only be estimated within about a factor of two, by basing it on the asteroid's brightness. The density is generally assumed, because the diameter and mass, from which density can be calculated, are also generally estimated. Due to Earth's escape velocity, the minimum impact velocity is 11 km/s with asteroid impacts averaging around 17 km/s on the Earth. The most probable impact angle is 45 degrees. Impact conditions such as asteroid size and speed, but also density and impact angle determine the kinetic energy released in an impact event. The more energy is released, the more damage is likely to occur on the ground due to the environmental effects triggered by the impact. Such effects can be shock waves, heat radiation, the formation of craters with associated earthquakes, and tsunamis if bodies of water are hit. Human populations are vulnerable to these effects if they live within the affected zone. Large seiche waves arising from earthquakes and large-scale deposit of debris can also occur within minutes of impact, thousands of kilometres from impact. Airbursts Stony asteroids with a diameter of enter Earth's atmosphere about once a year. Asteroids with a diameter of 7 meters enter the atmosphere about every 5 years with as much kinetic energy as the atomic bomb dropped on Hiroshima (approximately 16 kilotons of TNT), but the air burst is reduced to just 5 kilotons. These ordinarily explode in the upper atmosphere and most or all of the solids are vaporized. However, asteroids with a diameter of , and which strike Earth approximately twice every century, produce more powerful airbursts. 
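As a rough, back-of-the-envelope illustration of how the quantities listed above (diameter, density, impact velocity) determine the energy released, here is a minimal Python sketch for a hypothetical stony impactor in the 20 m class. The parameter values are assumptions chosen for illustration, not figures for any specific event, and the kinetic energy computed at atmospheric entry is larger than the energy that actually reaches the ground after atmospheric breakup.

```python
import math

# Illustrative assumptions for a small stony asteroid (not a specific event).
diameter_m = 20.0        # diameter in metres
density_kg_m3 = 3000.0   # typical stony density, assumed
velocity_m_s = 17_000.0  # roughly the average Earth-impact speed cited above

# Mass of a sphere: density * (4/3) * pi * r^3
radius = diameter_m / 2.0
mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3

# Kinetic energy at entry: 1/2 * m * v^2
energy_j = 0.5 * mass_kg * velocity_m_s**2

# Convert to TNT equivalent (1 kiloton of TNT is about 4.184e12 joules).
kilotons = energy_j / 4.184e12

print(f"mass            ≈ {mass_kg:,.0f} kg")
print(f"kinetic energy  ≈ {energy_j:.3e} J")
print(f"TNT equivalent  ≈ {kilotons:,.0f} kilotons")
```

With these assumed inputs the sketch gives a few hundred kilotons of TNT equivalent, which is in the same broad range as the airburst estimate for the 20 m object discussed next.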
The 2013 Chelyabinsk meteor was estimated to be about 20 m in diameter with an airburst of around 500 kilotons, an explosion 30 times the Hiroshima bomb impact. Much larger objects may impact the solid earth and create a crater. Objects with a diameter less than are called meteoroids and seldom make it to the ground to become meteorites. An estimated 500 meteorites reach the surface each year, but only 5 or 6 of these typically create a weather radar signature with a strewn field large enough to be recovered and be made known to scientists. The late Eugene Shoemaker of the U.S. Geological Survey estimated the rate of Earth impacts, concluding that an event about the size of the nuclear weapon that destroyed Hiroshima occurs about once a year. Such events would seem to be spectacularly obvious, but they generally go unnoticed for a number of reasons: the majority of the Earth's surface is covered by water; a good portion of the land surface is uninhabited; and the explosions generally occur at relatively high altitude, resulting in a huge flash and thunderclap but no real damage. Although no human is known to have been killed directly by an impact, over 1000 people were injured by the Chelyabinsk meteor airburst event over Russia in 2013. In 2005 it was estimated that the chance of a single person born today dying due to an impact is around 1 in 200,000. The two to four-meter-sized asteroids , , 2018 LA, 2019 MO, 2022 EB5, and the suspected artificial satellite WT1190F are the only known objects to be detected before impacting the Earth. Geological significance Impacts have had, during the history of the Earth, a significant geological and climatic influence. The Moon's existence is widely attributed to a huge impact early in Earth's history. Impact events earlier in the history of Earth have been credited with creative as well as destructive events; it has been proposed that impacting comets delivered the Earth's water, and some have suggested that the origins of life may have been influenced by impacting objects by bringing organic chemicals or lifeforms to the Earth's surface, a theory known as exogenesis. These modified views of Earth's history did not emerge until relatively recently, chiefly due to a lack of direct observations and the difficulty in recognizing the signs of an Earth impact because of erosion and weathering. Large-scale terrestrial impacts of the sort that produced the Barringer Crater, locally known as Meteor Crater, east of Flagstaff, Arizona, are rare. Instead, it was widely thought that cratering was the result of volcanism: the Barringer Crater, for example, was ascribed to a prehistoric volcanic explosion (not an unreasonable hypothesis, given that the volcanic San Francisco Peaks stand only to the west). Similarly, the craters on the surface of the Moon were ascribed to volcanism. It was not until 1903–1905 that the Barringer Crater was correctly identified as an impact crater, and it was not until as recently as 1963 that research by Eugene Merle Shoemaker conclusively proved this hypothesis. The findings of late 20th-century space exploration and the work of scientists such as Shoemaker demonstrated that impact cratering was by far the most widespread geological process at work on the Solar System's solid bodies. Every surveyed solid body in the Solar System was found to be cratered, and there was no reason to believe that the Earth had somehow escaped bombardment from space. 
In the last few decades of the 20th century, a large number of highly modified impact craters began to be identified. The first direct observation of a major impact event occurred in 1994: the collision of the comet Shoemaker-Levy 9 with Jupiter. Based on crater formation rates determined from the Earth's closest celestial partner, the Moon, astrogeologists have determined that during the last 600 million years, the Earth has been struck by 60 objects of a diameter of or more. The smallest of these impactors would leave a crater almost across. Only three confirmed craters from that time period with that size or greater have been found: Chicxulub, Popigai, and Manicouagan, and all three have been suspected of being linked to extinction events though only Chicxulub, the largest of the three, has been consistently considered. The impact that caused Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have occurred on the surface of the Earth. Besides the direct effect of asteroid impacts on a planet's surface topography, global climate and life, recent studies have shown that several consecutive impacts might have an effect on the dynamo mechanism at a planet's core responsible for maintaining the magnetic field of the planet, and may have contributed to Mars' lack of current magnetic field. An impact event may cause a mantle plume (volcanism) at the antipodal point of the impact. The Chicxulub impact may have increased volcanism at mid-ocean ridges and has been proposed to have triggered flood basalt volcanism at the Deccan Traps. While numerous impact craters have been confirmed on land or in the shallow seas over continental shelves, no impact craters in the deep ocean have been widely accepted by the scientific community. Impacts of projectiles as large as one km in diameter are generally thought to explode before reaching the sea floor, but it is unknown what would happen if a much larger impactor struck the deep ocean. The lack of a crater, however, does not mean that an ocean impact would not have dangerous implications for humanity. Some scholars have argued that an impact event in an ocean or sea may create a megatsunami, which can cause destruction both at sea and on land along the coast, but this is disputed. The Eltanin impact into the Pacific Ocean 2.5 Mya is thought to involve an object about across but remains craterless. Biospheric effects The effect of impact events on the biosphere has been the subject of scientific debate. Several theories of impact-related mass extinction have been developed. In the past 500 million years there have been five generally accepted major mass extinctions that on average extinguished half of all species. One of the largest mass extinctions to have affected life on Earth was the Permian-Triassic, which ended the Permian period 250 million years ago and killed off 90 percent of all species; life on Earth took 30 million years to recover. The cause of the Permian-Triassic extinction is still a matter of debate; the age and origin of proposed impact craters, i.e. the Bedout High structure, hypothesized to be associated with it are still controversial. The last such mass extinction led to the demise of the non-avian dinosaurs and coincided with a large meteorite impact; this is the Cretaceous–Paleogene extinction event (also known as the K–T or K–Pg extinction event), which occurred 66 million years ago. There is no definitive evidence of impacts leading to the three other major mass extinctions. 
In 1980, physicist Luis Alvarez; his son, geologist Walter Alvarez; and nuclear chemists Frank Asaro and Helen V. Michel from the University of California, Berkeley discovered unusually high concentrations of iridium in a specific layer of rock strata in the Earth's crust. Iridium is an element that is rare on Earth but relatively abundant in many meteorites. From the amount and distribution of iridium present in the 65-million-year-old "iridium layer", the Alvarez team later estimated that an asteroid of must have collided with Earth. This iridium layer at the Cretaceous–Paleogene boundary has been found worldwide at 100 different sites. Multidirectionally shocked quartz (coesite), which is normally associated with large impact events or atomic bomb explosions, has also been found in the same layer at more than 30 sites. Soot and ash at levels tens of thousands of times normal levels were found with the above. Anomalies in chromium isotopic ratios found within the K-T boundary layer strongly support the impact theory. Chromium isotopic ratios are homogeneous within the Earth, and therefore these isotopic anomalies exclude a volcanic origin, which has also been proposed as a cause for the iridium enrichment. Further, the chromium isotopic ratios measured in the K-T boundary are similar to the chromium isotopic ratios found in carbonaceous chondrites. Thus a probable candidate for the impactor is a carbonaceous asteroid, but a comet is also possible because comets are assumed to consist of material similar to carbonaceous chondrites. Probably the most convincing evidence for a worldwide catastrophe was the discovery of the crater which has since been named Chicxulub Crater. This crater is centered on the Yucatán Peninsula of Mexico and was discovered by Tony Camargo and Glen Penfield while working as geophysicists for the Mexican oil company PEMEX. What they reported as a circular feature later turned out to be a crater estimated to be in diameter. This convinced the vast majority of scientists that this extinction resulted from a point event that is most probably an extraterrestrial impact and not from increased volcanism and climate change (which would spread its main effect over a much longer time period). Although there is now general agreement that there was a huge impact at the end of the Cretaceous that led to the iridium enrichment of the K-T boundary layer, remnants have been found of other, smaller impacts, some nearing half the size of the Chicxulub crater, which did not result in any mass extinctions, and there is no clear linkage between an impact and any other incident of mass extinction. Paleontologists David M. Raup and Jack Sepkoski have proposed that an excess of extinction events occurs roughly every 26 million years (though many are relatively minor). This led physicist Richard A. Muller to suggest that these extinctions could be due to a hypothetical companion star to the Sun called Nemesis periodically disrupting the orbits of comets in the Oort cloud, leading to a large increase in the number of comets reaching the inner Solar System where they might hit Earth. Physicist Adrian Melott and paleontologist Richard Bambach have more recently verified the Raup and Sepkoski finding, but argue that it is not consistent with the characteristics expected of a Nemesis-style periodicity. Sociological and cultural effects An impact event is commonly seen as a scenario that would bring about the end of civilization.
In 2000, Discover magazine published a list of 20 possible sudden doomsday scenarios with an impact event listed as the most likely to occur. A joint Pew Research Center/Smithsonian survey from April 21 to 26, 2010 found that 31 percent of Americans believed that an asteroid would collide with Earth by 2050. A majority (61 percent) disagreed. Earth impacts In the early history of the Earth (about four billion years ago), bolide impacts were almost certainly common since the Solar System contained far more discrete bodies than at present. Such impacts could have included strikes by asteroids hundreds of kilometers in diameter, with explosions so powerful that they vaporized all the Earth's oceans. It was not until this heavy bombardment slackened that life appears to have begun to evolve on Earth. Precambrian The leading theory of the Moon's origin is the giant impact theory, which postulates that Earth was once hit by a planetoid the size of Mars; such a theory is able to explain the size and composition of the Moon, something not done by other theories of lunar formation. According to the theory of the Late Heavy Bombardment, there should have been 22,000 or more impact craters with diameters >20 km (12 mi), about 40 impact basins with diameters about 1,000 km (620 mi), and several impact basins with diameters about 5,000 km (3,100 mi). However, hundreds of millions of years of deformation of the Earth's crust pose significant challenges to conclusively identifying impacts from this period. Only two pieces of pristine lithosphere are believed to remain from this era: Kaapvaal Craton (in contemporary South Africa) and Pilbara Craton (in contemporary Western Australia); searching within these may potentially reveal evidence in the form of physical craters. Other methods may be used to identify impacts from this period, for example, indirect gravitational or magnetic analysis of the mantle, but may prove inconclusive. In 2021, evidence for a probable impact 3.46 billion years ago at the Pilbara Craton was found in the form of a crater created by the impact of an asteroid (named "The Apex Asteroid") into the sea at a depth of (near the site of Marble Bar, Western Australia). The event caused global tsunamis. It is also coincident with some of the earliest evidence of life on Earth, fossilized stromatolites. Evidence for at least 4 impact events has been found in spherule layers (dubbed S1 through S8) from the Barberton Greenstone Belt in South Africa, spanning around 3.5-3.2 billion years ago. The sites of the impacts are thought to have been distant from the location of the belt. The impactors that generated these events are thought to have been much larger than those that created the largest known still existing craters/impact structures on Earth, with the impactors having estimated diameters of ~, with the craters generated by these impacts having an estimated diameter of . The largest impacts like those represented by the S2 layer are likely to have had far-reaching effects, such as the boiling of the surface layer of the oceans. The Maniitsoq structure, dated to around 3 billion years old (3 Ga), was once thought to be the result of an impact; however, follow-up studies have not confirmed its nature as an impact structure. The Maniitsoq structure is not recognised as an impact structure by the Earth Impact Database.
In 2020, scientists discovered the world's oldest confirmed impact crater, the Yarrabubba crater, caused by an impact that occurred in Yilgarn Craton (what is now Western Australia), dated at more than 2.2 billion years ago with the impactor estimated to be around wide. It is believed that, at this time, the Earth was mostly or completely frozen, commonly called the Huronian glaciation. The Vredefort impact event, which occurred around 2 billion years ago in Kaapvaal Craton (what is now South Africa), caused the largest verified crater, a multi-ringed structure across, forming from an impactor approximately in diameter. The Sudbury impact event occurred on the Nuna supercontinent (now Canada) from a bolide approximately in diameter approximately 1.849 billion years ago Debris from the event would have been scattered across the globe. Paleozoic and Mesozoic Two asteroids are now believed to have struck Australia between 360 and 300 million years ago at the Western Warburton and East Warburton Basins, creating a . According to evidence found in 2015, it is the largest ever recorded. A third, possible impact was also identified in 2015 to the north, on the upper Diamantina River, also believed to have been caused by an asteroid 10 km across about 300 million years ago, but further studies are needed to establish that this crustal anomaly was indeed the result of an impact event. The prehistoric Chicxulub impact, 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event, was caused by an asteroid estimated to be about wide. Paleogene Analysis of the Hiawatha Glacier reveals the presence of a 31 km wide impact crater dated at 58 million years of age, less than 10 million years after the Cretaceous–Paleogene extinction event, scientists believe that the impactor was a metallic asteroid with a diameter in the order of 1.5 kilometres (0.9 mi). The impact would have had global effects. Pleistocene Artifacts recovered with tektites from the 803,000-year-old Australasian strewnfield event in Asia link a Homo erectus population to a significant meteorite impact and its aftermath. Significant examples of Pleistocene impacts include the Lonar crater lake in India, approximately 52,000 years old (though a study published in 2010 gives a much greater age), which now has a flourishing semi-tropical jungle around it. Holocene The Rio Cuarto craters in Argentina were produced approximately 10,000 years ago, at the beginning of the Holocene. If proved to be impact craters, they would be the first impact of the Holocene. The Campo del Cielo ("Field of Heaven") refers to an area bordering Argentina's Chaco Province where a group of iron meteorites were found, estimated as dating to 4,000–5,000 years ago. It first came to attention of Spanish authorities in 1576; in 2015, police arrested four alleged smugglers trying to steal more than a ton of protected meteorites. The Henbury craters in Australia (~5,000 years old) and Kaali craters in Estonia (~2,700 years old) were apparently produced by objects that broke up before impact. Whitecourt crater in Alberta, Canada is estimated to be between 1,080 and 1,130 years old. The crater is approximately 36 m (118 ft) in diameter and 9 m (30 ft) deep, is heavily forested and was discovered in 2007 when a metal detector revealed fragments of meteoric iron scattered around the area. 
A Chinese record states that 10,000 people were killed in the 1490 Qingyang event with the deaths caused by a hail of "falling stones"; some astronomers hypothesize that this may describe an actual meteorite fall, although they find the number of deaths implausible. Kamil Crater, discovered from Google Earth image review in Egypt, in diameter and deep, is thought to have been formed less than 3,500 years ago in a then-unpopulated region of western Egypt. It was found February 19, 2009 by V. de Michelle on a Google Earth image of the East Uweinat Desert, Egypt. 20th-century impacts One of the best-known recorded impacts in modern times was the Tunguska event, which occurred in Siberia, Russia, in 1908. This incident involved an explosion that was probably caused by the airburst of an asteroid or comet above the Earth's surface, felling an estimated 80 million trees over . In February 1947, another large bolide impacted the Earth in the Sikhote-Alin Mountains, Primorye, Soviet Union. It was during daytime hours and was witnessed by many people, which allowed V. G. Fesenkov, then chairman of the meteorite committee of the USSR Academy of Science, to estimate the meteoroid's orbit before it encountered the Earth. Sikhote-Alin is a massive fall with the overall size of the meteoroid estimated at . A more recent estimate by Tsvetkov (and others) puts the mass at around . It was an iron meteorite belonging to the chemical group IIAB and with a coarse octahedrite structure. More than 70 tonnes (metric tons) of material survived the collision. A case of a human injured by a space rock occurred on November 30, 1954, in Sylacauga, Alabama. There a stone chondrite crashed through a roof and hit Ann Hodges in her living room after it bounced off her radio. She was badly bruised by the fragments. Several persons have since claimed to have been struck by "meteorites" but no verifiable meteorites have resulted. A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first was the Příbram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite. Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Meteorite Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern U.S. This program also observed a meteorite fall, the "Lost City" chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, "Innisfree", in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002. On August 10, 1972, a meteor which became known as the 1972 Great Daylight Fireball was witnessed by many people as it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It was filmed by a tourist at the Grand Teton National Park in Wyoming with an 8-millimeter color movie camera. 
In size range the object was roughly between a car and a house, and while it could have ended its life in a Hiroshima-sized blast, there was never any explosion. Analysis of the trajectory indicated that it never came much lower than off the ground, and the conclusion was that it had grazed Earth's atmosphere for about 100 seconds, then skipped back out of the atmosphere to return to its orbit around the Sun. Many impact events occur without being observed by anyone on the ground. Between 1975 and 1992, American missile early warning satellites picked up 136 major explosions in the upper atmosphere. In the November 21, 2002, edition of the journal Nature, Peter Brown of the University of Western Ontario reported on his study of U.S. early warning satellite records for the preceding eight years. He identified 300 flashes caused by meteors in that time period and estimated the rate of Tunguska-sized events as once in 400 years. Eugene Shoemaker estimated that an event of such magnitude occurs about once every 300 years, though more recent analyses have suggested he may have overestimated by an order of magnitude. In the dark morning hours of January 18, 2000, a fireball exploded over the city of Whitehorse, Yukon Territory at an altitude of about , lighting up the night like day. The meteor that produced the fireball was estimated to be about in diameter, with a weight of 180 tonnes. This blast was also featured on the Science Channel series Killer Asteroids, with several witness reports from residents in Atlin, British Columbia. 21st-century impacts On 7 June 2006, a meteor was observed striking a location in the Reisadalen valley in Nordreisa Municipality in Troms County, Norway. Although initial witness reports stated that the resultant fireball was equivalent to the Hiroshima nuclear explosion, scientific analysis places the force of the blast at anywhere from 100 to 500 tonnes TNT equivalent, around three percent of Hiroshima's yield. On 15 September 2007, a chondritic meteor crashed near the village of Carancas in southeastern Peru near Lake Titicaca, leaving a water-filled hole and spewing gases across the surrounding area. Many residents became ill, apparently from the noxious gases shortly after the impact. On 7 October 2008, an approximately 4 meter asteroid labeled was tracked for 20 hours as it approached Earth and as it fell through the atmosphere and impacted in Sudan. This was the first time an object was detected before it reached the atmosphere and hundreds of pieces of the meteorite were recovered from the Nubian Desert. On 15 February 2013, an asteroid entered Earth's atmosphere over Russia as a fireball and exploded above the city of Chelyabinsk during its passage through the Ural Mountains region at 09:13 YEKT (03:13 UTC). The object's air burst occurred at an altitude between above the ground, and about 1,500 people were injured, mainly by broken window glass shattered by the shock wave. Two were reported in serious condition; however, there were no fatalities. Initially some 3,000 buildings in six cities across the region were reported damaged due to the explosion's shock wave, a figure which rose to over 7,200 in the following weeks. The Chelyabinsk meteor was estimated to have caused over $30 million in damage. It is the largest recorded object to have encountered the Earth since the 1908 Tunguska event. The meteor is estimated to have an initial diameter of 17–20 metres and a mass of roughly 10,000 tonnes. 
On 16 October 2013, a team from Ural Federal University led by Victor Grokhovsky recovered a large fragment of the meteor from the bottom of Russia's Lake Chebarkul, about 80 km west of the city. On 1 January 2014, a 3-meter (10 foot) asteroid, 2014 AA, was discovered by the Mount Lemmon Survey and observed over the next hour, and was soon found to be on a collision course with Earth. The exact location was uncertain, constrained to a line between Panama, the central Atlantic Ocean, The Gambia, and Ethiopia. Around the time expected (2 January 3:06 UTC), an infrasound burst was detected near the center of the impact range, in the middle of the Atlantic Ocean. This marks the second time a natural object was identified prior to impacting Earth, after 2008 TC3. Nearly two years later, on October 3, WT1190F was detected orbiting Earth on a highly eccentric orbit, taking it from well within the geocentric satellite ring to nearly twice the orbit of the Moon. It was estimated to be perturbed by the Moon onto a collision course with Earth on November 13. With over a month of observations, as well as precovery observations found dating back to 2009, it was found to be far less dense than a natural asteroid should be, suggesting that it was most likely an unidentified artificial satellite. As predicted, it fell over Sri Lanka at 6:18 UTC (11:48 local time). The sky in the region was very overcast, so only an airborne observation team was able to successfully observe it falling above the clouds. It is now thought to be a remnant of the Lunar Prospector mission in 1998, and is the third time any previously unknown object – natural or artificial – was identified prior to impact. On 22 January 2018, an object, A106fgF, was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS) and identified as having a small chance of impacting Earth later that day. As it was very dim, and only identified hours before its approach, no more than the initial 4 observations covering a 39-minute period were made of the object. It is unknown if it impacted Earth or not, but no fireball was detected in either infrared or infrasound, so if it did, it would have been very small, and likely near the eastern end of its potential impact area – in the western Pacific Ocean. On 2 June 2018, the Mount Lemmon Survey detected 2018 LA (ZLAF9B2), a small 2–5 meter asteroid which further observations soon found had an 85% chance of impacting Earth. Soon after the impact, a fireball report from Botswana reached the American Meteor Society. Further observations with ATLAS extended the observation arc from 1 hour to 4 hours and confirmed that the asteroid had indeed impacted Earth in southern Africa, fully closing the loop with the fireball report and making this the third natural object confirmed to impact Earth, and the second on land after 2008 TC3. On 8 March 2019, NASA announced the detection of a large airburst that occurred on 18 December 2018 at 11:48 local time off the eastern coast of the Kamchatka Peninsula. The Kamchatka superbolide is estimated to have had a mass of roughly 1600 tons, and a diameter of 9 to 14 meters depending on its density, making it the third largest asteroid to impact Earth since 1900, after the Chelyabinsk meteor and the Tunguska event. The fireball exploded in an airburst above Earth's surface. 2019 MO, an approximately 4m asteroid, was detected by ATLAS a few hours before it impacted the Caribbean Sea near Puerto Rico in June 2019.
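Probability figures like the 85% quoted above for the 2018 impactor are produced by propagating many orbits consistent with the observational uncertainty and counting the fraction that intersect Earth. The toy Python sketch below compresses that idea into one dimension, drawing miss distances from a Gaussian around a nominal close-approach distance; the nominal miss of 20,000 km and the 15,000 km uncertainty are invented numbers for illustration, and real monitoring systems work with full orbital-element covariances rather than this simplification.

import random

EARTH_RADIUS_KM = 6371.0  # capture radius simplified to Earth's mean radius

def toy_impact_probability(nominal_miss_km, sigma_km, samples=100000):
    # Fraction of sampled trajectories whose miss distance falls inside Earth's radius.
    hits = 0
    for _ in range(samples):
        miss = random.gauss(nominal_miss_km, sigma_km)
        if abs(miss) < EARTH_RADIUS_KM:
            hits += 1
    return hits / samples

# Hypothetical close approach: nominal miss 20,000 km, 1-sigma uncertainty 15,000 km.
print(round(toy_impact_probability(20000, 15000), 3))  # roughly 0.14 for these made-up inputs

As further observations shrink the uncertainty, the sampled distribution narrows and the estimated probability collapses towards either zero or one, which is why predicted impact risks usually fall away as an object is tracked for longer.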
In 2023, a small meteorite is believed to have crashed through the roof of a home in Trenton, New Jersey. The metallic rock was approximately 4 inches by 6 inches and weighed 4 pounds. The item was seized by police and tested for radioactivity. The object was later confirmed to be a meteorite by scientists at The College of New Jersey, as well as meteorite expert Jerry Delaney, who previously worked at Rutgers University and the American Museum of Natural History. Asteroid impact prediction In the late 20th and early 21st century scientists put in place measures to detect Near Earth objects, and predict the dates and times of asteroids impacting Earth, along with the locations at which they will impact. The International Astronomical Union Minor Planet Center (MPC) is the global clearing house for information on asteroid orbits. NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. Currently none are predicted (the single highest probability impact currently listed is ~7 m asteroid , which is due to pass earth in September 2095 with only a 5% predicted chance of impacting). Currently prediction is mainly based on cataloging asteroids years before they are due to impact. This works well for larger asteroids (> 1 km across) as they are easily seen from a long distance. Over 95% of them are already known and their orbits have been measured, so any future impacts can be predicted long before they are on their final approach to Earth. Smaller objects are too faint to observe except when they come very close and so most cannot be observed before their final approach. Current mechanisms for detecting asteroids on final approach rely on wide-field ground based telescopes, such as the ATLAS system. However, current telescopes only cover part of the Earth and even more importantly cannot detect asteroids on the day-side of the planet, which is why so few of the smaller asteroids that commonly impact Earth are detected during the few hours that they would be visible. So far only four impact events have been successfully predicted, all from innocuous 2–5 m diameter asteroids and detected a few hours in advance. Current response status In April 2018, the B612 Foundation reported "It's 100 per cent certain we’ll be hit [by a devastating asteroid], but we're not 100 per cent certain when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation to launch a mission to intercept an asteroid. The preferred method is to deflect rather than disrupt an asteroid. Elsewhere in the Solar System Evidence of massive past impact events Impact craters provide evidence of past impacts on other planets in the Solar System, including possible interplanetary terrestrial impacts. Without carbon dating, other points of reference are used to estimate the timing of these impact events. Mars provides some significant evidence of possible interplanetary collisions. 
The North Polar Basin on Mars is speculated by some to be evidence for a planet-sized impact on the surface of Mars between 3.8 and 3.9 billion years ago, while Utopia Planitia is the largest confirmed impact and Hellas Planitia is the largest visible crater in the Solar System. The Moon provides similar evidence of massive impacts, with the South Pole–Aitken basin being the biggest. Mercury's Caloris Basin is another example of a crater formed by a massive impact event. Rheasilvia on Vesta is an example of a crater formed by an impact capable of, based on ratio of impact to size, severely deforming a planetary-mass object. Impact craters on the moons of Saturn such as Engelier and Gerin on Iapetus, Mamaldi on Rhea and Odysseus on Tethys and Herschel on Mimas form significant surface features. Models developed in 2018 to explain the unusual spin of Uranus support a long-held hypothesis that this was caused by an oblique collision with a massive object twice the size of Earth. Observed events Jupiter Jupiter is the most massive planet in the Solar System, and because of its large mass it has a vast sphere of gravitational influence, the region of space where an asteroid capture can take place under favorable conditions. Jupiter is able to capture comets in orbit around the Sun with a certain frequency. In general, these comets travel some revolutions around the planet following unstable orbits as highly elliptical and perturbable by solar gravity. While some of them eventually recover a heliocentric orbit, others crash on the planet or, more rarely, on its satellites. In addition to the mass factor, its relative proximity to the inner solar system allows Jupiter to influence the distribution of minor bodies there. For a long time it was believed that these characteristics led the gas giant to expel from the system or to attract most of the wandering objects in its vicinity and, consequently, to determine a reduction in the number of potentially dangerous objects for the Earth. Subsequent dynamic studies have shown that in reality the situation is more complex: the presence of Jupiter, in fact, tends to reduce the frequency of impact on the Earth of objects coming from the Oort cloud, while it increases it in the case of asteroids and short period comets. For this reason Jupiter is the planet of the Solar System characterized by the highest frequency of impacts, which justifies its reputation as the "sweeper" or "cosmic vacuum cleaner" of the Solar System. 2009 studies suggest an impact frequency of one every 50–350 years, for an object of 0.5–1 km in diameter; impacts with smaller objects would occur more frequently. Another study estimated that comets in diameter impact the planet once in approximately 500 years and those in diameter do so just once in every 6,000 years. In July 1994, Comet Shoemaker–Levy 9 was a comet that broke apart and collided with Jupiter, providing the first direct observation of an extraterrestrial collision of Solar System objects. The event served as a "wake-up call", and astronomers responded by starting programs such as Lincoln Near-Earth Asteroid Research (LINEAR), Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth Object Search (LONEOS) and several others which have drastically increased the rate of asteroid discovery. The 2009 impact event happened on July 19 when a new black spot about the size of Earth was discovered in Jupiter's southern hemisphere by amateur astronomer Anthony Wesley. 
Thermal infrared analysis showed it was warm and spectroscopic methods detected ammonia. JPL scientists confirmed that there was another impact event on Jupiter, probably involving a small undiscovered comet or other icy body. The impactor is estimated to have been about 200–500 meters in diameter. Later minor impacts were observed by amateur astronomers in 2010, 2012, 2016, and 2017; one impact was observed by Juno in 2020. Other impacts In 1998, two comets were observed plunging toward the Sun in close succession. The first of these was on June 1 and the second the next day. A video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the NASA website. Both of these comets evaporated before coming into contact with the surface of the Sun. According to a theory by NASA Jet Propulsion Laboratory scientist Zdeněk Sekanina, the latest impactor to actually make contact with the Sun was the "supercomet" Howard-Koomen-Michels, also known as Solwind 1, on August 30, 1979. (See also sungrazer.) In 2010, between January and May, Hubble's Wide Field Camera 3 took images of an unusual X shape originated in the aftermath of the collision between asteroid P/2010 A2 with a smaller asteroid. Around March 27, 2012, based on evidence, there were signs of an impact on Mars. Images from the Mars Reconnaissance Orbiter provide compelling evidence of the largest impact observed to date on Mars in the form of fresh craters, the largest measuring 48.5 by 43.5 meters. It is estimated to be caused by an impactor 3 to 5 meters long. On March 19, 2013, an impact occurred on the Moon that was visible from Earth, when a boulder-sized 30 cm meteoroid slammed into the lunar surface at 90,000 km/h (25 km/s; 56,000 mph) creating a 20-meter crater. NASA has actively monitored lunar impacts since 2005, tracking hundreds of candidate events. On 18 September 2021 an impact event on Mars formed a cluster of craters, the largest being 130m in diameter. On 24 December 2021 an impact created a 150m-wide crater. Debris was ejected up to 35 km (19 miles) from the impact site. Extrasolar impacts Collisions between galaxies, or galaxy mergers, have been observed directly by space telescopes such as Hubble and Spitzer. However, collisions in planetary systems including stellar collisions, while long speculated, have only recently begun to be observed directly. In 2013, an impact between minor planets was detected around the star NGC 2547 ID 8 by Spitzer and confirmed by ground observations. Computer modelling suggests that the impact involved large asteroids or protoplanets similar to the events believed to have led to the formation of terrestrial planets like the Earth. See also List of bolides List of impact craters on Earth List of possible impact structures on Earth , 0 to 10 References Further reading External links Earth Impact Database Earth Impact Effects Program Estimates crater size and other effects of a specified body colliding with Earth. Exploring North American Impact Craters Climate forcing Planetary science Cosmic doomsday Astronomical events Megatsunami Natural disasters
Impact event
[ "Physics", "Astronomy" ]
9,143
[ "Physical phenomena", "Weather", "Astronomical events", "Impact events", "Natural disasters", "Planetary science", "Astronomical sub-disciplines" ]
63,799
https://en.wikipedia.org/wiki/Immunoperoxidase
Immunoperoxidase is a type of immunostain used in molecular biology, medical research, and clinical diagnostics. In particular, immunoperoxidase reactions refer to a sub-class of immunohistochemical or immunocytochemical procedures in which the antibodies are visualized via a peroxidase-catalyzed reaction. Immunohistochemistry and immunocytochemistry are methods used to determine in which cells or parts of cells, a particular protein or other macromolecule are located. These stains use antibodies to bind to specific antigens, usually of protein or glycoprotein origin. Since antibodies are normally invisible, special strategies must be employed to detect these bound antibodies. In an immunoperoxidase procedure, an enzyme known as a peroxidase is used to catalyze a chemical reaction to produce a coloured product. Simply, a very thin slice of tissue is fixed onto glass, incubated with antibody or a series of antibodies, the last of which is chemically linked to peroxidase. After developing the stain by adding the chemical substrate, the distribution of the stain can be examined by microscopy. Types of antibodies Originally all antibodies produced for immunostaining were polyclonal, i.e. raised by normal antibody reactions in animals such as horses or rabbits. Now, many are monoclonal, i.e. produced in tissue culture. Monoclonal antibodies that consist of only one type of antibody tend to provide greater antigen specificity, and also tend to be more consistent between batches. Methods for immunoperoxidase staining The first step in immunoperoxidase staining is the binding of the specific (primary) antibody to the cell or tissue sample. The detection of the primary antibody can be then accomplished directly (example 1) or indirectly (examples 2 & 3). Example 1. The primary antibody can be directly tagged with the enzyme peroxidase which is then used to catalyse a chemical reaction to generate a coloured product. Example 2. The primary antibody can be tagged with a small molecule that can be recognised by a peroxidase-conjugated binding molecule with high affinity. The most common example of this is a biotin linked primary antibody that binds to an enzyme-bound streptavidin. This method can be used to amplify the signal. Example 3. An untagged primary antibody is detected using a general secondary antibody that recognises all antibodies originating from same animal species as the primary. The secondary antibody is tagged with peroxidase. Optimal staining depends on a number of factors including the antibody dilution, the staining chemicals, the preparation and/or fixation of the cells/tissue, and length of incubation with antibody/staining reagents. These are often determined by trial and error rather than any sort of systematic approach. Alternatives to peroxidase stains Other catalytic enzymes such as alkaline phosphatase can be used instead of peroxidases for both direct and indirect staining methods. Alternatively, the primary antibody can be detected using fluorescent label (immunofluorescence), or be attached to colloidal gold particles for electron microscopy. Uses of immunoperoxidase staining Immunoperoxidase staining is used in clinical diagnostics and in laboratory research. In clinical diagnostics, immunostaining can be used on tissue biopsies for more detailed histopathological study. In the case of cancer, it can aid in sub-classifying tumours. Immunostaining can also be used to help diagnose skin conditions, glomerulonephritis and to sub classify amyloid deposits. 
Related techniques are also useful in sub-typing lymphocytes which all look quite similar on light microscopy. In laboratory research, antibodies against specific markers of cellular differentiation can be used to label individual cell types. This can enable a better understanding of mechanistic changes to specific cell lineages resulting from a particular experimental intervention. See also Indirect immunoperoxidase assay External links Immunohistochemistry Protocols, Buffers and Troubleshooting Laboratory techniques
Immunoperoxidase
[ "Chemistry" ]
873
[ "nan" ]
63,847
https://en.wikipedia.org/wiki/Formaldehyde
Formaldehyde (systematic name methanal) is an organic compound with the chemical formula CH2O (structure H2C=O). The compound is a pungent, colourless gas that polymerises spontaneously into paraformaldehyde. It is stored as aqueous solutions (formalin), which consist mainly of the hydrate CH2(OH)2. It is the simplest of the aldehydes (R−CHO). It is a precursor to many other materials and chemical compounds; in 2006, global production of formaldehyde was estimated at 12 million tons per year. It is mainly used in the production of industrial resins, e.g., for particle board and coatings. Small amounts also occur naturally. Formaldehyde is classified as a carcinogen and can cause respiratory and skin irritation upon exposure. Forms Formaldehyde is more complicated than many simple carbon compounds in that it adopts several diverse forms. These compounds can often be used interchangeably and can be interconverted. Molecular formaldehyde. A colorless gas with a characteristic pungent, irritating odor. It is stable at about 150 °C, but it polymerizes when condensed to a liquid. 1,3,5-Trioxane, with the formula (CH2O)3. It is a white solid that dissolves without degradation in organic solvents. It is a trimer of molecular formaldehyde. Paraformaldehyde, with the formula HO(CH2O)nH. It is a white solid that is insoluble in most solvents. Methanediol, with the formula CH2(OH)2. This compound also exists in equilibrium with various oligomers (short polymers), depending on the concentration and temperature. A saturated water solution, of about 40% formaldehyde by volume or 37% by mass, is called "100% formalin". A small amount of stabilizer, such as methanol, is usually added to suppress oxidation and polymerization. A typical commercial-grade formalin may contain 10–12% methanol in addition to various metallic impurities. "Formaldehyde" was first used as a generic trademark in 1893 following a previous trade name, "formalin". Structure and bonding Molecular formaldehyde contains a central carbon atom with a double bond to the oxygen atom and a single bond to each hydrogen atom. This structure is summarised by the condensed formula H2C=O. The molecule is planar, Y-shaped and its molecular symmetry belongs to the C2v point group. The precise molecular geometry of gaseous formaldehyde has been determined by gas electron diffraction and microwave spectroscopy. The bond lengths are 1.21 Å for the carbon–oxygen bond and around 1.11 Å for the carbon–hydrogen bond, while the H–C–H bond angle is 117°, close to the 120° angle found in an ideal trigonal planar molecule. Some excited electronic states of formaldehyde are pyramidal rather than planar as in the ground state. Occurrence Processes in the upper atmosphere contribute more than 80% of the total formaldehyde in the environment. Formaldehyde is an intermediate in the oxidation (or combustion) of methane, as well as of other carbon compounds, e.g. in forest fires, automobile exhaust, and tobacco smoke. When produced in the atmosphere by the action of sunlight and oxygen on atmospheric methane and other hydrocarbons, it becomes part of smog. Formaldehyde has also been detected in outer space. Formaldehyde and its adducts are ubiquitous in nature. Food may contain formaldehyde at levels of 1–100 mg/kg. Formaldehyde, formed in the metabolism of the amino acids serine and threonine, is found in the bloodstream of humans and other primates at concentrations of approximately 50 micromolar.
Experiments in which animals are exposed to an atmosphere containing isotopically labeled formaldehyde have demonstrated that even in deliberately exposed animals, the majority of formaldehyde-DNA adducts found in non-respiratory tissues are derived from endogenously produced formaldehyde. Formaldehyde does not accumulate in the environment, because it is broken down within a few hours by sunlight or by bacteria present in soil or water. Humans metabolize formaldehyde quickly, converting it to formic acid. It nonetheless presents significant health concerns, as a contaminant. Interstellar formaldehyde Formaldehyde appears to be a useful probe in astrochemistry due to prominence of the 110←111 and 211←212 K-doublet transitions. It was the first polyatomic organic molecule detected in the interstellar medium. Since its initial detection in 1969, it has been observed in many regions of the galaxy. Because of the widespread interest in interstellar formaldehyde, it has been extensively studied, yielding new extragalactic sources. A proposed mechanism for the formation is the hydrogenation of CO ice: H + CO → HCO HCO + H → CH2O HCN, HNC, H2CO, and dust have also been observed inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON). Synthesis and industrial production Laboratory synthesis Formaldehyde was discovered in 1859 by the Russian chemist Aleksandr Butlerov (1828–1886) when he attempted to synthesize methanediol ("methylene glycol") from iodomethane and silver oxalate. In his paper, Butlerov referred to formaldehyde as "dioxymethylen" (methylene dioxide) because his empirical formula for it was incorrect, as atomic weights were not precisely determined until the Karlsruhe Congress. The compound was identified as an aldehyde by August Wilhelm von Hofmann, who first announced its production by passing methanol vapor in air over hot platinum wire. With modifications, Hofmann's method remains the basis of the present day industrial route. Solution routes to formaldehyde also entail oxidation of methanol or iodomethane. Industry Formaldehyde is produced industrially by the catalytic oxidation of methanol. The most common catalysts are silver metal, iron(III) oxide, iron molybdenum oxides (e.g. iron(III) molybdate) with a molybdenum-enriched surface, or vanadium oxides. In the commonly used formox process, methanol and oxygen react at c. 250–400 °C in presence of iron oxide in combination with molybdenum and/or vanadium to produce formaldehyde according to the chemical equation: 2CH3OH + O2 → 2CH2O + 2H2O The silver-based catalyst usually operates at a higher temperature, about 650 °C. Two chemical reactions on it simultaneously produce formaldehyde: that shown above and the dehydrogenation reaction: CH3OH → CH2O + H2 In principle, formaldehyde could be generated by oxidation of methane, but this route is not industrially viable because the methanol is more easily oxidized than methane. Biochemistry Formaldehyde is produced via several enzyme-catalyzed routes. Living beings, including humans, produce formaldehyde as part of their metabolism. Formaldehyde is key to several bodily functions (e.g. epigenetics), but its amount must also be tightly controlled to avoid self-poisoning. Serine hydroxymethyltransferase can decompose serine into formaldehyde and glycine, according to this reaction: HOCH2CH(NH2)CO2H → CH2O + H2C(NH2)CO2H. 
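The reaction equations quoted in this article, such as the formox oxidation under Industry and the serine decomposition just above, can be sanity-checked by counting atoms on each side. The Python sketch below does this with a minimal formula parser; the molecular formulas written out for serine, glycine, and the other species are supplied here for illustration rather than copied from the text.

import re
from collections import Counter

def atom_counts(formula):
    # Count atoms in a simple formula such as 'C3H7NO3' (parentheses not handled).
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def is_balanced(reactants, products):
    # True if both sides of the reaction contain exactly the same atoms.
    left = sum((atom_counts(f) for f in reactants), Counter())
    right = sum((atom_counts(f) for f in products), Counter())
    return left == right

# Serine -> formaldehyde + glycine
print(is_balanced(["C3H7NO3"], ["CH2O", "C2H5NO2"]))                          # True
# Formox oxidation: 2 CH3OH + O2 -> 2 CH2O + 2 H2O
print(is_balanced(["CH3OH", "CH3OH", "O2"], ["CH2O", "CH2O", "H2O", "H2O"]))  # True

Checking balance in this way only verifies atom conservation; it says nothing about whether a reaction actually proceeds under the stated conditions.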
Methylotrophic microbes convert methanol into formaldehyde and energy via methanol dehydrogenase: CH3OH → CH2O + 2e− + 2H+ Other routes to formaldehyde include oxidative demethylations, semicarbazide-sensitive amine oxidases, dimethylglycine dehydrogenases, lipid peroxidases, P450 oxidases, and N-methyl group demethylases. Formaldehyde is catabolized by alcohol dehydrogenase ADH5 and aldehyde dehydrogenase ALDH2. Organic chemistry Formaldehyde is a building block in the synthesis of many other compounds of specialised and industrial significance. It exhibits most of the chemical properties of other aldehydes but is more reactive. Polymerization and hydration Monomeric CH2O is a gas and is rarely encountered in the laboratory. Aqueous formaldehyde, unlike some other small aldehydes (which need specific conditions to oligomerize through aldol condensation), oligomerizes spontaneously under ordinary conditions. The trimer 1,3,5-trioxane, (CH2O)3, is a typical oligomer. Many cyclic oligomers of other sizes have been isolated. Similarly, formaldehyde hydrates to give the geminal diol methanediol, which condenses further to form hydroxy-terminated oligomers HO(CH2O)nH. The polymer is called paraformaldehyde. The higher the concentration of formaldehyde, the more the equilibrium shifts towards polymerization. Diluting with water, increasing the solution temperature, or adding alcohols (such as methanol or ethanol) lowers that tendency. Gaseous formaldehyde polymerizes at active sites on vessel walls, but the mechanism of the reaction is unknown. Small amounts of hydrogen chloride, boron trifluoride, or stannic chloride present in gaseous formaldehyde provide the catalytic effect and make the polymerization rapid. Cross-linking reactions Formaldehyde forms cross-links by first combining with a protein to form methylol, which loses a water molecule to form a Schiff base. The Schiff base can then react with DNA or protein to create a cross-linked product. This reaction is the basis for the most common process of chemical fixation. Oxidation and reduction Formaldehyde is readily oxidized by atmospheric oxygen into formic acid. For this reason, commercial formaldehyde is typically contaminated with formic acid. Formaldehyde can be hydrogenated into methanol. In the Cannizzaro reaction, formaldehyde and base react to produce formic acid and methanol, a disproportionation reaction. Hydroxymethylation and chloromethylation Formaldehyde reacts with many compounds, resulting in hydroxymethylation: X-H + CH2O → X-CH2OH (X = R2N, RC(O)NR', SH). The resulting hydroxymethyl derivatives typically react further. Thus, amines give hexahydro-1,3,5-triazines: 3RNH2 + 3CH2O → (RNCH2)3 + 3H2O Similarly, when combined with hydrogen sulfide, it forms trithiane: 3CH2O + 3H2S → (CH2S)3 + 3H2O In the presence of acids, it participates in electrophilic aromatic substitution reactions with aromatic compounds resulting in hydroxymethylated derivatives: ArH + CH2O → ArCH2OH When conducted in the presence of hydrogen chloride, the product is the chloromethyl compound, as described in the Blanc chloromethylation. If the arene is electron-rich, as in phenols, elaborate condensations ensue. With 4-substituted phenols one obtains calixarenes. Phenol results in polymers. Other reactions Many amino acids react with formaldehyde. Cysteine converts to thioproline. Uses Industrial applications Formaldehyde is a common precursor to more complex compounds and materials.
In approximate order of decreasing consumption, products generated from formaldehyde include urea formaldehyde resin, melamine resin, phenol formaldehyde resin, polyoxymethylene plastics, 1,4-butanediol, and methylene diphenyl diisocyanate. The textile industry uses formaldehyde-based resins as finishers to make fabrics crease-resistant. When condensed with phenol, urea, or melamine, formaldehyde produces, respectively, hard thermoset phenol formaldehyde resin, urea formaldehyde resin, and melamine resin. These polymers are permanent adhesives used in plywood and carpeting. They are also foamed to make insulation, or cast into moulded products. Production of formaldehyde resins accounts for more than half of formaldehyde consumption. Formaldehyde is also a precursor to polyfunctional alcohols such as pentaerythritol, which is used to make paints and explosives. Other formaldehyde derivatives include methylene diphenyl diisocyanate, an important component in polyurethane paints and foams, and hexamine, which is used in phenol-formaldehyde resins as well as the explosive RDX. Condensation with acetaldehyde affords pentaerythritol, a chemical necessary in synthesizing PETN, a high explosive. Niche uses Disinfectant and biocide An aqueous solution of formaldehyde can be useful as a disinfectant as it kills most bacteria and fungi (including their spores). It is used as an additive in vaccine manufacturing to inactivate toxins and pathogens. Formaldehyde releasers are used as biocides in personal care products such as cosmetics. Although present at levels not normally considered harmful, they are known to cause allergic contact dermatitis in certain sensitised individuals. Aquarists use formaldehyde as a treatment for the parasites Ichthyophthirius multifiliis and Cryptocaryon irritans. Formaldehyde is one of the main disinfectants recommended for destroying anthrax. Formaldehyde is also approved for use in the manufacture of animal feeds in the US. It is an antimicrobial agent used to keep complete animal feeds or feed ingredients Salmonella-negative for up to 21 days. Tissue fixative and embalming agent Formaldehyde preserves or fixes tissue or cells. The process involves cross-linking of primary amino groups. The European Union has banned the use of formaldehyde as a biocide (including embalming) under the Biocidal Products Directive (98/8/EC) due to its carcinogenic properties. Countries with a strong tradition of embalming corpses, such as Ireland and other colder-weather countries, have raised concerns. Despite reports to the contrary, no decision on the inclusion of formaldehyde on Annex I of the Biocidal Products Directive for product-type 22 (embalming and taxidermist fluids) had been made. Formaldehyde-based crosslinking is exploited in ChIP-on-chip or ChIP-sequencing genomics experiments, where DNA-binding proteins are cross-linked to their cognate binding sites on the chromosome and analyzed to determine what genes are regulated by the proteins. Formaldehyde is also used as a denaturing agent in RNA gel electrophoresis, preventing RNA from forming secondary structures. A solution of 4% formaldehyde fixes pathology tissue specimens at about one mm per hour at room temperature. Drug testing Formaldehyde and 18 M (concentrated) sulfuric acid make Marquis reagent, which can identify alkaloids and other compounds.
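As a small worked example of the dilution arithmetic behind the roughly 4% fixative mentioned above: such solutions are commonly prepared by diluting a concentrated formalin stock, and the required stock volume follows from the usual C1·V1 = C2·V2 relation. In the Python sketch below, the 37% stock strength and the one-litre target are assumptions for illustration, and the distinction between mass and volume percentages is ignored for this rough estimate.

def stock_volume_needed_ml(stock_pct, target_pct, target_volume_ml):
    # Volume of stock (mL) such that C_stock * V_stock = C_target * V_target.
    if target_pct > stock_pct:
        raise ValueError("target concentration cannot exceed the stock concentration")
    return target_volume_ml * target_pct / stock_pct

# Assumed example: one litre of 4% formaldehyde from a 37% formalin stock.
print(round(stock_volume_needed_ml(37.0, 4.0, 1000.0)))  # about 108 mL, topped up to 1,000 mL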
Photography In photography, formaldehyde is used in low concentrations for the process C-41 (color negative film) stabilizer in the final wash step, as well as in the process E-6 pre-bleach step, to make it unnecessary in the final wash. Due to improvements in dye coupler chemistry, more modern (2006 or later) E-6 and C-41 films do not need formaldehyde, as their dyes are already stable. Safety In view of its widespread use, toxicity, and volatility, formaldehyde poses a significant danger to human health. In 2011, the US National Toxicology Program described formaldehyde as "known to be a human carcinogen". Chronic inhalation Concerns are associated with chronic (long-term) exposure by inhalation as may happen from thermal or chemical decomposition of formaldehyde-based resins and the production of formaldehyde resulting from the combustion of a variety of organic compounds (for example, exhaust gases). As formaldehyde resins are used in many construction materials, it is one of the more common indoor air pollutants. At concentrations above 0.1 ppm in air, formaldehyde can irritate the eyes and mucous membranes. Formaldehyde inhaled at this concentration may cause headaches, a burning sensation in the throat, and difficulty breathing, and can trigger or aggravate asthma symptoms. The CDC considers formaldehyde as a systemic poison. Formaldehyde poisoning can cause permanent changes in the nervous system's functions. A 1988 Canadian study of houses with urea-formaldehyde foam insulation found that formaldehyde levels as low as 0.046 ppm were positively correlated with eye and nasal irritation. A 2009 review of studies has shown a strong association between exposure to formaldehyde and the development of childhood asthma. A theory was proposed for the carcinogenesis of formaldehyde in 1978. In 1987 the United States Environmental Protection Agency (EPA) classified it as a probable human carcinogen, and after more studies the WHO International Agency for Research on Cancer (IARC) in 1995 also classified it as a probable human carcinogen. Further information and evaluation of all known data led the IARC to reclassify formaldehyde as a known human carcinogen associated with nasal sinus cancer and nasopharyngeal cancer. Studies in 2009 and 2010 have also shown a positive correlation between exposure to formaldehyde and the development of leukemia, particularly myeloid leukemia. Nasopharyngeal and sinonasal cancers are relatively rare, with a combined annual incidence in the United States of < 4,000 cases. About 30,000 cases of myeloid leukemia occur in the United States each year. Some evidence suggests that workplace exposure to formaldehyde contributes to sinonasal cancers. Professionals exposed to formaldehyde in their occupation, such as funeral industry workers and embalmers, showed an increased risk of leukemia and brain cancer compared with the general population. Other factors are important in determining individual risk for the development of leukemia or nasopharyngeal cancer. In yeast, formaldehyde is found to perturb pathways for DNA repair and cell cycle. In the residential environment, formaldehyde exposure comes from a number of routes; formaldehyde can be emitted by treated wood products, such as plywood or particle board, but it is produced by paints, varnishes, floor finishes, and cigarette smoking as well. In July 2016, the U.S. EPA released a prepublication version of its final rule on Formaldehyde Emission Standards for Composite Wood Products. 
These new rules impact manufacturers, importers, distributors, and retailers of products containing composite wood, including fiberboard, particleboard, and various laminated products, who must comply with more stringent record-keeping and labeling requirements. The U.S. EPA allows no more than 0.016 ppm formaldehyde in the air in new buildings constructed for that agency. A U.S. EPA study found a new home measured 0.076 ppm when brand new and 0.045 ppm after 30 days. The Federal Emergency Management Agency (FEMA) has also announced limits on the formaldehyde levels in trailers purchased by that agency. The EPA recommends the use of "exterior-grade" pressed-wood products with phenol instead of urea resin to limit formaldehyde exposure, since pressed-wood products containing formaldehyde resins are often a significant source of formaldehyde in homes. The eyes are most sensitive to formaldehyde exposure: The lowest level at which many people can begin to smell formaldehyde ranges between 0.05 and 1 ppm. The maximum concentration value at the workplace is 0.3 ppm. In controlled chamber studies, individuals begin to sense eye irritation at about 0.5 ppm; 5 to 20 percent report eye irritation at 0.5 to 1 ppm; and greater certainty for sensory irritation occurred at 1 ppm and above. While some agencies have used a level as low as 0.1 ppm as a threshold for irritation, the expert panel found that a level of 0.3 ppm would protect against nearly all irritation. In fact, the expert panel found that a level of 1.0 ppm would avoid eye irritation—the most sensitive endpoint—in 75–95% of all people exposed. Formaldehyde levels in building environments are affected by a number of factors. These include the potency of formaldehyde-emitting products present, the ratio of the surface area of emitting materials to volume of space, environmental factors, product age, interactions with other materials, and ventilation conditions. Formaldehyde emits from a variety of construction materials, furnishings, and consumer products. The three products that emit the highest concentrations are medium density fiberboard, hardwood plywood, and particle board. Environmental factors such as temperature and relative humidity can elevate levels because formaldehyde has a high vapor pressure. Formaldehyde levels from building materials are the highest when a building first opens because materials would have less time to off-gas. Formaldehyde levels decrease over time as the sources suppress. In operating rooms, formaldehyde is produced as a byproduct of electrosurgery and is present in surgical smoke, exposing surgeons and healthcare workers to potentially unsafe concentrations. Formaldehyde levels in air can be sampled and tested in several ways, including impinger, treated sorbent, and passive monitors. The National Institute for Occupational Safety and Health (NIOSH) has measurement methods numbered 2016, 2541, 3500, and 3800. In June 2011, the twelfth edition of the National Toxicology Program (NTP) Report on Carcinogens (RoC) changed the listing status of formaldehyde from "reasonably anticipated to be a human carcinogen" to "known to be a human carcinogen." Concurrently, a National Academy of Sciences (NAS) committee was convened and issued an independent review of the draft U.S. EPA IRIS assessment of formaldehyde, providing a comprehensive health effects assessment and quantitative estimates of human risks of adverse effects. 
Acute irritation and allergic reaction For most people, irritation from formaldehyde is temporary and reversible, although formaldehyde can cause allergies and is part of the standard patch test series. In 2005–06, it was the seventh-most-prevalent allergen in patch tests (9.0%). People with formaldehyde allergy are advised to avoid formaldehyde releasers as well (e.g., Quaternium-15, imidazolidinyl urea, and diazolidinyl urea). People who suffer allergic reactions to formaldehyde tend to display lesions on the skin in the areas that have had direct contact with the substance, such as the neck or thighs (often due to formaldehyde released from permanent press finished clothing) or dermatitis on the face (typically from cosmetics). Formaldehyde has been banned in cosmetics in both Sweden and Japan. Other routes Formaldehyde occurs naturally, and is "an essential intermediate in cellular metabolism in mammals and humans." According to the American Chemistry Council, "Formaldehyde is found in every living system—from plants to animals to humans. It metabolizes quickly in the body, breaks down rapidly, is not persistent and does not accumulate in the body." The twelfth edition of the NTP Report on Carcinogens notes that "food and water contain measurable concentrations of formaldehyde, but the significance of ingestion as a source of formaldehyde exposure for the general population is questionable." Food formaldehyde generally occurs in a bound form, and formaldehyde is unstable in aqueous solution. In humans, ingestion of even a small amount of 37% formaldehyde solution can cause death. Other symptoms associated with ingesting such a solution include gastrointestinal damage (vomiting, abdominal pain) and systemic damage (dizziness). Testing for formaldehyde is performed on blood and/or urine by gas chromatography–mass spectrometry. Other methods include infrared detection, gas detector tubes, etc., of which high-performance liquid chromatography is the most sensitive. Regulation Several web articles claim that formaldehyde has been banned from manufacture or import into the European Union (EU) under REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) legislation. That is a misconception, as formaldehyde is not listed in Annex I of Regulation (EC) No 689/2008 (the export and import of dangerous chemicals regulation), nor on a priority list for risk assessment. However, formaldehyde is banned from use in certain applications (preservatives for liquid-cooling and processing systems, slimicides, metalworking-fluid preservatives, and antifouling products) under the Biocidal Products Directive. In the EU, the maximum allowed concentration of formaldehyde in finished products is 0.2%, and any product that exceeds 0.05% has to include a warning that the product contains formaldehyde. In the United States, Congress passed a bill on July 7, 2010, regarding the use of formaldehyde in hardwood plywood, particle board, and medium density fiberboard. The bill limited the allowable amount of formaldehyde emissions from these wood products to 0.09 ppm, and required companies to meet this standard by January 2013. The final U.S. EPA rule specified maximum emissions of "0.05 ppm formaldehyde for hardwood plywood, 0.09 ppm formaldehyde for particleboard, 0.11 ppm formaldehyde for medium-density fiberboard, and 0.13 ppm formaldehyde for thin medium-density fiberboard." Formaldehyde was declared a toxic substance by the 1999 Canadian Environmental Protection Act. 
The FDA is proposing a ban on hair relaxers with formaldehyde due to cancer concerns. Contaminant in food Scandals have broken in both the 2005 Indonesia food scare and 2007 Vietnam food scare regarding the addition of formaldehyde to foods to extend shelf life. In 2011, after a four-year absence, Indonesian authorities found foods with formaldehyde being sold in markets in a number of regions across the country. In August 2011, at least at two Carrefour supermarkets, the Central Jakarta Livestock and Fishery Sub-Department found cendol containing 10 parts per million of formaldehyde. In 2014, the owner of two noodle factories in Bogor, Indonesia, was arrested for using formaldehyde in noodles. 50 kg of formaldehyde was confiscated. Foods known to be contaminated included noodles, salted fish, and tofu. Chicken and beer were also rumored to be contaminated. In some places, such as China, manufacturers still use formaldehyde illegally as a preservative in foods, which exposes people to formaldehyde ingestion. In the early 1900s, it was frequently added by US milk plants to milk bottles as a method of pasteurization due to the lack of knowledge and concern regarding formaldehyde's toxicity. In 2011 in Nakhon Ratchasima, Thailand, truckloads of rotten chicken were treated with formaldehyde for sale in which "a large network", including 11 slaughterhouses run by a criminal gang, were implicated. In 2012, 1 billion rupiah (almost US$100,000) of fish imported from Pakistan to Batam, Indonesia, were found laced with formaldehyde. Formalin contamination of foods has been reported in Bangladesh, with stores and supermarkets selling fruits, fishes, and vegetables that have been treated with formalin to keep them fresh. However, in 2015, a Formalin Control Bill was passed in the Parliament of Bangladesh with a provision of life-term imprisonment as the maximum punishment as well as a maximum fine of 2,000,000 BDT but not less than 500,000 BDT for importing, producing, or hoarding formalin without a license. Formaldehyde was one of the chemicals used in 19th century industrialised food production that was investigated by Dr. Harvey W. Wiley with his famous 'Poison Squad' as part of the US Department of Agriculture. This led to the 1906 Pure Food and Drug Act, a landmark event in the early history of food regulation in the United States. See also 1,3-Dioxetane DMDM hydantoin Sawdust | Health impacts of sawdust Sulphobes Transition metal complexes of aldehydes and ketones Wood glue Wood preservation References Notes External links (gas) (solution) Formaldehyde from ChemSub Online Prevention guide—Formaldehyde in the Workplace (PDF) from the IRSST Formaldehyde from the National Institute for Occupational Safety and Health "Formaldehyde Added to 'Known Carcinogens' List Despite Lobbying by Chemical Industry"—video report by Democracy Now! Do you own a post-Katrina FEMA trailer? Check your VIN# So you're living in one of FEMA's Katrina trailers... What can you do? Alkanals Endogenous aldehydes Anatomical preservation Hazardous air pollutants IARC Group 1 carcinogens Chemical hazards Organic compounds with 1 carbon atom Aldehydes Indoor air pollution
Formaldehyde
[ "Chemistry" ]
6,180
[ "Chemical hazards", "Formaldehyde" ]
63,860
https://en.wikipedia.org/wiki/Picture%20archiving%20and%20communication%20system
A picture archiving and communication system (PACS) is a medical imaging technology which provides economical storage and convenient access to images from multiple modalities (source machine types). Electronic images and reports are transmitted digitally via PACS; this eliminates the need to manually file, retrieve, or transport film jackets, the folders used to store and protect X-ray film. The universal format for PACS image storage and transfer is DICOM (Digital Imaging and Communications in Medicine). Non-image data, such as scanned documents, may be incorporated using consumer industry standard formats like PDF (Portable Document Format), once encapsulated in DICOM. A PACS consists of four major components: The imaging modalities such as X-ray plain film (PF), computed tomography (CT) and magnetic resonance imaging (MRI), a secured network for the transmission of patient information, workstations for interpreting and reviewing images, and archives for the storage and retrieval of images and reports. Combined with available and emerging web technology, PACS has the ability to deliver timely and efficient access to images, interpretations, and related data. PACS reduces the physical and time barriers associated with traditional film-based image retrieval, distribution, and display. Types of images Most PACS handle images from various medical imaging instruments, including ultrasound (US), magnetic resonance (MR), Nuclear Medicine imaging, positron emission tomography (PET), computed tomography (CT), endoscopy (ES), mammograms (MG), digital radiography (DR), phosphor plate radiography, Visible Light Photography (VL), Histopathology, ophthalmology, etc. Additional types of image formats are always being added. Clinical areas beyond radiology; cardiology, oncology, gastroenterology, and even the laboratory are creating medical images that can be incorporated into PACS. (see DICOM Application areas). Uses PACS has four main uses: Hard copy replacement: PACS replaces hard-copy based means of managing medical images, such as film archives. With the decreasing price of digital storage, PACS provide a growing cost and space advantage over film archives in addition to the instant access to prior images at the same institution. Digital copies are referred to as Soft-copy. Remote access: It expands on the possibilities of conventional systems by providing capabilities of off-site viewing and reporting (distance education, telediagnosis). It enables practitioners in different physical locations to access the same information simultaneously for teleradiology. Electronic image integration platform: PACS provides the electronic platform for radiology images interfacing with other medical automation systems such as Hospital Information System (HIS), Electronic Medical Record (EMR), Practice Management Software, and Radiology Information System (RIS). Radiology Workflow Management: PACS is used by radiology personnel to manage the workflow of patient exams. PACS is offered by virtually all the major medical imaging equipment manufacturers, medical IT companies and many independent software companies. Basic PACS software can be found free on the Internet. Architecture The architecture is the physical implementation of required functionality, or what one sees from the outside. There are different views, depending on the user. A radiologist typically sees a viewing station, a technologist a QA workstation, while a PACS administrator might spend most of their time in the climate-controlled computer room. 
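Since every object that moves through these components is a DICOM file, its header can be inspected programmatically. The following sketch uses the open-source pydicom library to read a few of the attributes a PACS relies on for indexing and routing; the file name is a placeholder and the snippet is purely illustrative, not part of any particular vendor's product.

```python
# Illustrative only: reading DICOM header attributes with the open-source
# pydicom library. The file path is a placeholder.
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")

# Attributes a PACS typically uses to index and route studies
print("Patient ID:        ", ds.PatientID)
print("Modality:          ", ds.Modality)             # e.g. 'CT', 'MR', 'US'
print("Study Instance UID:", ds.StudyInstanceUID)
print("SOP Instance UID:  ", ds.SOPInstanceUID)

# Image dimensions, if pixel data is present
if "PixelData" in ds:
    print("Image size:", ds.Rows, "x", ds.Columns)
```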
The composite view is rather different for the various vendors. Typically a PACS consists of a multitude of devices. The first step in typical PACS systems is the modality. Modalities are typically computed tomography (CT), ultrasound, nuclear medicine, positron emission tomography (PET), and magnetic resonance imaging (MRI). Depending on the facility's workflow most modalities send to a quality assurance (QA) workstation or sometimes called a PACS gateway. The QA workstation is a checkpoint to make sure patient demographics are correct as well as other important attributes of a study. If the study information is correct the images are passed to the archive for storage. The central storage device (archive) stores images and in some cases reports, measurements and other information that resides with the images. The next step in the PACS workflow is the reading workstations. The reading workstation is where the radiologist reviews the patient's study and formulates their diagnosis. Normally tied to the reading workstation is a reporting package that assists the radiologist with dictating the final report. Reporting software is optional and there are various ways in which doctors prefer to dictate their report. Ancillary to the workflow mentioned, there is normally CD/DVD authoring software used to burn patient studies for distribution to patients or referring physicians. The diagram above shows a typical workflow in most imaging centers and hospitals. Note that this section does not cover integration to a Radiology Information System, Hospital Information System and other such front-end system that relates to the PACS workflow. More and more PACS include web-based interfaces to utilize the internet or a wide area network (WAN) as their means of communication, usually via VPN (Virtual Private Network) or SSL (Secure Sockets Layer). The clients side software may use ActiveX, JavaScript and/or a Java Applet. More robust PACS clients are full applications which can utilize the full resources of the computer they are executing on and are unaffected by the frequent unattended Web Browser and Java updates. As the need for distribution of images and reports becomes more widespread there is a push for PACS systems to support DICOM part 18 of the DICOM standard. Web Access to DICOM Objects (WADO) creates the necessary standard to expose images and reports over the web through truly portable medium. Without stepping outside the focus of the PACS architecture, WADO becomes the solution to cross platform capability and can increase the distribution of images and reports to referring physicians and patients. PACS image backup is a critical, but sometimes overlooked, part of the PACS Architecture (see below). Within the United States, HIPAA requires that backup copies of patient images be made in case of image loss from the PACS. There are several methods of backing up the images, but they typically involve automatically sending copies of the images to a separate computer for storage, preferably off-site. Querying (C-FIND) and Image (Instance) Retrieval (C-MOVE and C-GET) The communication with the PACS server is done through DICOM messages that are similar to DICOM image "headers", but with different attributes. A query (C-FIND) is performed as follows: The client establishes the network connection to the PACS server. The client prepares a C-FIND request message which is a list of DICOM attributes. The client fills in the C-FIND request message with the keys that should be matched. E.g. 
to query for a patient ID, the patient ID attribute is filled with the patient's ID. The client creates empty (zero length) attributes for all the attributes it wishes to receive from the server. E.g. if the client wishes to receive an ID that it can use to receive images (see image retrieval) it should include a zero-length SOPInstanceUID (0008,0018) attribute in the C-FIND request messages. The C-FIND request message is sent to the server. The server sends back to the client a list of C-FIND response messages, each of which is also a list of DICOM attributes, populated with values for each match. The client extracts the attributes that are of interest from the response messages objects. Images (and other composite instances like Presentation States and Structured Reports) are then retrieved from a PACS server through either a C-MOVE or C-GET request, using the DICOM network protocol. Retrieval can be performed at the Study, Series or Image (instance) level. The C-MOVE request specifies where the retrieved instances should be sent (using separate C-STORE messages on one or more separate connections) with an identifier known as the destination Application Entity Title (AE Title). For a C-MOVE to work, the server must be configured with mapping of the AE Title to a TCP/IP address and port, and as a consequence the server must know in advance all the AE Titles that it will ever be requested to send images to. A C-GET, on the other hand, performs the C-STORE operations on the same connection as the request, and hence does not require that the "server" know the "client" TCP/IP address and port, and hence also works more easily through firewalls and with network address translation, environments in which the incoming TCP C-STORE connections required for C-MOVE may not get through. The difference between C-MOVE and C-GET is somewhat analogous to the difference between active and passive FTP. C-MOVE is most commonly used within enterprises and facilities, whereas C-GET is more practical between enterprises. In addition to the traditional DICOM network services, particularly for cross-enterprise use, DICOM (and IHE) define other retrieval mechanisms, including WADO, WADO-WS and most recently WADO-RS. Image archival and backup Digital medical images are typically stored locally on a PACS for retrieval. It is important (and required in the United States by the Security Rule's Administrative Safeguards section of HIPAA) that facilities have a means of recovering images in the event of an error or disaster. While each facility is different, the goal in image back-up is to make it automatic and as easy to administer as possible. The hope is that the copies won't be needed; however, disaster recovery and business continuity planning dictates that plans should include maintaining copies of data even when an entire site is temporarily or permanently lost. Ideally, copies of images should be maintained in several locations, including off-site to provide disaster recovery capabilities. In general, PACS data is no different than other business critical data and should be protected with multiple copies at multiple locations. As PACS data can be considered protected health information (PHI), regulations may apply, most notably HIPAA and HIPAA Hi-Tech requirements. Images may be stored both locally and remotely on off-line media such as disk, tape or optical media. 
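Returning to the query/retrieve exchange described above, the sketch below shows what a minimal C-FIND request can look like in practice. It uses the open-source pynetdicom library; the server address, port, AE title, and patient ID are placeholder values, and the code is a simplified illustration of the message flow rather than a production client.

```python
# Minimal C-FIND sketch using the open-source pynetdicom library.
# Host, port, AE title and Patient ID are placeholder values.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title="DEMO_CLIENT")
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

# Build the C-FIND identifier: matching keys are filled in,
# return keys are left zero-length.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"        # key to match
query.StudyInstanceUID = ""      # empty: ask the server to return it
query.StudyDate = ""             # empty: ask the server to return it

assoc = ae.associate("pacs.example.org", 104)   # placeholder PACS server
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
        query, PatientRootQueryRetrieveInformationModelFind
    ):
        # 0xFF00 and 0xFF01 are "pending" statuses, each carrying one match
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID, identifier.StudyDate)
    assoc.release()
```

A subsequent C-MOVE or C-GET retrieval of the matched studies follows the same association pattern, using the study and instance UIDs returned by the query.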
The use of storage systems, using modern data protection technologies has become increasingly common, particularly for larger organizations with greater capacity and performance requirements. Storage systems may be configured and attached to the PACS server in various ways, either as Direct-Attached Storage (DAS), Network-attached storage (NAS), or via a Storage Area Network (SAN). However the storage is attached, enterprise storage systems commonly utilize RAID and other technologies to provide high availability and fault tolerance to protect against failures. In the event that it is necessary to reconstruct a PACS partially or completely, some means of rapidly transferring data back to the PACS is required, preferably while the PACS continues to operate. Modern data storage replication technologies may be applied to PACS information, including the creation of local copies via point-in-time copy for locally protected copies, along with complete copies of data on separate repositories including disk and tape based systems. Remote copies of data should be created, either by physically moving tapes off-site, or copying data to remote storage systems. Whenever HIPAA protected data is moved, it should be encrypted, which includes sending via physical tape or replication technologies over WAN to a secondary location. Other options for creating copies of PACS data include removable media (hard drives, DVDs or other media that can hold many patients' images) that is physically transferred off-site. HIPAA HITECH mandates encryption of stored data in many instances or other security mechanisms to avoid penalties for failure to comply. The back-up infrastructure may also be capable of supporting the migration of images to a new PACS. Due to the high volume of images that need to be archived many rad centers are migrating their systems to a Cloud-based PACS. Integration A full PACS should provide a single point of access for images and their associated data. That is, it should support all digital modalities, in all departments, throughout the organisation. However, until PACS penetration is complete, individual islands of digital imaging not yet connected to a central PACS may exist. These may take the form of a localized, modality-specific network of modalities, workstations and storage (a so-called "mini-PACS"), or may consist of a small cluster of modalities directly connected to reading workstations without long-term storage or management. Such systems are also often not connected to the departmental information system. Historically, Ultrasound, Nuclear Medicine and Cardiology Cath Labs are often departments that adopt such an approach. More recently, Full Field digital mammography (FFDM) has taken a similar approach, largely because of the large image size, highly specialized reading workflow and display requirements, and intervention by regulators. The rapid deployment of FFDM in the US following the DMIST study has led to the integration of Digital Mammography and PACS becoming more commonplace. All PACS, whether they span the entire enterprise or are localized within a department, should also interface with existing hospital information systems: Hospital information system (HIS) and Radiology Information System (RIS). There are several data flowing into PACS as inputs for next procedures and back to HIS as results corresponding inputs: In: Patient Identification and Orders for examination. These data are sent from HIS to RIS via integration interface, in most hospitals, via the HL7 protocol. 
Patient ID and Orders will be sent to Modality (CT, MR, etc.) via the DICOM protocol (Worklist). Images will be created after images scanning and then forwarded to PACS Server. Diagnosis Report is created based on the images retrieved for presenting from PACS Server by physician/radiologist and then saved to RIS System. Out: Diagnosis Report and Images created accordingly. Diagnosis Report is sent back to HIS via HL7 usually and Images are sent back to HIS via DICOM usually if there is a DICOM Viewer integrated with HIS in hospitals (In most of cases, Clinical Physician gets reminder of Diagnosis Report coming and then queries images from PACS Server). Interfacing between multiple systems provides a more consistent and more reliable dataset: Less risk of entering an incorrect patient ID for a study – modalities that support DICOM worklists can retrieve identifying patient information (patient name, patient number, accession number) for upcoming cases and present that to the technologist, preventing data entry errors during acquisition. Once the acquisition is complete, the PACS can compare the embedded image data with a list of scheduled studies from RIS, and can flag a warning if the image data does not match a scheduled study. Data saved in the PACS can be tagged with unique patient identifiers (such as a social security number or NHS number) obtained from HIS. Providing a robust method of merging datasets from multiple hospitals, even where the different centers use different ID systems internally. An interface can also improve workflow patterns: When a study has been reported by a radiologist the PACS can mark it as read. This avoids needless double-reading. The report can be attached to the images and be viewable via a single interface. Improved use of online storage and nearline storage in the image archive. The PACS can obtain lists of appointments and admissions in advance, allowing images to be pre-fetched from off-line storage or near-line storage onto online disk storage. Recognition of the importance of integration has led a number of suppliers to develop fully integrated RIS/PACS. These may offer a number of advanced features: Dictation of reports can be integrated into a single system. Integrated speech-to-text voice recognition software may be used to create and upload a report to the patient's chart within minutes of the patient's scan, or the reporting physician may dictate their findings into a phone system or voice recorder. That recording may be automatically sent to a transcript writer's workstation for typing, but it can also be made available for access by physicians, avoiding typing delays for urgent results, or retained in case of typing error. Provides a single tool for quality control and audit purposes. Rejected images can be tagged, allowing later analysis (as may be required under radiation protection legislation). Workloads and turn-around time can be reported automatically for management purposes. Acceptance testing The PACS installation process is complicated requiring time, resources, planning, and testing. Installation is not complete until the acceptance test is passed. Acceptance testing of a new installation is a vital step to assure user compliance, functionality, and especially clinical safety. Take for example the Therac-25, a radiation medical device involved in accidents in which patients were given massive overdoses of radiation, due to unverified software control. 
The acceptance test determines whether the PACS is ready for clinical use and marks the warranty timeline while serving as a payment milestone. The test process varies in time requirements depending on facility size but contract condition of 30-day time limit is not unusual. It requires detailed planning and development of testing criteria prior to writing the contract. It is a joint process requiring defined test protocols and benchmarks. Testing uncovers deficiencies. A study determined that the most frequently cited deficiencies were the most costly components. Failures ranked from most-to-least common are: Workstation; HIS/RIS/ACS broker interfaces; RIS; Computer Monitors; Web-based image distribution system; Modality interfaces; Archive devices; Maintenance; Training; Network; DICOM; Teleradiology; Security; Film digitizer. History One of the first basic PACS was created in 1972 by Dr Richard J. Steckel. The principles of PACS were first discussed at meetings of radiologists in 1982. Various people are credited with the coinage of the term PACS. Cardiovascular radiologist Dr Andre Duerinckx reported in 1983 that he had first used the term in 1981. Dr Samuel Dwyer, though, credits Dr Judith M. Prewitt for introducing the term. Dr Harold Glass, a medical physicist working in London in the early 1990s secured UK Government funding and managed the project over many years which transformed Hammersmith Hospital in London as the first filmless hospital in the United Kingdom. Dr Glass died a few months after the project came live but is credited with being one of the pioneers of PACS. The first large-scale PACS installation was in 1982 at the University of Kansas, Kansas City. Regulatory concerns In the US PACS are classified as Medical Devices, and hence if for sale are regulated by the USFDA. In general they are subject to Class 2 controls and hence require a 510(k), though individual PACS components may be subject to less stringent general controls. Some specific applications, such as the use for primary mammography interpretation, are additionally regulated within the scope of the Mammography Quality Standards Act. The Society for Imaging Informatics in Medicine (SIIM) is the worldwide professional and trade organization that provides an annual meeting and a peer-reviewed journal to promote research and education about PACS and related digital topics. See also DICOM Electronic Health Record (EHR) Electronic Medical Record (EMR) Enterprise Imaging Medical device Medical image sharing Medical imaging Medical software Radiographer Radiology Radiology Information System Teleradiology Vendor Neutral Archive (VNA) Visible Light Imaging X-ray References External links PACS History Web Site USC IPILab Research Article on Backup Computing in medical imaging Electronic health records
Picture archiving and communication system
[ "Technology" ]
4,059
[ "Electronic health records", "Information technology" ]
63,863
https://en.wikipedia.org/wiki/Verilog
Verilog, standardized as IEEE 1364, is a hardware description language (HDL) used to model electronic systems. It is most commonly used in the design and verification of digital circuits, with the highest level of abstraction being at the register-transfer level. It is also used in the verification of analog circuits and mixed-signal circuits, as well as in the design of genetic circuits. In 2009, the Verilog standard (IEEE 1364-2005) was merged into the SystemVerilog standard, creating IEEE Standard 1800-2009. Since then, Verilog has been officially part of the SystemVerilog language. The current version is IEEE standard 1800-2023. Overview Hardware description languages such as Verilog are similar to software programming languages because they include ways of describing the propagation time and signal strengths (sensitivity). There are two types of assignment operators; a blocking assignment (=), and a non-blocking (<=) assignment. The non-blocking assignment allows designers to describe a state-machine update without needing to declare and use temporary storage variables. Since these concepts are part of Verilog's language semantics, designers could quickly write descriptions of large circuits in a relatively compact and concise form. At the time of Verilog's introduction (1984), Verilog represented a tremendous productivity improvement for circuit designers who were already using graphical schematic capture software and specially written software programs to document and simulate electronic circuits. The designers of Verilog wanted a language with syntax similar to the C programming language, which was already widely used in engineering software development. Like C, Verilog is case-sensitive and has a basic preprocessor (though less sophisticated than that of ANSI C/C++). Its control flow keywords (if/else, for, while, case, etc.) are equivalent, and its operator precedence is compatible with C. Syntactic differences include: required bit-widths for variable declarations, demarcation of procedural blocks (Verilog uses begin/end instead of curly braces {}), and many other minor differences. Verilog requires that variables be given a definite size. In C these sizes are inferred from the 'type' of the variable (for instance an integer type may be 32 bits). A Verilog design consists of a hierarchy of modules. Modules encapsulate design hierarchy, and communicate with other modules through a set of declared input, output, and bidirectional ports. Internally, a module can contain any combination of the following: net/variable declarations (wire, reg, integer, etc.), concurrent and sequential statement blocks, and instances of other modules (sub-hierarchies). Sequential statements are placed inside a begin/end block and executed in sequential order within the block. However, the blocks themselves are executed concurrently, making Verilog a dataflow language. Verilog's concept of 'wire' consists of both signal values (4-state: "1, 0, floating, undefined") and signal strengths (strong, weak, etc.). This system allows abstract modeling of shared signal lines, where multiple sources drive a common net. When a wire has multiple drivers, the wire's (readable) value is resolved by a function of the source drivers and their strengths. A subset of statements in the Verilog language are synthesizable. Verilog modules that conform to a synthesizable coding style, known as RTL (register-transfer level), can be physically realized by synthesis software. 
Synthesis software algorithmically transforms the (abstract) Verilog source into a netlist, a logically equivalent description consisting only of elementary logic primitives (AND, OR, NOT, flip-flops, etc.) that are available in a specific FPGA or VLSI technology. Further manipulations to the netlist ultimately lead to a circuit fabrication blueprint (such as a photo mask set for an ASIC or a bitstream file for an FPGA). History Beginning Verilog was created by Prabhu Goel, Phil Moorby and Chi-Lai Huang between late 1983 and early 1984. Chi-Lai Huang had earlier worked on a hardware description LALSD, a language developed by Professor S.Y.H. Su, for his PhD work. The rights holder for this process, at the time proprietary, was "Automated Integrated Design Systems" (later renamed to Gateway Design Automation in 1985). Gateway Design Automation was purchased by Cadence Design Systems in 1990. Cadence now has full proprietary rights to Gateway's Verilog and the Verilog-XL, the HDL-simulator that would become the de facto standard (of Verilog logic simulators) for the next decade. Originally, Verilog was only intended to describe and allow simulation; the automated synthesis of subsets of the language to physically realizable structures (gates etc.) was developed after the language had achieved widespread usage. Verilog is a portmanteau of the words "verification" and "logic". Verilog-95 With the increasing success of VHDL at the time, Cadence decided to make the language available for open standardization. Cadence transferred Verilog into the public domain under the Open Verilog International (OVI) (now known as Accellera) organization. Verilog was later submitted to IEEE and became IEEE Standard 1364-1995, commonly referred to as Verilog-95. In the same time frame Cadence initiated the creation of Verilog-A to put standards support behind its analog simulator Spectre. Verilog-A was never intended to be a standalone language and is a subset of Verilog-AMS which encompassed Verilog-95. Verilog 2001 Extensions to Verilog-95 were submitted back to IEEE to cover the deficiencies that users had found in the original Verilog standard. These extensions became IEEE Standard 1364-2001 known as Verilog-2001. Verilog-2001 is a significant upgrade from Verilog-95. First, it adds explicit support for (2's complement) signed nets and variables. Previously, code authors had to perform signed operations using awkward bit-level manipulations (for example, the carry-out bit of a simple 8-bit addition required an explicit description of the Boolean algebra to determine its correct value). The same function under Verilog-2001 can be more succinctly described by one of the built-in operators: +, -, /, *, >>>. A generate–endgenerate construct (similar to VHDL's generate–endgenerate) allows Verilog-2001 to control instance and statement instantiation through normal decision operators (case–if–else). Using generate–endgenerate, Verilog-2001 can instantiate an array of instances, with control over the connectivity of the individual instances. File I/O has been improved by several new system tasks. And finally, a few syntax additions were introduced to improve code readability (e.g. always @*, named parameter override, C-style function/task/module header declaration). Verilog-2001 is the version of Verilog supported by the majority of commercial EDA software packages. 
Verilog 2005 Not to be confused with SystemVerilog, Verilog 2005 (IEEE Standard 1364-2005) consists of minor corrections, spec clarifications, and a few new language features (such as the uwire keyword). A separate part of the Verilog standard, Verilog-AMS, attempts to integrate analog and mixed signal modeling with traditional Verilog. SystemVerilog The advent of hardware verification languages such as OpenVera, and Verisity's e language encouraged the development of Superlog by Co-Design Automation Inc (acquired by Synopsys). The foundations of Superlog and Vera were donated to Accellera, which later became the IEEE standard P1800-2005: SystemVerilog. SystemVerilog is a superset of Verilog-2005, with many new features and capabilities to aid design verification and design modeling. As of 2009, the SystemVerilog and Verilog language standards were merged into SystemVerilog 2009 (IEEE Standard 1800-2009). Updates since 2009 The SystemVerilog standard was subsequently updated in 2012, 2017, and most recently in December 2023. Example A simple example of two flip-flops follows: module toplevel(clock,reset); input clock; input reset; reg flop1; reg flop2; always @ (posedge reset or posedge clock) if (reset) begin flop1 <= 0; flop2 <= 1; end else begin flop1 <= flop2; flop2 <= flop1; end endmodule The <= operator in Verilog is another aspect of its being a hardware description language as opposed to a normal procedural language. This is known as a "non-blocking" assignment. Its action does not register until after the always block has executed. This means that the order of the assignments is irrelevant and will produce the same result: flop1 and flop2 will swap values every clock. The other assignment operator = is referred to as a blocking assignment. When = assignment is used, for the purposes of logic, the target variable is updated immediately. In the above example, had the statements used the = blocking operator instead of <=, flop1 and flop2 would not have been swapped. Instead, as in traditional programming, the compiler would understand to simply set flop1 equal to flop2 (and subsequently ignore the redundant logic to set flop2 equal to flop1). An example counter circuit follows: module Div20x (rst, clk, cet, cep, count, tc); // TITLE 'Divide-by-20 Counter with enables' // enable CEP is a clock enable only // enable CET is a clock enable and // enables the TC output // a counter using the Verilog language parameter size = 5; parameter length = 20; input rst; // These inputs/outputs represent input clk; // connections to the module. input cet; input cep; output [size-1:0] count; output tc; reg [size-1:0] count; // Signals assigned // within an always // (or initial)block // must be of type reg wire tc; // Other signals are of type wire // The always statement below is a parallel // execution statement that // executes any time the signals // rst or clk transition from low to high always @ (posedge clk or posedge rst) if (rst) // This causes reset of the cntr count <= {size{1'b0}}; else if (cet && cep) // Enables both true begin if (count == length-1) count <= {size{1'b0}}; else count <= count + 1'b1; end // the value of tc is continuously assigned // the value of the expression assign tc = (cet && (count == length-1)); endmodule An example of delays: ... reg a, b, c, d; wire e; ... always @(b or e) begin a = b & e; b = a | b; #5 c = b; d = #6 c ^ e; end The always clause above illustrates the other type of method of use, i.e. 
it executes whenever any of the entities in the list (the b or e) changes. When one of these changes, a is immediately assigned a new value, and due to the blocking assignment, b is assigned a new value afterward (taking into account the new value of a). After a delay of 5 time units, c is assigned the value of b and the value of c ^ e is tucked away in an invisible store. Then after 6 more time units, d is assigned the value that was tucked away. Signals that are driven from within a process (an initial or always block) must be of type reg. Signals that are driven from outside a process must be of type wire. The keyword reg does not necessarily imply a hardware register. Definition of constants The definition of constants in Verilog supports the addition of a width parameter. The basic syntax is: <Width in bits>'<base letter><number> Examples: 12'h123 – Hexadecimal 123 (using 12 bits) 20'd44 – Decimal 44 (using 20 bits – 0 extension is automatic) 4'b1010 – Binary 1010 (using 4 bits) 6'o77 – Octal 77 (using 6 bits) Synthesizable constructs There are several statements in Verilog that have no analog in real hardware, such as the $display command. However, the examples presented here are the classic (and limited) subset of the language that has a direct mapping to real gates. // Mux examples — Three ways to do the same thing. // The first example uses continuous assignment wire out; assign out = sel ? a : b; // the second example uses a procedure // to accomplish the same thing. reg out; always @(a or b or sel) begin case(sel) 1'b0: out = b; 1'b1: out = a; endcase end // Finally — you can use if/else in a // procedural structure. reg out; always @(a or b or sel) if (sel) out = a; else out = b; The next interesting structure is a transparent latch; it will pass the input to the output when the gate signal is set for "pass-through", and captures the input and stores it upon transition of the gate signal to "hold". The output will remain stable regardless of the input signal while the gate is set to "hold". In the example below the "pass-through" level of the gate would be when the value of the if clause is true, i.e. gate = 1. This is read "if gate is true, the din is fed to latch_out continuously." Once the if clause is false, the last value at latch_out will remain and is independent of the value of din. // Transparent latch example reg latch_out; always @(gate or din) if(gate) latch_out = din; // Pass through state // Note that the else isn't required here. The variable // latch_out will follow the value of din while gate is // high. When gate goes low, latch_out will remain constant. The flip-flop is the next significant template; in Verilog, the D-flop is the simplest, and it can be modeled as: reg q; always @(posedge clk) q <= d; The significant thing to notice in the example is the use of the non-blocking assignment. A basic rule of thumb is to use <= when there is a posedge or negedge statement within the always clause. A variant of the D-flop is one with an asynchronous reset; there is a convention that the reset state will be the first if clause within the statement. reg q; always @(posedge clk or posedge reset) if(reset) q <= 0; else q <= d; The next variant is including both an asynchronous reset and asynchronous set condition; again the convention comes into play, i.e. the reset term is followed by the set term. 
reg q; always @(posedge clk or posedge reset or posedge set) if(reset) q <= 0; else if(set) q <= 1; else q <= d; Note: If this model is used to model a Set/Reset flip flop then simulation errors can result. Consider the following test sequence of events. 1) reset goes high 2) clk goes high 3) set goes high 4) clk goes high again 5) reset goes low followed by 6) set going low. Assume no setup and hold violations. In this example the always @ statement would first execute when the rising edge of reset occurs which would place q to a value of 0. The next time the always block executes would be the rising edge of clk which again would keep q at a value of 0. The always block then executes when set goes high which because reset is high forces q to remain at 0. This condition may or may not be correct depending on the actual flip flop. However, this is not the main problem with this model. Notice that when reset goes low, that set is still high. In a real flip flop this will cause the output to go to a 1. However, in this model it will not occur because the always block is triggered by rising edges of set and reset – not levels. A different approach may be necessary for set/reset flip flops. The final basic variant is one that implements a D-flop with a mux feeding its input. The mux has a d-input and feedback from the flop itself. This allows a gated load function. // Basic structure with an EXPLICIT feedback path always @(posedge clk) if(gate) q <= d; else q <= q; // explicit feedback path // The more common structure ASSUMES the feedback is present // This is a safe assumption since this is how the // hardware compiler will interpret it. This structure // looks much like a latch. The differences are the // '''@(posedge clk)''' and the non-blocking '''<=''' // always @(posedge clk) if(gate) q <= d; // the "else" mux is "implied" Note that there are no "initial" blocks mentioned in this description. There is a split between FPGA and ASIC synthesis tools on this structure. FPGA tools allow initial blocks where reg values are established instead of using a "reset" signal. ASIC synthesis tools don't support such a statement. The reason is that an FPGA's initial state is something that is downloaded into the memory tables of the FPGA. An ASIC is an actual hardware implementation. Initial and always There are two separate ways of declaring a Verilog process. These are the always and the initial keywords. The always keyword indicates a free-running process. The initial keyword indicates a process executes exactly once. Both constructs begin execution at simulator time 0, and both execute until the end of the block. Once an always block has reached its end, it is rescheduled (again). It is a common misconception to believe that an initial block will execute before an always block. In fact, it is better to think of the initial-block as a special-case of the always-block, one which terminates after it completes for the first time. //Examples: initial begin a = 1; // Assign a value to reg a at time 0 #1; // Wait 1 time unit b = a; // Assign the value of reg a to reg b end always @(a or b) // Any time a or b CHANGE, run the process begin if (a) c = b; else d = ~b; end // Done with this block, now return to the top (i.e. the @ event-control) always @(posedge a)// Run whenever reg a has a low to high change a <= b; These are the classic uses for these two keywords, but there are two significant additional uses. The most common of these is an always keyword without the @(...) sensitivity list. 
It is possible to use always as shown below: always begin // Always begins executing at time 0 and NEVER stops clk = 0; // Set clk to 0 #1; // Wait for 1 time unit clk = 1; // Set clk to 1 #1; // Wait 1 time unit end // Keeps executing — so continue back at the top of the begin The always keyword acts similar to the C language construct while(1) {..} in the sense that it will execute forever. The other interesting exception is the use of the initial keyword with the addition of the forever keyword. The example below is functionally identical to the always example above. initial forever // Start at time 0 and repeat the begin/end forever begin clk = 0; // Set clk to 0 #1; // Wait for 1 time unit clk = 1; // Set clk to 1 #1; // Wait 1 time unit end Fork/join The fork/join pair are used by Verilog to create parallel processes. All statements (or blocks) between a fork/join pair begin execution simultaneously upon execution flow hitting the fork. Execution continues after the join upon completion of the longest running statement or block between the fork and join. initial fork $write("A"); // Print char A $write("B"); // Print char B begin #1; // Wait 1 time unit $write("C"); // Print char C end join The way the above is written, it is possible to have either the sequences "ABC" or "BAC" print out. The order of simulation between the first $write and the second $write depends on the simulator implementation, and may purposefully be randomized by the simulator. This allows the simulation to contain both accidental race conditions as well as intentional non-deterministic behavior. Notice that VHDL cannot dynamically spawn multiple processes like Verilog. Race conditions The order of execution is not always guaranteed within Verilog. This can best be illustrated by a classic example. Consider the code snippet below: initial a = 0; initial b = a; initial begin #1; $display("Value a=%d Value of b=%d",a,b); end Depending on the order of execution of the initial blocks, it could be zero and zero, or alternately zero and some other arbitrary uninitialized value. The $display statement will always execute after both assignment blocks have completed, due to the #1 delay. Operators Note: These operators are not shown in order of precedence. Four-valued logic The IEEE 1364 standard defines a four-valued logic with four states: 0, 1, Z (high impedance), and X (unknown logic value). For the competing VHDL, a dedicated standard for multi-valued logic exists as IEEE 1164 with nine levels. System tasks System tasks are available to handle simple I/O and various design measurement functions during simulation. All system tasks are prefixed with $ to distinguish them from user tasks and functions. This section presents a short list of the most frequently used tasks. It is by no means a comprehensive list. $display – Print to screen a line followed by an automatic newline. $write – Print to screen a line without the newline. $swrite – Print to variable a line without the newline. $sscanf – Read from variable a format-specified string. (*Verilog-2001) $fopen – Open a handle to a file (read or write) $fdisplay – Print a line from a file followed by an automatic newline. $fwrite – Print to file a line without the newline. $fscanf – Read from file a format-specified string. (*Verilog-2001) $fclose – Close and release an open file handle. $readmemh – Read hex file content into a memory array. $readmemb – Read binary file content into a memory array. $monitor – Print out all the listed variables when any change value. 
$time – Value of current simulation time. $dumpfile – Declare the VCD (Value Change Dump) format output file name. $dumpvars – Turn on and dump the variables. $dumpports – Turn on and dump the variables in Extended-VCD format. $random – Return a random value. Program Language Interface (PLI) The PLI provides a programmer with a mechanism to transfer control from Verilog to a program function written in C language. It is officially deprecated by IEEE Std 1364-2005 in favor of the newer Verilog Procedural Interface, which completely replaces the PLI. The PLI (now VPI) enables Verilog to cooperate with other programs written in the C language such as test harnesses, instruction set simulators of a microcontroller, debuggers, and so on. For example, it provides the C functions tf_putlongp() and tf_getlongp() which are used to write and read the 64-bit integer argument of the current Verilog task or function, respectively. For 32-bit integers, tf_putp() and tf_getp() are used. Simulation software For information on Verilog simulators, see the list of Verilog simulators. See also Additional material List of HDL simulators Waveform viewer SystemVerilog Direct Programming Interface (DPI) Verilog Procedural Interface (VPI) Similar languages VHDL, the main competitor to Verilog and SystemVerilog. Verilog-A and Verilog-AMS: Verilog with analog extensions. SystemC — C++ library providing HDL event-driven semantics SystemVerilog e (verification language) Property Specification Language Chisel, an open-source language built on top of Scala References Notes Cornell ECE576 Course illustrating synthesis constructs (The HDL Testbench Bible) External links Standards development – The official standard for Verilog 2005 (not free). IEEE P1364 – Working group for Verilog (inactive). IEEE P1800 – Working group for SystemVerilog (replaces above). Verilog syntax – A 1995 description of the syntax in Backus-Naur form. This predates the IEEE-1364 standard. Language extensions Verilog AUTOs — An open-source meta-comment used by industry IP to simplify maintaining Verilog code. Hardware description languages IEEE DASC standards IEC standards Articles with example code Structured programming languages Domain-specific programming languages Programming languages created in 1984
Verilog
[ "Technology", "Engineering" ]
5,582
[ "Computer standards", "Electronic engineering", "Hardware description languages", "IEC standards" ]
63,866
https://en.wikipedia.org/wiki/Palermo%20Technical%20Impact%20Hazard%20Scale
The Palermo Technical Impact Hazard Scale is a logarithmic scale used by astronomers to rate the potential hazard of impact of a near-Earth object (NEO). It combines two types of data—probability of impact and estimated kinetic yield—into a single "hazard" value. A rating of 0 means the hazard is equivalent to the background hazard (defined as the average risk posed by objects of the same size or larger over the years until the date of the potential impact). A rating of +2 would indicate the hazard is 100 times as great as a random background event. Scale values less than −2 reflect events for which there are no likely consequences, while Palermo Scale values between −2 and 0 indicate situations that merit careful monitoring. A similar but less complex scale is the Torino Scale, which is used for simpler descriptions in the non-scientific media. Currently, no asteroid has a cumulative rating for impacts above 0, and only three asteroids have ratings between −2 and 0. Historically, three asteroids had ratings above 0 and half a dozen more above −1, but most have since been downrated. Scale The scale compares the likelihood of the detected potential impact with the average risk posed by objects of the same size or larger over the years until the date of the potential impact. This average risk from random impacts is known as the background risk. The Palermo Scale value, P, is defined by the equation P = log10 [ pi / (fB × T) ], where pi is the impact probability, T is the time interval (in years) over which pi is considered, and fB is the background impact frequency. The background impact frequency is defined for this purpose as fB = 0.03 × E^(−4/5) yr^(−1), where the energy threshold E is measured in megatons and yr denotes one year, the unit in which T is expressed. For instance, this formula implies that the expected value of the time from now until the next impact greater than 1 megatonne is 33 years, and that when it occurs, there is a 50% chance that it will be above 2.4 megatonnes. This formula is only valid over a certain range of E. However, another paper published in 2002 – the same year as the paper on which the Palermo scale is based – found a power law with different constants, which gives considerably lower rates for a given E. For instance, it gives the rate for bolides of 10 megatonnes or more (like the Tunguska explosion) as 1 per thousand years, rather than 1 per 210 years (or a 38% probability that it happens at least once in a century) as in the Palermo formula. However, the authors give a rather large uncertainty (once in 400 to 1800 years for 10 megatonnes), due in part to uncertainties in determining the energies of the atmospheric impacts that they used in their determination. Asteroids with high ratings In 2002 the near-Earth object reached a positive rating on the scale of 0.18, indicating a higher-than-background threat. The value was subsequently lowered after more measurements were taken. The object is no longer considered to pose any risk and was removed from the Sentry Risk Table on 1 August 2002. In September 2002, the highest Palermo rating was that of asteroid (29075) 1950 DA, with a value of 0.17 for a possible collision in the year 2880. By March 2022, the rating had been reduced to −2.0. As of October 2024, it has a rating of −0.93. For a brief period in late December 2004, with an observation arc of 190 days, the asteroid now known as Apophis (then known only by its provisional designation) held the record for the highest Palermo scale value, with a value of 1.10 for a possible collision in the year 2029. 
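The arithmetic behind such ratings follows directly from the definitions above. The sketch below simply evaluates those formulas; the impact probability, warning time, and energy used are made-up example numbers, not data for any real asteroid.

```python
# Illustrative evaluation of the Palermo scale definition given above.
# The example inputs are made-up values, not data for any real asteroid.
import math

def background_frequency(energy_mt: float) -> float:
    """Annual background frequency of impacts of at least energy_mt megatons."""
    return 0.03 * energy_mt ** (-0.8)

def palermo_scale(impact_probability: float, years_until_impact: float,
                  energy_mt: float) -> float:
    f_b = background_frequency(energy_mt)
    return math.log10(impact_probability / (f_b * years_until_impact))

# Example: a 1-in-37 chance of a 1,000-megaton impact 25 years from now
p = palermo_scale(1 / 37, 25, 1000)
print(round(p, 2))             # Palermo scale value for these inputs
print(round(10 ** p, 1))       # multiple of the background risk

# A rating of 1.10 corresponds to 10**1.10, i.e. roughly 12.6 times background
print(round(10 ** 1.10, 1))
```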
The 1.10 value indicated that a collision with this object was considered to be almost 12.6 times as likely as a random background event: 1 in 37 instead of 1 in 472. With further observation through 2021 there is no risk from Apophis for the next 100+ years. , three asteroids have a cumulative Palermo Scale value above −2: (-0.69), (29075) 1950 DA (−0.93) and 101955 Bennu (−1.40). Four have cumulative Palermo Scale values between −2 and −3: (-2.70), (−2.77), (−2.86) and (−2.97). Of the 27 that have a cumulative Palermo Scale value between −3 and −4, four were discovered in 2024: (−3.30), (−3.50), (−3.63) and (−3.76). See also Asteroid impact avoidance Asteroid impact prediction Earth-grazing fireball Impact event List of asteroid close approaches to Earth List of Earth-crossing asteroids Time-domain astronomy References Further reading The primary reference for the Palermo Technical Scale is "Quantifying the risk posed by potential Earth impacts" by Chesley et al., Icarus 159, 423-432 (2002). External links Palermo Technical Impact Hazard Scale at the Sentry monitoring system by CNEOS at JPL from NASA Alert measurement systems Hazard scales Planetary defense Logarithmic scales of measurement
Palermo Technical Impact Hazard Scale
[ "Physics", "Mathematics", "Technology" ]
1,049
[ "Physical quantities", "Quantity", "Alert measurement systems", "Logarithmic scales of measurement", "Warning systems" ]
63,944
https://en.wikipedia.org/wiki/Key-agreement%20protocol
In cryptography, a key-agreement protocol is a protocol whereby two (or more) parties generate a cryptographic key as a function of information provided by each honest party so that no party can predetermine the resulting value. In particular, all honest participants influence the outcome. A key-agreement protocol is a specialisation of a key-exchange protocol. At the completion of the protocol, all parties share the same key. A key-agreement protocol precludes undesired third parties from forcing a key choice on the agreeing parties. A secure key agreement can ensure confidentiality and data integrity in communications systems, ranging from simple messaging applications to complex banking transactions. Secure agreement is defined relative to a security model, for example the Universal Model. More generally, when evaluating protocols, it is important to state security goals and the security model. For example, it may be required for the session key to be authenticated. A protocol can be evaluated for success only in the context of its goals and attack model. An example of an adversarial model is the Dolev–Yao model. In many key exchange systems, one party generates the key, and sends that key to the other party; the other party has no influence on the key. Exponential key exchange The first publicly known public-key agreement protocol that meets the above criteria was the Diffie–Hellman key exchange, in which two parties jointly exponentiate a generator with random numbers, in such a way that an eavesdropper cannot feasibly determine what the resultant shared key is. Exponential key agreement in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol. Symmetric key agreement Symmetric key agreement (SKA) is a method of key agreement that uses solely symmetric cryptography and cryptographic hash functions as cryptographic primitives. It is related to symmetric authenticated key exchange. SKA may assume the use of initial shared secrets, or of a trusted third party with whom the agreeing parties each share a secret. If no third party is present, then achieving SKA can be trivial: two parties that already share an initial secret have, tautologically, achieved SKA. SKA contrasts with key-agreement protocols that include techniques from asymmetric cryptography, such as key encapsulation mechanisms. The initial exchange of a shared key must be done in a manner that is private and integrity-assured. Historically, this was achieved by physical means, such as by using a trusted courier. An example of an SKA protocol is the Needham–Schroeder protocol. It establishes a session key between two parties on the same network, using a server as a trusted third party. The original Needham–Schroeder protocol is vulnerable to a replay attack; timestamps and nonces are included in later variants to fix this attack. It forms the basis for the Kerberos protocol. Types of key agreement Boyd et al. classify two-party key agreement protocols according to two criteria: whether a pre-shared key already exists or not, and the method of generating the session key. The pre-shared key may be shared between the two parties, or each party may share a key with a trusted third party. If there is no secure channel (as may be established via a pre-shared key), it is impossible to create an authenticated session key. The session key may be generated via key transport, key agreement, or a hybrid approach. 
If there is no trusted third party, then the cases of key transport and hybrid session key generation are indistinguishable. SKA is concerned with protocols in which the session key is established using only symmetric primitives. Authentication Anonymous key exchange, like Diffie–Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks. A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement to prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as the following: public–private key pairs shared secret keys passwords Public keys A widely used mechanism for defeating such attacks is the use of digitally signed keys that must be integrity-assured: if Bob's key is signed by a trusted third party vouching for his identity, Alice can have considerable confidence that a signed key she receives is not an interception attempt by Eve. When Alice and Bob have a public-key infrastructure, they may digitally sign an agreed Diffie–Hellman key, or exchanged Diffie–Hellman public keys. Such signed keys, sometimes signed by a certificate authority, are one of the primary mechanisms used for secure web traffic (including HTTPS, SSL or TLS protocols). Other specific examples are MQV, YAK and the ISAKMP component of the IPsec protocol suite for securing Internet Protocol communications. However, these systems require care in endorsing the match between identity information and public keys by certificate authorities in order to work properly. Hybrid systems Hybrid systems use public-key cryptography to exchange secret keys, which are then used in symmetric-key cryptography systems. Most practical applications of cryptography use a combination of cryptographic functions to implement an overall system that provides all of the four desirable features of secure communications (confidentiality, integrity, authentication, and non-repudiation). Passwords Password-authenticated key agreement protocols require the separate establishment of a password (which may be smaller than a key) in a manner that is both private and integrity-assured. These are designed to resist man-in-the-middle and other active attacks on the password and the established keys. For example, DH-EKE, SPEKE, and SRP are password-authenticated variations of Diffie–Hellman. Other tricks If one has an integrity-assured way to verify a shared key over a public channel, one may engage in a Diffie–Hellman key exchange to derive a short-term shared key, and then subsequently authenticate that the keys match. One way is to use a voice-authenticated read-out of the key, as in PGPfone. Voice authentication, however, presumes that it is infeasible for a man-in-the-middle to spoof one participant's voice to the other in real-time, which may be an undesirable assumption. Such protocols may be designed to work with even a small public value, such as a password. Variations on this theme have been proposed for Bluetooth pairing protocols. In an attempt to avoid using any additional out-of-band authentication factors, Davies and Price proposed the use of the interlock protocol of Ron Rivest and Adi Shamir, which has been subject to both attack and subsequent refinement. 
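To make the exponential key-exchange idea above concrete, the following is a minimal, illustrative Python sketch of unauthenticated finite-field Diffie–Hellman. The parameters and helper names are chosen for demonstration only; a real deployment would use a vetted cryptographic library, standardised group parameters of sufficient size, a key-derivation function, and some form of authentication against man-in-the-middle attacks.

import secrets

# Toy public parameters for illustration only (far too small for real use);
# p is the Mersenne prime 2**127 - 1 and g is an arbitrary base.
P = 2**127 - 1
G = 3

def keypair():
    # Each party picks a secret exponent and publishes g**x mod p.
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def shared_secret(own_private, peer_public):
    # Both sides arrive at the same value g**(a*b) mod p.
    return pow(peer_public, own_private, P)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
assert shared_secret(a_priv, b_pub) == shared_secret(b_priv, a_pub)

Because neither party alone can predetermine the resulting value, this matches the definition of key agreement given above, but without authentication it remains an anonymous protocol.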
See also Key (cryptography) Computer security Cryptanalysis Secure channel Digital signature Key encapsulation mechanism Key management Password-authenticated key agreement Interlock protocol Zero-knowledge password proof Quantum key distribution References Cryptography
Key-agreement protocol
[ "Mathematics", "Engineering" ]
1,443
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
63,966
https://en.wikipedia.org/wiki/Warchalking
Warchalking is the drawing of symbols in public places to advertise an open Wi-Fi network. Inspired by hobo symbols, the warchalking marks were conceived by a group of friends in June 2002 and publicised by Matt Jones who designed the set of icons and produced a downloadable document containing them. Within days of Jones publishing a blog entry about warchalking, articles appeared in dozens of publications and stories appeared on several major television news programs around the world. The word is formed by analogy to wardriving, the practice of driving around an area in a car to detect open Wi-Fi nodes. That term in turn is based on wardialing, the practice of dialing many phone numbers hoping to find a modem. Having found a Wi-Fi node, the warchalker draws a special symbol on a nearby object, such as a wall, the pavement, or a lamp post. Those offering Wi-Fi service might also draw such a symbol to advertise the availability of their Wi-Fi location, whether commercial or personal. References External links Computer security exploits Wi-Fi Graffiti and unauthorised signage
Warchalking
[ "Technology" ]
232
[ "Wireless networking", "Wi-Fi", "Computer security exploits" ]
63,967
https://en.wikipedia.org/wiki/Double%20pendulum
In physics and mathematics, in the area of dynamical systems, a double pendulum, also known as a chaotic pendulum, is a pendulum with another pendulum attached to its end, forming a simple physical system that exhibits rich dynamic behavior with a strong sensitivity to initial conditions. The motion of a double pendulum is governed by a pair of coupled ordinary differential equations and is chaotic. Analysis and interpretation Several variants of the double pendulum may be considered; the two limbs may be of equal or unequal lengths and masses, they may be simple pendulums or compound pendulums (also called complex pendulums) and the motion may be in three dimensions or restricted to the vertical plane. In the following analysis, the limbs are taken to be identical compound pendulums of length $\ell$ and mass $m$, and the motion is restricted to two dimensions. In a compound pendulum, the mass is distributed along its length. If the double pendulum mass is evenly distributed, then the center of mass of each limb is at its midpoint, and the limb has a moment of inertia of $I = \tfrac{1}{12} m \ell^2$ about that point. It is convenient to use the angles between each limb and the vertical as the generalized coordinates defining the configuration of the system. These angles are denoted $\theta_1$ and $\theta_2$. The position of the center of mass of each rod may be written in terms of these two coordinates. If the origin of the Cartesian coordinate system is taken to be at the point of suspension of the first pendulum, then the center of mass of this pendulum is at: $x_1 = \tfrac{1}{2}\ell\sin\theta_1$, $y_1 = -\tfrac{1}{2}\ell\cos\theta_1$, and the center of mass of the second pendulum is at $x_2 = \ell\bigl(\sin\theta_1 + \tfrac{1}{2}\sin\theta_2\bigr)$, $y_2 = -\ell\bigl(\cos\theta_1 + \tfrac{1}{2}\cos\theta_2\bigr)$. This is enough information to write out the Lagrangian. Lagrangian The Lagrangian is given by $L = \tfrac{1}{2}m\bigl(v_1^2 + v_2^2\bigr) + \tfrac{1}{2}I\bigl(\dot\theta_1^2 + \dot\theta_2^2\bigr) - mg\,(y_1 + y_2)$. The first term is the linear kinetic energy of the center of mass of the bodies and the second term is the rotational kinetic energy around the center of mass of each rod. The last term is the potential energy of the bodies in a uniform gravitational field. The dot-notation indicates the time derivative of the variable in question. Using the values of $\dot x_1$ and $\dot y_1$ defined above, we have $v_1^2 = \dot x_1^2 + \dot y_1^2$, which leads to $v_1^2 = \tfrac{1}{4}\ell^2\dot\theta_1^2$. Similarly, for $\dot x_2$ and $\dot y_2$ we have $v_2^2 = \dot x_2^2 + \dot y_2^2$, and therefore $v_2^2 = \ell^2\bigl(\dot\theta_1^2 + \tfrac{1}{4}\dot\theta_2^2 + \dot\theta_1\dot\theta_2\cos(\theta_1 - \theta_2)\bigr)$. Substituting the coordinates above into the definition of the Lagrangian, and rearranging the equation, gives $L = \tfrac{1}{6}m\ell^2\bigl(\dot\theta_2^2 + 4\dot\theta_1^2 + 3\dot\theta_1\dot\theta_2\cos(\theta_1 - \theta_2)\bigr) + \tfrac{1}{2}mg\ell\bigl(3\cos\theta_1 + \cos\theta_2\bigr)$. The equations of motion can now be derived using the Euler–Lagrange equations, which are given by $\frac{d}{dt}\frac{\partial L}{\partial\dot\theta_i} - \frac{\partial L}{\partial\theta_i} = 0$ for $i = 1, 2$. We begin with the equation of motion for $\theta_1$. The derivatives of the Lagrangian are given by $\frac{\partial L}{\partial\dot\theta_1} = \tfrac{1}{6}m\ell^2\bigl(8\dot\theta_1 + 3\dot\theta_2\cos(\theta_1 - \theta_2)\bigr)$ and $\frac{\partial L}{\partial\theta_1} = -\tfrac{1}{6}m\ell^2\bigl(3\dot\theta_1\dot\theta_2\sin(\theta_1 - \theta_2) + 9\tfrac{g}{\ell}\sin\theta_1\bigr)$. Thus $\frac{d}{dt}\frac{\partial L}{\partial\dot\theta_1} = \tfrac{1}{6}m\ell^2\bigl(8\ddot\theta_1 + 3\ddot\theta_2\cos(\theta_1 - \theta_2) - 3\dot\theta_2(\dot\theta_1 - \dot\theta_2)\sin(\theta_1 - \theta_2)\bigr)$. Combining these results and simplifying yields the first equation of motion, $8\ddot\theta_1 + 3\ddot\theta_2\cos(\theta_1 - \theta_2) + 3\dot\theta_2^2\sin(\theta_1 - \theta_2) + 9\tfrac{g}{\ell}\sin\theta_1 = 0$. Similarly, the derivatives of the Lagrangian with respect to $\dot\theta_2$ and $\theta_2$ are given by $\frac{\partial L}{\partial\dot\theta_2} = \tfrac{1}{6}m\ell^2\bigl(2\dot\theta_2 + 3\dot\theta_1\cos(\theta_1 - \theta_2)\bigr)$ and $\frac{\partial L}{\partial\theta_2} = \tfrac{1}{6}m\ell^2\bigl(3\dot\theta_1\dot\theta_2\sin(\theta_1 - \theta_2) - 3\tfrac{g}{\ell}\sin\theta_2\bigr)$. Thus $\frac{d}{dt}\frac{\partial L}{\partial\dot\theta_2} = \tfrac{1}{6}m\ell^2\bigl(2\ddot\theta_2 + 3\ddot\theta_1\cos(\theta_1 - \theta_2) - 3\dot\theta_1(\dot\theta_1 - \dot\theta_2)\sin(\theta_1 - \theta_2)\bigr)$. Plugging these results into the Euler–Lagrange equation and simplifying yields the second equation of motion, $2\ddot\theta_2 + 3\ddot\theta_1\cos(\theta_1 - \theta_2) - 3\dot\theta_1^2\sin(\theta_1 - \theta_2) + 3\tfrac{g}{\ell}\sin\theta_2 = 0$. No closed form solutions for $\theta_1$ and $\theta_2$ as functions of time are known, therefore the system can only be solved numerically, using the Runge–Kutta method or similar techniques. Chaotic motion The double pendulum undergoes chaotic motion, and clearly shows a sensitive dependence on initial conditions. The image to the right shows the amount of elapsed time before the pendulum flips over, as a function of initial position when released at rest. Here, the initial value of $\theta_1$ ranges along the $x$-direction from −3.14 to 3.14. The initial value of $\theta_2$ ranges along the $y$-direction, from −3.14 to 3.14. The color of each pixel indicates whether either pendulum flips within one of several successively longer time windows, measured in multiples of the characteristic time $\sqrt{\ell/g}$ (black for the shortest window, then red, green, blue and purple for progressively longer windows). Initial conditions that do not lead to a flip within the longest window are plotted white. 
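The flip-time map described above is produced by integrating the equations of motion numerically. The following is a minimal Python sketch, under assumed parameter values and an assumed initial condition, using SciPy's adaptive Runge–Kutta integrator; a flip is crudely detected as an unwrapped angle exceeding π in magnitude, which is only an approximation of the criterion used for the published map.

import numpy as np
from scipy.integrate import solve_ivp

g, ell = 9.81, 1.0  # assumed gravity and rod length

def rhs(t, y):
    th1, th2, w1, w2 = y
    c, s = np.cos(th1 - th2), np.sin(th1 - th2)
    # The two equations of motion, written as A @ [a1, a2] = b for the angular accelerations.
    A = np.array([[8.0, 3.0 * c],
                  [3.0 * c, 2.0]])
    b = np.array([-3.0 * s * w2**2 - 9.0 * (g / ell) * np.sin(th1),
                   3.0 * s * w1**2 - 3.0 * (g / ell) * np.sin(th2)])
    a1, a2 = np.linalg.solve(A, b)
    return [w1, w2, a1, a2]

# Released from rest at theta1 = theta2 = 1.5 rad (an assumed initial condition).
sol = solve_ivp(rhs, (0.0, 20.0), [1.5, 1.5, 0.0, 0.0], rtol=1e-9, atol=1e-9)
flipped = np.any(np.abs(sol.y[0]) > np.pi) or np.any(np.abs(sol.y[1]) > np.pi)
print("flip detected within 20 s:", flipped)

Sweeping the two initial angles over a grid and recording the first flip time for each point reproduces, qualitatively, the kind of map discussed here.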
The boundary of the central white region is defined in part by energy conservation with the following curve: $3\cos\theta_1 + \cos\theta_2 = 2$. Within the region defined by this curve, that is if $3\cos\theta_1 + \cos\theta_2 > 2$, then it is energetically impossible for either pendulum to flip. Outside this region, the pendulum can flip, but it is a complex question to determine when it will flip. Similar behavior is observed for a double pendulum composed of two point masses rather than two rods with distributed mass. The lack of a natural excitation frequency has led to the use of double pendulum systems in seismic resistance designs in buildings, where the building itself is the primary inverted pendulum, and a secondary mass is connected to complete the double pendulum. See also Double inverted pendulum Pendulum (mechanics) Trebuchet Bolas Mass damper Mid-20th century physics textbooks use the term "double pendulum" to mean a single bob suspended from a string which is in turn suspended from a V-shaped string. This type of pendulum, which produces Lissajous curves, is now referred to as a Blackburn pendulum. References Further reading Eric W. Weisstein, Double pendulum (2005), ScienceWorld (contains details of the complicated equations involved) and "Double Pendulum" by Rob Morris, Wolfram Demonstrations Project, 2007 (animations of those equations). Peter Lynch, Double Pendulum, (2001). (Java applet simulation.) Northwestern University, Double Pendulum, (Java applet simulation.) Theoretical High-Energy Astrophysics Group at UBC, Double pendulum, (2005). External links Animations and explanations of a double pendulum and a physical double pendulum (two square plates) by Mike Wheatland (Univ. Sydney) Interactive Open Source Physics JavaScript simulation with detailed equations double pendulum Interactive Javascript simulation of a double pendulum Double pendulum physics simulation from www.myphysicslab.com using open source JavaScript code Simulation, equations and explanation of Rott's pendulum Double Pendulum Simulator - An open source simulator written in C++ using the Qt toolkit. Online Java simulator of the Imaginary exhibition. Chaotic maps Dynamical systems Mathematical physics Pendulums
Double pendulum
[ "Physics", "Mathematics" ]
1,132
[ "Functions and mappings", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Mechanics", "Mathematical relations", "Chaotic maps", "Mathematical physics", "Dynamical systems" ]
63,973
https://en.wikipedia.org/wiki/Wi-Fi
Wi-Fi () is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices and to provide Internet access with wireless routers and wireless access points in public places such as coffee shops, restaurants, hotels, libraries, and airports. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term "Wi-Fi Certified" to products that successfully complete interoperability certification testing. Non-compliant hardware is simply referred to as WLAN, and it may or may not work with "Wi-Fi Certified" devices. The Wi-Fi Alliance includes more than 800 companies from around the world, and over 3.05 billion Wi-Fi-enabled devices are shipped globally each year. Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to work well with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. Different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with different radio technologies determining radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands, with the 6 gigahertz SHF band used in newer generations of the standard; these bands are subdivided into multiple channels. Channels can be shared between networks, but, within range, only one transmitter can transmit on a channel at a time. Wi-Fi's radio bands work best for line-of-sight use. Many common obstructions, such as walls, pillars, home appliances, etc., may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. The range of an access point indoors is typically a few tens of metres, while some access points claim a considerably longer range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves or as large as many square kilometers using many overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased. Some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabit per second). History A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens, and are thus subject to interference. In 1991 in Nieuwegein, the NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for ten years, along with Bell Labs engineer Bruce Tuch, approached the Institute of Electrical and Electronics Engineers (IEEE) to create a standard and were involved in designing the initial 802.11b and 802.11a specifications within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame. In 1989 in Australia, a team of scientists began working on wireless LAN technology. 
A prototype test bed for a wireless local area network (WLAN) was developed in 1992 by a team of researchers from the Radiophysics Division of the CSIRO (Commonwealth Scientific and Industrial Research Organisation) in Australia, led by John O'Sullivan. A patent for Wi-Fi was lodged by the CSIRO in 1992. The first version of the 802.11 protocol was released in 1997, and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most IEEE 802.11 products are sold. The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard: Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent. In the year 2000, Radiata, a group of Australian scientists connected to the CSIRO, were the first to use the 802.11a standard on chips connected to a Wi-Fi network. Wi-Fi uses a large number of patents held by many different organizations. Australia, the United States and the Netherlands simultaneously claim the invention of Wi-Fi, and a consensus has not been reached globally. In 2009, the Australian CSIRO was awarded $200 million after a patent settlement with 14 technology companies, with a further $220 million awarded in 2012 after legal proceedings with 23 companies. In 2016, the CSIRO's WLAN prototype test bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia. Etymology and terminology The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." According to Phil Belanger, a founding member of the Wi-Fi Alliance, the term Wi-Fi was chosen from a list of ten names that Interbrand proposed. Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. The name Wi-Fi is not a short form for 'Wireless Fidelity', although the Wi-Fi Alliance did use the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc." in some publications. IEEE is a separate, but related, organization and their website has stated "WiFi is a short name for Wireless Fidelity". The name Wi-Fi was partly chosen because it sounds similar to Hi-Fi, which consumers take to mean high fidelity or high quality. Interbrand hoped consumers would find the name catchy, and that they would assume this wireless protocol has high fidelity because of its name. Other technologies intended for fixed points, including Motorola Canopy, are usually called fixed wireless. Alternative wireless technologies include Zigbee, Z-Wave, Bluetooth and mobile phone standards. To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses. 
Wi-Fi nodes often operate in infrastructure mode in which all communications go through a base station. Ad hoc mode refers to devices communicating directly with each other, without communicating with an access point. A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, mesh, or a combination. Each service set has an associated identifier, a 32-byte service set identifier (SSID), which identifies the network. The SSID is configured within the devices that are part of the network. A basic service set (BSS) is a group of stations that share the same wireless channel, SSID, and other settings that have wirelessly connected, usually to the same access point. Each BSS is identified by a MAC address called the BSSID. Certification The IEEE does not test equipment for compliance with their standards. The Wi-Fi Alliance was formed in 1999 to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. The Wi-Fi Alliance enforces the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards from the IEEE. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo. Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving. Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US. Versions and generations Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ between the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference. Historically, the equipment listed the versions of Wi-Fi supported using the name of the IEEE standards. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength. The most important standards affecting Wi‑Fi are: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11–2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay. Uses Internet Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet. 
The coverage of one or more interconnected access points can extend from an area as small as a few rooms to as large as many square kilometres. Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon. Wi-Fi provides services in private homes, businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free of charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers, to provide services to promote business in selected areas. Routers often incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, are frequently set up in homes and other buildings, to provide Internet access for the structure. Similarly, battery-powered routers may include a mobile broadband modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet. Many smartphones have a built-in mobile hotspot capability of this sort, though carriers often disable the feature, or charge a separate fee to enable it. Standalone devices such as MiFi- and WiBro-branded devices provide the capability. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points. Many traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993 before Wi-Fi branding existed. Many universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure. City-wide In the early 2000s, many cities around the world announced plans to construct citywide Wi-Fi networks. There are many successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages. In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider. In May 2010, the then London mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already had extensive outdoor Wi-Fi coverage at that point. New York City announced a city-wide campaign to convert old phone booths into digital kiosks in 2014. The project, titled LinkNYC, has created a network of kiosks that serve as public Wi-Fi hotspots, high-definition screens and landlines. Installation of the screens began in late 2015. The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the London Borough of Camden. Officials in South Korea's capital Seoul were moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas. 
Seoul was planning to grant leases to KT, LG Telecom, and SK Telecom. The companies were supposed to invest $44 million in the project, which was to be completed in 2015. Geolocation Wi-Fi positioning systems use known positions of Wi-Fi hotspots to identify a device's location. It is used when GPS isn't suitable due to issues like signal interference or slow satellite acquisition. This includes assisted GPS, urban hotspot databases, and indoor positioning systems. Wi-Fi positioning relies on measuring signal strength (RSSI) and fingerprinting. Parameters like SSID and MAC address are crucial for identifying access points. The accuracy depends on nearby access points in the database. Signal fluctuations can cause errors, which can be reduced with noise-filtering techniques. For low precision, integrating Wi-Fi data with geographical and time information has been proposed. The Wi-Fi RTT capability introduced in IEEE 802.11mc allows for positioning based on round trip time measurement, an improvement over the RSSI method. The IEEE 802.11az standard promises further improvements in geolocation accuracy. Motion detection Wi-Fi sensing is used in applications such as motion detection and gesture recognition. Operational principles Wi-Fi stations communicate by sending each other data packets, blocks of data individually sent and delivered over radio on various channels. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques, 802.11b uses direct-sequence spread spectrum on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use orthogonal frequency-division multiplexing. Channels are used half duplex and can be time-shared by multiple networks. Any packet sent by one computer is locally received by stations tuned to that channel, even if that information is intended for just one destination. Stations typically ignore information not addressed to them. The use of the same channel also means that the data bandwidth is shared, so for example, available throughput to each device is halved when two stations are actively transmitting. As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address. The MAC addresses are used to specify both the destination and the source of each data packet. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A scheme known as carrier-sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be idle, but then transmit their packet data in its entirety. CSMA/CA cannot completely prevent collisions, as two stations may sense the channel to be idle at the same time and thus begin transmission simultaneously. A collision happens when a station receives signals from multiple stations on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmission reduces throughput, in some cases severely. Waveband The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 6 GHz and 60 GHz bands. Each range is divided into a multitude of channels. 
In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for neighbouring channels to be bonded together to form a wider channel for higher throughput. Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. 802.11b/g/n can use the 2.4 GHz band, operating in the United States under FCC Part 15 rules and regulations. In this frequency band, equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, Bluetooth and other devices. Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14). 802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels. This is in contrast to the 2.4 GHz frequency band where the channels are only 5 MHz wide. In general, lower frequencies have longer range but have less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range. As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their bandwidth use. Additionally, they have gained the ability to aggregate channels together to gain still more throughput where the bandwidth for additional channels is available. 802.11n allows for double radio spectrum bandwidth (40 MHz) per channel compared to 802.11a or 802.11g (20 MHz). 802.11n can be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz channels are permitted with some restrictions, giving much faster connections. Communication stack Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN. Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, 6, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the market place, each revision tends to become its own standard. In addition to 802.11, the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within the range that employ that radio channel. 
While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. Accurate transmission is therefore not guaranteed, so delivery is a best-effort mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack. For internetworking purposes, Wi-Fi is usually layered as a link layer below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access. Modes Infrastructure In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which limits issues associated with the hidden node problem and simplifies the protocols. Ad hoc and Wi-Fi direct Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case, network nodes must talk directly to each other. In more complex protocols nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around. Ad hoc mode was first described by Chai Keong Toh in his 1996 patent of wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a wireless on IBM ThinkPads in a multi-node scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications, 2002 and ACM SIGMETRICS Performance Evaluation Review, 2001. This wireless ad hoc network mode has proven popular with multiplayer video games on handheld game consoles, such as the Nintendo DS and PlayStation Portable. It is also popular on digital cameras, and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc mode, becoming hotspots or virtual routers. Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery and security methodology. Wi-Fi Direct launched in October 2010. Another mode of direct communication over Wi-Fi is Tunneled Direct Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point. Multiple access points An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set. Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast roaming, and increased overall network-capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward "thin" access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of "dumb" transceivers. Outdoor applications may use mesh topologies. 
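As noted above, a client in an extended service set typically associates with the access point offering the strongest signal for the configured SSID. The following is a minimal, illustrative Python sketch of that selection logic; the scan-result data and field layout are hypothetical, since in practice a client obtains scan results from the operating system or wireless driver.

# Hypothetical scan results as (SSID, BSSID, RSSI in dBm); higher (less negative) RSSI is stronger.
scan_results = [
    ("HomeNet", "aa:bb:cc:dd:ee:01", -71),
    ("HomeNet", "aa:bb:cc:dd:ee:02", -48),
    ("CafeGuest", "12:34:56:78:9a:bc", -60),
]

def pick_access_point(results, ssid):
    # Keep only BSSs belonging to the wanted service set, then pick the strongest.
    candidates = [r for r in results if r[0] == ssid]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r[2])

best = pick_access_point(scan_results, "HomeNet")
print(best)  # ('HomeNet', 'aa:bb:cc:dd:ee:02', -48)

Real clients apply additional criteria (band preference, roaming thresholds, load information), but signal strength within the service set is the basic rule this sketch illustrates.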
Performance Wi-Fi operational range depends on factors such as the frequency band, radio power output, receiver sensitivity, antenna gain, and antenna type as well as the modulation technique. Also, the propagation characteristics of the signals can have a big impact. At longer distances, and with greater signal absorption, speed is usually reduced. Transmitter power Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW). To reach requirements for wireless LAN applications, Wi-Fi has higher power consumption compared to some other standards designed to support wireless personal area network (PAN) applications. For example, Bluetooth provides a much shorter propagation range between 1 and 100 metres (1 and 100 yards) and so in general has a lower power consumption. Other low-power technologies such as Zigbee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern. Antenna An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna, might have a range on the order of tens of metres. The same radio with an external semi-parabolic antenna (15 dB gain) with a similarly equipped receiver at the far end might have a range over 20 miles. Higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. This assumes that radiation in the vertical is lost; this may not be the case in some situations, especially in large buildings or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW. On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. Outdoor ranges can be improved to many kilometres through the use of high gain directional antennas at the router and remote device(s). MIMO (multiple-input and multiple-output) Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands giving much higher speeds and longer range. Wi-Fi 4 can more than double the range over previous standards. The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second, and a single station throughput of at least 500 Mbit/s. As of the first quarter of 2016, the Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques such as multi-user MIMO and 4X4 Spatial Multiplexing streams, and wide channel bandwidth (160 MHz) to achieve its gigabit throughput. 
According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices. Radio propagation With Wi-Fi signals, line-of-sight usually works best, but signals can transmit, absorb, reflect, refract, diffract and up and down fade through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete, low-e coatings in glazing), rock structures (including marble) and water (such as found in vegetation). Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage. Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles. Distance records Distance records (using non-standard devices) include a long-distance link established in June 2007 by Ermanno Pietrosemoli and EsLaRed of Venezuela, transferring about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish National Space Agency transferred data to an overhead stratospheric balloon using 6 watt amplifiers. Interference Wi-Fi connections can be blocked or the Internet speed lowered by having other devices in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs Carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problem. A standard speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate. Channels 1, 6, and 11 are the only group of three non-overlapping channels in North America. However, whether the overlap is significant depends on physical spacing. Channels that are four apart interfere a negligible amount, much less than reusing channels (which causes co-channel interference), if transmitters are at least a few metres apart. In Europe and Japan where channel 13 is available, using Channels 1, 5, 9, and 13 for 802.11g and 802.11n is viable and recommended. However, many 2.4 GHz 802.11b and 802.11g access-points default to the same channel on initial startup, contributing to congestion on certain channels. Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points, as well as decreasing the signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with many Wi-Fi access points. 
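The channel arithmetic behind the overlap rule above can be sketched briefly. The following Python snippet assumes the well-known 2.4 GHz channel plan (channel 1 centred at 2412 MHz, 5 MHz spacing, channel 14 as a special case at 2484 MHz) and an approximate 20 MHz occupied bandwidth; it is illustrative rather than a substitute for the regulatory tables.

def centre_mhz(channel):
    # 2.4 GHz band: channels 1-13 are 5 MHz apart starting at 2412 MHz;
    # channel 14 (Japan, 802.11b only) sits at 2484 MHz.
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError("not a 2.4 GHz channel")

def overlaps(ch_a, ch_b, width_mhz=20):
    # Two transmissions overlap if their centre frequencies are closer than the occupied bandwidth.
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < width_mhz

print(centre_mhz(6))    # 2437
print(overlaps(1, 6))   # False - the classic non-overlapping pair (25 MHz apart)
print(overlaps(1, 4))   # True  - only 15 MHz apart

With this spacing, any two channels five or more apart are separated by at least 25 MHz and therefore do not overlap, which is the rule stated above.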
Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, Zigbee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. It is also an issue when municipalities or other large entities (such as universities) seek to provide large area coverage. On some 5 GHz bands interference from radar systems can occur in some places. For base stations that support those bands they employ Dynamic Frequency Selection which listens for radar, and if it is found, it will not permit a network on that band. These bands can be used by low power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users that have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines. Throughput Various layer-2 variants of IEEE 802.11 have different characteristics. Across all flavours of 802.11, maximum achievable throughputs are either given based on measurements under ideal conditions or in the layer-2 data rates. This, however, does not apply to typical deployments in which data are transferred between two endpoints of which at least one is typically connected to a wired infrastructure, and the other is connected to an infrastructure via a wireless link. This means that typically data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached throughput graphs, which show measurements of UDP throughput measurements. Each represents an average throughput of 25 measurements (the error bars are there, but barely visible due to the small variation), is with specific packet size (small or large), and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and measurements do not cover packet errors but information about this can be found at the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts have been 25 metres (yards) apart from each other; loss is again ignored. Hardware Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals. A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. Since the early 2000s, manufacturers are building wireless network adapters into most laptops. 
The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices. Different competitive brands of access points and client network-interfaces can inter-operate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world. Access point A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices. Wireless adapter Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as mini PCIe (mPCIe, M.2), USB, ExpressCard and previously PCI, Cardbus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters. Router Wireless routers integrate a Wireless Access Point, Ethernet switch, and internal router firmware application that provides IP routing, NAT, and DNS forwarding through an integrated WAN-interface. A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all three devices, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often optionally to WAN clients. This utility may also be an application that is run on a computer, as is the case with as Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS. Bridge Wireless network bridges can act to connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS). Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link, useful in situations where a wired connection may be unavailable, such as between two separate homes or for devices that have no wireless networking capability (but have wired networking capability), such as consumer entertainment devices; alternatively, a wireless bridge can be used to enable a device that supports a wired connection to operate at a wireless networking standard that is faster than supported by the wireless network connectivity feature (external dongle or inbuilt) supported by the device (e.g., enabling Wireless-N speeds (up to the maximum supported speed on the wired Ethernet port on both the bridge and connected devices including the wireless access point) for a device that only supports Wireless-G). A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port. Repeater Wireless range-extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range-extenders can elongate a signal area or allow for the signal area to reach around barriers such as those pertaining in L-shaped corridors. 
Wireless devices connected through repeaters suffer from an increased latency for each hop, and there may be a reduction in the maximum available data throughput. In addition, the effect of additional users using a network employing wireless range-extenders is to consume the available bandwidth faster than would be the case where a single user migrates around a network employing extenders. For this reason, wireless range-extenders work best in networks supporting low traffic throughput requirements, such as where a single user with a Wi-Fi-equipped tablet migrates around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network. Embedded systems The security standard, Wi-Fi Protected Setup, allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has two configurations: the Push Button configuration and the PIN configuration. These embedded devices, commonly associated with the Internet of things, are low-power, battery-operated embedded systems. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan. Increasingly in recent years, embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet. These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products. In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. It makes embedded systems with Wi-Fi connectivity possible to build as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects. Security The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network), or break through an external firewall. To access Wi-Fi, one must merely be within the range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption. An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply. Securing methods A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast. 
While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to only allow computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address. Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weakness the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA) which uses TKIP. WPA was specifically designed to work with older equipment usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities. The more secure WPA2 using Advanced Encryption Standard was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key replay attack, known as KRACK. A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed. The only remedy was to turn off Wi-Fi Protected Setup, which is not always possible. Virtual private networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks. A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and if the SSID is hidden or not, so users can follow links from QR codes, for instance, to join networks without having to manually enter the data. A MeCard-like format is supported by Android and iOS 11+. Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>; Sample WIFI:S:MySSID;T:WPA;P:MyPassW0rd;; Data security risks Wi-Fi access points typically default to an encryption-free (open) mode. Novice users benefit from a zero-configuration device that works out-of-the-box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks connecting devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN, or Hypertext Transfer Protocol over Transport Layer Security (HTTPS). The older wireless-encryption standard, Wired Equivalent Privacy (WEP), has been shown easily breakable even when correctly configured. Wi-Fi Protected Access (WPA) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi Protected Access 2 (WPA2) ratified in 2004 is considered secure, provided a strong passphrase is used. The 2003 version of WPA has not been considered secure since it was superseded by WPA2 in 2004. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on 26 June. Piggybacking Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection, and using that service without the subscriber's explicit permission or knowledge. During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time. 
Recreational logging and mapping of other people's access points have become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP. These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking. Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings to their access point and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf) this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS. On an unsecured access point, an unauthorized user can obtain security information (factory preset passphrase or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized or unlawful activities. Societal aspects Wireless Internet access has become much more embedded in society. It has thus changed how the society functions in many ways. Influence on developing countries over half the world did not have access to the Internet, prominently rural areas in developing nations. Technology that has been implemented in more developed nations is often costly and energy inefficient. This has led to developing nations using more low-tech networks, frequently implementing renewable power sources that can solely be maintained through solar power, creating a network that is resistant to disruptions such as power outages. For instance, in 2007, a network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: offer Internet access to populations in isolated villages, and to provide healthcare to isolated communities. In the case of the latter example, it connects the central hospital in Iquitos to 15 medical outposts which are intended for remote diagnosis. Work habits Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of specific hotspots. These vary from the accessibility of other resources, like books, the location of the workplace, and the social aspect of meeting other people in the same place. 
Moreover, the increase in the number of people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area. Additionally, in the same study it has been noted that a wireless connection provides more freedom of movement while working. Whether working at home or from the office, it allows movement between different rooms or areas. In some offices (notably Cisco offices in New York) the employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot. Housing The Internet has become an integral part of living; 81.9% of American households have Internet access. Additionally, 89% of American households with broadband connect via wireless technologies. 72.9% of American households have Wi-Fi. Wi-Fi networks have also affected how the interior of homes and hotels is arranged. For instance, architects have reported that their clients no longer wanted only one room as their home office, but would like to work near the fireplace or have the possibility to work in different rooms. This contradicts architects' pre-existing ideas about the use of the rooms that they designed. Additionally, some hotels have noted that guests prefer to stay in certain rooms since they receive a stronger Wi-Fi signal. Health concerns The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that they promote research into effects from other RF sources. In 2011, the WHO's International Agency for Research on Cancer classified radio-frequency electromagnetic fields as possibly carcinogenic to humans, Group 2B (a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"); this classification was based on risks associated with wireless phone use rather than Wi-Fi networks. The United Kingdom's Health Protection Agency reported in 2007 that exposure to Wi-Fi for a year results in the "same amount of radiation from a 20-minute mobile phone call". A review of studies involving 725 people who claimed electromagnetic hypersensitivity concluded that the evidence "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required." Alternatives Several other wireless technologies provide alternatives to Wi-Fi for different use cases: Bluetooth Low Energy, a low-power variant of Bluetooth Bluetooth, a short-distance network Cellular networks, used by smartphones LoRa, for long range wireless with low data rate NearLink, a short-range wireless technology standard WiMAX, for providing long range wireless internet connectivity Zigbee, a low-power, low data rate, short-distance communication protocol Some alternatives are "no new wires", re-using existing cable: G.hn, which uses existing home wiring, such as phone and power lines Several wired technologies for computer networking, which provide viable alternatives to Wi-Fi: Ethernet over twisted pair See also Gi-Fi, a term used by some trade press to refer to faster versions of the IEEE 802.11 standards HiperLAN High-speed multimedia radio Indoor positioning system Li-Fi List of WLAN channels Operating system Wi-Fi support Passive Wi-Fi Power-line communication San Francisco Digital Inclusion Strategy WiGig Wireless Broadband Alliance Wi-Fi Direct Explanatory notes References Further reading Australian inventions Telecommunications-related introductions in 1997 Networking standards Wireless communication systems Dutch inventions
Wi-Fi
[ "Technology", "Engineering" ]
10,804
[ "Computer standards", "Wireless networking", "Wi-Fi", "Computer networks engineering", "Wireless communication systems", "Networking standards" ]
64,020
https://en.wikipedia.org/wiki/Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.). According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined multiprocessor system similarly, but noted that the processors may share "some or all of the system’s memory and I/O facilities"; it also gave tightly coupled system as a synonymous term. At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing however means true parallel execution of multiple processes using more than one processor. Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense. In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems. Key topics Processor symmetry In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized. Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing. Master/slave multiprocessor system In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. 
The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another. Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000. An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000, and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks. The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021 microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would be copied years later by Apple and IBM. Instruction and data streams In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD). Processor coupling Tightly coupled multiprocessor system Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM. Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.
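On a tightly coupled, shared-memory machine the operating system normally presents all of the processors to user programs as a single pool of execution resources. As a minimal, illustrative Python sketch (not tied to any of the hardware described above; the worker function and task list are hypothetical), the following fragment asks the operating system how many processors are available and distributes independent tasks across separate OS processes:

import os
from multiprocessing import Pool

def square(n):
    # Hypothetical CPU-bound task; each call may be scheduled on a different processor.
    return n * n

if __name__ == "__main__":
    cpu_count = os.cpu_count()  # number of processors reported by the operating system
    print(f"Processors available: {cpu_count}")
    with Pool(processes=cpu_count) as pool:
        # The pool starts one worker process per processor and spreads the calls across them.
        results = pool.map(square, range(16))
    print(results)

Whether the resulting processes genuinely run at the same instant depends on the machine being a multiprocessor in the hardware sense discussed in this article.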
Loosely coupled multiprocessor system Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone relatively low processor count commodity computers interconnected via a high speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system. Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster. Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems. Loosely coupled systems have the ability to run different operating systems or OS versions on different systems. Disadvantages Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization. See also Multiprocessor system architecture Symmetric multiprocessing Asymmetric multiprocessing Multi-core processor BMDFM – Binary Modular Dataflow Machine, a SMP MIMD runtime environment Software lockout OpenHMPP References Parallel computing Classes of computers Computing terminology
Multiprocessing
[ "Technology" ]
1,767
[ "Classes of computers", "Computing terminology", "Computers", "Computer systems" ]
64,045
https://en.wikipedia.org/wiki/Chromosomal%20crossover
Chromosomal crossover, or crossing over, is the exchange of genetic material during sexual reproduction between two homologous chromosomes' non-sister chromatids that results in recombinant chromosomes. It is one of the final phases of genetic recombination, which occurs in the pachytene stage of prophase I of meiosis during a process called synapsis. Synapsis begins before the synaptonemal complex develops and is not completed until near the end of prophase I. Crossover usually occurs when matching regions on matching chromosomes break and then reconnect to the other chromosome. Crossing over was described, in theory, by Thomas Hunt Morgan; the term crossover was coined by Morgan and Eleth Cattell. Morgan relied on the discovery of Frans Alfons Janssens who described the phenomenon in 1909 and had called it "chiasmatypie". The term chiasma is linked, if not identical, to chromosomal crossover. Morgan immediately saw the great importance of Janssens' cytological interpretation of chiasmata to the experimental results of his research on the heredity of Drosophila. The physical basis of crossing over was first demonstrated by Harriet Creighton and Barbara McClintock in 1931. The linked frequency of crossing over between two gene loci (markers) is the crossing-over value. For a fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant and the same is then true for the crossing-over value which is used in the production of genetic maps. When Hotta et al. in 1977 compared meiotic crossing-over (recombination) in lily and mouse, they concluded that diverse eukaryotes share a common pattern. This finding suggested that chromosomal crossing over is a general characteristic of eukaryotic meiosis. Origins There are two popular and overlapping theories that explain the origins of crossing-over, coming from the different theories on the origin of meiosis. The first theory rests upon the idea that meiosis evolved as another method of DNA repair, and thus crossing-over is a novel way to replace possibly damaged sections of DNA. The second theory comes from the idea that meiosis evolved from bacterial transformation, with the function of propagating diversity. In 1931, Barbara McClintock discovered a triploid maize plant. She made key findings regarding corn's karyotype, including the size and shape of the chromosomes. McClintock used the prophase and metaphase stages of mitosis to describe the morphology of corn's chromosomes, and later showed the first ever cytological demonstration of crossing over in meiosis. Working with student Harriet Creighton, McClintock also made significant contributions to the early understanding of codependency of linked genes. DNA repair theory Crossing over and DNA repair are very similar processes, which utilize many of the same protein complexes. In her report, "The Significance of Responses of the Genome to Challenge", McClintock studied corn to show how corn's genome would change itself to overcome threats to its survival. She used 450 self-pollinated plants that received from each parent a chromosome with a ruptured end. She used modified patterns of gene expression on different sectors of leaves of her corn plants to show that transposable elements ("controlling elements") hide in the genome, and their mobility allows them to alter the action of genes at different loci. These elements can also restructure the genome, anywhere from a few nucleotides to whole segments of chromosome.
Recombinases and primases lay a foundation of nucleotides along the DNA sequence. One such particular protein complex that is conserved between processes is RAD51, a well conserved recombinase protein that has been shown to be crucial in DNA repair as well as cross over. Several other genes in D. melanogaster have been linked as well to both processes, by showing that mutants at these specific loci cannot undergo DNA repair or crossing over. Such genes include mei-41, mei-9, hdm, , and brca2. This large group of conserved genes between processes supports the theory of a close evolutionary relationship. Furthermore, DNA repair and crossover have been found to favor similar regions on chromosomes. In an experiment using radiation hybrid mapping on wheat's (Triticum aestivum L.) 3B chromosome, crossing over and DNA repair were found to occur predominantly in the same regions. Furthermore, crossing over has been correlated to occur in response to stressful, and likely DNA damaging, conditions. Links to bacterial transformation The process of bacterial transformation also shares many similarities with chromosomal cross over, particularly in the formation of overhangs on the sides of the broken DNA strand, allowing for the annealing of a new strand. Bacterial transformation itself has been linked to DNA repair many times. The second theory comes from the idea that meiosis evolved from bacterial transformation, with the function of propagating genetic diversity. Thus, this evidence suggests that it is a question of whether cross over is linked to DNA repair or bacterial transformation, as the two do not appear to be mutually exclusive. It is likely that crossing over may have evolved from bacterial transformation, which in turn developed from DNA repair, thus explaining the links between all three processes. Chemistry Meiotic recombination may be initiated by double-stranded breaks that are introduced into the DNA by exposure to DNA damaging agents, or the Spo11 protein. One or more exonucleases then digest the 5' ends generated by the double-stranded breaks to produce 3' single-stranded DNA tails (see diagram). The meiosis-specific recombinase Dmc1 and the general recombinase Rad51 coat the single-stranded DNA to form nucleoprotein filaments. The recombinases catalyze invasion of the opposite chromatid by the single-stranded DNA from one end of the break. Next, the 3' end of the invading DNA primes DNA synthesis, causing displacement of the complementary strand, which subsequently anneals to the single-stranded DNA generated from the other end of the initial double-stranded break. The structure that results is a cross-strand exchange, also known as a Holliday junction. The contact between two chromatids that will soon undergo crossing-over is known as a chiasma. The Holliday junction is a tetrahedral structure which can be 'pulled' by other recombinases, moving it along the four-stranded structure. MSH4 and MSH5 The MSH4 and MSH5 proteins form a hetero-oligomeric structure (heterodimer) in yeast and humans. In the yeast Saccharomyces cerevisiae, MSH4 and MSH5 act specifically to facilitate crossovers between homologous chromosomes during meiosis. The MSH4/MSH5 complex binds and stabilizes double Holliday junctions and promotes their resolution into crossover products. An MSH4 hypomorphic (partially functional) mutant of S. cerevisiae showed a 30% genome-wide reduction in crossover numbers and a large number of meioses with non-exchange chromosomes. 
Nevertheless, this mutant gave rise to spore viability patterns suggesting that segregation of non-exchange chromosomes occurred efficiently. Thus, in S. cerevisiae, proper segregation apparently does not entirely depend on crossovers between homologous pairs. Chiasma The grasshopper Melanoplus femur-rubrum was exposed to an acute dose of X-rays during each individual stage of meiosis, and chiasma frequency was measured. Irradiation during the leptotene-zygotene stages of meiosis (that is, prior to the pachytene period in which crossover recombination occurs) was found to increase subsequent chiasma frequency. Similarly, in the grasshopper Chorthippus brunneus, exposure to X-irradiation during the zygotene-early pachytene stages caused a significant increase in mean cell chiasma frequency. Chiasma frequency was scored at the later diplotene-diakinesis stages of meiosis. These results suggest that X-rays induce DNA damages that are repaired by a crossover pathway leading to chiasma formation. Class I and class II crossovers Double strand breaks (DSBs) are repaired by two pathways to generate crossovers in eukaryotes. The majority of them are repaired by MutL homologs MLH1 and MLH3, which defines the class I crossovers. The remainder are the result of the class II pathway, which is regulated by MUS81 endonuclease and FANCM translocase. There are interconnections between these two pathways—class I crossovers can compensate for the loss of the class II pathway. In MUS81 knockout mice, class I crossovers are elevated, while total crossover counts at chiasmata are normal. However, the mechanisms underlying this crosstalk are not well understood. A recent study suggests that a scaffold protein called SLX4 may participate in this regulation. Specifically, SLX4 knockout mice largely phenocopy the MUS81 knockout—once again, class I crossovers are elevated while the chiasmata count is normal. In FANCM knockout mice, the class II pathway is hyperactivated, resulting in increased numbers of crossovers that are independent of the MLH1/MLH3 pathway. Consequences In most eukaryotes, a cell carries two versions of each gene, each referred to as an allele. Each parent passes on one allele to each offspring. An individual gamete inherits a complete haploid complement of alleles on chromosomes that are independently selected from each pair of chromatids lined up on the metaphase plate. Without recombination, all alleles for those genes linked together on the same chromosome would be inherited together. Meiotic recombination allows a more independent segregation between the two alleles that occupy the positions of single genes, as recombination shuffles the allele content between homologous chromosomes. Recombination results in a new arrangement of maternal and paternal alleles on the same chromosome. Although the same genes appear in the same order, some alleles are different. In this way, it is theoretically possible to have any combination of parental alleles in an offspring, and the fact that two alleles appear together in one offspring does not have any influence on the statistical probability that another offspring will have the same combination. This principle of "independent assortment" of genes is fundamental to genetic inheritance. However, the frequency of recombination is actually not the same for all gene combinations. This leads to the notion of "genetic distance", which is a measure of recombination frequency averaged over a (suitably large) sample of pedigrees.
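As a rough worked illustration of how such a value is obtained (the offspring counts in this Python sketch are hypothetical and chosen only to show the arithmetic, not taken from any study): the recombination frequency between two markers is the number of recombinant offspring divided by the total number of offspring scored, and, by convention, a frequency of 1% corresponds to a map distance of roughly one centimorgan for closely spaced loci.

# Hypothetical test-cross counts for two linked markers, A and B.
parental = {"AB": 430, "ab": 420}       # non-recombinant (parental) classes
recombinant = {"Ab": 75, "aB": 75}      # recombinant classes

total = sum(parental.values()) + sum(recombinant.values())
recombination_frequency = sum(recombinant.values()) / total

# By convention, 1% recombination is roughly one centimorgan for small distances.
map_distance_cM = recombination_frequency * 100
print(f"Recombination frequency: {recombination_frequency:.3f}")  # 0.150
print(f"Approximate map distance: {map_distance_cM:.1f} cM")       # 15.0 cM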
Loosely speaking, one may say that this is because recombination is greatly influenced by the proximity of one gene to another. If two genes are located close together on a chromosome, the likelihood that a recombination event will separate these two genes is less than if they were farther apart. Genetic linkage describes the tendency of genes to be inherited together as a result of their location on the same chromosome. Linkage disequilibrium describes a situation in which some combinations of genes or genetic markers occur more or less frequently in a population than would be expected from their distances apart. This concept is applied when searching for a gene that may cause a particular disease. This is done by comparing the occurrence of a specific DNA sequence with the appearance of a disease. When a high correlation between the two is found, it is likely that the appropriate gene sequence is really close by. Non-homologous crossover Crossovers typically occur between homologous regions of matching chromosomes, but similarities in sequence and other factors can result in mismatched alignments. Most DNA is composed of base pair sequences repeated very large numbers of times. These repetitious segments, often referred to as satellites, are fairly homogeneous among a species. During DNA replication, each strand of DNA is used as a template for the creation of new strands using a semi-conservative mechanism; proper functioning of this process results in two identical, paired chromosomes, often called sisters. Sister chromatid crossover events are known to occur at a rate of several crossover events per cell per division in eukaryotes. Most of these events involve an exchange of equal amounts of genetic information, but unequal exchanges may occur due to sequence mismatch. These are referred to by a variety of names, including non-homologous crossover, unequal crossover, and unbalanced recombination, and result in an insertion or deletion of genetic information into the chromosome. While rare compared to homologous crossover events, these mutations are drastic, affecting many loci at the same time. They are considered the main driver behind the generation of gene duplications and are a general source of mutation within the genome. The specific causes of non-homologous crossover events are unknown, but several influential factors are known to increase the likelihood of an unequal crossover. One common vector leading to unbalanced recombination is the repair of double-strand breaks (DSBs). DSBs are often repaired using homology directed repair, a process which involves invasion of a template strand by the DSB strand. Nearby homologous regions of the template strand are often used for repair, which can give rise to either insertions or deletions in the genome if a non-homologous but complementary part of the template strand is used. Sequence similarity is a major player in crossover – crossover events are more likely to occur in long regions of close identity on a gene. This means that any section of the genome with long sections of repetitive DNA is prone to crossover events. The presence of transposable elements is another influential element of non-homologous crossover. Repetitive regions of code characterize transposable elements; complementary but non-homologous regions are ubiquitous within transposons.
Because chromosomal regions composed of transposons have large quantities of identical, repetitious code in a condensed space, it is thought that transposon regions undergoing a crossover event are more prone to erroneous complementary match-up; that is to say, a section of a chromosome containing a lot of identical sequences, should it undergo a crossover event, is less certain to match up with a perfectly homologous section of complementary code and more prone to binding with a section of code on a slightly different part of the chromosome. This results in unbalanced recombination, as genetic information may be either inserted or deleted into the new chromosome, depending on where the recombination occurred. While the motivating factors behind unequal recombination remain obscure, elements of the physical mechanism have been elucidated. Mismatch repair (MMR) proteins, for instance, are a well-known regulatory family of proteins, responsible for regulating mismatched sequences of DNA during replication and escape regulation. The operative goal of MMRs is the restoration of the parental genotype. One class of MMR in particular, MutSβ, is known to initiate the correction of insertion-deletion mismatches of up to 16 nucleotides. Little is known about the excision process in eukaryotes, but E. coli excisions involve the cleaving of a nick on either the 5' or 3' strand, after which DNA helicase and DNA polymerase III bind and generate single-stranded proteins, which are digested by exonucleases and attached to the strand by ligase. Multiple MMR pathways have been implicated in the maintenance of complex organism genome stability, and any of many possible malfunctions in the MMR pathway result in DNA editing and correction errors. Therefore, while it is not certain precisely what mechanisms lead to errors of non-homologous crossover, it is extremely likely that the MMR pathway is involved. See also Unequal crossing over Coefficient of coincidence Genetic distance Independent assortment Mitotic crossover Recombinant frequency References Cellular processes Modification of genetic information Molecular genetics
Chromosomal crossover
[ "Chemistry", "Biology" ]
3,367
[ "Modification of genetic information", "Molecular genetics", "Cellular processes", "Molecular biology" ]
64,085
https://en.wikipedia.org/wiki/Caramel
Caramel is a confectionery product made by heating a range of sugars. It is used as a flavoring in puddings and desserts, as a filling in bonbons or candy bars, or as a topping for ice cream and custard. The process of caramelization consists of heating sugar slowly to around 170 °C. As the sugar heats, the molecules break down and re-form into compounds with a characteristic colour and flavour. A variety of candies, desserts, toppings, and confections are made with caramel: brittles, nougats, pralines, flan, crème brûlée, crème caramel, and caramel apples. Ice creams sometimes are flavored with or contain swirls of caramel. Etymology The English word comes from French caramel, borrowed from Spanish caramelo (18th century), itself possibly from Portuguese caramelo. Most likely, it comes from Late Latin calamellus 'sugar cane', a diminutive of calamus 'reed, cane', itself from Greek kalamos. Less likely, it comes from Medieval Latin cannamella, from canna 'cane' + mel 'honey'. Finally, some dictionaries connect it to an Arabic word meaning 'ball of sweet'. Sauce Caramel sauce is made by mixing caramelized sugar with cream. Depending on the intended application, additional ingredients such as butter, fruit purees, liquors, or vanilla can be used. Caramel sauce is used in a range of desserts, especially as a topping for ice cream. When it is used for crème caramel or flan, it is known as clear caramel and only contains caramelized sugar and water. Butterscotch sauce is made with brown sugar, butter, and cream. Traditionally, butterscotch is a hard candy more in line with a toffee. Candy Caramel candy, or "caramels", and sometimes called "toffee" (though this also refers to other types of candy), is a soft, dense, chewy candy made by boiling a mixture of milk or cream, sugar(s), glucose, butter, and vanilla (or vanilla flavoring). The sugar and glucose are heated separately to a high temperature; the cream and butter are then added, which cools the mixture. The mixture is then stirred and reheated until it reaches its final cooking temperature. Upon completion of cooking, vanilla or any additional flavorings and salt are added. Adding the vanilla or flavorings earlier would result in them burning off at the high temperatures. Adding salt earlier in the process would result in inverting the sugars as they cooked. Alternatively, all ingredients may be cooked together. In this procedure, the mixture is not heated above the firm ball stage (about 120 °C), so that caramelization of the milk occurs. This temperature is not high enough to caramelize sugar and this type of candy is often called milk caramel or cream caramel. Even though caramel candy is sometimes called "toffee" and is also compared with butterscotch, there is a difference. While toffee and butterscotch are more closely related than caramel, they do have most of the same ingredients. However, toffee and butterscotch use molasses or brown sugar while caramel uses white sugar. They are also cooked at different temperatures and they each have their own cooking techniques that make them unique in taste and shape. Salting Salted caramel was created in 1977 by French pastry chef Henri Le Roux in Quiberon, Brittany, in the form of a salted butter caramel with crushed nuts (caramel au beurre salé), using Breton demi-sel butter. It was named the "Best confectionery in France" in Paris in 1980. Le Roux registered the trademark "CBS" (caramel au beurre salé) the year after.
It became a huge hit throughout France and other French-speaking European countries (notably Belgium and Switzerland which already had a tradition for fine chocolate and confectionery) and for years French, Belgian and Swiss children added it to their goûter, a meal eaten around 4 pm in order to restore their energy after school. The goûter usually consists of bread with jam or caramel spread, croissants or pain au chocolat, fruit and hot chocolate. In the late 1990s, Parisian pastry chef Pierre Hermé introduced his salted butter and caramel macarons and, by 2000, high-end chefs started adding a bit of salt to caramel and chocolate dishes. In 2008 it entered the mass market, when Häagen-Dazs and Starbucks started selling it. Originally used in desserts, the confection has seen wide use elsewhere, including in hot chocolate and spirits such as vodka. Its popularity may come from its effects on the reward systems of the human brain, resulting in "hedonic escalation". Colouring Caramel colouring, a dark, bitter liquid, is the highly concentrated product of near total caramelization, used commercially as food and beverage colouring, e.g., in cola. Chemistry Caramelization is the removal of water from a sugar, proceeding to isomerization and polymerization of the sugars into various high-molecular-weight compounds. Compounds such as difructose anhydride may be created from the monosaccharides after water loss. Fragmentation reactions result in low-molecular-weight compounds that may be volatile and may contribute to flavor. Polymerization reactions lead to larger-molecular-weight compounds that contribute to the dark-brown color. Caramel can be produced in many forms such as sauce, chewy candy, or hard candy depending on how much of an ingredient is added and the temperature it is being prepared at. In modern recipes and in commercial production, glucose (from corn syrup or wheat) or invert sugar is added to prevent crystallization, making up 10–50% of the sugars by mass. "Wet caramels" made by heating sucrose and water instead of sucrose alone produce their own invert sugar due to thermal reaction, but not necessarily enough to prevent crystallization in traditional recipes. See also Caramel color Crème caramel Caramel apple, whole apples covered in a layer of caramel and/or chocolate Caramel corn, popcorn coated in caramel Dodol, a caramelized confection made with coconut milk Dulce de leche, caramelized, sweetened milk Maillard reaction Nougat, using egg white rather than milk products Tablet, Scottish candy made with condensed milk Toffee, a type of confection Fudge References External links Toppings Candy Amorphous solids Food colorings Glassforming liquids and melts
Caramel
[ "Physics" ]
1,377
[ "Amorphous solids", "Unsolved problems in physics" ]
64,087
https://en.wikipedia.org/wiki/Type%20design
Type design is the art and process of designing typefaces. This involves drawing each letterform using a consistent style. The basic concepts and design variables are described below. A typeface differs from other modes of graphic production such as handwriting and drawing in that it is a fixed set of alphanumeric characters with specific characteristics to be used repetitively. Historically, these were physical elements, called sorts, placed in a wooden frame; modern typefaces are stored and used electronically. It is the art of a type designer to develop a pleasing and functional typeface. In contrast, it is the task of the typographer (or typesetter) to lay out a page using a typeface that is appropriate to the work to be printed or displayed. Type designers use the basic concepts of strokes, counter, body, and structural groups when designing typefaces. There are also variables that type designers take into account when creating typefaces. These design variables are style, weight, contrast, width, posture, and case. History The technology of printing text using movable type was invented in China, but the vast number of Chinese characters, and the esteem with which calligraphy was held, meant that few distinctive, complete typefaces were created in China in the early centuries of printing. Gutenberg's most important innovation in the mid 15th century development of his press was not the printing itself, but the casting of Latinate types. Unlike Chinese characters, which are based on a uniform square area, European Latin characters vary in width, from the very wide "M" to the slender "l". Gutenberg developed an adjustable mold which could accommodate an infinite variety of widths. From then until at least 400 years later, type started with cutting punches, which would be struck into a brass "matrix". The matrix was inserted into the bottom of the adjustable mold and the negative space formed by the mold cavity plus the matrix acted as the master for each letter that was cast. The casting material was an alloy usually containing lead, which had a low melting point, cooled readily, and could be easily filed and finished. In those early days, type design had to not only imitate the familiar handwritten forms common to readers, but also account for the limitations of the printing process, such as the rough papers of uneven thicknesses, the squeezing or splashing properties of the ink, and the eventual wear on the type itself. Beginning in the 1890s, each character was drawn in a very large size for the American Type Founders Corporation and a few others using their technology—over a foot (30 cm) high. The outline was then traced by a Benton pantograph-based engraving machine with a pointer at the hand-held vertex and a cutting tool at the opposite vertex down to a size usually less than a quarter-inch (6 mm). The pantographic engraver was first used to cut punches, and later to directly create matrices. In the late 1960s through the 1980s, typesetting moved from metal to photo composition. During this time, type design made a similar transition from physical matrixes to hand drawn letters on vellum or mylar and then the precise cutting of "rubyliths". Rubylith was a common material in the printing trade, in which a red transparent film, very soft and pliable, was bonded to a supporting clear acetate. Placing the ruby over the master drawing of the letter, the craftsman would gently and precisely cut through the upper film and peel the non-image portions away.
The resulting letterform, now existing as the remaining red material still adhering to the clear substrate, would then be ready to be photographed using a reproduction camera. With the coming of computers, type design became a form of computer graphics. Initially, this transition occurred with a program called Ikarus around 1980, but widespread transition began with programs such as Aldus Freehand and Adobe Illustrator, and finally to dedicated type design programs called font editors, such as Fontographer and FontLab. This process occurred rapidly: by the mid-1990s, virtually all commercial type design had transitioned to digital vector drawing programs. Each glyph design can be drawn or traced by a stylus on a digitizing board, or modified from a scanned drawing, or composed entirely within the program itself. Each glyph is then in a digital form, either in a bitmap (pixel-based) or vector (scalable outline) format. A given digitization of a typeface can easily be modified by another type designer; such a modified font is usually considered a derivative work, and is covered by the copyright of the original font software. Type design could be copyrighted typeface by typeface in many countries, though not the United States. The United States offered and continues to offer design patents as an option for typeface design protection. Basic concepts Stroke The shape of designed letterforms and other characters are defined by strokes arranged in specific combinations. This shaping and construction has a basis in the gestural movements of handwriting. The visual qualities of a given stroke are derived from factors surrounding its formation: the kind of tool used, the angle at which the tool is dragged across a surface, and the degree of pressure applied from beginning to end. The stroke is the positive form that establishes a character's archetypal shape. Counter The spaces created between and around strokes are called counters (also known as counterforms). These negative forms help to define the proportion, density, and rhythm of letterforms. The counter is an integral element in Western typography, however this concept may not apply universally to non-Western typographic traditions. More complex scripts, such as Chinese, which make use of compounding elements (radicals) within a single character may additionally require consideration of spacing not only between characters but also within characters. Body The overall proportion of characters, or their body, considers proportions of width and height for all cases involved (which in Latin are uppercase and lowercase), and individually for each character. In the former case, a grid system is used to delineate vertical proportions and gridlines (such as the baseline, mean line/x-height, cap line, descent line, and ascent line). In the latter case, letterforms of a typeface may be designed with variable bodies, making the typeface proportional, or they may be designed to fit within a single body measure, making the typeface fixed width or monospaced. Structural groups When designing letterforms, characters with analogous structures can be grouped in consideration of their shared visual qualities. In Latin, for example, archetypal groups can be made on the basis of the dominant strokes of each letter: verticals and horizontals (), diagonals (), verticals and diagonals (), horizontals and diagonals (), circular strokes (), circular strokes and verticals (), and verticals (). 
Design variables Type design takes into consideration a number of design variables which are delineated based on writing system and vary in consideration of functionality, aesthetic quality, cultural expectations, and historical context. Style Style describes several different aspects of typeface variability historically related to character and function. This includes variations in: Structural class (such as serif, sans serif, and script typefaces) Historical class (such as oldstyle, transitional, neoclassical, grotesque, humanist, etc.) Relative neutrality (ranging from neutral typefaces to stylized typefaces) Functional use (such as text, display, and caption typefaces) Weight Weight refers to the thickness or thinness of a typeface's strokes in a global sense. Typefaces usually have a default medium, or regular, weight which will produce the appearance of a uniform grey value when set in text. Categories of weight include hairline, thin, extra light, light, book, regular/medium, semibold, bold, black/heavy, and extra black/ultra. Variable fonts are computer fonts that are able to store and make use of a continuous range of weight (and size) variants of a single typeface. Contrast Contrast refers to the variation in weight that may exist internally within each character, between thin strokes and thick strokes. More extreme contrasts will produce texts with more uneven typographic color. At a smaller scale, strokes within a character may individually also exhibit contrasts in weight, which is called modulation. Width Each character within a typeface has its own overall width relative to its height. These proportions may be changed globally so that characters are narrowed or widened. Typefaces that are narrowed are called condensed typefaces, while those that are widened are called extended typefaces. Posture Letterform structures may be structured in a way that changes the angle between upright stem structures and the typeface's baseline, changing the overall posture of the typeface. In Latin script typefaces, a typeface is categorized as a Roman when this angle is perpendicular. A forward-leaning angle produces either an Italic, if the letterforms are designed with reanalyzed cursive forms, or an oblique, if the letterforms are slanted mechanically. A back-leaning angle produces a reverse oblique, or backslanted, posture. Case A proportion of writing systems are bicameral, distinguishing between two parallel sets of letters that vary in use based on prescribed grammar or convention. These sets of letters are known as cases. The larger case is called uppercase or capitals (also known as majuscule) and the smaller case is called lowercase (also known as minuscule). Typefaces may also include a set of small capitals, which are uppercase forms designed in the same height and weight as lowercase forms. Other writing systems are unicameral, meaning only one case exists for letterforms. Bicameral writing systems may have typefaces with unicase designs, which mix uppercase and lowercase letterforms within a single case. Principles The design of a legible text-based typeface remains one of the most challenging assignments in graphic design. The even visual quality of the reading material being of paramount importance, each drawn character (called a glyph) must be even in appearance with every other glyph regardless of order or sequence. Also, if the typeface is to be versatile, it must appear the same whether it is small or large. 
Because of optical illusions that occur when we apprehend small or large objects, this entails that in the best fonts, a version is designed for small use and another version is drawn for large, display, applications. Also, large letterforms reveal their shape, whereas small letterforms in text settings reveal only their textures: this requires that any typeface that aspires to versatility in both text and display, needs to be evaluated in both of these visual domains. A beautifully shaped typeface may not have a particularly attractive or legible texture when seen in text settings. Spacing is also an important part of type design. Each glyph consists not only of the shape of the character, but also the white space around it. The type designer must consider the relationship of the space within a letter form (the counter) and the letter spacing between them. Designing type requires many accommodations for the quirks of human perception, "optical corrections" required to make shapes look right, in ways that diverge from what might seem mathematically right. For example, round shapes need to be slightly bigger than square ones to appear "the same" size ("overshoot"), and vertical lines need to be thicker than horizontal ones to appear the same thickness. For a character to be perceived as geometrically round, it must usually be slightly "squared" off (made slightly wider at the shoulders). As a result of all these subtleties, excellence in type design is highly respected in the design professions. Profession Type design is performed by a type designer. It is a craft, blending elements of art and science. In the pre-digital era it was primarily learned through apprenticeship and professional training within the industry. Since the mid-1990s it has become the subject of dedicated degree programs at a handful of universities, including the MA Typeface Design at the University of Reading (UK) and the Type Media program at the KABK (Royal Academy of Art in the Hague). At the same time, the transition to digital type and font editors which can be inexpensive (or even open source and free) has led to a great democratization of type design; the craft is accessible to anyone with the interest to pursue it, nevertheless, it may take a very long time for the serious artist to master. References Further reading Stiebner, Erhardt D. & Dieter Urban. Initials and Decorative Alphabets. Poole, England: Blandford Press, 1985. Typography Articles containing video clips Communication design
Type design
[ "Engineering" ]
2,617
[ "Design", "Communication design" ]
64,141
https://en.wikipedia.org/wiki/Acetonitrile
Acetonitrile, often abbreviated MeCN (methyl cyanide), is the chemical compound with the formula CH3CN and the structure H3C−C≡N. This colourless liquid is the simplest organic nitrile (hydrogen cyanide is a simpler nitrile, but the cyanide anion is not classed as organic). It is produced mainly as a byproduct of acrylonitrile manufacture. It is used as a polar aprotic solvent in organic synthesis and in the purification of butadiene. The C−C≡N skeleton is linear with a short C≡N distance of 1.16 Å. Acetonitrile was first prepared in 1847 by the French chemist Jean-Baptiste Dumas. Applications Acetonitrile is used mainly as a solvent in the purification of butadiene in refineries. Specifically, acetonitrile is fed into the top of a distillation column filled with hydrocarbons including butadiene, and as the acetonitrile falls down through the column, it absorbs the butadiene which is then sent from the bottom of the tower to a second separating tower. Heat is then employed in the separating tower to separate the butadiene. In the laboratory, it is used as a medium-polarity non-protic solvent that is miscible with water and a range of organic solvents, but not saturated hydrocarbons. It has a convenient range of temperatures at which it is a liquid, and a high dielectric constant of 38.8. With a dipole moment of 3.92 D, acetonitrile dissolves a wide range of ionic and nonpolar compounds and is useful as a mobile phase in HPLC and LC–MS. It is widely used in battery applications because of its relatively high dielectric constant and ability to dissolve electrolytes. For similar reasons, it is a popular solvent in cyclic voltammetry. Its low ultraviolet cutoff (UV transparency), low viscosity and low chemical reactivity make it a popular choice for high-performance liquid chromatography (HPLC). Acetonitrile plays a significant role as the dominant solvent used in oligonucleotide synthesis from nucleoside phosphoramidites. Industrially, it is used as a solvent for the manufacture of pharmaceuticals and photographic film. Organic synthesis Acetonitrile is a common two-carbon building block in organic synthesis of many useful chemicals, including acetamidine hydrochloride, thiamine, and 1-naphthaleneacetic acid. Its reaction with cyanogen chloride affords malononitrile. As an electron pair donor Acetonitrile has a free electron pair at the nitrogen atom, which can form many transition metal nitrile complexes. Being weakly basic, it is an easily displaceable ligand. For example, bis(acetonitrile)palladium dichloride is prepared by heating a suspension of palladium chloride in acetonitrile: PdCl2 + 2 CH3CN → PdCl2(CH3CN)2. A related complex is tetrakis(acetonitrile)copper(I) hexafluorophosphate, [Cu(CH3CN)4]PF6. The acetonitrile groups in these complexes are rapidly displaced by many other ligands. It also forms Lewis adducts with group 13 Lewis acids like boron trifluoride. In superacids, it is possible to protonate acetonitrile. Production Acetonitrile is a byproduct from the manufacture of acrylonitrile. Most is combusted to support the intended process but an estimated several thousand tons are retained for the above-mentioned applications. Production trends for acetonitrile thus generally follow those of acrylonitrile. Acetonitrile can also be produced by many other methods, but these are of no commercial importance as of 2002. Illustrative routes are by dehydration of acetamide or by hydrogenation of mixtures of carbon monoxide and ammonia.
Acetonitrile shortage in 2008–2009 Starting in October 2008, the worldwide supply of acetonitrile was low because Chinese production was shut down for the Olympics. Furthermore, a U.S. factory was damaged in Texas during Hurricane Ike. Due to the global economic slowdown, the production of acrylonitrile used in acrylic fibers and acrylonitrile butadiene styrene (ABS) resins decreased. Acetonitrile is a byproduct in the production of acrylonitrile and its production also decreased, further compounding the acetonitrile shortage. The global shortage of acetonitrile continued through early 2009. Safety Toxicity Acetonitrile has only modest toxicity in small doses. It can be metabolised to produce hydrogen cyanide, which is the source of the observed toxic effects. Generally the onset of toxic effects is delayed, due to the time required for the body to metabolize acetonitrile to cyanide (generally about 2–12 hours). Cases of acetonitrile poisoning in humans (or, to be more specific, of cyanide poisoning after exposure to acetonitrile) are rare but not unknown, by inhalation, ingestion and (possibly) by skin absorption. The symptoms, which do not usually appear for several hours after the exposure, include breathing difficulties, slow pulse rate, nausea, and vomiting. Convulsions and coma can occur in serious cases, followed by death from respiratory failure. The treatment is as for cyanide poisoning, with oxygen, sodium nitrite, and sodium thiosulfate among the most commonly used emergency treatments. It has been used in formulations for nail polish remover, despite its toxicity. At least two cases have been reported of accidental poisoning of young children by acetonitrile-based nail polish remover, one of which was fatal. Acetone and ethyl acetate are often preferred as safer for domestic use, and acetonitrile has been banned in cosmetic products in the European Economic Area since March 2000. Metabolism and excretion In common with other nitriles, acetonitrile can be metabolised in microsomes, especially in the liver, to produce hydrogen cyanide, as was first shown by Pozzani et al. in 1959. The first step in this pathway is the oxidation of acetonitrile to glycolonitrile by an NADPH-dependent cytochrome P450 monooxygenase. The glycolonitrile then undergoes a spontaneous decomposition to give hydrogen cyanide and formaldehyde. Formaldehyde, a toxin and a carcinogen on its own, is further oxidized to formic acid, which is another source of toxicity. The metabolism of acetonitrile is much slower than that of other nitriles, which accounts for its relatively low toxicity. Hence, one hour after administration of a potentially lethal dose, the concentration of cyanide in the rat brain was only a fraction of that produced by a propionitrile dose 60 times lower. The relatively slow metabolism of acetonitrile to hydrogen cyanide allows more of the cyanide produced to be detoxified within the body to thiocyanate (the rhodanese pathway). It also allows more of the acetonitrile to be excreted unchanged before it is metabolised. The main pathways of excretion are by exhalation and in the urine. See also Trichloroacetonitrile – a derivative of acetonitrile used to protect alcohol groups, and also used as a reagent in the Overman rearrangement References External links WebBook page for C2H3N International Chemical Safety Card 0088 National Pollutant Inventory - Acetonitrile fact sheet NIOSH Pocket Guide to Chemical Hazards Chemical Summary for Acetonitrile (CAS No.
75-05-8), Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency Simulation of acetonitrile How Did Organic Matter Reach Earth? Cosmic Detectives Trace Origin of Complex Organic Molecules, on: SciTechDaily. September 10, 2020. Source: Tokyo University of Science: Acetonitrile found in molecular cloud Sgr B2(M) at the center of our galaxy. Hazardous air pollutants 2 Solvents Ligands Organic compounds with 2 carbon atoms
Acetonitrile
[ "Chemistry" ]
1,769
[ "Organic compounds", "Ligands", "Coordination chemistry", "Organic compounds with 2 carbon atoms" ]
64,187
https://en.wikipedia.org/wiki/Computer%20Olympiad
The Computer Olympiad is a multi-games event in which computer programs compete against each other. For many games, the Computer Olympiads are an opportunity to claim the "world's best computer player" title. First contested in 1989, the majority of the games are board games but other games such as bridge take place as well. In 2010, several puzzles were included in the competition. History Developed in the 1980s by David Levy, the first Computer Olympiad took place in 1989 at the Park Lane Hotel in London. The games ran on a yearly basis until after the 1992 games, when the Olympiad's ruling committee was unable to find a new organiser. This resulted in the games being suspended until 2000 when the Mind Sports Olympiad resurrected them. Recently, the International Computer Games Association (ICGA) has adopted the Computer Olympiad and tries to organise the event on an annual basis. Games contested The games which have been contested at each Olympiad are: 1st–4th Olympiads (1989–1992) 5th–9th Olympiads (2000–2004) After an eight-year hiatus, the Computer Olympiad was revived by bringing it into the Mind Sports Olympiad. The chess competition was a special event, since it was adopted by the International Computer Chess Association (ICCA) as the 17th World Microcomputer Chess Championship (WMCC 2000). The 5th Olympiad was in 2000 at London's Alexandra Palace; the 6th, in 2001 at Ad Fundum at Maastricht University; the 7th, in 2002 in Maastricht; the 8th, in 2003 in Graz; and the 9th, in 2004 in Ramat Gan. The 7th Olympiad was adopted by the ICCA as the 10th World Computer Chess Championship (WCCC), and the 8th was held in conjunction with both the 11th WCCC and the 10th Advances in Computer Games Conference. Because of this, no medals were awarded for the two chess events. The 9th was held in conjunction with the WCCC and the Computers and Games 2004 Conference; no medals were awarded to the two chess events. Jonathan Schaeffer and J. W. H. M. Uiterwijk were the tournament directors. 10th–14th Olympiads (2005–2009) The 10th Olympiad was in 2005 in Taipei; the 11th, in 2006 in Turin; the 12th, in 2007 at the Amsterdam Science Park; the 13th, in 2008 at the Beijing Golden Century Golf Club; and the 14th, in 2009 in Pamplona. The 10th Olympiad was held at the same time and location as the 11th Advances in Computer Games conference and its organizing committee was made up of J. W. Hellemons (chair), H. H. L. M. Donkers, M. Greenspan, T-s Hsu, H. J. van den Herik, and M. Tiessen. Hand Talk, which won the gold medal in Computer Go, was originally written in assembly language by a retired chemistry professor of Sun Yat-sen University, China. The 11th Olympiad was held in conjunction with the 14th World Computer Chess Championship and the 5th Computer and Games Conference, and was co-hosted with the 37th Chess Olympiad, the FIDE event for human players. The 12th was held with the 15th World Computer Chess Championship and the Computer Games Workshop; the 13th, with the International Computer Games Championship, the World Computer Chess Championship, and a scientific conference on computer games; and the 14th, with the World Computer Chess Championship and a scientific conference on computer games. Rybka was retroactively disqualified from all ICCC events due to plagiarism. Rankings were adjusted appropriately.
15th–18th Olympiads (2010–2015) The 15th Olympiad was held in 2010 in Kanazawa, Japan along with the 18th World Computer Chess Championship (WCCC), and a scientific conference on computer games. The 16th Olympiad was held in 2011 at Tilburg University at the same time as the 19th WCCC. The 17th Olympiad was held in 2013 at Keio University's Collaboration Complex on the Hiyoshi Campus, and was at the same time as the 20th WCCC and a scientific conference on computer games. The 18th Olympiad was in 2015 at Leiden University and was organized by the International Computer Game Association, the Leiden Institute of Advanced Computer Science, and the Leiden Centre of Data Science. 19th–25th Olympiads (2016–2022) The 19th Olympiad was held 27 June – 3 July 2016 and the 20th Olympiad was held 1–7 July 2017, both at Leiden University and organized by the International Computer Game Association, the Leiden Institute of Advanced Computer Science, and the Leiden Centre of Data Science. The 21st Olympiad was held 7–13 July 2018 in Taipei, Taiwan alongside the 10th International Conference on Computers and Games. The World Computer Chess Championships took place from 13–19 July in Stockholm, Sweden. The 22nd Olympiad was held 11–17 August 2019 in Macau, China and the 23rd (2020), 24th (2021), and 25th (2022) Olympiads were held online due to the COVID-19 pandemic. Summary by game Abalone Abalone is a strategy game using a hexagonal patterned board with 14 marbles for each of two players. The objective is to push six of the opponent's marbles off the edge of the board. Amazons Amazons is played on a 10×10 chessboard by two players each with four amazons (queen chess pieces). Moves are made to block squares and the winner is the last player able to move his pieces to an unblocked square. Awari Awari is an abstract strategy game among the Mancala family of board games (pit and pebble games). Backgammon Backgammon is a board game for two players where the checker-like playing pieces are moved according to the roll of dice; a player wins by removing all of his pieces from the board before his opponent. Bridge Bridge is a trick-taking card game for four players. Bridge participation in the Computer Olympiad was largely discontinued when in 1996 the American Contract Bridge League established a new official World Computer Bridge Championship, to be run annually at a major bridge tournament. Starting in 1999, that event is now co-sponsored by the World Bridge Federation. Chess Chess is a two-player board game played on a checkered game-board with 64 squares arranged in an eight-by-eight grid. Each player begins with 16 pieces of varying characteristics, the objective being to capture one's opponent's king piece. Many computer-versus-computer events are held beyond those of the Computer Olympiad. Chinese chess Chinese chess is a strategy board game for two players from the same family as western or international chess. Known primarily as Xiangqi internationally, the game is referred to as Chinese chess in the Computer Olympiad competitions. Chinese dark chess Chinese dark chess is known as Banqi in Chinese. Clobber Connect Four Connect6 Dominoes Gin rummy GIPF Octi Octi is an abstract strategy game designed by Donald Green, with similarities to checkers and chess but allowing for multiple jumping, capturing, and special movement of pieces. The object of the game is to move one's pieces into the opponent's starting points. Poker Pool Also known as computational pool. 
See also Computer bridge Computer chess References External links Results of all events ACM News – Silicon Brains Compete at Games Computer olympiads Game artificial intelligence Computer chess competitions Recurring events established in 1989 Computer science competitions
Computer Olympiad
[ "Mathematics" ]
1,531
[ "Game theory", "Game artificial intelligence" ]
64,204
https://en.wikipedia.org/wiki/Kinetic%20theory%20of%20gases
The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. Its introduction allowed many principal concepts of thermodynamics to be established. It treats a gas as composed of numerous particles, too small to be seen with a microscope, in constant, random motion. These particles are now known to be the atoms or molecules of the gas. The kinetic theory of gases uses their collisions with each other and with the walls of their container to explain the relationship between the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity. The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart. Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations. The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics. History Kinetic theory of matter Antiquity In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotlean ideas were dominant. Modern era "Heat is motion" One of the first and boldest statements on the relationship between motion of particles and heat was by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In 1623, in The Assayer, Galileo Galilei, in turn, argued that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon. In 1665, in Micrographia, the English polymath Robert Hooke repeated Bacon's assertion, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle noted that a hammer's "impulse" is transformed into the motion of a nail's constituent particles, and that this type of motion is what heat consists of. Boyle also believed that all macroscopic properties, including color, taste and elasticity, are caused by and ultimately consist of nothing but the arrangement and motion of indivisible particles of matter. In a lecture of 1681, Hooke asserted a direct relationship between the temperature of an object and the speed of its internal particles. "Heat ... is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved." In a manuscript published 1720, the English philosopher John Locke made a very similar statement: "What in our sensation is heat, in the object is nothing but motion." Locke too talked about the motion of the internal particles of the object, which he referred to as its "insensible parts". 
In his 1744 paper Meditations on the Cause of Heat and Cold, Russian polymath Mikhail Lomonosov made a relatable appeal to everyday experience to gain acceptance of the microscopic and kinetic nature of matter and heat:Lomonosov also insisted that movement of particles is necessary for the processes of dissolution, extraction and diffusion, providing as examples the dissolution and diffusion of salts by the action of water particles on the of the “molecules of salt”, the dissolution of metals in mercury, and the extraction of plant pigments by alcohol. Also the transfer of heat was explained by the motion of particles. Around 1760, Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another." Kinetic theory of gases In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic. Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), which connected their research with the development of mechanical explanations of gravitation. In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases." In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann. At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905) and Marian Smoluchowski's (1906) papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory. 
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by David Enskog and Sydney Chapman in 1917 and 1916. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases. Assumptions The application of kinetic theory to ideal gases makes the following assumptions: The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions. The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit. The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic. Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory. Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another. Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible. As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below). More modern developments, such as the revised Enskog theory and the extended Bhatnagar–Gross–Krook model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces as well as quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results. Equilibrium properties Pressure and kinetic energy In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface. Consider a gas particle traveling at velocity, , along the -direction in an enclosed volume with characteristic length, , cross-sectional area, , and volume, . 
The gas particle encounters a boundary after characteristic time The momentum of the gas particle can then be described as We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, such that Now consider a large number, , of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the average particle speed, , in every direction is identical Further, assume that the volume is symmetrical about its three dimensions, , such that The total surface area on which the gas particles act is therefore The pressure exerted by the collisions of the gas particles with the surface can then be found by adding the force contribution of every particle and dividing by the interior surface area of the volume, The total translational kinetic energy of the gas is defined as providing the result This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property. Temperature and kinetic energy Rewriting the above result for the pressure as , we may combine it with the ideal gas law where is the Boltzmann constant and is the absolute temperature defined by the ideal gas law, to obtain which leads to a simplified expression of the average translational kinetic energy per molecule, The translational kinetic energy of the system is times that of a molecule, namely . The temperature, is related to the translational kinetic energy by the description above, resulting in which becomes Equation () is one important result of the kinetic theory: The average molecular kinetic energy is proportional to the ideal gas law's absolute temperature. From equations () and (), we have Thus, the product of pressure and volume per mole is proportional to the average translational molecular kinetic energy. Equations () and () are called the "classical results", which could also be derived from statistical mechanics; for more details, see: The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3 comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom. Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is Thus, the energy added to the system per gas particle kinetic degree of freedom is Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is where is the Avogadro constant, and R is the ideal gas constant. Thus, the ratio of the kinetic energy to the absolute temperature of an ideal monatomic gas can be calculated easily: per mole: 12.47 J/K per molecule: 20.7 yJ/K = 129 μeV/K At standard temperature (273.15 K), the kinetic energy can also be obtained: per mole: 3406 J per molecule: 5.65 zJ = 35.2 meV. At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy. 
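The headline numbers quoted above are easy to verify directly. The following minimal Python sketch is an illustration added here rather than part of the original article; the amount of gas, the temperature and the container volume are arbitrary example assumptions. It evaluates the average translational kinetic energy per molecule and per mole, and checks that the kinetic-theory relation PV = (2/3)K reproduces the ideal gas law.

```python
# Numerical check of the kinetic-theory relations for a monatomic ideal gas.
# Assumed example values: 1 mole of gas in a 0.0224 m^3 box at 273.15 K.

k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro constant, 1/mol
R = k_B * N_A           # ideal gas constant, J/(mol K)

T = 273.15              # absolute temperature, K (standard temperature)
N = N_A                 # number of molecules (one mole, assumed)
V = 0.0224              # container volume, m^3 (arbitrary example)

# Average translational kinetic energy per molecule: (3/2) k_B T
E_per_molecule = 1.5 * k_B * T

# Translational kinetic energy per mole: (3/2) R T
E_per_mole = 1.5 * R * T

# Total translational kinetic energy of the gas
K_total = N * E_per_molecule

# Pressure from P V = (2/3) K_total, which is equivalent to P V = N k_B T
P = (2.0 / 3.0) * K_total / V

print(f"KE per molecule at {T} K     : {E_per_molecule:.3e} J")
print(f"KE per mole at {T} K         : {E_per_mole:.1f} J")
print(f"KE per kelvin, per mole (D=3): {1.5 * R:.2f} J/K")
print(f"pressure from kinetic theory : {P:.0f} Pa")
print(f"ideal gas law check          : {N * k_B * T / V:.0f} Pa")
```

Running it reproduces the figures cited above: about 12.47 J/K of translational kinetic energy per kelvin per mole, and roughly 3.4 kJ of translational kinetic energy per mole at standard temperature.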
Quantum statistical mechanics is needed to accurately compute these contributions. Collisions with container wall For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation. Assume that in the container, the number density (number per unit volume) is and that the particles obey Maxwell's velocity distribution: Then for a small area on the container wall, a particle with speed at angle from the normal of the area , will collide with the area within time interval , if it is within the distance from the area . Therefore, all the particles with speed at angle from the normal that can reach area within time interval are contained in the tilted pipe with a height of and a volume of . The total number of particles that reach area within time interval also depends on the velocity distribution; All in all, it calculates to be: Integrating this over all appropriate velocities within the constraint , , yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time: This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed of the Maxwell's velocity distribution, one has to integrate over , , . The momentum transfer to the container wall from particles hitting the area with speed at angle from the normal, in time interval is: Integrating this over all appropriate velocities within the constraint , , yields the pressure (consistent with Ideal gas law): If this small area is punched to become a small hole, the effusive flow rate will be: Combined with the ideal gas law, this yields The above expression is consistent with Graham's law. To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with that hit the area within the time interval are contained in the tilted pipe with a height of and a volume of ; Therefore, compared to the Maxwell distribution, the velocity distribution will have an extra factor of : with the constraint , , . The constant can be determined by the normalization condition to be , and overall: Speed of molecules From the kinetic energy formula it can be shown that where v is in m/s, T is in kelvin, and m is the mass of one molecule of gas in kg. The most probable (or mode) speed is 81.6% of the root-mean-square speed , and the mean (arithmetic mean, or average) speed is 92.1% of the rms speed (isotropic distribution of speeds). See: Average, Root-mean-square speed Arithmetic mean Mean Mode (statistics) Mean free path In kinetic theory of gases, the mean free path is the average distance traveled by a molecule, or a number of molecules per volume, before they make their first collision. Let be the collision cross section of one molecule colliding with another. As in the previous section, the number density is defined as the number of molecules per (extensive) volume, or . The collision cross section per volume or collision cross section density is , and it is related to the mean free path by Notice that the unit of the collision cross section per volume is reciprocal of length. 
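To make the speed definitions and the mean free path concrete, here is a short illustrative calculation; the gas (molecular nitrogen), the temperature, the pressure and the kinetic diameter used below are assumed example values rather than figures taken from the article.

```python
import math

# Illustrative kinetic-theory quantities for N2 at assumed room conditions.
k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # temperature, K (assumed)
P = 101325.0                       # pressure, Pa (assumed)
m = 28.0134e-3 / 6.02214076e23     # mass of one N2 molecule, kg
d = 3.7e-10                        # assumed kinetic diameter of N2, m

# Characteristic speeds of the Maxwell speed distribution
v_p    = math.sqrt(2 * k_B * T / m)              # most probable speed
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))  # mean speed
v_rms  = math.sqrt(3 * k_B * T / m)              # root-mean-square speed

# Number density from the ideal gas law
n = P / (k_B * T)

# Impingement rate: wall collisions per unit area per unit time, n <v> / 4
impingement = n * v_mean / 4.0

# Mean free path; the sqrt(2) accounts for the relative motion of the molecules
sigma = math.pi * d**2                           # collision cross section
mfp = 1.0 / (math.sqrt(2) * n * sigma)

print(f"most probable speed : {v_p:7.1f} m/s")
print(f"mean speed          : {v_mean:7.1f} m/s")
print(f"rms speed           : {v_rms:7.1f} m/s")
print(f"v_p / v_rms         : {v_p / v_rms:.3f}")
print(f"v_mean / v_rms      : {v_mean / v_rms:.3f}")
print(f"impingement rate    : {impingement:.3e} collisions per m^2 per s")
print(f"mean free path      : {mfp:.2e} m")
```

The printed speed ratios come out near the 0.816 and 0.921 quoted above, since they depend only on the shape of the Maxwell distribution and not on the assumed gas or temperature.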
Transport properties The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also very importantly with gases not in thermodynamic equilibrium. This means using Kinetic Theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion. In its most basic form, Kinetic gas theory is only applicable to dilute gases. The extension of Kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983-1987 by E. G. D. Cohen, J. M. Kincaid and M. Lòpez de Haro, building on work by H. van Beijeren and M. H. Ernst. Viscosity and kinetic momentum In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component which increase uniformly with distance above the lower plate. The non-equilibrium flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions. Inside a dilute gas in a Couette flow setup, let be the forward velocity of the gas at a horizontal flat layer (labeled as ); is along the horizontal direction. The number of molecules arriving at the area on one side of the gas layer, with speed at angle from the normal, in time interval is These molecules made their last collision at , where is the mean free path. Each molecule will contribute a forward momentum of where plus sign applies to molecules from above, and minus sign below. Note that the forward velocity gradient can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint , , yields the forward momentum transfer per unit time per unit area (also known as shear stress): The net rate of momentum per unit area that is transported across the imaginary surface is thus Combining the above kinetic equation with Newton's law of viscosity gives the equation for shear viscosity, which is usually denoted when it is a dilute gas: Combining this equation with the equation for mean free path gives Maxwell-Boltzmann distribution gives the average (equilibrium) molecular speed as where is the most probable speed. We note that and insert the velocity in the viscosity equation above. This gives the well known equation (with subsequently estimated below) for shear viscosity for dilute gases: and is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecules, and that the gas molecules are perfect elastic and hard core particles of spherical shape. 
This assumption of elastic, hard core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by The radius is called collision cross section radius or kinetic radius, and the diameter is called collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There are no simple general relation between the collision cross section and the hard core size of the (fairly spherical) molecule. The relation depends on shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential which have a negative part that attracts the other molecule from distances longer than the hard core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals. The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution, and can be written as where is a term that tends to zero in the limit of infinite dilution that accounts for excluded volume, and is a term accounting for the transfer of momentum over a non-zero distance between particles during a collision. Thermal conductivity and heat flux Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas: Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy which increases uniformly with distance above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions. Let be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is These molecules made their last collision at a distance above and below the gas layer, and each will contribute a molecular kinetic energy of where is the specific heat capacity. Again, plus sign applies to molecules from above, and minus sign below. Note that the temperature gradient can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint , , yields the energy transfer per unit time per unit area (also known as heat flux): Note that the energy transfer from above is in the direction, and therefore the overall minus sign in the equation. 
The net heat flux across the imaginary surface is thus Combining the above kinetic equation with Fourier's law gives the equation for thermal conductivity, which is usually denoted when it is a dilute gas: Similarly to viscosity, Revised Enskog Theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution, and which can be written as where is a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and is a term accounting for the transfer of energy across a non-zero distance between particles during a collision. Diffusion coefficient and diffusion flux Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas: Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density in the layer increases uniformly with distance above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions. Let be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is These molecules made their last collision at a distance above and below the gas layer, where the local number density is Again, plus sign applies to molecules from above, and minus sign below. Note that the number density gradient can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint , , yields the molecular transfer per unit time per unit area (also known as diffusion flux): Note that the molecular transfer from above is in the direction, and therefore the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus Combining the above kinetic equation with Fick's first law of diffusion gives the equation for mass diffusivity, which is usually denoted when it is a dilute gas: The corresponding expression obtained from Revised Enskog Theory may be written as where is a factor that tends to unity in the limit of infinite dilution, which accounts for excluded volume and the variation chemical potentials with density. Detailed balance Fluctuation and dissipation The kinetic theory of gases entails that due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation: where is the mass diffusivity; is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, ; is the Boltzmann constant; is the absolute temperature. Note that the mobility can be calculated based on the viscosity of the gas; Therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas. 
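The elementary transport estimates above all share the same structure: a flux proportional to the mean speed times the mean free path. The sketch below is a rough illustration under assumed input values, using the simple prefactors of order 1/2 that this style of mean-free-path argument yields; rigorous Chapman–Enskog or Revised Enskog theory changes the numerical coefficients by factors of order unity. It evaluates the dilute-gas viscosity, thermal conductivity and self-diffusivity for nitrogen, plus the Einstein–Smoluchowski diffusivity of a small Brownian sphere with a Stokes-drag mobility.

```python
import math

# Order-of-magnitude dilute-gas transport estimates from mean-free-path arguments:
#   viscosity            eta   ~ (1/2) rho <v> lambda
#   thermal conductivity kappa ~ (1/2) rho <v> lambda c_v
#   self-diffusivity     D     ~ (1/2) <v> lambda
# The ~1/2 prefactors come from the simple derivation; exact kinetic theory
# modifies them. Example gas and state point are assumptions: N2 at 300 K, 1 atm.

k_B = 1.380649e-23
N_A = 6.02214076e23
T = 300.0                     # K (assumed)
P = 101325.0                  # Pa (assumed)
M = 28.0134e-3                # kg/mol, N2
m = M / N_A                   # molecular mass, kg
d = 3.7e-10                   # assumed kinetic diameter of N2, m

n = P / (k_B * T)             # number density
rho = n * m                   # mass density
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))
mfp = 1.0 / (math.sqrt(2) * n * math.pi * d**2)   # mean free path

c_v = 2.5 * k_B / m           # specific heat per unit mass (5 degrees of freedom for N2)

eta = 0.5 * rho * v_mean * mfp
kappa = 0.5 * rho * v_mean * mfp * c_v
D_self = 0.5 * v_mean * mfp

print(f"mean free path        : {mfp:.2e} m")
print(f"viscosity estimate    : {eta:.2e} Pa s")
print(f"conductivity estimate : {kappa:.2e} W/(m K)")
print(f"self-diffusivity est. : {D_self:.2e} m^2/s")
# Note: rho * mfp is independent of n, so eta and kappa do not depend on pressure.

# Einstein-Smoluchowski relation D = mu k_B T for a Brownian sphere,
# with the mobility taken from Stokes drag, mu = 1 / (6 pi eta_f r).
eta_f = 1.0e-3                # assumed viscosity of the surrounding fluid (water), Pa s
r = 1.0e-6                    # assumed particle radius, m
mu = 1.0 / (6 * math.pi * eta_f * r)
print(f"Brownian diffusivity of a 1 micron sphere in water: {mu * k_B * T:.2e} m^2/s")
```

Because the product of density and mean free path is independent of pressure, the estimated viscosity and thermal conductivity depend only on temperature at this level of approximation, consistent with the dilute-gas behaviour described above.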
Onsager reciprocal relations The mathematical similarities between the expressions for shear viscocity, thermal conductivity and diffusion coefficient of the ideal (dilute) gas is not a coincidence; It is a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to temperature gradient, and heat flow due to pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to pressure gradient) of the ideal (dilute) gas. See also Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations Boltzmann equation Chapman–Enskog theory Collision theory Critical temperature Gas laws Heat Interatomic potential Magnetohydrodynamics Maxwell–Boltzmann distribution Mixmaster universe Thermodynamics Vicsek model Vlasov equation References Citations Sources cited de Groot, S. R., W. A. van Leeuwen and Ch. G. van Weert (1980), Relativistic Kinetic Theory, North-Holland, Amsterdam. Liboff, R. L. (1990), Kinetic Theory, Prentice-Hall, Englewood Cliffs, N. J. (reprinted in his Papers, 3, 167, 183.) Further reading Sydney Chapman and Thomas George Cowling (1939/1970), The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, (first edition 1939, second edition 1952), third edition 1970 prepared in co-operation with D. Burnett, Cambridge University Press, London Joseph Oakland Hirschfelder, Charles Francis Curtiss, and Robert Byron Bird (1964), Molecular Theory of Gases and Liquids, revised edition (Wiley-Interscience), ISBN 978-0471400653 Richard Lawrence Liboff (2003), Kinetic Theory: Classical, Quantum, and Relativistic Descriptions, third edition (Springer), ISBN 978-0-387-21775-8 Behnam Rahimi and Henning Struchtrup (2016), "Macroscopic and kinetic modelling of rarefied polyatomic gases", Journal of Fluid Mechanics, 806, 437–505, DOI 10.1017/jfm.2016.604 External links Early Theories of Gases Thermodynamics - a chapter from an online textbook Temperature and Pressure of an Ideal Gas: The Equation of State on Project PHYSNET. Introduction to the kinetic molecular theory of gases, from The Upper Canada District School Board Java animation illustrating the kinetic theory from University of Arkansas Flowchart linking together kinetic theory concepts, from HyperPhysics Interactive Java Applets allowing high school students to experiment and discover how various factors affect rates of chemical reactions. https://www.youtube.com/watch?v=47bF13o8pb8&list=UUXrJjdDeqLgGjJbP1sMnH8A A demonstration apparatus for the thermal agitation in gases. Gases Thermodynamics Classical mechanics
Kinetic theory of gases
[ "Physics", "Chemistry", "Mathematics" ]
6,052
[ "Matter", "Phases of matter", "Classical mechanics", "Mechanics", "Thermodynamics", "Statistical mechanics", "Gases", "Dynamical systems" ]
64,212
https://en.wikipedia.org/wiki/Potassium%20nitrate
Potassium nitrate is a chemical compound with a sharp, salty, bitter taste and the chemical formula . It is a potassium salt of nitric acid. This salt consists of potassium cations and nitrate anions , and is therefore an alkali metal nitrate. It occurs in nature as a mineral, niter (or nitre outside the US). It is a source of nitrogen, and nitrogen was named after niter. Potassium nitrate is one of several nitrogen-containing compounds collectively referred to as saltpetre (or saltpeter in the US). Major uses of potassium nitrate are in fertilizers, tree stump removal, rocket propellants and fireworks. It is one of the major constituents of traditional gunpowder (black powder). In processed meats, potassium nitrate reacts with hemoglobin and myoglobin generating a red color, becoming highly toxic and carcinogenic. Etymology Nitre, or potassium nitrate, because of its early and global use and production, has many names. As for nitrate, Egyptian and Hebrew words for it had the consonants n-t-r, indicating likely cognation in the Greek nitron, which was Latinised to nitrum or nitrium. Thence Old French had niter and Middle English nitre. By the 15th century, Europeans referred to it as saltpetre, specifically Indian saltpetre (Chilean saltpetre is sodium nitrate) and later as nitrate of potash, as the chemistry of the compound was more fully understood. The Arabs called it "Chinese snow" () as well as bārūd (), a term of uncertain origin that later came to mean gunpowder. It was called "Chinese salt" by the Iranians/Persians or "salt from Chinese salt marshes" ( ). The Tiangong Kaiwu, published in the 17th century by members of the Qing dynasty, detailed the production of gunpowder and other useful products from nature. Historical production From mineral sources In Mauryan India saltpeter manufacturers formed the Nuniya & Labana caste. Saltpeter finds mention in Kautilya's Arthashastra (compiled 300BC – 300AD), which mentions using its poisonous smoke as a weapon of war, although its use for propulsion did not appear until medieval times. A purification process for potassium nitrate was outlined in 1270 by the chemist and engineer Hasan al-Rammah of Syria in his book al-Furusiyya wa al-Manasib al-Harbiyya (The Book of Military Horsemanship and Ingenious War Devices). In this book, al-Rammah describes first the purification of barud (crude saltpeter mineral) by boiling it with minimal water and using only the hot solution, then the use of potassium carbonate (in the form of wood ashes) to remove calcium and magnesium by precipitation of their carbonates from this solution, leaving a solution of purified potassium nitrate, which could then be dried. This was used for the manufacture of gunpowder and explosive devices. The terminology used by al-Rammah indicated the gunpowder he wrote about originated in China. At least as far back as 1845, nitratite deposits were exploited in Chile and California. From caves Major natural sources of potassium nitrate were the deposits crystallizing from cave walls and the accumulations of bat guano in caves. Extraction is accomplished by immersing the guano in water for a day, filtering, and harvesting the crystals in the filtered water. Traditionally, guano was the source used in Laos for the manufacture of gunpowder for Bang Fai rockets. Calcium nitrate, or lime saltpetre, was discovered on the walls of stables, from the urine of barnyard animals. Nitraries Potassium nitrate was produced in a nitrary or "saltpetre works". 
The process involved burial of excrements (human or animal) in a field beside the nitraries, watering them and waiting until leaching allowed saltpeter to migrate to the surface by efflorescence. Operators then gathered the resulting powder and transported it to be concentrated by ebullition in the boiler plant. Besides "Montepellusanus", during the thirteenth century (and beyond) the only supply of saltpeter across Christian Europe (according to "De Alchimia" in 3 manuscripts of Michael Scot, 1180–1236) was "found in Spain in Aragon in a certain mountain near the sea". In 1561, Elizabeth I, Queen of England and Ireland, who was at war with Philip II of Spain, became unable to import saltpeter (of which the Kingdom of England had no home production), and had to pay "300 pounds gold" to the German captain Gerrard Honrik for the manual "Instructions for making saltpeter to growe" (the secret of the "Feuerwerkbuch" -the nitraries-). Nitre bed A nitre bed is a similar process used to produce nitrate from excrement. Unlike the leaching-based process of the nitrary, however, one mixes the excrements with soil and waits for soil microbes to convert amino-nitrogen into nitrates by nitrification. The nitrates are extracted from soil with water and then purified into saltpeter by adding wood ash. The process was discovered in the early 15th century and was very widely used until the Chilean mineral deposits were found. The Confederate side of the American Civil War had a significant shortage of saltpeter. As a result, the Nitre and Mining Bureau was set up to encourage local production, including by nitre beds and by providing excrement to government nitraries. On November 13, 1862, the government advertised in the Charleston Daily Courier for 20 or 30 "able bodied Negro men" to work in the new nitre beds at Ashley Ferry, S.C. The nitre beds were large rectangles of rotted manure and straw, moistened weekly with urine, "dung water", and liquid from privies, cesspools and drains, and turned over regularly. The National Archives published payroll records that account for more than 29,000 people compelled to such labor in the state of Virginia. The South was so desperate for saltpeter for gunpowder that one Alabama official reportedly placed a newspaper ad asking that the contents of chamber pots be saved for collection. In South Carolina, in April 1864, the Confederate government forced 31 enslaved people to work at the Ashley Ferry Nitre Works, outside Charleston. Perhaps the most exhaustive discussion of the niter-bed production is the 1862 LeConte text. He was writing with the express purpose of increasing production in the Confederate States to support their needs during the American Civil War. Since he was calling for the assistance of rural farming communities, the descriptions and instructions are both simple and explicit. He details the "French Method", along with several variations, as well as a "Swiss method". N.B. Many references have been made to a method using only straw and urine, but there is no such method in this work. French method Turgot and Lavoisier created the Régie des Poudres et Salpêtres a few years before the French Revolution. Niter-beds were prepared by mixing manure with either mortar or wood ashes, common earth and organic materials such as straw to give porosity to a compost pile typically high, wide, and long. 
The heap was usually under a cover from the rain, kept moist with urine, turned often to accelerate the decomposition, then finally leached with water after approximately one year, to remove the soluble calcium nitrate which was then converted to potassium nitrate by filtering through potash. Swiss method Joseph LeConte describes a process using only urine and not dung, referring to it as the Swiss method. Urine is collected directly, in a sandpit under a stable. The sand itself is dug out and leached for nitrates which are then converted to potassium nitrate using potash, as above. From nitric acid From 1903 until the World War I era, potassium nitrate for black powder and fertilizer was produced on an industrial scale from nitric acid produced using the Birkeland–Eyde process, which used an electric arc to oxidize nitrogen from the air. During World War I the newly industrialized Haber process (1913) was combined with the Ostwald process after 1915, allowing Germany to produce nitric acid for the war after being cut off from its supplies of mineral sodium nitrates from Chile (see nitratite). Modern production Potassium nitrate can be made by combining ammonium nitrate and potassium hydroxide. An alternative way of producing potassium nitrate without a by-product of ammonia is to combine ammonium nitrate, found in instant ice packs, and potassium chloride, easily obtained as a sodium-free salt substitute. Potassium nitrate can also be produced by neutralizing nitric acid with potassium hydroxide. This reaction is highly exothermic. On industrial scale it is prepared by the double displacement reaction between sodium nitrate and potassium chloride. Properties Potassium nitrate has an orthorhombic crystal structure at room temperature, which transforms to a trigonal system at . On cooling from , another trigonal phase forms between and . Sodium nitrate is isomorphous with calcite, the most stable form of calcium carbonate, whereas room-temperature potassium nitrate is isomorphous with aragonite, a slightly less stable polymorph of calcium carbonate. The difference is attributed to the similarity in size between nitrate () and carbonate () ions and the fact that the potassium ion () is larger than sodium () and calcium () ions. In the room-temperature structure of potassium nitrate, each potassium ion is surrounded by 6 nitrate ions. In turn, each nitrate ion is surrounded by 6 potassium ions. Potassium nitrate is moderately soluble in water, but its solubility increases with temperature. The aqueous solution is almost neutral, exhibiting pH 6.2 at for a 10% solution of commercial powder. It is not very hygroscopic, absorbing about 0.03% water in 80% relative humidity over 50 days. It is insoluble in alcohol and is not poisonous; it can react explosively with reducing agents, but it is not explosive on its own. Thermal decomposition Between , potassium nitrate reaches a temperature-dependent equilibrium with potassium nitrite: Uses Potassium nitrate has a wide variety of uses, largely as a source of nitrate. Nitric acid production Historically, nitric acid was produced by combining sulfuric acid with nitrates such as saltpeter. In modern times this is reversed: nitrates are produced from nitric acid produced via the Ostwald process. Oxidizer The most famous use of potassium nitrate is probably as the oxidizer in blackpowder. From the most ancient times until the late 1880s, blackpowder provided the explosive power for all the world's firearms. 
After that time, small arms and large artillery increasingly began to depend on cordite, a smokeless powder. Blackpowder remains in use today in black powder rocket motors, but also in combination with other fuels like sugars in "rocket candy" (a popular amateur rocket propellant). It is also used in fireworks such as smoke bombs. It is also added to cigarettes to maintain an even burn of the tobacco and is used to ensure complete combustion of paper cartridges for cap and ball revolvers. It can also be heated to several hundred degrees to be used for niter bluing, which is less durable than other forms of protective oxidation, but allows for specific and often beautiful coloration of steel parts, such as screws, pins, and other small parts of firearms. Meat processing Potassium nitrate has been a common ingredient of salted meat since antiquity or the Middle Ages. The widespread adoption of nitrate use is more recent and is linked to the development of large-scale meat processing. The use of potassium nitrate has been mostly discontinued because it gives slow and inconsistent results compared with sodium nitrite preparations such as "Prague powder" or pink "curing salt". Even so, potassium nitrate is still used in some food applications, such as salami, dry-cured ham, charcuterie, and (in some countries) in the brine used to make corned beef (sometimes together with sodium nitrite). In the Shetland Islands (UK) it is used in the curing of mutton to make reestit mutton, a local delicacy. When used as a food additive in the European Union, the compound is referred to as E252; it is also approved for use as a food additive in the United States and Australia and New Zealand (where it is listed under its INS number 252). Possible cancer risk Since October 2015, WHO classifies processed meat as Group 1 carcinogen (based on epidemiological studies, convincingly carcinogenic to humans). In April 2023 the French Court of Appeals of Limoges confirmed that food-watch NGO Yuka was legally legitimate in describing Potassium Nitrate E249 to E252 as a "cancer risk", and thus rejected an appeal by the French industry against the organisation. Fertilizer Potassium nitrate is used in fertilizers as a source of nitrogen and potassium – two of the macronutrients for plants. When used by itself, it has an NPK rating of 13-0-44. Pharmacology Used in some toothpastes for sensitive teeth. It has been used since 1980, although the efficacy is not strongly supported by the literature. Used historically to treat asthma. Used in some toothpastes to relieve asthma symptoms. Used in Thailand as main ingredient in kidney tablets to relieve the symptoms of cystitis, pyelitis and urethritis. Combats high blood pressure and was once used as a hypotensive. Other uses Used as an electrolyte in a salt bridge. Active ingredient of condensed aerosol fire suppression systems. When burned with the free radicals of a fire's flame, it produces potassium carbonate. Works as an aluminium cleaner. Component (usually about 98%) of some tree stump removal products. It accelerates the natural decomposition of the stump by supplying nitrogen for the fungi attacking the wood of the stump. In heat treatment of metals as a medium temperature molten salt bath, usually in combination with sodium nitrite. A similar bath is used to produce a durable blue/black finish typically seen on firearms. Its oxidizing quality, water solubility, and low cost make it an ideal short-term rust inhibitor. 
In glass toughening: molten potassium nitrate bath is used to increase glass strength and scratch-resistance. To induce flowering of mango trees in the Philippines. Thermal storage medium in power generation systems. Sodium and potassium nitrate salts are stored in a molten state with the solar energy collected by the heliostats at the Gemasolar Thermosolar Plant. Ternary salts, with the addition of calcium nitrate or lithium nitrate, have been found to improve the heat storage capacity in the molten salts. As a source of potassium ions for exchange with sodium ions in chemically strengthened glass. As an oxidizer in model rocket fuel called Rocket candy. As a constituent in homemade smoke bombs. In folklore and popular culture Potassium nitrate was once thought to induce impotence, and is still rumored to be in institutional food (such as military fare). There is no scientific evidence for such properties. In Bank Shot, El (Joanna Cassidy) propositions Walter Ballantine (George C. Scott), who tells her that he has been fed saltpeter in prison. In One Flew Over the Cuckoo's Nest, Randle is asked by the nurses to take his medications, but not knowing what they are, he mentions he does not want anyone to "slip me saltpeter". He then proceeds to imitate the motions of masturbation. In 1776, John Adams asks his wife Abigail to make saltpeter for the Continental Army. She, eventually, is able to do so in exchange for pins for sewing. In the Star Trek episode "Arena", Captain Kirk injures a gorn using a rudimentary cannon that he constructs using potassium nitrate as a key ingredient of gunpowder. In 21 Jump Street, Jenko, played by Channing Tatum, gives a rhyming presentation about potassium nitrate for his chemistry class. In Eating Raoul, Paul hires a dominatrix to impersonate a nurse and trick Raoul into consuming saltpeter in a ploy to reduce his sexual appetite for his wife. In The Simpsons episode "El Viaje Misterioso de Nuestro Jomer (The Mysterious Voyage of Our Homer)", Mr. Burns is seen pouring saltpeter into his chili entry, titled Old Elihu's Yale-Style Saltpeter Chili. In the Sharpe novel series by Bernard Cornwell, numerous mentions are made of an advantageous supply of saltpeter from India being a crucial component of British military supremacy in the Napoleonic Wars. In Sharpe's Havoc, the French Captain Argenton laments that France needs to scrape its supply from cesspits. In the Dr. Stone anime and manga series, the struggle for control over a natural saltpeter source from guano features prominently in the plot. In the farming lore from the Corn Belt of the 1800s, drought-killed corn in manured fields could accumulate saltpeter to the extent that upon opening the stalk for examination it would "fall as a fine powder upon the table". In the Slovenian short story Martin Krpan from Vrh pri Sveti Trojici, the titular character and Slovene folk hero Martin Krpan illegally smuggles "English salt" for a living. The exact nature of "English salt" is a matter of debate, but it may have been a euphemism for potassium nitrate (saltpeter) due to its role in manufacturing gunpowder. In Dexter: Original Sin's first episode, Dexter's first victim uses potassium nitrate to kill her victims. See also History of gunpowder Humberstone and Santa Laura Saltpeter Works Niter, a mineral form of potassium nitrate Nitratine Nitrocellulose Potassium perchlorate References Bibliography David Cressy. 
Saltpeter: The Mother of Gunpowder (Oxford University Press, 2013) 237 pp online review by Robert Tiegs Alan Williams. "The production of saltpeter in the Middle Ages", Ambix, 22 (1975), pp. 125–33. Maney Publishing, ISSN 0002-6980. External links International Chemical Safety Card 018402216 Gunpowder Inorganic fertilizers Nitrates Potassium compounds Preservatives Pyrotechnic oxidizers E-number additives
Potassium nitrate
[ "Chemistry" ]
3,836
[ "Oxidizing agents", "Nitrates", "Salts" ]
64,219
https://en.wikipedia.org/wiki/Bernoulli%27s%20principle
Bernoulli's principle is a key concept in fluid dynamics that relates pressure, density, speed and height. Bernoulli's principle states that an increase in the speed of a parcel of fluid occurs simultaneously with a decrease in either the pressure or the height above a datum. The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form. Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid—implying an increase in its kinetic energy—occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ) is the same everywhere. Bernoulli's principle can also be derived directly from Isaac Newton's second Law of Motion. When fluid is flowing horizontally from a region of high pressure to a region of low pressure, there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Bernoulli's principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected. However, the principle can be applied to various types of flow within these bounds, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers. Incompressible flow equation In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible, and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. 
A common form of Bernoulli's equation is: v²/2 + gz + p/ρ = constant (A) where: v is the fluid flow speed at a point, g is the acceleration due to gravity, z is the elevation of the point above a reference plane, with the positive z-direction pointing upward—so in the direction opposite to the gravitational acceleration, p is the static pressure at the chosen point, and ρ is the density of the fluid at all points in the fluid. Bernoulli's equation and the Bernoulli constant are applicable throughout any region of flow where the energy per unit mass is uniform. Because the energy per unit mass of liquid in a well-mixed reservoir is uniform throughout, Bernoulli's equation can be used to analyze the fluid flow everywhere in that reservoir (including pipes or flow fields that the reservoir feeds) except where viscous forces dominate and erode the energy per unit mass. The following assumptions must be met for this Bernoulli equation to apply: the flow must be steady, that is, the flow parameters (velocity, density, etc.) at any point cannot change with time, the flow must be incompressible—even though pressure varies, the density must remain constant along a streamline; friction by viscous forces must be negligible. For conservative force fields (not limited to the gravitational field), Bernoulli's equation can be generalized as: v²/2 + Ψ + p/ρ = constant where Ψ is the force potential at the point considered. For example, for the Earth's gravity Ψ = gz. By multiplying with the fluid density ρ, equation (A) can be rewritten as: ½ρv² + ρgz + p = constant or: q + ρgh = p₀ + ρgz = constant where q = ½ρv² is dynamic pressure, h = z + p/(ρg) is the piezometric head or hydraulic head (the sum of the elevation z and the pressure head p/(ρg)) and p₀ = p + q is the stagnation pressure (the sum of the static pressure p and dynamic pressure q). The constant in the Bernoulli equation can be normalized. A common approach is in terms of total head or energy head H: H = z + p/(ρg) + v²/(2g) = h + v²/(2g). The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids—when the pressure becomes too low—cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid. Simplified form In many applications of Bernoulli's equation, the change in the ρgz term is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z is so small the ρgz term can be omitted. This allows the above equation to be presented in the following simplified form: p + q = p₀ where p₀ is called total pressure, and q is dynamic pressure. Many authors refer to the pressure p as static pressure to distinguish it from total pressure p₀ and dynamic pressure q. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure." The simplified form of Bernoulli's equation can be summarized in the following memorable word equation: static pressure + dynamic pressure = total pressure. Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p₀. 
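As a worked illustration of the incompressible equation above, the sketch below applies it between two stations on a streamline in a pipe; the fluid properties, speeds and elevations are assumed example numbers, not values from the article.

```python
# Minimal sketch: pressure change between two points on a streamline from the
# incompressible Bernoulli equation  p + (1/2) rho v^2 + rho g z = constant.
# All numbers below (water density, speeds, elevations) are assumed example values.

rho = 998.0        # density of water, kg/m^3 (assumed, roughly 20 degC)
g = 9.81           # gravitational acceleration, m/s^2

# Station 1: wide section of a pipe; Station 2: narrower, elevated section.
p1, v1, z1 = 200e3, 1.5, 0.0     # Pa, m/s, m
v2, z2     =        6.0, 2.0     # m/s, m (speed at station 2 from continuity, assumed)

# Solve Bernoulli for the unknown static pressure at station 2.
p2 = p1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (z1 - z2)

print(f"static pressure at station 2 : {p2 / 1e3:.1f} kPa")
print(f"dynamic pressure at station 2: {0.5 * rho * v2**2 / 1e3:.2f} kPa")
```

The static pressure drops at the faster, higher station, which is exactly the trade-off between the p, ½ρv² and ρgz terms that the equation expresses.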
The significance of Bernoulli's principle can now be summarized as "total pressure is constant in any region free of viscous forces". If the fluid flow is brought to rest at some point, this point is called a stagnation point, and at this point the static pressure is equal to the stagnation pressure. If the fluid flow is irrotational, the total pressure is uniform and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow". It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight and ships moving in open bodies of water. However, Bernoulli's principle importantly does not apply in the boundary layer, such as in flow through long pipes. Unsteady potential flow The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. In that case, v = ∇φ, and for a constant density ρ, the momentum equations of the Euler equations can be integrated to: ∂φ/∂t + ½v² + p/ρ + gz = f(t), which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ∂φ/∂t denotes the partial derivative of the velocity potential φ with respect to time t, and v = |∇φ| is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment t applies in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case f and ∂φ/∂t are constants so the equation can be applied in every point of the fluid domain. Further, f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation Φ = φ − ∫f(t) dt, resulting in: ∂Φ/∂t + ½v² + p/ρ + gz = 0. Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian mechanics. Compressible flow equation Bernoulli developed his principle from observations on liquids, and Bernoulli's equation is valid for ideal fluids: those that are incompressible, irrotational, inviscid, and subjected to conservative forces. It is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation—in its incompressible flow form—cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. 
In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough. It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics. Compressible flow in fluid dynamics For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces, Bernoulli's equation along a streamline takes the form: v²/2 + ∫dp/ρ(p) + Ψ = constant, where: p is the pressure, ρ(p) is the density (the notation indicating that it is a function of pressure), v is the flow speed, and Ψ is the potential associated with the conservative force field, often the gravitational potential. In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation for an ideal gas becomes: v²/2 + gz + (γ/(γ−1))·(p/ρ) = constant (along a streamline), where, in addition to the terms listed above: γ is the ratio of the specific heats of the fluid, g is the acceleration due to gravity, and z is the elevation of the point above a reference plane. In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the gz term can be omitted. A very useful form of the equation is then: v²/2 + (γ/(γ−1))·(p/ρ) = (γ/(γ−1))·(p₀/ρ₀), where: p₀ is the total pressure and ρ₀ is the total density. Compressible flow in thermodynamics The most general form of the equation, suitable for use in thermodynamics in case of (quasi) steady flow, is: v²/2 + Ψ + w = constant. Here w is the enthalpy per unit mass (also known as specific enthalpy), which is also often written as h (not to be confused with "head" or "height"). Note that w = e + p/ρ, where e is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy the equation reduces to the incompressible-flow form. The constant on the right-hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in Ψ can be ignored, a very useful form of this equation is: v²/2 + w = w₀, where w₀ is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy. 
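As an illustration of the total-enthalpy form, the relation w₀ = w + v²/2 with w = cp·T for a calorically perfect gas gives the stagnation temperature mentioned above. The short Python sketch below is not from the source article; the air-like value of cp and the example flow conditions are assumptions chosen only to show the arithmetic.

```python
CP_AIR = 1005.0  # specific heat at constant pressure, J/(kg*K) (assumed: dry air)

def stagnation_temperature(T_static, speed, cp=CP_AIR):
    """Total (stagnation) temperature from w0 = w + v^2/2 with w = cp*T.

    Applies to a calorically perfect gas in steady adiabatic flow with
    no additional sources or sinks of energy.
    """
    return T_static + speed**2 / (2.0 * cp)

# Illustrative example: airflow at 250 m/s with a static temperature of 288 K.
print(f"stagnation temperature: {stagnation_temperature(288.0, 250.0):.1f} K")  # about 319 K
```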
Unsteady potential flow For a compressible fluid with a barotropic equation of state, the unsteady momentum conservation equation can again be combined with the irrotational assumption, namely that the flow velocity can be described as the gradient ∇φ of a velocity potential φ. The unsteady momentum conservation equation then becomes ∇(∂φ/∂t + ½|∇φ|² + Ψ + ∫dp/ρ(p)) = 0, which leads to ∂φ/∂t + ½|∇φ|² + Ψ + ∫dp/ρ(p) = constant. In this case, the above equation for isentropic flow becomes: ∂φ/∂t + ½|∇φ|² + Ψ + (γ/(γ−1))·(p/ρ) = constant. Applications In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid, and a small viscosity often has a large effect on the flow. Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations, which were established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the venturi can be explained by Bernoulli's principle: in the narrow throat the air is moving at its fastest speed and therefore it is at its lowest pressure. A carburetor may or may not exploit this effect directly; a basic carburetor uses the difference in pressure between the throat and the local air pressure in the float bowl to force the fuel to flow. An injector on a steam locomotive or a static boiler works on the same principle. The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure. A De Laval nozzle utilizes Bernoulli's principle to create a force by turning pressure energy generated by the combustion of propellants into velocity. This then generates thrust by way of Newton's third law of motion. The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect. The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, which is compatible with Bernoulli's principle. Increased viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice (a short numerical sketch follows below). The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper. 
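As a numerical illustration of the Torricelli drain-rate application just described, the following Python sketch evaluates the volumetric outflow Q = Cd·A·√(2gh); it is not from the source article, and the discharge coefficient, head and orifice size in the example are assumed values chosen only for demonstration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drain_rate(height, orifice_area, discharge_coeff=0.6, g=G):
    """Volumetric drain rate Q = Cd * A * sqrt(2*g*h) from Torricelli's law.

    Cd is the empirical discharge coefficient (< 1) that accounts for
    viscosity and orifice shape, as noted in the text above.
    """
    return discharge_coeff * orifice_area * math.sqrt(2.0 * g * height)

# Illustrative example: 1.5 m of head over a 2 cm diameter hole, Cd assumed to be 0.6.
area = math.pi * (0.01 ** 2)
print(f"drain rate: {drain_rate(1.5, area):.5f} m^3/s")
```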
During a cricket match, bowlers continually polish one side of the ball. After some time, one side is quite rough and the other is still smooth. Hence, when the ball is bowled and passes through air, the speed on one side of the ball is faster than on the other, and this results in a pressure difference between the sides; this leads to the ball rotating ("swinging") while travelling through the air, giving advantage to the bowlers. Misconceptions Airfoil lift One of the most common erroneous explanations of aerodynamic lift asserts that the air must traverse the upper and lower surfaces of a wing in the same amount of time, implying that since the upper surface presents a longer path the air must be moving over the top of the wing faster than over the bottom. Bernoulli's principle is then cited to conclude that the pressure on top of the wing must be lower than on the bottom. Equal transit time applies to the flow around a body generating no lift, but there is no physical principle that requires equal transit time in cases of bodies generating lift. In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false. While the equal-time explanation is false, it is not the Bernoulli principle that is false, because this principle is well established; Bernoulli's equation is used correctly in common mathematical treatments of aerodynamic lift. Common classroom demonstrations There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure". One problem with this explanation can be seen by blowing along the bottom of the paper: if the deflection was caused by faster moving air, then the paper should deflect downward; but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field. As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields. A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. 
Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed; in other words, as the air passes over the paper, it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration. Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure". See also Torricelli's law Coandă effect Euler equations – for the flow of an inviscid fluid Hydraulics – applied fluid mechanics for liquids Navier–Stokes equations – for the flow of a viscous fluid Teapot effect Terminology in fluid dynamics Notes References External links The Flow of Dry Water - The Feynman Lectures on Physics Science 101 Q: Is It Really Caused by the Bernoulli Effect? Bernoulli equation calculator Millersville University – Applications of Euler's equation NASA – Beginner's guide to aerodynamics Misinterpretations of Bernoulli's equation – Weltner and Ingelman-Sundberg Fluid dynamics Eponymous laws of physics Equations of fluid dynamics 1738 in science
Bernoulli's principle
[ "Physics", "Chemistry", "Engineering" ]
4,616
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
64,221
https://en.wikipedia.org/wiki/Biorhythm%20%28pseudoscience%29
The biorhythm theory is the pseudoscientific idea that people's daily lives are significantly affected by rhythmic cycles with periods of exactly 23, 28 and 33 days, typically a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle. The idea was developed by German otolaryngologist Wilhelm Fliess in the late 19th century, and was popularized in the United States in the late 1970s. The proposal has been independently tested and, consistently, no validity for it has been found. According to the notion of biorhythms, a person's life is influenced by rhythmic biological cycles that affect his or her ability in various domains, such as mental, physical, and emotional activity. These cycles begin at birth and oscillate in a steady (sine wave) fashion throughout life, and by modeling them mathematically, it is suggested that a person's level of ability in each of these domains can be predicted from day to day. It is built on the idea that the body's chemical and hormonal secretion functions could show a sinusoidal behavior over time. Most biorhythm models use three cycles: a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle. These cycles are to be adjusted based on the person's personal day clock, which may run from 22 hours to 27 hours, although 23–25 is the norm. Two ways one can find their personal day clock are to test one's grip and body temperature every 15 minutes for a few days, or at the same time each day for a few months. Although the 28-day cycle is the same length as the average woman's menstrual cycle and was originally described as a "female" cycle (see below), the two are not necessarily in synchronization. Each of these cycles varies between high and low extremes sinusoidally, with days where the cycle crosses the zero line described as "critical days" of greater risk or uncertainty. The numbers from +100% (maximum) to -100% (minimum) indicate where on each cycle the rhythms are on a particular day. In general, a rhythm at 0% is crossing the midpoint and is thought to have no real impact on one's life, whereas a rhythm at +100% (at the peak of that cycle) would give one an edge in that area, and a rhythm at -100% (at the bottom of that cycle) would make life more difficult in that area. There is no particular meaning to a day on which one's rhythms are all high or all low, except the obvious benefits or hindrances that these rare extremes are thought to have on one's life. In addition to the three popular cycles, various other cycles have been proposed, based on linear combination of the three, or on longer or shorter rhythms. Calculation Theories published state the equations for the cycles as: physical: sin(2πt/23), emotional: sin(2πt/28), intellectual: sin(2πt/33), where t indicates the number of days since birth (a short numerical sketch of these formulas follows below). Basic arithmetic shows that the combination of the simpler 23- and 28-day cycles repeats every 644 days (about 1¾ years), while the triple combination of 23-, 28-, and 33-day cycles repeats every 21,252 days (or 58.18+ years). History The 23- and 28-day rhythms used by biorhythmists were first devised in the late 19th century by Wilhelm Fliess, a Berlin physician and friend of Sigmund Freud. Fliess believed that he observed regularities at 23- and 28-day intervals in a number of phenomena, including births and deaths. He labeled the 23-day rhythm "male" and the 28-day rhythm "female", matching the menstrual cycle. In 1904, Viennese psychology professor Hermann Swoboda came to similar conclusions. 
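For illustration only, the cycle formulas quoted above can be evaluated in a few lines of code. The Python sketch below simply reproduces sin(2πt/23), sin(2πt/28) and sin(2πt/33) for a given number of days since birth; the birth date in the example is arbitrary, and, as the studies discussed later in this article make clear, the output has no predictive validity.

```python
import math
from datetime import date

CYCLES = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythm_values(birth: date, on: date) -> dict:
    """Evaluate the three classic cycles sin(2*pi*t/period) for t days since birth.

    This merely reproduces the formulas quoted above; it carries no
    predictive validity.
    """
    t = (on - birth).days
    return {name: math.sin(2 * math.pi * t / period) for name, period in CYCLES.items()}

# Illustrative example with an arbitrary, assumed birth date.
print(biorhythm_values(date(1990, 1, 1), date(2024, 1, 1)))
```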
Alfred Teltscher, professor of engineering at the University of Innsbruck, developed Swoboda's work and suggested that his students' good and bad days followed a rhythmic pattern; he believed that the brain's ability to absorb, mental ability, and alertness ran in 33-day cycles. One of the first academic researchers of biorhythms was Estonian-born Nikolai Pärna, who published a book in German called Rhythm, Life and Creation in 1923. The practice of consulting biorhythms was popularized in the 1970s by a series of books by Bernard Gittelson, including Biorhythm—A Personal Science, Biorhythm Charts of the Famous and Infamous, and Biorhythm Sports Forecasting. Gittelson's company, Biorhythm Computers, Inc., made a business selling personal biorhythm charts and calculators, but his ability to predict sporting events was not substantiated. Charting biorhythms for personal use was popular in the United States during the 1970s; many places (especially video arcades and amusement areas) had a biorhythm machine that provided charts upon entry of date of birth. Biorhythm programs were a common application on personal computers; and in the late 1970s, there were also handheld biorhythm calculators on the market, the Kosmos 1 and the Casio Biolator. Critical views There have been some three dozen published studies of biorhythm theory, but according to a study by Terence Hines, all of those either supported the null hypothesis that there is no correlation of human experience and the supposed biorhythms beyond what can be explained by coincidence, or, in cases where authors claimed to have evidence for biorhythm theory, methodological and statistical errors invalidated their conclusions. Hines therefore concluded that the theory is not valid. Supporters continued to defend the theory in spite of the lack of corroborating scientific evidence, leading to the charge that it had become a kind of pseudoscience due to its proponents' rejection of empirical testing: The physiologist Gordon Stein in the book Encyclopedia of Hoaxes (1993) wrote:Both the theoretical underpinning and the practical scientific verification of biorhythm theory are lacking. Without those, biorhythms became just another pseudoscientific claim that people are willing to accept without required evidence. Those pushing biorhythm calculators and books on a gullible public are guilty of making fraudulent claims. They are hoaxers of the public if they know what they are saying has no factual justification.A 1978 study of the incidence of industrial accidents found neither empirical nor theoretical support for the biorhythm model. In Underwood Dudley's book, Numerology: Or What Pythagoras Wrought, he provides an example of a situation in which a magician provides a woman her biorhythm chart that supposedly included the next two years of her life. The women sent letters to the magician describing how accurate the chart was. The magician purposely sent her a biorhythm chart based on a different birthdate. After he explained that he sent the wrong chart to her, he sent her another chart, also having the wrong birthdate. She then said that this new chart was even more accurate than the previous one. This kind of willful credulous belief in vague or inaccurate prognostication derives from motivated reasoning backed up by fallacious acceptance of confirmation bias, post hoc rationalization, and suggestibility. 
Wilhelm Fliess "was able to impose his number patterns on virtually everything" and worked to convince others that cycles happen within men and women every 23 and 28 days. Mathematically, Fliess's equation, n = 23x +28y is unconstrained as there are infinitely many solutions for x and y, meaning that Fliess and Sigmund Freud (who adopted this idea in the early 1890s) could predict anything they wanted with the combination. The skeptical evaluations of the various biorhythm proposals led to a number of critiques lambasting the subject published in the 1970s and 1980s. Biorhythm advocates who objected to the takedowns claimed that because circadian rhythms had been empirically verified in many organisms' sleep cycles, biorhythms were just as plausible. However, unlike biorhythms, which are claimed to have precise and unaltering periods, circadian rhythms are found by observing the cycle itself and the periods are found to vary in length based on biological and environmental factors. Assuming such factors were relevant to biorhythms would result in chaotic cycle combinations that remove any "predictive" features. Additional studies Several controlled, experimental studies found no correlation between the 23, 28 and 33 day cycles and academic performance. These studies include: James (1984) James hypothesized that if biorhythms were rooted in science, then each proposed biorhythm cycle would contribute to task performance. Further, he predicted that each type of biorhythm cycle (i.e., intellectual, physical, and emotional) would be most influential on tasks associated with the corresponding cycle type. For example, he postulated that intellectual biorhythm cycles would be most influential on academic testing performance. In order to test his hypotheses, James observed 368 participants, noting their performance on tasks associated with intellectual, physical, and emotional functioning. Based on data collected from his experimental research, James concluded that there was no relation between subjects' biorhythmic status (on any of the three cycle types), and their performance on the associated practical tests. Peveto (1980) Peveto examined the proposed relationship between biorhythms and academic performance, specifically in terms of reading ability. Through examination of the data collected, Peveto concluded that there were no significant differences in the academic performance of the students, in regards to reading, during the high, low, or critical positions of neither the physical biorhythm cycle, the emotional biorhythm cycle, nor the intellectual biorhythm cycle. As a result, it was concluded that biorhythm cycles have no effect on the academic performance of students, when academic performance was measured using reading ability. See also Biological rhythm Chronotherapy (treatment scheduling) Circadian rhythm Mood ring References Further reading Hines, T.M., "Comprehensive review of biorhythm theory". Psychology Department, Pace University, Pleasantville, NY. Psychol Rep. 1998 Aug;83(1):19–64. (ed. concluded that biorhythm theory is not valid.) D'Andrea, V.J., D.R. Black, and N.G. Stayrook, "Relation of the Fliess-Swoboda Biorhythm Theory to suicide occurrence". J Nerv Ment Dis. 1984 Aug;172(8):490–4. (ed. concluded that there was a validity to biorhythm when the innovative methods of the study are put to use.) Laxenaire M., and O. Laurent, "What is the current thinking on the biorhythm theory?" Ann Med Psychol (Paris). 1983 Apr;141(4):425–9. [French] (ed. 
Biorhythm theory is disregarded by the medical world though it has achieved a bit of fame with the public) Sleep Pseudoscience Waveforms
Biorhythm (pseudoscience)
[ "Physics", "Biology" ]
2,338
[ "Physical phenomena", "Behavior", "Waves", "Sleep", "Waveforms" ]
64,243
https://en.wikipedia.org/wiki/Aquatic%20ape%20hypothesis
The aquatic ape hypothesis (AAH), also referred to as aquatic ape theory (AAT) or the waterside hypothesis of human evolution, postulates that the ancestors of modern humans took a divergent evolutionary pathway from the other great apes by becoming adapted to a more aquatic habitat. While the hypothesis has some popularity with the lay public, it is generally ignored or classified as pseudoscience by anthropologists. The theory developed before major discoveries of ancient hominin fossils in East Africa. The hypothesis was initially proposed by the English marine biologist Alister Hardy in 1960, who argued that a branch of apes was forced by competition over terrestrial habitats to hunt for food such as shellfish on the coast and seabed, leading to adaptations that explained distinctive characteristics of modern humans such as functional hairlessness and bipedalism. The popular science writer Elaine Morgan supported this hypothesis in her 1972 book The Descent of Woman. In it, she contrasted the theory with zoologist and ethnologist Desmond Morris's theories of sexuality, which she believed to be rooted in sexism. Anthropologists do not take the hypothesis seriously: John Langdon characterized it as an "umbrella hypothesis" (a hypothesis that tries to explain many separate traits of humans as a result of a single adaptive pressure) that was not consistent with the fossil record, and said that its claim that it was simpler and therefore more likely to be true than traditional explanations of human evolution was not true. According to anthropologist John Hawkes, the AAH is not consistent with the fossil record. Traits that the hypothesis tries to explain evolved at vastly different times, and distributions of soft tissue the hypothesis alleges are unique to humans are common among other primates. History In 1942 the German pathologist Max Westenhöfer (1871–1957) discussed various human characteristics (hairlessness, subcutaneous fat, the regression of the olfactory organ, webbed fingers, direction of the body hair etc.) that could have derived from an aquatic past, quoting several other authors who had made similar speculations. As he did not believe human beings were apes, he believed this might have been during the Cretaceous, contrary to what is possible given the geologic and evolutionary biology evidence available at the time. He stated: "The postulation of an aquatic mode of life during an early stage of human evolution is a tenable hypothesis, for which further inquiry may produce additional supporting evidence." He later abandoned the concept. Independently of Westenhöfer's writings, the marine biologist Alister Hardy had since 1930 also hypothesized that humans may have had ancestors more aquatic than previously imagined, although his work, unlike Westenhöfer's, was rooted in the Darwinian consensus. On the advice of his colleagues, Hardy delayed presenting the hypothesis for approximately thirty years. After he had become a respected academic and knighted for contributions to marine biology, Hardy finally voiced his thoughts in a speech to the British Sub-Aqua Club in Brighton on 5 March 1960. 
Several national newspapers reported sensational presentations of Hardy's ideas, which he countered by explaining them more fully in an article in New Scientist on 17 March 1960: "My thesis is that a branch of this primitive ape-stock was forced by competition from life in the trees to feed on the sea-shores and to hunt for food, shellfish, sea-urchins etc., in the shallow waters off the coast." The idea was generally ignored by the scientific community after the article was published. Some interest was received, notably from the geographer Carl Sauer whose views on the role of the seashore in human evolution "stimulated tremendous progress in the study of coastal and aquatic adaptations" inside marine archaeology. In 1967, the hypothesis was mentioned in The Naked Ape, a popular book by the zoologist Desmond Morris, who reduced Hardy's phrase "more aquatic ape-like ancestors" to the bare "aquatic ape", commenting that "despite its most appealing indirect evidence, the aquatic theory lacks solid support". While traditional descriptions of 'savage' existence identified three common sources of sustenance: gathering of fruit and nuts, fishing, and hunting, in the 1950s, the anthropologist Raymond Dart focused on hunting and gathering as the likely organizing concept of human society in prehistory, and hunting was the focus of the screenwriter Robert Ardrey's 1961 best-seller African Genesis. Another screenwriter, Elaine Morgan, responded to this focus in her 1972 Descent of Woman, which parodied the conventional picture of "the Tarzanlike figure of the prehominid who came down from the trees, saw a grassland teeming with game, picked up a weapon and became a Mighty Hunter," and pictured a more peaceful scene of humans by the seashore. She took her lead from a section in Morris's 1967 book which referred to the possibility of an Aquatic Ape period in evolution, his name for the speculation by the biologist Alister Hardy in 1960. When it aroused no reaction in the academic community, she dropped the feminist criticism and wrote a series of books–The Aquatic Ape (1982), The Scars of Evolution (1990), The Descent of the Child (1994), The Aquatic Ape Hypothesis (1997) and The Naked Darwinist (2008)–which explored the issues in more detail. Books published on the topic since then have avoided the contentious term aquatic and used waterside instead. The Hardy/Morgan hypothesis Hardy's hypothesis as outlined in New Scientist was: Hardy argued a number of features of modern humans are characteristic of aquatic adaptations. He pointed to humans' lack of body hair as being analogous to the same lack seen in whales and hippopotamuses, and noted the layer of subcutaneous fat humans have that Hardy believed other apes lacked, although it has been shown that captive apes with ample access to food have levels of subcutaneous fat similar to humans. Additional features cited by Hardy include the location of the trachea in the throat rather than the nasal cavity, the human propensity for front-facing copulation, tears and eccrine sweating, though these claimed pieces of evidence have alternative evolutionary adaptationist explanations that do not invoke an aquatic context. The diving reflex is sometimes cited as evidence. This is exhibited strongly in aquatic mammals, such as seals, otters and dolphins. It also exists as a lesser response in other animals, including human babies up to 6 months old (see infant swimming). However adult humans generally exhibit a mild response. 
Hardy additionally posited that bipedalism evolved first as an aid to wading before becoming the usual means of human locomotion, and tool use evolved out of the use of rocks to crack open shellfish. These last arguments were cited by later proponents of AAH as an inspiration for their research programs. Morgan summed up her take on the hypothesis in 2011: Reactions The AAH is generally ignored by anthropologists, although it has a following outside academia and conferences on the topic have received celebrity endorsement, for example from David Attenborough. Despite being debunked, it returns periodically, being promoted as recently as 2019. Academics who have commented on the aquatic ape hypothesis include categorical opponents (generally members of the community of academic anthropology) who reject almost all of the claims related to the hypothesis. Other academics have argued that the rejection of Hardy and Morgan is partially unfair given that other explanations which suffer from similar problems are not so strongly opposed. A conference devoted to the subject was held at Valkenburg, Netherlands, in 1987. Its 22 participants included academic proponents and opponents of the hypothesis and several neutral observers headed by the anthropologist Vernon Reynolds of the University of Oxford. His summary at the end was: Critiques The AAH is considered to be a classic example of pseudoscience among the scholarly community, and has been met with significant skepticism. The Nature editor and paleontologist Henry Gee has argued that the hypothesis has equivalent merit to creationism, and should be similarly dismissed. In a 1997 critique, anthropologist John Langdon considered the AAH under the heading of an "umbrella hypothesis" and argued that the difficulty of ever disproving such a thing meant that although the idea has the appearance of being a parsimonious explanation, it actually was no more powerful an explanation than the null hypothesis that human evolution is not particularly guided by interaction with bodies of water. Langdon argued that however popular the idea was with the public, the "umbrella" nature of the idea means that it cannot serve as a proper scientific hypothesis. Langdon also objected to Morgan's blanket opposition to the "savannah hypothesis" which he took to be the "collective discipline of paleoanthropology". He observed that some anthropologists had regarded the idea as not worth the trouble of a rebuttal. In addition, the evidence cited by AAH proponents mostly concerned developments in soft tissue anatomy and physiology, whilst paleoanthropologists rarely speculated on evolutionary development of anatomy beyond the musculoskeletal system and brain size as revealed in fossils. After a brief description of the issues under 26 different headings, he produced a summary critique of these with mainly negative judgments. His main conclusion was that the AAH was unlikely ever to be disproved on the basis of comparative anatomy, and that the one body of data that could potentially disprove it was the fossil record. In a blog post originally published in 2005 and continually updated since, anthropologist John D. Hawks said that anthropologists don't accept the AAH for several reasons. Hardy and Morgan situated the alleged aquatic period of human nature in a period of the fossil record that is now known not to contain any aquatic ancestors. The traits the AAH tries to explain actually evolved at wildly different time periods. 
The AAH claims that the alleged aquatic nature of humanity is responsible for human patterns of hair, fat, and sweat, but actually all of these things are similar in humans to other primates. To the extent they are exceptional in any primate relative to other primates, or in primates relative to other mammals, they are exceptional for well-understood thermodynamic reasons. Palaeontologist Riley Black concurred with the pseudoscience label, and described the AAH as a "classic case of picking evidence that fits a preconceived conclusion and ignoring everything else". Physical anthropologist Eugenie Scott has described the aquatic ape hypothesis as an instance of "crank anthropology" akin to other pseudoscientific ideas in anthropology such as alien-human interbreeding and Bigfoot. In The Accidental Species: Misunderstandings of Human Evolution (2013), Henry Gee remarked on how a seafood diet can aid in the development of the human brain. He nevertheless criticized the AAH because "it's always a problem identifying features [such as body fat and hairlessness] that humans have now and inferring that they must have had some adaptive value in the past." Also "it's notoriously hard to infer habits [such as swimming] from anatomical structures". Popular support for the AAH has become an embarrassment to some anthropologists, who want to explore the effects of water on human evolution without engaging with the AAH, which they consider "emphasizes adaptations to deep water (or at least underwater) conditions". Foley and Lahr suggest that "to flirt with anything watery in paleoanthropology can be misinterpreted", but argue "there is little doubt that throughout our evolution we have made extensive use of terrestrial habitats adjacent to fresh water, since we are, like many other terrestrial mammals, a heavily water-dependent species." But they allege that "under pressure from the mainstream, AAH supporters tended to flee from the core arguments of Hardy and Morgan towards a more generalized emphasis on fishy things." In "The Waterside Ape", a pair of 2016 BBC Radio documentaries, David Attenborough discussed what he thought was a "move towards mainstream acceptance" for the AAH in the light of new research findings. He interviewed scientists supportive of the idea, including Kathlyn Stewart and Michael Crawford who had published papers in a special issue of the Journal of Human Evolution on "The Role of Freshwater and Marine Resources in the Evolution of the Human Diet, Brain and Behavior". Responding to the documentaries in a newspaper article, paleoanthropologist Alice Roberts criticized Attenborough's promotion of AAH and dismissed the idea as a distraction "from the emerging story of human evolution that is more interesting and complex". She argued that AAH had become "a theory of everything" that is simultaneously "too extravagant and too simple". Philosopher Daniel Dennett, in his discussion of evolutionary philosophy, commented "During the last few years, when I have found myself in the company of distinguished biologists, evolutionary theorists, paleoanthropologists and other experts, I have often asked them to tell me, please, exactly why Elaine Morgan must be wrong about the aquatic theory. I haven't yet had a reply worth mentioning, aside from those who admit, with a twinkle in their eyes, that they have also wondered the same thing." He challenged both Elaine Morgan and the scientific establishment in that "Both sides are indulging in adapt[at]ionist Just So stories". 
Along the same lines, historian Erika Lorraine Milam noted that independent of Morgan's work, certain standard explanations of human development in paleoanthropology have been roundly criticized for lacking evidence, while being based on sexist assumptions. Anatomy lecturer Bruce Charlton gave Morgan's book Scars of Evolution an enthusiastic review in the British Medical Journal in 1991, calling it "exceptionally well written" and "a good piece of science". In 1995, paleoanthropologist Phillip Tobias declared that the savannah hypothesis was dead, because the open conditions did not exist when humanity's precursors stood upright and that therefore the conclusions of the Valkenburg conference were no longer valid. Tobias praised Morgan's book Scars of Evolution as a "remarkable book", though he said that he did not agree with all of it. Tobias and his student further criticised the orthodox hypothesis by arguing that the coming out of the forest of man's precursors had been an unexamined assumption of evolution since the days of Lamarck, and followed by Darwin, Wallace and Haeckel, well before Raymond Dart used it. Reactions of Hardy and Morgan Alister Hardy was astonished and mortified in 1960 when the national Sunday papers carried banner headlines "Oxford professor says man a sea ape", causing problems with his Oxford colleagues. As he later said to his ex-pupil Desmond Morris, "Of course I then had to write an article to refute this saying no this is just a guess, a rough hypothesis, this isn't a proven fact. And of course we're not related to dolphins." Elaine Morgan's 1972 book Descent of Woman became an international best-seller, a Book of the Month selection in the United States and was translated into ten languages. The book was praised for its feminism but paleoanthropologists were disappointed with its promotions of the AAH. Morgan removed the feminist critique and left her AAH ideas intact, publishing the book as The Aquatic Ape 10 years later, but it did not garner any more positive reaction from scientists. Related academic and independent research Wading and bipedalism AAH proponent Algis Kuliukas, performed experiments to measure the comparative energy used when lacking orthograde posture with using fully upright posture. Although it is harder to walk upright with bent knees on land, this difference gradually diminishes as the depth of water increases and is still practical in thigh-high water. In a critique of the AAH, Henry Gee questioned any link between bipedalism and diet. Gee writes that early humans have been bipedal for 5 million years, but our ancestors' "fondness for seafood" emerged a mere 200,000 years ago. Diet Evidence supports aquatic food consumption in Homo as early as the Pliocene, but its linkage to brain evolution remains controversial. Further, there is no evidence that humans ate fish in significant amounts earlier than tens to hundreds of thousands of years ago. Supporters argue that the avoidance of taphonomic bias is the problem, as most hominin fossils occur in lake-side environments, and the presence of fish remains is therefore not proof of fish consumption. They also claim that the archaeological record of human fishing and coastal settlement is fundamentally deficient due to postglacial sea level rise. 
In their 1989 book The Driving Force: Food, Evolution and The Future, Michael Crawford and David Marsh claimed that omega-3 fatty acids were vital for the development of the brain: Crawford and Marsh opined that the brain size in aquatic mammals is similar to humans, and that other primates and carnivores lost relative brain capacity. Cunnane, Stewart, Crawford, and colleagues published works arguing a correlation between aquatic diet and human brain evolution in their "shore-based diet scenario", acknowledging the Hardy/Morgan's thesis as a foundation work of their model. As evidence, they describe health problems in landlocked communities, such as cretinism in the Alps and goitre in parts of Africa due to salt-derived iodine deficiency, and state that inland habitats cannot naturally meet human iodide requirements. Biologists Caroline Pond and Dick Colby were highly critical, saying that the work provided "no significant new information that would be of interest to biologists" and that its style was "speculative, theoretical and in many places so imprecise as to be misleading." British palaeontologist Henry Gee, who remarked on how a seafood diet can aid in the development of the human brain, nevertheless criticized AAH because inferring aquatic behavior from body fat and hairlessness patterns is an unjustifiable leap. Diving behavior and performance Professor of animal physiology and experienced scuba and freediver Erika Schagatay researches human diving abilities and oxygen stress. She suggests that such abilities are consistent with selective pressure for underwater foraging during human evolution, and discussed other anatomical traits speculated as diving adaptations by Hardy/Morgan. John Langdon suggested that such traits could be enabled by a human developmental plasticity. See also Endurance running hypothesis Hunting hypothesis Mermaid Stoned ape theory Triune brain References Bibliography 1960 in biology Biological hypotheses Human evolution Aquatic mammals Hypothetical life forms Fringe science Pseudoscience
Aquatic ape hypothesis
[ "Biology" ]
3,743
[ "Biological hypotheses", "Hypothetical life forms" ]