id: string (lengths 2–8)
url: string (lengths 31–117)
title: string (lengths 1–71)
text: string (lengths 153–118k)
topic: string (4 classes)
section: string (lengths 4–49)
sublist: string (9 classes)
351077
https://en.wikipedia.org/wiki/Transparency%20and%20translucency
Transparency and translucency
In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without appreciable scattering of light. On a macroscopic scale (one in which the dimensions are much larger than the wavelengths of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) allows light to pass through but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces, or internally, where there is a change in the index of refraction. In other words, a translucent material is made up of components with different indices of refraction. A transparent material is made up of components with a uniform index of refraction. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including transparency, translucency and opacity among the involved aspects. When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission. Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. Materials that do not transmit light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies. They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering. Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent. Etymology transparent: late Middle English, from Old French, from medieval Latin transparent- 'visible through', from Latin transparere, from trans- 'through' + parere 'be visible'. translucent: late 16th century (in the Latin sense), from Latin translucent- 'shining through', from the verb translucere, from trans- 'through' + lucere 'to shine'. opaque: late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form. Introduction With regard to the absorption of light, primary material considerations include: At the electronic level, absorption in the ultraviolet and visible (UV-Vis) portions of the spectrum depends on whether the electron orbitals are spaced (or "quantized") such that electrons can absorb a quantum of light (or photon) of a specific frequency.
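To make the quantization argument concrete, a short sketch follows (illustrative figures only; the ~9 eV band gap used for fused silica is an assumed round value) comparing visible-photon energies with a wide insulator band gap:

```python
# Photon energy E = h*c/lambda, compared against a wide insulator band gap.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

BAND_GAP_EV = 9.0  # assumed round figure for a wide-gap insulator such as fused silica
for nm in (400, 550, 700):  # violet, green, red
    e = photon_energy_ev(nm)
    print(f"{nm} nm -> {e:.2f} eV; absorbed across the gap: {e >= BAND_GAP_EV}")
# Visible photons carry only ~1.8-3.1 eV, far below the gap, so there is no
# electronic absorption in the visible range and the glass looks transparent.
```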
For example, in most glasses, electrons have no available energy levels above them in the range of that associated with visible light, or if they do, the transition to them would violate selection rules, meaning there is no appreciable absorption in pure (undoped) glasses, making them ideal transparent materials for windows in buildings. At the atomic or molecular level, physical absorption in the infrared portion of the spectrum depends on the frequencies of atomic or molecular vibrations or chemical bonds, and on selection rules. Nitrogen and oxygen are not greenhouse gases because there is no molecular dipole moment. With regard to the scattering of light, the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include: Crystalline structure: whether the atoms or molecules exhibit the 'long-range order' evidenced in crystalline solids. Glassy structure: Scattering centers include fluctuations in density or composition. Microstructure: Scattering centers include internal surfaces such as grain boundaries, crystallographic defects, and microscopic pores. Organic materials: Scattering centers include fiber and cell structures and boundaries. Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation. Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of 0.5 μm. Scattering centers (or particles) as small as 1 μm have been observed directly in the light microscope (e.g., Brownian motion). Transparent ceramics Optical transparency in polycrystalline materials is limited by the amount of light scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometre, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries, which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. 
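A rough quantification of this size rule, sketched below; the 1/15 factor anticipates the rule-of-thumb estimate quoted in the next paragraph:

```python
# Rule of thumb (from the text): scattering becomes negligible once grain or
# pore sizes fall below roughly 1/15 of the wavelength being scattered.
def max_scatterer_size_nm(wavelength_nm: float, factor: float = 15.0) -> float:
    """Largest feature size (nm) that scatters little at this wavelength."""
    return wavelength_nm / factor

for nm in (400, 600, 700):  # representative visible wavelengths
    print(f"lambda = {nm} nm -> keep features below ~{max_scatterer_size_nm(nm):.0f} nm")
# For 600 nm light this gives 600 / 15 = 40 nm, matching the estimate below.
```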
In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus, a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength, or roughly 600 nm / 15 = 40 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material. Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by the methods of sol-gel chemistry and nanotechnology. Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers. The development of transparent panel products will have other potential advanced applications including high strength, impact-resistant materials that can be used for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits seen on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall. Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3–5 μm mid-infrared range. Yttria is fully transparent from 3–5 μm, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. A combination of these two materials in the form of the yttrium aluminium garnet (YAG) is one of the top performers in the field. Absorption of light in solids When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect, or transmit light of certain frequencies. That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. 
The manner in which visible light interacts with an object is dependent upon the frequency of the light, the nature of the atoms in the object, and often, the nature of the electrons in the atoms of the object. Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this. Materials that do not allow the transmission of any light wave frequencies are called opaque. Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are composed of materials that are selective in their absorption of light frequencies. Thus they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color. Absorption centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 μm) to shorter (0.4 μm) wavelengths: Red, orange, yellow, green, and blue (ROYGB) can all be identified by our senses in the appearance of color by the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include: Electronic: Transitions in electron energy levels within the atom (e.g., pigments). These transitions are typically in the ultraviolet (UV) and/or visible portions of the spectrum. Vibrational: Resonance in atomic/molecular vibrational modes. These transitions are typically in the infrared portion of the spectrum. UV-Vis: electronic transitions In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms that compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital. The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of an atom, one of several things can and will occur: A molecule absorbs the photon, some of the energy may be lost via luminescence, fluorescence and phosphorescence. A molecule absorbs the photon, which results in reflection or scattering. A molecule cannot absorb the energy of the photon and the photon continues on its path. This results in transmission (provided no other absorption mechanisms are active). Most of the time, it is a combination of the above that happens to the light that hits an object. The states in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light. What happens is the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. 
There are also special glass types, such as certain borosilicate glasses or quartz, that are UV-permeable and thus allow a high transmission of ultraviolet light. Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can happen, then, to the absorbed energy: It may be re-emitted by the electron as radiant energy (in this case, the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e., transformed into heat), or the electron can be freed from the atom (as in the photoelectric and Compton effects). Infrared: bond stretching The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat, or thermal energy. Thermal energy manifests itself as energy of motion. Thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration. Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum. It swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10¹² cycles per second (terahertz radiation). When a light wave of a given frequency strikes a material with particles having the same (resonant) vibrational frequencies, those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted. If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted. Transparency in insulators An object may not be transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light. When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms. In metals, most of these are non-bonding electrons (or free electrons) as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface. Most insulators (or dielectric materials) are held together by ionic bonds.
Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses. If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced. Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. Absence of structural defects (voids, cracks, etc.) and molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. The liquid fills up numerous voids making the material more structurally homogeneous. Light scattering in an ideal defect-free crystalline (non-metallic) solid that provides no scattering centers for incoming light will be due primarily to any effects of anharmonicity within the ordered lattice. Light transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven different crystalline forms of quartz silica (silicon dioxide, SiO2) are all clear, transparent materials. Optical waveguides Optically transparent materials focus on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless. An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively. When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core. Light travels along the fiber bouncing back and forth off of the boundary. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. 
This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g., combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems. Mechanisms of attenuation Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. It is an important factor limiting the transmission of a signal across large distances. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the very high quality of transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside. In optical fibers, the main source of attenuation is scattering from molecular level irregularities, called Rayleigh scattering, due to structural disorder and compositional fluctuations of the glass structure. This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes. Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces are other factors resulting in attenuation. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber. As camouflage Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at a depth of ; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. For the same reason, transparency in air is even harder to achieve, but a partial example is found in the glass frogs of the South American rain forest, which have translucent skin and pale greenish limbs. Several Central American species of clearwing (ithomiine) butterflies and many dragonflies and allied insects also have wings which are mostly transparent, a form of crypsis that provides some protection from predators.
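A small sketch of the fiber relations discussed above, using the typical core and cladding indices quoted in the text (1.48 and 1.46); the attenuation coefficient is an assumed example figure, not a value from the text:

```python
import math

n_core, n_clad = 1.48, 1.46  # typical values quoted above

# Total internal reflection: rays meeting the core/cladding boundary at more
# than the critical angle (measured from the normal) are completely reflected.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Acceptance cone: numerical aperture NA = sqrt(n_core^2 - n_clad^2);
# for a fiber in air, the maximum launch half-angle is asin(NA).
na = math.sqrt(n_core**2 - n_clad**2)
acceptance_half_angle = math.degrees(math.asin(na))
print(f"critical angle ~ {critical_angle:.1f} deg")
print(f"NA ~ {na:.3f}, acceptance half-angle ~ {acceptance_half_angle:.1f} deg")

# Attenuation quoted in dB/km: fraction of power left after L km of fiber.
alpha_db_per_km = 0.2  # assumed example value for modern silica fiber
for km in (1, 50, 100):
    fraction = 10 ** (-alpha_db_per_km * km / 10)
    print(f"{km:3d} km -> {fraction:.1%} of launched power remains")
```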
Physical sciences
Optics
null
351091
https://en.wikipedia.org/wiki/Transparency%20%28human%E2%80%93computer%20interaction%29
Transparency (human–computer interaction)
Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behavior. The purpose is to shield change from all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to the visibility of the component's internals (as in a white box or open system). The term transparent is widely used in computing marketing as a substitute for the term invisible, since the term invisible has a bad connotation (usually seen as something that the user can't see, and has no control over) while the term transparent has a good connotation (usually associated with not hiding anything). The vast majority of the time, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process, which is also described by the term opaque, especially with regard to data structures. Because of this misleading and counterintuitive definition, modern computer literature tends to prefer the term "agnostic" over "transparent". The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighboring layer. For a time, around 1969, the term was also used in IBM and Honeywell programming manuals to refer to a certain computer programming technique: application code was transparent when it was free of low-level detail (such as device-specific management) and contained only the logic solving the main problem. This was achieved through encapsulation – putting the code into modules that hid internal details, making them invisible to the main application. Examples For example, the Network File System is transparent, because it presents access to files stored remotely on the network in a way that is uniform with previous local access to a file system, so the user might not even notice it while using the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an ftp client. Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge; some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually. In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example). In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes. Types of transparency in distributed systems Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system. There are many types of transparency: Access transparency – Regardless of how resource access and representation have to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way.
Example: SQL queries Location transparency – Users of a distributed system should not have to be aware of where a resource is physically located. Example: Pages in the Web Migration transparency – Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location. Relocation transparency – Should a resource move while in use, this should not be noticeable to the end user. Replication transparency – If a resource is replicated among several locations, it should appear to the user as a single resource. Concurrency transparency – While multiple users may compete for and share a single resource, this should not be apparent to any of them. Failure transparency – Always try to hide any failure and recovery of computing entities and resources. Persistence transparency – Whether a resource lies in volatile or permanent memory should make no difference to the user. Security transparency – Negotiation of cryptographically secure access to resources must require a minimum of user intervention, or users will circumvent the security in preference to productivity. Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746). The degree to which these properties can or should be achieved may vary widely. Not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light, there will always be more latency when accessing resources distant from the user. If one expects real-time interaction with the distributed system, this may be very noticeable.
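A minimal sketch (hypothetical class and method names) of how access and location transparency are commonly realized in code: callers program against a single interface, and the distributed implementation hides where the resource actually lives:

```python
from abc import ABC, abstractmethod

class FileStore(ABC):
    """Uniform interface: callers cannot tell local from remote storage
    (access transparency), nor where the bytes live (location transparency)."""
    @abstractmethod
    def read(self, path: str) -> bytes: ...

class LocalFileStore(FileStore):
    def read(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

class RemoteFileStore(FileStore):
    """Stand-in for an NFS-like backend; a real one would fetch over the network."""
    def __init__(self, host: str, fake_remote_fs: dict[str, bytes]):
        self.host = host
        self.fs = fake_remote_fs  # simulated remote storage for this sketch
    def read(self, path: str) -> bytes:
        return self.fs[path]

def show(store: FileStore, path: str) -> None:
    # Application code is identical regardless of which backend it is handed.
    print(store.read(path).decode(errors="replace"))

show(RemoteFileStore("nfs.example.org", {"/etc/motd": b"hello"}), "/etc/motd")
```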
Technology
Software development: General
null
4232899
https://en.wikipedia.org/wiki/Lava%20field
Lava field
A lava field, sometimes called a lava bed, is a large, mostly flat area of lava flows. Such features are generally composed of highly fluid basalt lava, and can extend for tens or hundreds of kilometers across the underlying terrain. Morphology and structure The final morphology of a lava field can reveal properties such as internal structure, composition, and mechanics of the lava flow when it was fluid. The ridges and patterns on top of the lava field show the direction of the lava channels and the often active lava tubes that may be underneath the solidified "crust." It can also reveal whether the lava flow can be classified as pāhoehoe or 'a'ā. The two main types of lava field structures are defined as sheet flow lava and pillow lava. Sheet flow lava appears like a wrinkled or folded sheet, while pillow lava is bulbous, and often looks like a pile of pillows atop one another. An important aspect of lava flow morphology is a phenomenon known as lava flow inflation. This occurs in pāhoehoe flows that have a high effusion rate, and initially forms a thin crust atop the lava flow. The fluid lava underneath the crust continues to accumulate due to the sustained high effusion rate, and thus the entire "structure" grows in size, up to four meters in height. This phenomenon can expose important physics and mechanisms behind lava flow that were not previously known. The structure of lava fields also varies based on geographic location. For example, in subaqueous lava fields, sheet flow lava is found near volcanoes characterized by fast-spreading centers, like the Galapagos Rift, while pillow lava fields are found near slower-spreading centers, like the Mid-Atlantic Ridge. Mapping and prediction The extent of large lava fields is most readily studied from the air or in satellite photos, where their commonly dark, near-black color contrasts sharply with the rest of the landscape. Current computer models are mostly unable to predict the placement of lava fields due to the inability to anticipate random environmental influences. Computer modeling is consistently increasing in quality, but the many micro factors directing lava flow and shape, such as source geometry and lava extrusion rate, limit the accuracy that is currently available. Notable examples Boring Lava Field (United States) Harrat Rahat, which threatened the city of Medina in the 13th century (Saudi Arabia) Hell's Half Acre Lava Field (Idaho, United States) Reykjanes, Iceland (peninsula is mainly a barren waste of lava fields) St. George, Utah, United States (city built around fields and bluffs covered in lava rocks) Mackenzie Large Igneous Province, Canada
Physical sciences
Volcanic landforms
Earth science
4234786
https://en.wikipedia.org/wiki/Volvariella%20volvacea
Volvariella volvacea
Volvariella volvacea (also known as paddy straw mushroom or straw mushroom) is a species of edible mushroom cultivated throughout East and Southeast Asia and used extensively in Asian cuisine. They are often available fresh in regions where they are cultivated, but elsewhere are more frequently found canned or dried. Worldwide, straw mushrooms are the third-most-consumed mushroom. Description In their button stage, straw mushrooms resemble poisonous death caps, but can be distinguished by several mycological features, including their pink spore print (spore prints of death caps are white). The two mushrooms have different distributions, with the death cap generally not found where the straw mushroom grows natively, but immigrants, particularly those from Southeast Asia to California and Australia, have been poisoned due to misidentification. Uses Straw mushrooms are grown on rice straw beds and are most commonly picked when immature (often labelled "unpeeled"), during their button or egg phase, and before the veil ruptures. They are adaptable, taking four to five days to mature, and are most successfully grown in subtropical climates with high annual rainfall. No record has been found of their cultivation before the 19th century. Nutrition One cup () of straw mushrooms is nutritionally dense and provides of food energy, 27.7 μg selenium (50.36% of RDA), 699 mg sodium (46.60%), 2.6 mg iron (32.50%), 0.242 mg copper (26.89%), 69 μg vitamin B9 (folate) (17.25%), 111 mg phosphorus (15.86%), 0.75 mg vitamin B5 (pantothenic acid) (15.00%), 6.97 g protein (13.94%), 4.5 g total dietary fiber (11.84%), and 1.22 mg zinc (11.09%).
Biology and health sciences
Edible fungi
Plants
4235754
https://en.wikipedia.org/wiki/Herpesviridae
Herpesviridae
Herpesviridae is a large family of DNA viruses that cause infections and certain diseases in animals, including humans. The members of this family are also known as herpesviruses. The family name is derived from the Greek word ἕρπειν (herpein, 'to creep'), referring to spreading cutaneous lesions, usually involving blisters, seen in flares of herpes simplex 1, herpes simplex 2 and herpes zoster (shingles). In 1971, the International Committee on the Taxonomy of Viruses (ICTV) established Herpesvirus as a genus with 23 viruses among four groups. As of 2020, 115 species are recognized, all but one of which are in one of the three subfamilies. Herpesviruses can cause both latent and lytic infections. Nine herpesvirus types are known to primarily infect humans, at least five of which are extremely widespread among most human populations, and which cause common diseases: herpes simplex 1 and 2 (HSV-1 and HSV-2, also known as HHV-1 and HHV-2; both of which can cause orolabial and genital herpes), varicella zoster (VZV or HHV-3; the cause of chickenpox and shingles), Epstein–Barr (EBV or HHV-4; implicated in several diseases, including mononucleosis and some cancers), and human cytomegalovirus (HCMV or HHV-5). More than 90% of adults have been infected with at least one of these, and a latent form of the virus remains in almost all humans who have been infected. Other human herpesviruses are human herpesvirus 6A and 6B (HHV-6A and HHV-6B) and human herpesvirus 7 (HHV-7), which are the etiological agents of roseola, and HHV-8 (also known as Kaposi's sarcoma-associated herpesvirus, KSHV), which is responsible for Kaposi's sarcoma. HHV here stands for "human herpesvirus". In total, more than 130 herpesviruses are known, some of them from mammals, birds, fish, reptiles, amphibians, and molluscs. Among the animal herpesviruses are pseudorabies virus, causing Aujeszky's disease in pigs, and bovine herpesvirus 1, causing bovine infectious rhinotracheitis and pustular vulvovaginitis. Taxonomy Subfamily Alphaherpesvirinae: Iltovirus, Mardivirus, Scutavirus, Simplexvirus, Varicellovirus. Subfamily Betaherpesvirinae: Cytomegalovirus, Muromegalovirus, Proboscivirus, Quwivirus, Roseolovirus. Subfamily Gammaherpesvirinae: Bossavirus, Lymphocryptovirus, Macavirus, Manticavirus, Patagivirus, Percavirus, Rhadinovirus. Additionally, the species Iguanid herpesvirus 2 is currently unassigned to a genus and subfamily. See Herpesvirales#Taxonomy for information on taxonomic history, phylogenetic research, and the nomenclatural system. Structure All members of the Herpesviridae share a common structure: a relatively large, monopartite, double-stranded, linear DNA genome encoding 100–200 genes encased within an icosahedral protein cage (with T=16 symmetry) called the capsid, which is itself wrapped in a protein layer called the tegument, containing both viral proteins and viral mRNAs, and a lipid bilayer membrane called the envelope. This whole particle is known as a virion. The structural components of a typical HSV virion are the lipid bilayer envelope, tegument, DNA, glycoprotein spikes and nucleocapsid. The four-component herpes simplex virion encloses the double-stranded DNA genome in an icosahedral nucleocapsid, which is surrounded by the tegument. The tegument contains filaments, each 7 nm wide; it is an amorphous layer with some structured regions. Finally, the particle is covered with a lipoprotein envelope. Spikes made of glycoprotein protrude from each virion, and these can expand the diameter of the virus to 225 nm.
The diameters of virions without spikes are around 186 nm. There are at least two unglycosylated membrane proteins in the outer envelope of the virion, as well as 11 glycoproteins: gB, gC, gD, gE, gG, gH, gI, gJ, gK, gL and gM. The tegument contains 26 proteins, with roles such as capsid transport to the nucleus and other organelles, activation of early gene transcription, and mRNA degradation. The icosahedral nucleocapsid is similar to that of tailed bacteriophages in the order Caudovirales. This capsid has 161 capsomers, consisting of 150 hexons and 11 pentons, as well as a portal complex that allows DNA to enter and exit the capsid. Life cycle All herpesviruses are nuclear-replicating—the viral DNA is transcribed to mRNA within the infected cell's nucleus. Infection is initiated when a viral particle contacts a cell bearing specific types of receptor molecules on its surface. Following binding of viral envelope glycoproteins to cell membrane receptors, the virion is internalized and dismantled, allowing viral DNA to migrate to the cell nucleus. Within the nucleus, replication of viral DNA and transcription of viral genes occur. During symptomatic infection, infected cells transcribe lytic viral genes. In some host cells, a small number of viral genes termed the latency-associated transcript (LAT) accumulate instead. In this fashion, the virus can persist in the cell (and thus the host) indefinitely. While primary infection is often accompanied by a self-limited period of clinical illness, long-term latency is symptom-free. Chromatin dynamics regulate the transcription competency of entire herpesvirus genomes. When the virus enters a cell, the cellular immune response is to protect the cell. The cell does so by wrapping the viral DNA around histones and condensing it into chromatin, causing the virus to become dormant, or latent. If cells are unsuccessful and the chromatin is loosely bundled, the viral DNA is still accessible. The viral particles can turn on their genes and replicate using cellular machinery to reactivate, starting a lytic infection. Reactivation of latent viruses has been implicated in a number of diseases (e.g. shingles, pityriasis rosea). Following activation, transcription of viral genes transitions from LAT to multiple lytic genes; these lead to enhanced replication and virus production. Often, lytic activation leads to cell death. Clinically, lytic activation is often accompanied by the emergence of nonspecific symptoms, such as low-grade fever, headache, sore throat, malaise, and rash, as well as clinical signs such as swollen or tender lymph nodes and immunological findings such as reduced levels of natural killer cells. In animal models, local trauma and systemic stress have been found to induce reactivation of latent herpesvirus infection. Cellular stressors like transient interruption of protein synthesis and hypoxia are also sufficient to induce viral reactivation. Evolution The three mammalian subfamilies – Alpha-, Beta- and Gammaherpesvirinae – arose approximately 180 to 220 mya. The major sublineages within these subfamilies were probably generated before the mammalian radiation of 80 to 60 mya. Speciations within sublineages took place in the last 80 million years, probably with a major component of cospeciation with host lineages. All the currently known bird and reptile herpesviruses are alphaherpesviruses.
Although the branching order of the herpes viruses has not yet been resolved, because herpes viruses and their hosts tend to coevolve this is suggestive that the alphaherpesviruses may have been the earliest branch. The time of origin of the genus Iltovirus has been estimated to be 200 mya while those of the mardivirus and simplex genera have been estimated to be between 150 and 100 mya. Immune system evasions Herpesviruses are known for their ability to establish lifelong infections. One way this is possible is through immune evasion. Herpesviruses have many different ways of evading the immune system. One such way is by encoding a protein mimicking human interleukin 10 (hIL-10) and another is by downregulation of the major histocompatibility complex II (MHC II) in infected cells. cmvIL-10 Research conducted on cytomegalovirus (CMV) indicates that the viral human IL-10 homolog, cmvIL-10, is important in inhibiting pro-inflammatory cytokine synthesis. The cmvIL-10 protein has 27% identity with hIL-10 and only one conserved residue out of the nine amino acids that make up the functional site for cytokine synthesis inhibition on hIL-10. There is, however, much similarity in the functions of hIL-10 and cmvIL-10. Both have been shown to down regulate IFN-γ, IL-1α, GM-CSF, IL-6 and TNF-α, which are all pro-inflammatory cytokines. They have also been shown to play a role in downregulating MHC I and MHC II and up regulating HLA-G (non-classical MHC I). These two events allow for immune evasion by suppressing the cell-mediated immune response and natural killer cell response, respectively. The similarities between hIL-10 and cmvIL-10 may be explained by the fact that hIL-10 and cmvIL-10 both use the same cell surface receptor, the hIL-10 receptor. One difference in the function of hIL-10 and cmvIL-10 is that hIL-10 causes human peripheral blood mononuclear cells (PBMC) to both increase and decrease in proliferation whereas cmvIL-10 only causes a decrease in proliferation of PBMCs. This indicates that cmvIL-10 may lack the stimulatory effects that hIL-10 has on these cells. It was found that cmvIL-10 functions through phosphorylation of the Stat3 protein. It was originally thought that this phosphorylation was a result of the JAK-STAT pathway. However, despite evidence that JAK does indeed phosphorylate Stat3, its inhibition has no significant influence on cytokine synthesis inhibition. Another protein, PI3K, was also found to phosphorylate Stat3. PI3K inhibition, unlike JAK inhibition, did have a significant impact on cytokine synthesis. The difference between PI3K and JAK in Stat3 phosphorylation is that PI3K phosphorylates Stat3 on the S727 residue whereas JAK phosphorylates Stat3 on the Y705 residue. This difference in phosphorylation positions seems to be the key factor in Stat3 activation leading to inhibition of pro-inflammatory cytokine synthesis. In fact, when a PI3K inhibitor is added to cells, the cytokine synthesis levels are significantly restored. The fact that cytokine levels are not completely restored indicates there is another pathway activated by cmvIL-10 that is inhibiting cytokine system synthesis. The proposed mechanism is that cmvIL-10 activates PI3K which in turn activates PKB (Akt). PKB may then activate mTOR, which may target Stat3 for phosphorylation on the S727 residue. MHC downregulation Another one of the many ways in which herpes viruses evade the immune system is by down regulation of MHC I and MHC II. This is observed in almost every human herpesvirus. 
Down regulation of MHC I and MHC II can come about by many different mechanisms, most causing the MHC to be absent from the cell surface. As discussed above, one way is by a viral chemokine homolog such as IL-10. Another mechanism to down regulate MHCs is to encode viral proteins that detain the newly formed MHC in the endoplasmic reticulum (ER). The MHC cannot reach the cell surface and therefore cannot activate the T cell response. The MHCs can also be targeted for destruction in the proteasome or lysosome. The ER protein TAP also plays a role in MHC down regulation. Viral proteins inhibit TAP preventing the MHC from picking up a viral antigen peptide. This prevents proper folding of the MHC and therefore the MHC does not reach the cell surface. Human herpesvirus types Below are the nine distinct viruses in this family known to cause disease in humans. Zoonotic herpesviruses In addition to the herpesviruses considered endemic in humans, some viruses associated primarily with animals may infect humans. These are zoonotic infections: Animal herpesviruses In animal virology, the best known herpesviruses belong to the subfamily Alphaherpesvirinae. Research on pseudorabies virus (PrV), the causative agent of Aujeszky's disease in pigs, has pioneered animal disease control with genetically modified vaccines. PrV is now extensively studied as a model for basic processes during lytic herpesvirus infection, and for unraveling molecular mechanisms of herpesvirus neurotropism, whereas bovine herpesvirus 1, the causative agent of bovine infectious rhinotracheitis and pustular vulvovaginitis, is analyzed to elucidate molecular mechanisms of latency. The avian infectious laryngotracheitis virus is phylogenetically distant from these two viruses and serves to underline similarity and diversity within the Alphaherpesvirinae. Research Research is currently ongoing into a variety of side-effect or co-conditions related to the herpesviruses. These include: Alzheimer's disease atherosclerosis cholangiocarcinoma chronic fatigue syndrome Crohn's disease dysautonomia fibromyalgia Irritable bowel syndrome labile hypertension lupus Ménière's disease multiple sclerosis pancreatic cancer pancreatitis pityriasis rosea Type II Diabetes
Biology and health sciences
Specific viruses
Health
4236766
https://en.wikipedia.org/wiki/Foot%E2%80%93pound%E2%80%93second%20system%20of%20units
Foot–pound–second system of units
The foot–pound–second system (FPS system) is a system of units built on three fundamental units: the foot for length, the (avoirdupois) pound for either mass or force (see below), and the second for time. Variants Collectively, the variants of the FPS system were the most common system in technical publications in English until the middle of the 20th century. Errors can be avoided and translation between the systems facilitated by labelling all physical quantities consistently with their units. Especially in the context of the FPS system this is sometimes known as the Stroud system after William Stroud, who popularized it. Pound as mass unit When the pound is used as a unit of mass, the core of the coherent system is similar and functionally equivalent to the corresponding subsets of the International System of Units (SI), using metre, kilogram and second (MKS), and the earlier centimetre–gram–second system of units (CGS). This system is often called the Absolute English System. In this sub-system, the unit of force is a derived unit known as the poundal. The international standard symbol for the pound as a unit of mass rather than force is lb. Everett (1861) proposed the metric dyne and erg as the units of force and energy in the FPS system. Latimer Clark's (1891) "Dictionary of Measures" contains celo (acceleration), vel or velo (velocity) and pulse (momentum) as proposed names for FPS absolute units. Pound as force unit The technical or gravitational FPS system, or British gravitational system, is a coherent variant of the FPS system that is most common among engineers in the United States. It takes the pound-force as a fundamental unit of force instead of the pound as a fundamental unit of mass. In this sub-system, the unit of mass is a derived unit known as the slug. In the context of the gravitational FPS system, the pound-force (lbf) is sometimes referred to as the pound (lb). Pound-force as force unit and pound-mass as mass unit Another variant of the FPS system uses both the pound-mass and the pound-force, but neither the slug nor the poundal. The resulting system is sometimes also known as the English engineering system. Despite its name, the system is based on United States customary units of measure; it is not used in England. Other units Molar units The unit of substance in the FPS system is the pound-mole (lb-mol) = 453.59237 mol. Until the SI decided to adopt the gram-mole, the mole was directly derived from the mass unit as (mass unit)/(atomic mass unit). The unit (lbf⋅s²/ft)-mol also appears in a former definition of the atmosphere. Electromagnetic units The electrostatic and electromagnetic systems are derived mainly from units of length and force. As such, they are ready extensions of any system containing length, mass, and time. Stephen Dresner gives the derived electrostatic and electromagnetic units in both the foot–pound–second and foot–slug–second systems. In practice, these are most associated with the centimetre–gram–second system. The 1929 "International Critical Tables" gives, among its symbols and systems, fpse = FPS electrostatic system and fpsm = FPS electromagnetic system. Under the conversions for charge, the following are given. The CRC Handbook of Chemistry and Physics 1979 (Edition 60) also lists fpse and fpsm as standard abbreviations.
Electromagnetic FPS (EMU, ab-) 1 fpsm unit = 117.581866 cgsm units (Biot-second) Electrostatic FPS (ESU, stat-) 1 fpse unit = 3583.8953 cgse units (Franklin) 1 fpse unit = 1.1954588×10⁻⁷ abs coulomb Units of light The candle and the foot-candle were the first defined units of light, defined in the Metropolitan Gas Act (1860). The foot-candle is the intensity of light at one foot from a standard candle. The units were internationally recognized in 1881, and adopted into the metric system. Conversions Together with the fact that the term "weight" is used for the gravitational force in some technical contexts (physics, engineering) and for mass in others (commerce, law), and that the distinction often does not matter in practice, the coexistence of variants of the FPS system causes confusion over the nature of the unit "pound". Its relation to international metric units is expressed in kilograms, not newtons, though, and in earlier times it was defined by means of a mass prototype to be compared with a two-pan balance, which is agnostic of local gravitational differences. In July 1959, the various national foot and avoirdupois pound standards were replaced by the international foot of precisely 0.3048 m and the international pound of precisely 0.45359237 kg, making conversion between the systems a matter of simple arithmetic. The conversion for the poundal is given by 1 pdl = 1 lb⋅ft/s² = 0.138254954376 N (precisely). To convert between the absolute and gravitational FPS systems one needs to fix the standard acceleration g which relates the pound to the pound-force. While g strictly depends on one's location on the Earth's surface, since 1901 in most contexts it is fixed conventionally at precisely g0 = 9.80665 m/s² ≈ 32.174 ft/s².
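The exact post-1959 definitions make cross-system conversion simple arithmetic; a short sketch deriving the poundal, the pound-force, and the slug from the international pound, foot, and standard gravity:

```python
# Exact definitions quoted in the text:
LB = 0.45359237  # kilograms per avoirdupois pound
FT = 0.3048      # metres per international foot
G0 = 9.80665     # m/s^2, conventional standard acceleration of gravity

# Absolute FPS: the force unit is the poundal, 1 pdl = 1 lb*ft/s^2.
poundal_N = LB * FT        # 0.138254954376 N exactly
# Gravitational FPS: the force unit is the pound-force, 1 lbf = 1 lb * g0.
lbf_N = LB * G0            # 4.4482216152605 N exactly
# Its derived mass unit is the slug, 1 slug = 1 lbf*s^2/ft.
slug_kg = lbf_N / FT       # ~14.5939 kg

print(f"1 pdl  = {poundal_N:.12f} N")
print(f"1 lbf  = {lbf_N:.10f} N")
print(f"1 slug = {slug_kg:.4f} kg = {slug_kg / LB:.3f} lb")
```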
Physical sciences
Measurement systems
Basics and measurement
4237207
https://en.wikipedia.org/wiki/Error%20correction%20code
Error correction code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability. The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight of how to build a capacity achieving code. After years of research, some advanced FEC systems like polar code come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite length frame. Method ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see eight versions of the output, see table below. 
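The eight possible received triplets and their majority-vote interpretation:

received triplet: 000, 001, 010, 100 -> decoded as 0
received triplet: 111, 110, 101, 011 -> decoded as 1

A minimal sketch of this (3,1) repetition scheme (illustrative function names):

```python
def encode(bits):
    """(3,1) repetition code: transmit each data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triplet corrects any single flipped bit."""
    out = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        out.append(1 if sum(triplet) >= 2 else 0)
    return out

coded = encode([1, 0, 1])          # -> [1,1,1, 0,0,0, 1,1,1]
coded[1] ^= 1                      # the channel flips one bit of the first triplet
assert decode(coded) == [1, 0, 1]  # the single error is corrected
```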
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is: Up to one bit of triplet in error, or up to two bits of triplet omitted (cases not shown in table). Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits). Averaging noise to reduce errors ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it. This all-or-nothing tendency – the cliff effect – becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit. Interleaving ECC coded data can reduce the all or nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data. Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed. Types The two main categories of ECC codes are block codes and convolutional codes. Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Practical block codes can generally be hard-decoded in polynomial time to their block length. Convolutional codes work on bit or symbol streams of arbitrary length. They are most often soft decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code that is terminated is also a 'block code' in that it encodes a block of input data, but the block size of a convolutional code is generally arbitrary, while block codes have a fixed size dictated by their algebraic characteristics. Types of termination for convolutional codes include "tail-biting" and "bit-flushing". There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, Multidimensional parity, and Hamming codes. Hamming ECC is commonly used to correct NAND flash memory errors. This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliable single-level cell (SLC) NAND. 
Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon. NOR flash typically does not use any error correction. Classical block codes are usually decoded using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes. In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, the Hamming distance is the appropriate way to measure the bit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes. The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes.

Code-rate and the tradeoff between reliability and data rate
The fundamental principle of ECC is to add redundant bits in order to help the decoder find out the true message that was encoded by the transmitter. The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code-rate is hence a real number. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve a good performance, while a large code-rate close to 1 implies a weak code. The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate. In one extreme, a strong code (with low code-rate) can induce a significant increase in the effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate, at the cost of reducing the effective data rate. On the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection. One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero. His proof relies on Gaussian random coding, which is not suited to real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can come very close to the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement. The most popular ECCs strike a trade-off between performance and computational complexity.
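To make the code-rate definition concrete, here is a small sketch; the listed codes are standard examples, and the snippet itself is just illustrative arithmetic:

    # Code rate r = k/n: information bits per transmitted bit.
    codes = {
        "(3,1) repetition": (1, 3),
        "Hamming (7,4)": (4, 7),
        "Reed-Solomon (255,223)": (223, 255),
    }
    for name, (k, n) in codes.items():
        print(f"{name}: r = {k}/{n} = {k / n:.2f}")
    # Lower rate = more redundancy = stronger protection but less throughput.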
Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to minimize the energy cost of the communication.

Concatenated ECC codes for improved performance
Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep-space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.

Low-density parity-check (LDPC)
Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort of implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n), 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).

Turbo codes
Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, they now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).
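The Shannon limit that turbo and LDPC codes approach can be computed exactly for simple channel models. The following sketch evaluates the textbook capacity formula for a binary symmetric channel, C = 1 − H2(p); the crossover probabilities used are illustrative:

    # Capacity of a binary symmetric channel: C = 1 - H2(p).
    from math import log2

    def h2(p):
        """Binary entropy function."""
        return -p * log2(p) - (1 - p) * log2(1 - p)

    for p in (0.01, 0.05, 0.11):
        print(f"p = {p:.2f} -> C = {1 - h2(p):.3f} bits per channel use")
    # At p = 0.11, C is about 0.5: no code of rate above 1/2 can achieve an
    # arbitrarily low error rate on this channel, however cleverly designed.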
Local decoding and testing of codes
Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs), nor locally correctable codes (LCCs): q-query LCCs are bounded exponentially in length, while LDCs can have subexponential lengths.

Interleaving
Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors. Therefore, interleaving is widely used for burst error-correction. The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors. Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word. For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance. The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles. Interleaver designs include:
rectangular (or uniform) interleavers (similar to the method using skip factors described above)
convolutional interleavers
random interleavers (where the interleaver is a known random permutation)
S-random interleavers (where the interleaver is a known random permutation with the constraint that no input symbols within distance S appear within a distance of S in the output)
contention-free quadratic permutation polynomials (QPP); an example of their use is in the 3GPP Long Term Evolution mobile telecommunication standard.
In multi-carrier communication systems, interleaving across carriers may be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference.

Example
Transmission without interleaving:
Error-free message:                       aaaabbbbccccddddeeeeffffgggg
Transmission with a burst error:          aaaabbbbccc____deeeeffffgggg
Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword "cccc" is altered in one bit and can be corrected, but the codeword "dddd" is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.
With interleaving:
Error-free code words:                    aaaabbbbccccddddeeeeffffgggg
Interleaved:                              abcdefgabcdefgabcdefgabcdefg
Transmission with a burst error:          abcdefgabcd____bcdefgabcdefg
Received code words after deinterleaving: aa_abbbbccccdddde_eef_ffg_gg
In each of the codewords "aaaa", "eeee", "ffff", and "gggg", only one bit is altered, so a one-bit error-correcting code will decode everything correctly.

Transmission without interleaving:
Original transmitted sentence:            ThisIsAnExampleOfInterleaving
Received sentence with a burst error:     ThisIs______pleOfInterleaving
The term "AnExample" ends up mostly unintelligible and difficult to correct.

With interleaving:
Transmitted sentence:                     ThisIsAnExampleOfInterleaving...
Error-free transmission:                  TIEpfeaghsxlIrv.iAaenli.snmOten.
Received sentence with a burst error:     TIEpfe______Irv.iAaenli.snmOten.
Received sentence after deinterleaving:   T_isI_AnE_amp_eOfInterle_vin_...
No word is completely lost and the missing letters can be recovered with minimal guesswork. (A code sketch of such a rectangular interleaver appears after the list of codes below.)

Disadvantages of interleaving
Use of interleaving techniques increases total delay, because the entire interleaved block must be received before the packets can be decoded. Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver. An example of such an algorithm is based on neural network structures.

Software for error-correcting codes
Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice for designing, validating and improving ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications: in 5G, for instance, the software ECCs could be located in the cloud and the antennas connected to these computing resources, improving the flexibility of the communication network and eventually increasing the energy efficiency of the system. In this context, various open-source software packages are available (a non-exhaustive list follows).
AFF3CT (A Fast Forward Error Correction Toolbox): a full communication chain in C++ (with many supported codes, such as Turbo, LDPC and polar codes), very fast and specialized in channel coding (can be used as a program for simulations or as a library for SDR).
IT++: a C++ library of classes and functions for linear algebra, numerical optimization, signal processing, communications, and statistics.
OpenAir: implementation (in C) of the 3GPP specifications concerning the Evolved Packet Core Networks.

List of error-correcting codes
AN codes
Algebraic geometry code
BCH code, which can be designed to correct any arbitrary number of errors per code block
Barker code, used for radar, telemetry, ultrasound, Wi-Fi, DSSS mobile phone networks, GPS, etc.
Berger code
Constant-weight code
Convolutional code
Expander codes
Group codes
Golay codes, of which the binary Golay code is of practical interest
Goppa code, used in the McEliece cryptosystem
Hadamard code
Hagelbarger code
Hamming code
Latin square based code for non-white noise (prevalent for example in broadband over power lines)
Lexicographic code
Linear network coding, a type of erasure correcting code across networks instead of point-to-point links
Long code
Low-density parity-check code, also known as Gallager code, the archetype for sparse graph codes
LT code, which is a near-optimal rateless erasure correcting code (fountain code)
m of n codes
Nordstrom–Robinson code, used in geometry and group theory
Online code, a near-optimal rateless erasure correcting code
Polar code (coding theory)
Raptor code, a near-optimal rateless erasure correcting code
Reed–Solomon error correction
Reed–Muller code
Repeat-accumulate code
Repetition codes, such as triple modular redundancy
Spinal code, a rateless, nonlinear code based on pseudo-random hash functions
Tornado code, a near-optimal erasure correcting code, and the precursor to fountain codes
Turbo code
Walsh–Hadamard code
Cyclic redundancy checks (CRCs), which can correct 1-bit errors for messages at most 2^(n−1) − 1 bits long for optimal generator polynomials of degree n
Locally recoverable codes
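Returning to the Example in the Interleaving section above, the following sketch implements a rectangular (block) interleaver of the kind described there. The dimensions and symbols are chosen purely for illustration:

    # Rectangular interleaver: write symbols row-wise, send them column-wise,
    # so a burst on the channel is spread across many code words.

    def interleave(symbols, rows, cols):
        table = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
        return [table[r][c] for c in range(cols) for r in range(rows)]

    def deinterleave(symbols, rows, cols):
        # Column-wise reading inverts itself with the dimensions swapped.
        return interleave(symbols, cols, rows)

    codewords = list("aaaabbbbccccdddd")      # four 4-symbol code words
    sent = interleave(codewords, 4, 4)        # "abcdabcdabcdabcd"
    for i in range(5, 8):                     # a 3-symbol burst error
        sent[i] = "_"
    print("".join(deinterleave(sent, 4, 4)))  # "aaaab_bbc_ccd_dd"
    # Each code word now carries at most one erasure, which a one-symbol
    # error-correcting code can repair.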
Saber-toothed predator
A saber-tooth (alternatively spelled sabre-tooth) is any member of various extinct groups of predatory therapsids, predominantly carnivoran mammals, that are characterized by long, curved, saber-shaped canine teeth which protruded from the mouth even when it was closed. Among the earliest animals that can be described as "sabertooths" are the gorgonopsids, a group of non-mammalian therapsids that lived during the Middle-Late Permian, around 270-252 million years ago. Saber-toothed mammals have been found almost worldwide from the Eocene epoch to the end of the Pleistocene epoch (42 million years ago – 11,000 years ago). One of the best-known genera is the machairodont or "saber-toothed cat" Smilodon, the species of which, especially S. fatalis, are popularly referred to as "saber-toothed tigers", although they are not closely related to tigers (Panthera). Despite some similarities, not all saber-tooths are closely related to saber-toothed cats or felids in general. Instead, many members are classified into different families of Feliformia, such as Barbourofelidae and Nimravidae; the oxyaenid "creodont" genera Machaeroides and Apataelurus; and two extinct lineages of metatherian mammals, the thylacosmilids of Sparassodonta and the deltatheroideans, which are more closely related to marsupials. In this regard, these saber-toothed mammals can be viewed as examples of convergent evolution. This convergence is remarkable due not only to the development of elongated canines, but also to a suite of other characteristics, such as a wide gape and bulky forelimbs, which is so consistent that it has been termed the "saber-tooth suite". Of the feliform lineages, the family Nimravidae is the oldest, entering the landscape around 42 mya and becoming extinct by 7.2 mya. Barbourofelidae entered around 16.9 mya and were extinct by 9 mya. These two would have shared some habitats.

Morphology
The different groups of saber-toothed predators evolved their saber-toothed characteristics entirely independently. They are best known for having maxillary canines which extended down from the mouth even when it was closed. Saber-toothed cats were generally more robust than today's cats and were quite bear-like in build. They are believed to have been excellent hunters, taking animals such as sloths, mammoths, and other large prey. Evidence from the numbers found at the La Brea Tar Pits suggests that Smilodon, like modern lions, was a social carnivore. The first saber-tooths to appear were non-mammalian synapsids, such as the gorgonopsids; they were one of the first groups of animals within Synapsida to experience the specialization of saber teeth, and many had long canines. Some had two pairs of upper canines, with two jutting down from each side, but most had one pair of extreme upper canines. Because of their primitive anatomy, they are easy to distinguish from machairodonts: defining characteristics include the lack of a coronoid process, many sharp "premolars" more akin to pegs than scissors, and very long skulls. Despite their large canines, however, most gorgonopsians probably lacked the other specializations found in true saber-toothed predator ecomorphs. Two gorgonopsians, Smilesaurus and Inostrancevia, had exceptionally large canines and may have been closer functional analogues to later sabertooths. The second appearance is in Deltatheroida, a lineage of Cretaceous metatherians.
At least one genus, Lotheridium, possessed long canines, and given both the predatory habits of the clade and the generally incomplete material, this may have been a more widespread adaptation. The third appearance of long canines is Thylacosmilus, which is the most distinctive of the saber-toothed mammals and is also easy to tell apart from the others. It differs from machairodonts in possessing a very prominent flange and a canine that is triangular in cross-section. The root of the canine is more prominent than in machairodonts, and a true sagittal crest is absent. The fourth instance of saber teeth is from the clade Oxyaenidae. The small and slender Machaeroides bore canines that were thinner than those of the average machairodont, and its muzzle was longer and narrower. The fifth saber-tooth appearance is the ancient feliform (carnivoran) family Nimravidae. Nimravids and machairodonts both have short skulls with tall sagittal crests, and their general skull shapes are very similar. Some nimravids have distinctive flanges and some have none at all, which confuses the matter further. Machairodonts were almost always bigger, though, and their canines were for the most part longer and stouter, but exceptions do appear. The sixth appearance is the barbourofelids. These feliform carnivorans are very closely related to the true cats. The best-known barbourofelid is the eponymous Barbourofelis, which differs from most machairodonts in having a much heavier and stouter mandible, smaller orbits, massive and almost knobby flanges, and canines that are placed farther back. The average machairodont had well-developed incisors, but those of barbourofelids were more extreme. The seventh and last saber-toothed group to evolve were the machairodonts themselves.

Diet
The evolution of enlarged canines in Tertiary carnivores was a result of large mammals being the source of prey for saber-toothed predators. The development of the saber-toothed condition appears to represent a shift in function and killing behavior, rather than one in predator-prey relations. Many hypotheses exist concerning saber-tooth killing methods, some of which include attacking soft tissue such as the belly and throat, where biting deep was essential to generate killing blows. The elongated teeth also aided strikes at the major blood vessels of these large mammals. However, the precise functional advantage of the saber-tooth's bite, particularly in relation to prey size, remains unresolved. A point-to-point bite model introduced by Andersson et al. shows that for saber-toothed cats the depth of the killing bite decreases dramatically with increasing prey size. The extended gape of saber-toothed cats results in a considerable increase in bite depth when biting into prey with a radius of less than 10 cm. For the saber-tooth, this size-reversed functional advantage suggests predation on species within a similar size range to those attacked by present-day carnivorans, rather than on "megaherbivores" as previously believed. A dissenting view of the cat's hunting technique and ability is presented by C. K. Brain in The Hunters or the Hunted?, in which he attributes the cat's prey-killing abilities to its large neck muscles rather than its jaws. Large cats use both the upper and lower jaws to bite down and bring down the prey. The strength of the jaw's bite is attributed to the powerful temporalis muscle, which attaches from the skull to the coronoid process of the jaw. The larger the coronoid process, the larger the muscle that attaches there, and so the stronger the bite. As C. K.
Brain points out, the saber-toothed cats had a greatly reduced coronoid process and therefore a disadvantageously weak bite. The cat did, however, have an enlarged mastoid process, a muscle attachment at the base of the skull, which attaches to neck muscles. According to C. K. Brain, the saber-tooth would use a "downward thrust of the head, powered by the neck muscles" to drive the large upper canines into the prey. This technique was "more efficient than those of true cats".

Biology
The similarity among all these unrelated families involves the convergent evolution of the saber-like canines as a hunting adaptation. Meehan et al. note that it took around 8 million years for a new type of saber-toothed cat to fill the niche of an extinct predecessor in a similar ecological role; this has happened at least four times, with different families of animals developing this adaptation. Although the adaptation of the saber-like canines made these creatures successful, it seems that the shift to obligate carnivory, along with co-evolution with large prey animals, led the saber-toothed cats of each time period to extinction. According to Van Valkenburgh, the adaptations that made saber-toothed cats successful also made the creatures vulnerable to extinction. In her example, trends toward an increase in size, along with greater specialization, acted as a "macro-evolutionary ratchet": when large prey became scarce or extinct, these creatures would be unable to adapt to smaller prey or consume other sources of food, and would be unable to reduce their size so as to need less food. More recently, it has been suggested that Thylacosmilus differed radically from its placental counterparts in possessing differently shaped canines and lacking incisors. This suggests that it was not ecologically analogous to other saber-tooths, and was possibly an entrail specialist. Another study has found that other saber-toothed species likewise had diverse lifestyles, and that superficial anatomical similarities obscure these differences.

Phylogeny of feliform saber-tooths
The following cladogram shows the relationships between the feliform saber-tooths, including the Nimravidae, Barbourofelidae and Machairodontinae (data from Piras P, Maiorino L, Teresi L, Meloro C, Lucci F, Kotsakis T, Raia P (2013), "Bite of the cats: relationships between functional integration and mechanical performance as revealed by mandible geometry", Dryad Digital Repository, https://dx.doi.org/10.5061/dryad.kp8t3). Saber-toothed groups are marked with background colors.

Saber-tooth genera
Saber-tooth taxonomy
All saber-toothed mammals lived between 33.7 million and 9,000 years ago, but the evolutionary lines that led to the various saber-tooth genera started to diverge much earlier. The saber-tooths are thus a polyphyletic grouping. The lineage that led to Thylacosmilus was the first to split off, in the late Cretaceous. It is a metatherian, and thus more closely related to kangaroos and opossums than to the felines. The hyaenodonts diverged next, possibly before Laurasiatheria, then the oxyaenids, and then the nimravids, before the diversification of the truly feline saber-tooths.
Clade Therapsida
  Clade: †Gorgonopsia
    †Inostrancevia
    †Smilesaurus
Class: Mammalia
  Clade: Metatheria (diverged ?, in the Cretaceous)
    Order: †Deltatheroida (an extinct group of metatherian carnivores)
      Family: †Deltatheridiidae
        †Lotheridium
    Order: †Sparassodonta (an extinct group of metatherian carnivores)
      Family: †Thylacosmilidae
        †Patagosmilus
        †Anachlysictis
        †Thylacosmilus
  Subclass: Placentalia
    Order: †Hyaenodonta
      †Boualitomus
      Family: †Sinopidae
        Genus: †Sinopa
      Superfamily: †Hyaenodontoidea
        Family: †Hyaenodontidae
          Subfamily: †Hyaenodontinae
            Tribe: †Hyaenodontini
              Genus: †Hyaenodon
        Family: †Proviverridae
          †Parvagula
      Superfamily: †Hyainailouroidea
        Family: †Hyainailouridae (paraphyletic family)
          Subfamily: †Hyainailourinae (paraphyletic subfamily)
            Tribe: †Leakitheriini
              Genus: †Leakitherium
            Tribe: †Metapterodontini
              Genus: †Metapterodon
    Order: †Oxyaenodonta
      Family: †Oxyaenidae
        Subfamily: †Machaeroidinae
          Genus: †Apataelurus
          Genus: †Machaeroides
    Order Carnivora
      Family †Nimravidae (diverged from the feliforms 48–55 Ma BP, in the late Eocene)
        Subfamily †Nimravinae (Dinictis)
        Subfamily †Hoplophoninae
      Suborder Feliformia ('cat-like' carnivores)
        Family †Barbourofelidae (sister taxa to Felidae)
        Family Felidae (true cats)
          Subfamily †Machairodontinae (diverged ?, in the ?)
            Tribe †Homotherini
              †Homotherium
              †Machairodus
              †Xenosmilus
            Tribe †Metailurini
              †Dinofelis
              †Metailurus
            Tribe †Smilodontini
              †Megantereon
              †Paramachairodus
              †Smilodon
Midge
A midge is any small fly, including species in several families of non-mosquito nematoceran Diptera. Midges are found (seasonally or otherwise) on practically every land area outside permanently arid deserts and the frigid zones. Some midges, such as many Phlebotominae (sand flies) and Simuliidae (black flies), are vectors of various diseases. Many others play useful roles as prey for insectivores, such as various frogs and swallows. Others are important as detritivores, forming part of various nutrient cycles. The habits of midges vary greatly from species to species, though within any particular family midges commonly have similar ecological roles. Families that include species of midges include:
Blephariceridae, net-winged midges
Cecidomyiidae, gall midges
Ceratopogonidae, biting midges (also known as no-see-ums or punkies in North America, and sandflies in Australia)
Chaoboridae, phantom midges
Chironomidae, non-biting midges (also known as muckleheads, muffleheads or lake flies in the Great Lakes region of North America)
Deuterophlebiidae, mountain midges
Dixidae, meniscus midges
Scatopsidae, dung midges
Thaumaleidae, solitary midges

Examples
The Ceratopogonidae (biting midges) include serious blood-sucking pests, feeding on humans as well as other mammals. Some of them spread the livestock diseases known as bluetongue and African horse sickness; other species, though, are at least partly nectar feeders, and some even suck insect bodily fluids. Many midges have symbiotic relationships with other organisms. These can be commensal, parasitic or mutualistic relationships; many of the commensal relationships are found within the family Chironomidae. Other ceratopogonid midges are major pollinators of Theobroma cacao (the cocoa tree). Having natural pollinators benefits both agriculture and the wider ecosystem, because it increases crop yield and also the density of the midges' predators. The term "midge" is vague, referring to a large and diverse group of organisms. Although many are known as "bloodsuckers", midges play many different roles in their respective ecosystems. There is, for example, no objective basis for excluding the Psychodidae from the list, and some of them (or midge-like taxa commonly included in the family, such as Phlebotomus) are blood-sucking pests and disease vectors. Most midges, apart from the gall midges (Cecidomyiidae), are aquatic during the larval stage. Some Cecidomyiidae (e.g., the Hessian fly) are considered significant pests of some plant species. The larvae of some Chironomidae contain hemoglobin and are sometimes referred to as bloodworms. Non-biting midge flies are commonly considered a minor nuisance around bodies of water.
Refrigerator
A refrigerator, commonly shortened to fridge, is a commercial and home appliance consisting of a thermally insulated compartment and a heat pump (mechanical, electronic or chemical) that transfers heat from its inside to its external environment so that its inside is cooled to a temperature below the room temperature. Refrigeration is an essential food storage technique around the world. The low temperature reduces the reproduction rate of bacteria, so the refrigerator lowers the rate of spoilage. A refrigerator maintains a temperature a few degrees above the freezing point of water. The optimal temperature range for perishable food storage is 3 to 5 °C (37 to 41 °F). A freezer is a specialized refrigerator, or portion of a refrigerator, that maintains its contents' temperature below the freezing point of water. The refrigerator replaced the icebox, which had been a common household appliance for almost a century and a half. The United States Food and Drug Administration recommends that the refrigerator be kept at or below 4 °C (40 °F) and that the freezer be regulated at −18 °C (0 °F). The first cooling systems for food involved ice. Artificial refrigeration began in the mid-1750s and developed in the early 1800s. In 1834, the first working vapor-compression refrigeration system, using the same technology seen in air conditioners, was built. The first commercial ice-making machine was invented in 1854. In 1913, refrigerators for home use were invented. In 1923 Frigidaire introduced the first self-contained unit. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s. Home freezers as separate compartments (larger than necessary just for ice cubes) were introduced in 1940. Frozen foods, previously a luxury item, became commonplace. Freezer units are used in households as well as in industry and commerce. Commercial refrigerator and freezer units were in use for almost 40 years prior to the common home models. The freezer-over-refrigerator style had been the basic style since the 1940s, until modern side-by-side refrigerators broke the trend. A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. Newer refrigerators may include automatic defrosting, chilled water, and ice from a dispenser in the door. Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest are Peltier-type refrigerators designed to chill beverages. A large domestic refrigerator stands as tall as a person and may be about 1 m wide, with a capacity of around 600 litres. Refrigerators and freezers may be free-standing, or built into a kitchen. The refrigerator allows the modern household to keep food fresh for longer than before. Freezers allow people to buy perishable food in bulk and eat it at leisure, and bulk purchases save money.

History
Technology development
Ancient origins
Ancient Iranians were among the first to invent a form of cooler utilizing the principles of evaporative cooling and radiative cooling, called the yakhchāl. These complexes used subterranean storage spaces and a large, thickly insulated, above-ground domed structure, and were outfitted with badgirs (wind-catchers) and series of qanats (aqueducts).

Pre-electric refrigeration
In modern times, before the invention of the modern electric refrigerator, icehouses and iceboxes were used to provide cool storage for most of the year. Placed near freshwater lakes or packed with snow and ice during the winter, they were once very common. Natural means are still used to cool foods today.
On mountainsides, runoff from melting snow is a convenient way to cool drinks, and during the winter one can keep milk fresh much longer just by keeping it outdoors. The word "refrigeratory" was used at least as early as the 17th century.

Artificial refrigeration
The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but it had no practical application at that time. In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. In 1820, the British scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate in Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system. It was a closed-cycle device that could operate continuously. A similar attempt was made in 1842 by American physician John Gorrie, who built a working prototype, but it was a commercial failure. American engineer Alexander Twining took out a British patent in 1850 for a vapor compression system that used ether. The first practical vapor compression refrigeration system was built by James Harrison, a Scottish Australian. His 1856 patent was for a vapor compression system using ether, alcohol or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapor-compression refrigeration to breweries and meat-packing houses, and by 1861 a dozen of his systems were in operation. The first gas absorption refrigeration system (compressor-less and powered by a heat source) was developed by Ferdinand Carré of France in 1859 and patented in 1860. It used gaseous ammonia dissolved in water ("aqua ammonia"). Carl von Linde, an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876, creating the first reliable and efficient compressed-ammonia refrigerator. His new process made possible the use of gases such as ammonia (NH3), sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants, which were widely used for that purpose until the late 1920s despite safety concerns. In 1895 he developed a large-scale process for liquefying air.

Electric refrigerators
In 1894, Hungarian inventor and industrialist István Röck started to manufacture a large industrial ammonia refrigerator powered by electric compressors (together with the Esslingen Machine Works). Its electric compressors were manufactured by the Ganz Works. At the 1896 Millennium Exhibition, Röck and the Esslingen Machine Works presented an artificial ice-producing plant with a capacity of 6 tonnes. In 1906, the first large Hungarian cold store (with a capacity of 3,000 tonnes, the largest in Europe) opened in Tóth Kálmán Street, Budapest; its machinery was manufactured by the Ganz Works. Until nationalisation after the Second World War, large-scale industrial refrigerator production in Hungary was in the hands of Röck and the Ganz Works. Commercial refrigerator and freezer units, which go by many other names, were in use for almost 40 years prior to the common home models.
They used gas systems such as ammonia (R-717) or sulfur dioxide (R-764), which occasionally leaked, making them unsafe for home use. Practical household refrigerators were introduced in 1915 and gained wider acceptance in the United States in the 1930s as prices fell and non-toxic, non-flammable synthetic refrigerants such as Freon-12 (R-12) were introduced. However, R-12 proved to be damaging to the ozone layer, causing governments to issue a ban on its use in new refrigerators and air-conditioning systems in 1994. The less harmful replacement for R-12, R-134a (tetrafluoroethane), has been in common use since 1990, but R-12 is still found in many old systems. Refrigeration, operated continually, typically consumes up to 50% of the energy used by a supermarket. Doors, made of glass to allow inspection of contents, improve efficiency significantly over open display cases, which use 1.3 times the energy.

Residential refrigerators
In 1913, the first electric refrigerators for home and domestic use were invented and produced by Fred W. Wolf of Fort Wayne, Indiana, with models consisting of a unit that was mounted on top of an ice box. His first device, produced over the next few years in several hundred units, was called the DOMELRE. In 1914, engineer Nathaniel B. Wales of Detroit, Michigan, introduced an idea for a practical electric refrigeration unit, which later became the basis for the Kelvinator. A self-contained refrigerator, with a compressor on the bottom of the cabinet, was invented by Alfred Mellowes in 1916. Mellowes produced this refrigerator commercially but was bought out by William C. Durant in 1918, who started the Frigidaire company to mass-produce refrigerators. In 1918, the Kelvinator company introduced the first refrigerator with any type of automatic control. The absorption refrigerator was invented by Baltzar von Platen and Carl Munters from Sweden in 1922, while they were still students at the Royal Institute of Technology in Stockholm. It became a worldwide success and was commercialized by Electrolux. Other pioneers included Charles Tellier, David Boyle, and Raoul Pictet. Carl von Linde was the first to patent and make a practical and compact refrigerator. These home units usually required the installation of the mechanical parts, motor and compressor, in the basement or an adjacent room, while the cold box was located in the kitchen. There was a 1922 model that consisted of a wooden cold box, a water-cooled compressor, an ice cube tray and a compartment, and cost $714. (A 1922 Model T Ford cost about $476.) By 1923, Kelvinator held 80 percent of the market for electric refrigerators. Also in 1923, Frigidaire introduced the first self-contained unit. About this same time porcelain-covered metal cabinets began to appear. Ice cube trays were introduced more and more during the 1920s; up to this time freezing was not an auxiliary function of the modern refrigerator. The first refrigerator to see widespread use was the General Electric "Monitor-Top" refrigerator, introduced in 1927 and so called by the public because of its resemblance to the gun turret on the ironclad warship USS Monitor of the 1860s. The compressor assembly, which emitted a great deal of heat, was placed above the cabinet and enclosed by a decorative ring. Over a million units were produced.
As the refrigerating medium, these refrigerators used either sulfur dioxide, which is corrosive to the eyes and may cause loss of vision, painful skin burns and lesions, or methyl formate, which is highly flammable, harmful to the eyes, and toxic if inhaled or ingested. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s and provided a safer, low-toxicity alternative to previously used refrigerants. Separate freezers became common during the 1940s; the popular term for the unit at the time was deep freeze. These devices, or appliances, did not go into mass production for use in the home until after World War II. The 1950s and 1960s saw technical advances like automatic defrosting and automatic ice making. More efficient refrigerators were developed in the 1970s and 1980s, even though environmental issues led to the banning of very effective (Freon) refrigerants. Early refrigerator models (from 1916) had a cold compartment for ice cube trays. From the late 1920s fresh vegetables were successfully processed through freezing by the Postum Company (the forerunner of General Foods), which had acquired the technology when it bought the rights to Clarence Birdseye's successful fresh-freezing methods.

Styles of refrigerators
The majority of refrigerators were white in the early 1950s, but between the mid-1950s and the present, manufacturers and designers have added color. Pastel colors, such as pink and turquoise, gained popularity in the late 1950s and early 1960s. Certain versions also had brushed chrome plating, which is akin to a stainless steel appearance. During the latter part of the 1960s and the early 1970s, earth-tone colors were popular, including Harvest Gold, Avocado Green and almond. In the 1980s, black became fashionable. In the late 1990s stainless steel came into vogue. Since 1961 the Color Marketing Group has attempted to coordinate the colors of appliances and other consumer goods.

Freezer
Freezer units are used in households and in industry and commerce. Food stored at or below −18 °C (0 °F) is safe indefinitely. Most household freezers maintain temperatures from −23 to −18 °C (−9 to 0 °F), although some freezer-only units can achieve −34 °C (−29 °F) and lower. Refrigerator freezers generally do not achieve lower than −23 °C (−9 °F), since the same coolant loop serves both compartments: lowering the freezer compartment temperature excessively causes difficulties in maintaining an above-freezing temperature in the refrigerator compartment. Domestic freezers can be included as a separate compartment in a refrigerator, or can be a separate appliance. Domestic freezers may be either upright, resembling a refrigerator, or chest freezers, wider than tall with the lid or door on top, sacrificing convenience for efficiency and partial immunity to power outages. Many modern upright freezers come with an ice dispenser built into their door. Some upscale models include thermostat displays and controls. Home freezers as separate compartments (larger than necessary just for ice cubes), or as separate units, were introduced in the United States in 1940. Frozen foods, previously a luxury item, became commonplace. In 1955 the domestic deep freezer, which was cold enough to allow owners to freeze fresh food themselves rather than buying food already frozen with Clarence Birdseye's process, went on sale.

Walk-in freezer
Walk-in freezers are, as the name implies, freezers large enough for a person to walk into.
Safety regulations require an emergency release inside, and employers should check that no one is trapped inside when the unit is locked, as hypothermia is possible if a person remains in the freezer for a long period.

Refrigerator technologies
Compressor refrigerators
A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. In this cycle, a circulating refrigerant such as R134a enters a compressor as low-pressure vapor at or slightly below the temperature of the refrigerator interior. The vapor is compressed and exits the compressor as high-pressure superheated vapor. The superheated vapor travels under pressure through coils or tubes that make up the condenser; the coils or tubes are passively cooled by exposure to air in the room. The condenser cools the vapor, which liquefies. As the refrigerant leaves the condenser, it is still under pressure but is now only slightly above room temperature. This liquid refrigerant is forced through a metering or throttling device, also known as an expansion valve (essentially a pin-hole-sized constriction in the tubing), to an area of much lower pressure. The sudden decrease in pressure results in explosive-like flash evaporation of a portion (typically about half) of the liquid. The latent heat absorbed by this flash evaporation is drawn mostly from adjacent still-liquid refrigerant, a phenomenon known as auto-refrigeration. This cold and partially vaporized refrigerant continues through the coils or tubes of the evaporator unit. A fan blows air from the compartment ("box air") across these coils or tubes, and the refrigerant completely vaporizes, drawing further latent heat from the box air. This cooled air is returned to the refrigerator or freezer compartment, and so keeps the box air cold. Note that the cool air in the refrigerator or freezer is still warmer than the refrigerant in the evaporator. Refrigerant leaves the evaporator, now fully vaporized and slightly heated, and returns to the compressor inlet to continue the cycle. Modern domestic refrigerators are extremely reliable because the motor and compressor are integrated within a welded container, the "sealed unit", with greatly reduced likelihood of leakage or contamination. By comparison, externally coupled refrigeration compressors, such as those in automobile air conditioning, inevitably leak fluid and lubricant past the shaft seals. This leads to a requirement for periodic recharging and, if ignored, possible compressor failure.

Dual compartment designs
Refrigerators with two compartments need special design to control the cooling of the refrigerator and freezer compartments. Typically, the compressor and condenser coils are mounted at the top of the cabinet, with a single fan to cool them both. This arrangement has a few downsides: the compartments cannot be controlled independently, and the more humid refrigerator air is mixed with the dry freezer air. Multiple manufacturers offer dual-compressor models. These models have separate freezer and refrigerator compartments that operate independently of each other, sometimes mounted within a single cabinet. Each has its own separate compressor, condenser and evaporator coils, insulation, thermostat, and door. A hybrid of the two designs uses a separate fan for each compartment (the dual-fan approach), allowing separate control and airflow on a single-compressor system.
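The cycle just described is bounded by the ideal (Carnot) coefficient of performance, COP = Tc / (Th − Tc), with temperatures in kelvin. As a rough back-of-the-envelope sketch (an idealized bound, not manufacturer data):

    # Ideal Carnot COP: heat moved per unit of compressor work.
    def carnot_cop(t_cold_c, t_hot_c):
        t_cold = t_cold_c + 273.15
        t_hot = t_hot_c + 273.15
        return t_cold / (t_hot - t_cold)

    print(f"fridge at 4 C, kitchen at 25 C:   COP <= {carnot_cop(4, 25):.1f}")   # ~13
    print(f"freezer at -18 C, kitchen at 25 C: COP <= {carnot_cop(-18, 25):.1f}")  # ~6
    # Real cycles reach only a fraction of these bounds, but the trend holds:
    # the larger the temperature lift, the more work each joule of cooling costs.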
Absorption refrigerators
An absorption refrigerator works differently from a compressor refrigerator, using a source of heat, such as combustion of liquefied petroleum gas, solar thermal energy or an electric heating element. These heat sources are much quieter than the compressor motor in a typical refrigerator. A fan or pump might be the only mechanical moving parts; reliance on convection is considered impractical. Other uses of an absorption refrigerator (or "chiller") include large systems used in office buildings or complexes such as hospitals and universities. These large systems are used to chill a brine solution that is circulated through the building.

Peltier effect refrigerators
The Peltier effect uses electricity to pump heat directly; refrigerators employing this system are sometimes used for camping, or in situations where noise is not acceptable. They can be totally silent (if a fan for air circulation is not fitted) but are less energy-efficient than other methods.

Ultra-low temperature refrigerators
"Ultra-cold" or "ultra-low temperature (ULT)" freezers (typically around −80 °C), as used for storing biological samples, also generally employ two stages of cooling, but in cascade. The lower-temperature stage uses methane, or a similar gas, as a refrigerant, with its condenser kept at around −40 °C by a second stage which uses a more conventional refrigerant. For much lower temperatures, laboratories usually purchase liquid nitrogen (which boils at −196 °C), kept in a Dewar flask, into which the samples are suspended. Cryogenic chest freezers can achieve still lower temperatures, and may include a liquid nitrogen backup.

Other refrigerators
Alternatives to the vapor-compression cycle not in current mass production include:
Acoustic cooling
Air cycle
Magnetic cooling
Malone engine
Pulse tube
Stirling cycle
Thermoelectric cooling
Thermionic cooling
Vortex tube
Water cycle systems

Layout
Many modern refrigerator/freezers have the freezer on top and the refrigerator on the bottom. Most refrigerator-freezers, except for manual defrost models or cheaper units, use what appears to be two thermostats. Only the refrigerator compartment is properly temperature controlled. When the refrigerator gets too warm, the thermostat starts the cooling process and a fan circulates the air around the freezer. During this time, the refrigerator also gets colder. The freezer control knob only controls the amount of air that flows into the refrigerator via a damper system. Changing the refrigerator temperature will inadvertently change the freezer temperature in the opposite direction; changing the freezer temperature will have no effect on the refrigerator temperature. The freezer control may also be adjusted to compensate for any refrigerator adjustment. This means the refrigerator may become too warm; however, because only just enough air is diverted to the refrigerator compartment, the freezer usually re-acquires the set temperature quickly, unless the door is opened. When a door is opened, either in the refrigerator or the freezer, the fan in some units stops immediately to prevent excessive frost build-up on the freezer's evaporator coil, because this coil is cooling two areas. When the freezer reaches temperature, the unit cycles off, no matter what the refrigerator temperature is. Modern computerized refrigerators do not use the damper system. The computer manages fan speed for both compartments, although air is still blown from the freezer.
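The single-compressor damper arrangement described above amounts to a simple control loop. The following sketch is hypothetical logic written only to illustrate the idea, not any manufacturer's firmware; the names and setpoints are invented:

    # Simplified damper control: the thermostat watches only the fridge
    # compartment; freezer air is metered into it while the compressor runs.
    def control_step(fridge_temp_c, setpoint_c=4.0, hysteresis_c=1.0):
        """Return (compressor_on, damper_open) for one control tick."""
        too_warm = fridge_temp_c > setpoint_c + hysteresis_c
        return too_warm, too_warm    # damper opens whenever cooling runs

    print(control_step(6.5))   # (True, True): cool both compartments
    print(control_step(3.5))   # (False, False): coast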
Features
Newer refrigerators may include:
Automatic defrosting.
A power failure warning that alerts the user by flashing a temperature display. It may display the maximum temperature reached during the power failure, and whether frozen food has defrosted or may contain harmful bacteria.
Chilled water and ice from a dispenser in the door. Water and ice dispensing became available in the 1970s. In some refrigerators, the process of making ice is built in so the user does not have to manually use ice trays. Some refrigerators have water chillers and water filtration systems.
Cabinet rollers that let the refrigerator roll out for easier cleaning.
Adjustable shelves and trays.
A status indicator that notifies when it is time to change the water filter.
An in-door ice caddy, which relocates the ice-maker storage to the freezer door and saves usable freezer space. It is also removable, and helps to prevent ice-maker clogging.
A cooling zone in the refrigerator door shelves: air from the freezer section is diverted to the refrigerator door, to cool milk or juice stored in the door shelf.
A drop-down door built into the refrigerator main door, giving easy access to frequently used items such as milk, thus saving energy by not having to open the main door.
A fast-freeze function to rapidly cool foods by running the compressor for a predetermined amount of time, thus temporarily lowering the freezer temperature below normal operating levels. It is recommended to use this feature several hours before adding more than 1 kg of unfrozen food to the freezer. For freezers without this feature, lowering the temperature setting to the coldest will have the same effect.

Freezer defrost
Early freezer units accumulated ice crystals around the freezing units. This was a result of humidity, introduced into the units when the doors to the freezer were opened, condensing on the cold parts and then freezing. This frost buildup required periodic thawing ("defrosting") of the units to maintain their efficiency. Manual defrost (referred to as cyclic) units are still available. Advances in automatic defrosting that eliminate the thawing task were introduced in the 1950s, but are not universal, due to energy performance and cost. These units used a counter that only defrosted the freezer compartment (freezer chest) when a specific number of door openings had been made. The units were just a small timer combined with an electrical heater wire that heated the freezer's walls for a short amount of time to remove all traces of frost/frosting. Also, early units featured freezer compartments located within the larger refrigerator, accessed by opening the refrigerator door and then the smaller internal freezer door; units featuring an entirely separate freezer compartment were introduced in the early 1960s, becoming the industry standard by the middle of that decade. These older freezer compartments were the main cooling body of the refrigerator, and only maintained a temperature of around −6 °C (21 °F), which is suitable for keeping food for a week.

Butter heater
In the early 1950s, the butter conditioner's patent was filed and published by the inventor Alfred E. Nave. This feature was supposed to "provide a new and improved food storage receptacle for storing butter or the like which may quickly and easily be removed from the refrigerator cabinet for the purpose of cleaning."
Because of the high interest in the invention, companies in the UK, New Zealand, and Australia started to include the feature in mass-produced refrigerators, and it soon became a symbol of the local culture. However, it was removed from production not long afterwards: according to the companies, this was the only way for them to meet new environmental regulations, and they found it inefficient to have a heat-generating device inside a refrigerator. Later advances included automatic ice units and self-compartmentalized freezing units.

Types of domestic refrigerators
Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest is a Peltier refrigerator advertised as being able to hold 6 cans of beer. A large domestic refrigerator stands as tall as a person and may be about 1 m wide, with a capacity of around 600 litres. Some models for small households fit under kitchen work surfaces. Refrigerators may be combined with freezers, either stacked with refrigerator or freezer above, below, or side by side. A refrigerator without a frozen food storage compartment may have a small section just to make ice cubes. Freezers may have drawers to store food in, or they may have no divisions (chest freezers). Refrigerators and freezers may be free-standing, or built into a kitchen's cabinetry. Three distinct classes of refrigerator are common:

Compressor refrigerators
Compressor refrigerators are by far the most common type; they make a noticeable noise, but are the most efficient and give the greatest cooling effect. Portable compressor refrigerators for recreational vehicle (RV) and camping use are expensive but effective and reliable. Refrigeration units for commercial and industrial applications can be made in various sizes, shapes and styles to fit customer needs. Commercial and industrial refrigerators may have their compressors located away from the cabinet (similar to split-system air conditioners) to reduce noise nuisance and reduce the load on air conditioning in hot weather.

Absorption refrigerators
Absorption refrigerators may be used in caravans and trailers, and in dwellings lacking electricity, such as farms or rural cabins, where they have a long history. They may be powered by any heat source, gas (natural or propane) or kerosene being common. Models made for camping and RV use often have the option of running (inefficiently) on 12-volt battery power.

Peltier refrigerators
Peltier refrigerators are powered by electricity, usually 12 volt DC, but mains-powered wine coolers are available. Peltier refrigerators are inexpensive but inefficient, and become progressively more inefficient with increased cooling effect; much of this inefficiency may be related to the temperature differential across the short distance between the "hot" and "cold" sides of the Peltier cell. Peltier refrigerators generally use heat sinks and fans to lower this differential; the only noise produced comes from the fan. Reversing the polarity of the voltage applied to the Peltier cells results in a heating rather than cooling effect. Other specialized cooling mechanisms may be used for cooling, but have not been applied to domestic or commercial refrigerators.

Magnetic refrigerators
Magnetic refrigerators are refrigerators that work on the magnetocaloric effect. The cooling effect is triggered by placing a metal alloy in a magnetic field.
Acoustic refrigerators are refrigerators that use resonant linear reciprocating motors/alternators to generate a sound that is converted to heat and cold using compressed helium gas. The heat is discarded and the cold is routed to the refrigerator.

Energy efficiency
In a house without air-conditioning (space heating and/or cooling), refrigerators consume more energy than any other home device. In the early 1990s a competition was held among the major US manufacturers to encourage energy efficiency. Current US models that are Energy Star qualified use 50% less energy than the average 1974 model used. The most energy-efficient unit made in the US consumes about half a kilowatt-hour per day (equivalent to 20 W continuously). But even ordinary units are reasonably efficient; some smaller units use less than 0.2 kWh per day (equivalent to 8 W continuously). Larger units, especially those with large freezers and icemakers, may use as much as 4 kW·h per day (equivalent to 170 W continuously). The European Union uses a letter-based mandatory energy-efficiency rating label, with A being the most efficient, instead of the Energy Star. For US refrigerators, the Consortium for Energy Efficiency (CEE) further differentiates between Energy Star qualified refrigerators. Tier 1 refrigerators are those that are 20% to 24.9% more efficient than the Federal minimum standards set by the National Appliance Energy Conservation Act (NAECA). Tier 2 are those that are 25% to 29.9% more efficient. Tier 3 is the highest qualification, for those refrigerators that are at least 30% more efficient than Federal standards. About 82% of the Energy Star qualified refrigerators are Tier 1, with 13% qualifying as Tier 2, and just 5% at Tier 3. Besides the standard style of compressor refrigeration used in ordinary household refrigerators and freezers, there are technologies such as absorption and magnetic refrigeration. Although these designs generally use much more energy than compressor refrigeration, other qualities such as silent operation or the ability to use gas can favor their use in small enclosures, in a mobile environment, or in environments where failure of refrigeration must not be possible. Many refrigerators made in the 1930s and 1940s were far more efficient than most that were made later. This is partly due to features added later, such as auto-defrost, that reduced efficiency. Additionally, after World War II, refrigerator style became more important than efficiency. This was especially true in the US in the 1970s, when side-by-side models (known as American fridge-freezers outside of the US) with ice dispensers and water chillers became popular. The amount of insulation used was also often decreased to reduce refrigerator case size and manufacturing costs.

Improvement
Over time, standards of refrigerator energy efficiency have been introduced and tightened, which has driven steady improvement; 21st-century refrigerators are typically three times more energy-efficient than those of the 1930s. The efficiency of older refrigerators can be improved by regular defrosting (if the unit is manual defrost) and cleaning, replacing deteriorated door seals with new ones, not setting the thermostat colder than actually required (a refrigerator does not usually need to be colder than 4 °C), and replacing insulation, where applicable. Cleaning condenser coils to remove dust impeding heat flow, and ensuring that there is space for air flow around the condenser, can improve efficiency.
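The wattage equivalents quoted above follow from simple unit conversion, as this small sketch shows (the figures are the ones cited in the text):

    # Convert kilowatt-hours per day into an equivalent continuous draw.
    def kwh_per_day_to_watts(kwh):
        return kwh * 1000 / 24          # 1 kWh/day = 1000 Wh over 24 h

    for kwh in (0.2, 0.5, 4.0):
        print(f"{kwh:.1f} kWh/day ~ {kwh_per_day_to_watts(kwh):.0f} W continuous")
    # 0.2 -> ~8 W, 0.5 -> ~21 W, 4.0 -> ~167 W, matching the rounded
    # 8 W / 20 W / 170 W equivalents given above.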
Auto defrosting Frost-free refrigerators and freezers use electric fans to cool the appropriate compartment. This could be called a "fan-forced" refrigerator, whereas manual-defrost units rely on colder air settling at the bottom and warmer air rising to the top to achieve adequate cooling. The air is drawn in through an inlet duct and passed through the evaporator, where it is cooled; the air is then circulated throughout the cabinet via a series of ducts and vents. Because the air passing over the evaporator is relatively warm and moist, frost begins to form on the evaporator (especially on a freezer's evaporator). In cheaper and/or older models, the defrost cycle is controlled by a mechanical timer. This timer is set to shut off the compressor and fan and energize a heating element located near or around the evaporator for about 15 to 30 minutes every 6 to 12 hours. This melts any frost or ice build-up and allows the refrigerator to work normally once more. Frost-free units are believed to have a lower tolerance for frost, due to their air-conditioner-like evaporator coils. Therefore, if a door is left open accidentally (especially the freezer door), the defrost system may not remove all the frost; in that case, the freezer (or refrigerator) must be defrosted manually. If the defrosting system melts all the ice before the timed defrosting period ends, then a small device (called a defrost limiter) acts like a thermostat and shuts off the heating element to prevent too large a temperature fluctuation; it also prevents blasts of hot air when the system starts again, should it finish defrosting early. On some early frost-free models, the defrost limiter also sends a signal to the defrost timer to start the compressor and fan as soon as it shuts off the heating element, before the timed defrost cycle ends. When the defrost cycle is completed, the compressor and fan are allowed to cycle back on. Frost-free refrigerators, including some early frost-free refrigerators/freezers that used a cold plate in their refrigerator section instead of airflow from the freezer section, generally do not shut off their refrigerator fans during defrosting. This allows consumers to leave food in the main refrigerator compartment uncovered, and also helps keep vegetables moist. This method also helps reduce energy consumption, because the refrigerator compartment is above the freezing point and can pass its warmer-than-freezing air over the evaporator or cold plate to aid the defrosting cycle. Inverter With the advent of digital inverter compressors, energy consumption is reduced even further than with a single-speed induction motor compressor, and thus contributes far less in the way of greenhouse gases. The energy consumption of a refrigerator also depends on the type of refrigeration being done. For instance, inverter refrigerators consume comparatively less energy than typical non-inverter refrigerators. In an inverter refrigerator, the compressor is run conditionally, on the basis of demand. For instance, an inverter refrigerator might use less energy during the winter than during the summer, because the compressor runs for a shorter time. Further, newer models of inverter-compressor refrigerators take into account various external and internal conditions to adjust the compressor speed and thus optimize cooling and energy consumption.
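In control terms, this is a feedback loop: run faster when the cabinet is warm, and idle at a floor speed once a setpoint is reached. A minimal sketch follows; the setpoint, gain, and proportional law are illustrative assumptions, with only the 1200–4500 rpm operating range taken from the text below:

```python
# Minimal sketch of sensor-driven inverter-compressor speed control.
# Setpoint and gain are illustrative, not manufacturer values.
MIN_RPM, MAX_RPM = 1200, 4500   # operating range quoted in the text below
SETPOINT_C = 4.0                # illustrative target cabinet temperature
GAIN_RPM_PER_C = 800.0          # illustrative proportional gain

def compressor_rpm(cabin_temp_c: float) -> float:
    """Run faster the further the cabinet is above setpoint; never switch off."""
    error = max(cabin_temp_c - SETPOINT_C, 0.0)
    return min(MIN_RPM + GAIN_RPM_PER_C * error, MAX_RPM)

print(compressor_rpm(4.0))   # at setpoint: idles at 1200 rpm instead of stopping
print(compressor_rpm(7.5))   # door opened / warm food added: ramps to 4000 rpm
```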
Most such refrigerators use at least four sensors, which detect variation in external temperature; the internal temperature (rising when the refrigerator door is opened or new food is placed inside); humidity; and usage patterns. Depending on the sensor inputs, the compressor adjusts its speed. For example, if the door is opened or new food is placed inside, the sensors detect an increase in temperature inside the cabin and signal the compressor to increase its speed until a pre-determined temperature is attained, after which the compressor runs at a minimum speed to simply maintain the internal temperature. The compressor typically runs between 1200 and 4500 rpm. Inverter compressors not only optimize cooling but are also superior in terms of durability and energy efficiency. A device consumes the most energy and undergoes the most wear and tear when it switches itself on. As an inverter compressor never switches itself off and instead runs at varying speed, it minimizes wear and tear and energy usage. LG played a significant role in improving inverter compressors by reducing the friction points in the compressor, introducing linear inverter compressors. Conventionally, domestic refrigerators use a reciprocating drive connected to the piston, but in a linear inverter compressor the piston, which is a permanent magnet, is suspended between two electromagnets. The alternating current changes the magnetic poles of the electromagnets, which results in the push and pull that compresses the refrigerant. LG claims that this helps reduce energy consumption by 32% and noise by 25% compared with their conventional compressors. Form factor The physical design of a refrigerator also plays a large part in its energy efficiency. The most efficient design is the chest-style freezer, as its top-opening layout minimizes convection when it is opened, reducing the amount of warm, moist air entering the freezer. On the other hand, in-door ice dispensers cause more heat leakage, contributing to an increase in energy consumption. Impact Global adoption The gradual global adoption of refrigerators marks a transformative era in food preservation and domestic convenience. Since the refrigerator's introduction in the 20th century, refrigerators have transitioned from luxury items to everyday commodities, altering understandings of food storage practices. Refrigerators have significantly impacted many aspects of daily life by providing food safety to people around the world, spanning a wide variety of cultural and socioeconomic backgrounds. The global adoption of refrigerators has also changed how societies handle their food supply. The introduction of the refrigerator in different societies has contributed to monetized, industrialized mass food production systems, which are commonly linked to increased food waste, animal wastes, and dangerous chemical wastes being traced back into different ecosystems. In addition, refrigerators have provided an easier way to access food for many individuals around the world, with many of the options commercialization has promoted leaning towards foods of low nutrient density. After consumer refrigerators became financially viable for production and sale on a large scale, their prevalence around the globe expanded greatly. In the United States, an estimated 99.5% of households have a refrigerator.
Refrigerator ownership is more common in developed Western countries but has stayed relatively low in Eastern and developing countries, despite its growing popularity. Throughout Eastern Europe and the Middle East, only 80% of the population own refrigerators, and an estimated 65% of the population in China have refrigerators. The distribution of consumer refrigerators is also skewed: urban areas exhibit higher ownership rates than rural areas. Supplantation of the ice trade The ice trade was an industry of the 19th and 20th centuries involving the harvesting, transportation, and sale of natural and artificial ice for refrigeration and consumption. The majority of the ice used for trade was harvested in North America and transported globally, with some smaller operations working out of Norway. With the introduction of more affordable large- and home-scale refrigeration around the 1920s, large-scale ice harvesting and transportation were no longer needed, and the ice trade subsequently slowed and shrank to smaller-scale local services or disappeared altogether. Effect on diet and lifestyle The refrigerator allows households to keep food fresh for longer than before. The most notable improvement is for meat and other highly perishable wares, which previously needed to be preserved or otherwise processed for long-term storage and transport. This change in the supply chains of food products led to a marked increase in the quality of food in areas where refrigeration was used. Additionally, the increased freshness and shelf life of food brought about by refrigeration, together with growing global communication, has resulted in an increase in cultural exchange through food products from different regions of the world. There have also been claims that this increase in the quality of food is responsible for an increase in the height of United States citizens around the early 1900s. Refrigeration has also contributed to a decrease in the quality of food in some regions. By allowing, in part, for the phenomenon of globalization in the food sector, refrigeration has made the creation and transportation of ultra-processed foods and convenience foods inexpensive, leading to their prevalence, especially in lower-income regions. Regions with lessened access to higher-quality foods are referred to as food deserts. Freezers allow people to buy food in bulk and eat it at leisure, and bulk purchases may save money. Ice cream, a popular commodity of the 20th century, could previously only be obtained by traveling to where the product was made and eating it on the spot. Now it is a common food item. Ice on demand not only adds to the enjoyment of cold drinks, but is useful for first aid, and for cold packs that can be kept frozen for picnics or in case of emergency. Temperature zones and ratings Residential units The capacity of a refrigerator is measured in either liters or cubic feet. Typically, the volume of a combined refrigerator-freezer is split with 1/3 to 1/4 of the volume allocated to the freezer, although these values are highly variable. Temperature settings for refrigerator and freezer compartments are often given arbitrary numbers by manufacturers (for example, 1 through 9, warmest to coldest), but generally is ideal for the refrigerator compartment and for the freezer. Some refrigerators must be within certain external temperature parameters to run properly.
This can be an issue when placing units in an unfinished area, such as a garage. Some refrigerators are now divided into four zones to store different types of food: (freezer) (meat zone) (cooling zone) (crisper) European freezers, and refrigerators with a freezer compartment, have a four-star rating system to grade freezers. Although both the three- and four-star ratings specify the same storage times and the same minimum temperature of , only a four-star freezer is intended for freezing fresh food, and may include a "fast freeze" function (running the compressor continually, down to as low as ) to facilitate this. Three (or fewer) stars are used for frozen-food compartments that are only suitable for storing frozen food; introducing fresh food into such a compartment is likely to result in unacceptable temperature rises. This difference in categorization is shown in the design of the 4-star logo, where the "standard" three stars are displayed in a box using "positive" colours, denoting the same normal operation as a 3-star freezer, and the fourth star, showing the additional fresh food/fast freeze function, is prefixed to the box in "negative" colours or with other distinct formatting. Most European refrigerators include a moist cold refrigerator section (which does require (automatic) defrosting at irregular intervals) and a (rarely frost-free) freezer section. Commercial refrigeration temperatures (from warmest to coolest) Refrigerators , and not greater than maximum refrigerator temperature at Freezer, Reach-in Freezer, Walk-in Freezer, Ice Cream Cryogenics Cryocooler: below -153 °C (-243.4 °F) Dilution refrigerator: down to -273.148 °C (-459.6664 °F) Disposal An increasingly important environmental concern is the disposal of old refrigerators: initially because Freon coolant damages the ozone layer, but, as older-generation refrigerators wear out, also because of the destruction of CFC-bearing insulation. Modern refrigerators usually use a refrigerant called HFC-134a (1,1,1,2-tetrafluoroethane), which does not deplete the ozone layer, unlike Freon. R-134a is becoming much rarer in Europe, where newer refrigerants are used instead. The main refrigerant now used is R-600a (isobutane), which has a smaller effect on the atmosphere if released. There have been reports of refrigerators exploding when leaking isobutane refrigerant is ignited by a spark. If the coolant leaks into the cabinet while the door is not being opened (such as overnight), the concentration of coolant in the air within the refrigerator can build up to form an explosive mixture that can be ignited either by a spark from the thermostat or by the light coming on as the door is opened; documented cases of serious property damage, injury, and even death have resulted from such explosions. Disposal of discarded refrigerators is regulated, often mandating the removal of doors for safety reasons. Children have been asphyxiated while playing with discarded refrigerators, particularly older models with latching doors. Since the 1950s, regulations in many places have banned the use of refrigerator doors that cannot be opened by pushing from inside. Modern units use a magnetic door gasket that holds the door sealed but allows it to be pushed open from the inside. This gasket was invented, developed and manufactured by Max Baermann (1903–1984) of Bergisch Gladbach, Germany. Regarding total life-cycle costs, many governments offer incentives to encourage the recycling of old refrigerators.
One example is the Phoenix refrigerator program launched in Australia. Under this government incentive, old refrigerators were picked up and their owners paid for "donating" them. Each refrigerator was then refurbished, with new door seals, a thorough cleaning, and the removal of items such as the cover strapped to the back of many older units. The resulting refrigerators, now over 10% more efficient, were then given to low-income families. The United States also has a program for collecting and replacing older, less-efficient refrigerators and other white goods. These programs seek to replace large appliances that are old, inefficient, or faulty with newer, more energy-efficient appliances, to reduce the cost imposed on lower-income families and the pollution caused by the older appliances.
Technology
Household appliances
null
1037854
https://en.wikipedia.org/wiki/Free%20electron%20model
Free electron model
In solid-state physics, the free electron model is a quantum mechanical model for the behaviour of charge carriers in a metallic solid. It was developed in 1927, principally by Arnold Sommerfeld, who combined the classical Drude model with quantum mechanical Fermi–Dirac statistics; hence it is also known as the Drude–Sommerfeld model. Given its simplicity, it is surprisingly successful in explaining many experimental phenomena, especially the Wiedemann–Franz law, which relates electrical conductivity and thermal conductivity; the temperature dependence of the electron heat capacity; the shape of the electronic density of states; the range of binding energy values; electrical conductivities; the Seebeck coefficient of the thermoelectric effect; and thermal electron emission and field electron emission from bulk metals. The free electron model solved many of the inconsistencies related to the Drude model and gave insight into several other properties of metals. The free electron model considers that metals are composed of a quantum electron gas in which the ions play almost no role. The model can be very predictive when applied to alkali and noble metals. Ideas and assumptions In the free electron model four main assumptions are taken into account: Free electron approximation: The interaction between the ions and the valence electrons is mostly neglected, except in boundary conditions. The ions only keep the charge neutrality in the metal. Unlike in the Drude model, the ions are not necessarily the source of collisions. Independent electron approximation: The interactions between electrons are ignored. The electrostatic fields in metals are weak because of the screening effect. Relaxation-time approximation: There is some unknown scattering mechanism such that the electron probability of collision is inversely proportional to the relaxation time τ, which represents the average time between collisions. The collisions do not depend on the electronic configuration. Pauli exclusion principle: Each quantum state of the system can only be occupied by a single electron. This restriction of available electron states is taken into account by Fermi–Dirac statistics (see also Fermi gas). The main predictions of the free-electron model are derived using the Sommerfeld expansion of the Fermi–Dirac occupancy for energies around the Fermi level. The name of the model comes from the first two assumptions, as each electron can be treated as a free particle with a quadratic relation between energy and momentum. The crystal lattice is not explicitly taken into account in the free electron model, but a quantum-mechanical justification was given a year later (1928) by Bloch's theorem: an unbound electron moves in a periodic potential as a free electron in vacuum, except for the electron mass me becoming an effective mass m* which may deviate considerably from me (one can even use a negative effective mass to describe conduction by electron holes). Effective masses can be derived from band structure computations that were not originally taken into account in the free electron model. From the Drude model Many physical properties follow directly from the Drude model, as some equations do not depend on the statistical distribution of the particles. Taking the classical velocity distribution of an ideal gas or the velocity distribution of a Fermi gas only changes the results related to the speed of the electrons.
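Before turning to specific predictions, the "quadratic relation between energy and momentum" invoked above can be written out explicitly (standard free-particle notation, stated here for reference):

```latex
E(\mathbf{k}) = \frac{p^2}{2m^*} = \frac{\hbar^2 k^2}{2m^*},
\qquad \mathbf{p} = \hbar \mathbf{k},
```

with the effective mass $m^*$ reducing to the bare electron mass $m_e$ for truly free electrons.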
Mainly, the free electron model and the Drude model predict the same DC electrical conductivity σ for Ohm's law, that is, $\mathbf{J} = \sigma\mathbf{E}$ with $$\sigma = \frac{n e^2 \tau}{m_e},$$ where $\mathbf{J}$ is the current density, $\mathbf{E}$ is the external electric field, $n$ is the electronic density (number of electrons per volume), $\tau$ is the mean free time and $e$ is the electron electric charge. Other quantities that remain the same under the free electron model as under Drude's are the AC susceptibility, the plasma frequency, the magnetoresistance, and the Hall coefficient related to the Hall effect. Properties of an electron gas Many properties of the free electron model follow directly from equations related to the Fermi gas, as the independent electron approximation leads to an ensemble of non-interacting electrons. For a three-dimensional electron gas we can define the Fermi energy as $$E_F = \frac{\hbar^2}{2m_e}\left(3\pi^2 n\right)^{2/3},$$ where $\hbar$ is the reduced Planck constant. The Fermi energy defines the energy of the highest-energy electron at zero temperature. For metals the Fermi energy is on the order of a few electronvolts above the free electron band minimum energy. Density of states The 3D density of states (number of energy states, per energy per volume) of a non-interacting electron gas is given by $$g(E) = \frac{1}{2\pi^2}\left(\frac{2m_e}{\hbar^2}\right)^{3/2}\sqrt{E},$$ where $E$ is the energy of a given electron. This formula takes into account the spin degeneracy but does not consider a possible energy shift due to the bottom of the conduction band. For 2D the density of states is constant, and for 1D it is inversely proportional to the square root of the electron energy. Fermi level The chemical potential of electrons in a solid is also known as the Fermi level and, like the related Fermi energy, is often denoted $\mu$. The Sommerfeld expansion can be used to calculate the Fermi level (at $T > 0$) at higher temperatures as $$\mu(T) = E_F\left[1 - \frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2 - \cdots\right],$$ where $T$ is the temperature and we define $T_F = E_F/k_B$ as the Fermi temperature ($k_B$ is the Boltzmann constant). The perturbative approach is justified because the Fermi temperature is usually about 10⁵ K for a metal; hence at room temperature or lower the Fermi energy and the chemical potential are practically equivalent. Compressibility of metals and degeneracy pressure The total energy per unit volume (at $T = 0$) can also be calculated by integrating over the phase space of the system; we obtain $$\frac{U}{V} = \frac{3}{5}\,n E_F,$$ which does not depend on temperature. Compare with the energy per electron of an ideal gas, $\frac{3}{2}k_B T$, which is null at zero temperature. For an ideal gas to have the same energy as the electron gas, the temperatures would need to be of the order of the Fermi temperature. Thermodynamically, this energy of the electron gas corresponds to a zero-temperature pressure given by $$P = -\left(\frac{\partial U}{\partial V}\right)_{T,\mu} = \frac{2}{3}\frac{U}{V},$$ where $V$ is the volume and $U$ is the total energy, the derivative being performed at constant temperature and chemical potential. This pressure is called the electron degeneracy pressure and does not come from repulsion or motion of the electrons but from the restriction that no more than two electrons (due to the two values of spin) can occupy the same energy level. This pressure defines the compressibility or bulk modulus of the metal, $$B = -V\left(\frac{\partial P}{\partial V}\right) = \frac{5}{3}P = \frac{2}{3}\,n E_F.$$ This expression gives the right order of magnitude for the bulk modulus of alkali metals and noble metals, which shows that this pressure is as important as other effects inside the metal. For other metals the crystalline structure has to be taken into account. Magnetic response According to the Bohr–Van Leeuwen theorem, a classical system at thermodynamic equilibrium cannot have a magnetic response. The magnetic properties of matter in terms of a microscopic theory are purely quantum mechanical.
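As a numerical check of the Fermi energy and Fermi temperature defined above, the formulas can be evaluated for a concrete metal; the conduction-electron density of copper used below (n ≈ 8.5 × 10²⁸ m⁻³) is a standard textbook value, assumed here for illustration:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837e-31     # electron mass, kg
k_B  = 1.380649e-23      # Boltzmann constant, J/K
eV   = 1.602176634e-19   # J per electronvolt

n = 8.5e28  # m^-3, conduction-electron density of copper (textbook value)

E_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)  # Fermi energy
T_F = E_F / k_B                                              # Fermi temperature

print(f"E_F ≈ {E_F / eV:.1f} eV")   # ≈ 7.0 eV, a few eV as stated above
print(f"T_F ≈ {T_F:.1e} K")         # ≈ 8e4 K, i.e. of order 10^5 K
```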
For an electron gas, the total magnetic response is paramagnetic and its magnetic susceptibility is given by $$\chi = \frac{2}{3}\,\mu_0\,\mu_B^2\, g(E_F),$$ where $\mu_0$ is the vacuum permeability and $\mu_B$ is the Bohr magneton. This value results from the competition of two contributions: a diamagnetic contribution (known as Landau's diamagnetism) coming from the orbital motion of the electrons in the presence of a magnetic field, and a paramagnetic contribution (Pauli's paramagnetism). The latter contribution is three times larger in absolute value than the diamagnetic contribution and comes from the electron spin, an intrinsic quantum degree of freedom that can take two discrete values and is associated with the electron magnetic moment. Corrections to Drude's model Heat capacity One open problem in solid-state physics before the arrival of quantum mechanics was understanding the heat capacity of metals. While most solids have a constant volumetric heat capacity given by the Dulong–Petit law, of about $3nk_B$ at large temperatures, that law does not correctly predict the behavior at low temperatures. In the case of metals that are good conductors, it was expected that the electrons also contributed to the heat capacity. The classical calculation using Drude's model, based on an ideal gas, provides a volumetric heat capacity given by $c_V = \frac{3}{2}nk_B$. If this were the case, the heat capacity of a metal should be 1.5 times that obtained by the Dulong–Petit law. Nevertheless, such a large additional contribution to the heat capacity of metals was never measured, raising suspicions about the argument above. By using Sommerfeld's expansion one can obtain corrections of the energy density at finite temperature and obtain the volumetric heat capacity of an electron gas, given by $$c_V = \frac{\pi^2}{2}\,\frac{T}{T_F}\,n k_B,$$ where the prefactor of $nk_B$ is considerably smaller than the classical 3/2: about 100 times smaller at room temperature and much smaller at lower $T$. Evidently, the electronic contribution alone does not predict the Dulong–Petit law, i.e. the observation that the heat capacity of a metal is still constant at high temperatures. The free electron model can be improved in this sense by adding the contribution of the vibrations of the crystal lattice. Two famous quantum corrections include the Einstein solid model and the more refined Debye model. With the addition of the latter, the volumetric heat capacity of a metal at low temperatures can be more precisely written in the form $$c_V \approx \gamma T + A T^3,$$ where $\gamma$ and $A$ are constants related to the material. The linear term comes from the electronic contribution, while the cubic term comes from the Debye model. At high temperature this expression is no longer correct: the electronic heat capacity can be neglected, and the total heat capacity of the metal tends to a constant given by the Dulong–Petit law. Mean free path Notice that without the relaxation-time approximation there is no reason for the electrons to deflect their motion, as there are no interactions, and thus the mean free path should be infinite. The Drude model considered the mean free path of electrons to be close to the distance between ions in the material, implying the earlier conclusion that the diffusive motion of the electrons was due to collisions with the ions. The mean free paths in the free electron model are instead given by $\lambda = v_F \tau$ (where $v_F$ is the Fermi speed) and are on the order of hundreds of ångströms, at least one order of magnitude larger than any possible classical calculation.
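The "hundreds of ångströms" figure can be reproduced using the copper values estimated above; the relaxation time below (τ ≈ 2.5 × 10⁻¹⁴ s, a typical room-temperature value extracted from measured conductivities) is an illustrative assumption:

```python
import math

E_F = 1.13e-18    # J, Fermi energy of copper from the estimate above (~7 eV)
m_e = 9.1093837e-31
tau = 2.5e-14     # s, illustrative room-temperature relaxation time for copper

v_F = math.sqrt(2 * E_F / m_e)   # Fermi speed
lam = v_F * tau                  # mean free path, lambda = v_F * tau

print(f"v_F ≈ {v_F:.2e} m/s")          # ≈ 1.6e6 m/s
print(f"lambda ≈ {lam * 1e10:.0f} Å")  # ≈ 390 Å: hundreds of ångströms, as stated
```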
The mean free path is then not a result of electron–ion collisions but is instead related to imperfections in the material, either due to defects and impurities in the metal or due to thermal fluctuations. Thermal conductivity and thermopower While Drude's model predicts a similar value for the electric conductivity as the free electron model, the models predict slightly different thermal conductivities. The thermal conductivity is given by $\kappa = \frac{1}{3}c_V \langle v^2 \rangle \tau$ for free particles, which is proportional to the heat capacity and to the mean free path, both of which depend on the model ($\langle v^2 \rangle$ is the mean square speed of the electrons, equal to the square of the Fermi speed in the case of the free electron model). This implies that the ratio between thermal and electric conductivity is given by the Wiedemann–Franz law, $$\frac{\kappa}{\sigma T} = L,$$ where $L$ is the Lorenz number, given in the free electron model by $$L = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2.$$ This value, about $2.44\times10^{-8}\ \mathrm{V^2\,K^{-2}}$, is close to measured values, while the Drude prediction is off by about half, which is not a large difference (a numerical check of both values follows below). The close prediction of the Lorenz number in the Drude model was a result of the classical kinetic energy of the electron being about 100 times smaller than the quantum version, compensating for the large value of the classical heat capacity. However, Drude's model predicts the wrong order of magnitude for the Seebeck coefficient (thermopower), which relates the generation of a potential difference to an applied temperature gradient across a sample. This coefficient can be shown to be $S = -\frac{c_V}{3ne}$, which is just proportional to the heat capacity, so the Drude model predicts a constant that is a hundred times larger than the value of the free electron model. The latter gives a coefficient that is linear in temperature and provides much more accurate absolute values, in the order of a few tens of μV/K at room temperature. However, this model fails to predict the sign change of the thermopower in lithium and in noble metals like gold and silver. Inaccuracies and extensions The free electron model presents several inadequacies that are contradicted by experimental observation. We list some inaccuracies below: Temperature dependence The free electron model presents several physical quantities that have the wrong temperature dependence, or no dependence at all, like the electrical conductivity. The thermal conductivity and specific heat are well predicted for alkali metals at low temperatures, but the model fails to predict the high-temperature behaviour coming from ion motion and phonon scattering. Hall effect and magnetoresistance The Hall coefficient has the constant value $R_H = -\frac{1}{ne}$ in both Drude's model and the free electron model. This value is independent of temperature and of the strength of the magnetic field. The Hall coefficient is actually dependent on the band structure, and the difference with the model can be quite dramatic when studying elements like magnesium and aluminium that have a strong magnetic-field dependence. The free electron model also predicts that the transverse magnetoresistance, the resistance in the direction of the current, does not depend on the strength of the field. In almost all cases it does. Directional The conductivity of some metals can depend on the orientation of the sample with respect to the electric field. Sometimes even the electrical current is not parallel to the field. This possibility is not described, because the model does not incorporate the crystallinity of metals, i.e. the existence of a periodic lattice of ions.
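As referenced above, both versions of the Lorenz number follow from fundamental constants alone:

```python
import math

k_B = 1.380649e-23     # J/K
e   = 1.602176634e-19  # C

L_free  = (math.pi**2 / 3) * (k_B / e) ** 2   # free electron (Sommerfeld) value
L_drude = (3 / 2) * (k_B / e) ** 2            # classical Drude estimate

print(f"free electron: {L_free:.2e} V^2/K^2")   # ≈ 2.44e-8, close to experiment
print(f"Drude:         {L_drude:.2e} V^2/K^2")  # ≈ 1.11e-8, off by about half
```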
Diversity in the conductivity Not all materials are electrical conductors: some do not conduct electricity very well (insulators), and some conduct only when impurities are added (semiconductors). Semimetals, with narrow conduction bands, also exist. This diversity is not predicted by the model and can only be explained by analysing the valence and conduction bands. Additionally, electrons are not the only charge carriers in a metal: electron vacancies, or holes, can be seen as quasiparticles carrying positive electric charge. Conduction by holes leads to an opposite sign for the Hall and Seebeck coefficients predicted by the model. Other inadequacies are present in the Wiedemann–Franz law at intermediate temperatures and in the frequency dependence of metals in the optical spectrum. More exact values for the electrical conductivity and the Wiedemann–Franz law can be obtained by softening the relaxation-time approximation by appealing to the Boltzmann transport equations. The exchange interaction is totally excluded from this model, and its inclusion can lead to other magnetic responses like ferromagnetism. An immediate continuation of the free electron model can be obtained by assuming the empty lattice approximation, which forms the basis of the band structure model known as the nearly free electron model. Adding repulsive interactions between electrons does not change the picture presented here very much. Lev Landau showed that a Fermi gas under repulsive interactions can be seen as a gas of equivalent quasiparticles that slightly modify the properties of the metal. Landau's model is now known as the Fermi liquid theory. More exotic phenomena like superconductivity, where interactions can be attractive, require a more refined theory.
Physical sciences
Basics_2
Physics
1037904
https://en.wikipedia.org/wiki/Lycopodium
Lycopodium
Lycopodium (from Greek lykos, wolf and podion, diminutive of pous, foot) is a genus of clubmosses, also known as ground pines or creeping cedars, in the family Lycopodiaceae. Two very different circumscriptions of the genus are in use. In the Pteridophyte Phylogeny Group classification of 2016 (PPG I), Lycopodium is one of nine genera in the subfamily Lycopodioideae, and has from nine to 15 species. In other classifications, the genus is equivalent to the whole of the subfamily, since it includes all of the other genera. More than 40 species are accepted. Description They are flowerless, vascular, terrestrial or epiphytic plants, with widely branched, erect, prostrate, or creeping stems, with small, simple, needle-like or scale-like leaves that cover the stem and branches thickly. The stems usually creep along the ground, forking at intervals. The leaves contain a single, unbranched vascular strand and are microphylls by definition. They are usually arranged in spirals. The kidney-shaped (reniform) spore-cases (sporangia) contain spores of one kind only (isosporous, homosporous) and are borne on the upper surface of the leaf blade of specialized leaves (sporophylls) arranged in a cone-like strobilus at the end of upright stems. Each sporangium contains numerous small spores. The club-shaped appearance of these fertile stems gives the clubmosses their common name. Lycopods reproduce asexually by spores. The plants have an underground sexual phase that produces gametes, and this alternates in the lifecycle with the spore-producing plant. The prothallium developed from the spore is a subterranean mass of tissue of considerable size, and bears both the male and female organs (antheridia and archegonia). More commonly, though, the plants spread vegetatively through above- or below-ground rhizomes. Taxonomy The genus Lycopodium was first published by Carl Linnaeus in 1753. He placed it in the Musci (mosses) along with genera such as Sphagnum, and included species such as Lycopodium selaginoides, now placed in the genus Selaginella in a different order from Lycopodium. Different sources use substantially different circumscriptions of the genus. Traditionally, Lycopodium was considered to be the only extant genus in the family Lycopodiaceae, and so included all the species in the family, although sometimes excluding one placed in the monotypic genus Phylloglossum. Other sources divide Lycopodiaceae species into three broadly defined genera: Lycopodium, Huperzia (including Phylloglossum) and Lycopodiella. In this approach, Lycopodium sensu lato has about 40 species. In the Pteridophyte Phylogeny Group classification of 2016 (PPG I), the broadly defined genus is equivalent to the subfamily Lycopodioideae, and Lycopodium is one of 16 genera in the family Lycopodiaceae, with between 9 and 15 species. Species Using the narrow circumscription of Lycopodium, in which it is one of nine genera in the subfamily Lycopodioideae, the Checklist of Ferns and Lycophytes of the World recognized the following species: Lycopodium clavatum L. – stag's-horn clubmoss; subcosmopolitan Lycopodium diaphanum (P.Beauv.) Sw. – Tristan da Cunha Lycopodium japonicum Thunb. – eastern Asia (Japan west and south to India and Sri Lanka) Lycopodium lagopus (Laest. ex C.Hartm.) Zinserl. ex Kuzen. – circumpolar arctic and subarctic Lycopodium papuanum Nessel – New Guinea Lycopodium venustulum Gaudich. – Hawaii, Western Samoa, the Society Islands Lycopodium vestitum Desv. ex Poir.
– northwest South America (Andes) Uses The spores of Lycopodium species are harvested and sold as lycopodium powder. The herb of Lycopodium species has been used in traditional Austrian medicine, internally as a tea or externally as compresses, for treatment of disorders of the locomotor system, skin, liver and bile, kidneys and urinary tract, infections, rheumatism, and gout, though claims of efficacy are unproven. It has also been used in some United States government chemical warfare test programs such as Operation Dew. Lycopodium powder was also used to determine the molecular size of oleic acid.
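The oleic acid measurement mentioned above is the classic monolayer experiment: a drop of dilute oleic acid solution spreads on lycopodium-dusted water into a circular film one molecule thick, so the molecular length is roughly the drop's acid volume divided by the film's area. A sketch of the arithmetic, with illustrative (not historical) measurements:

```python
import math

# Illustrative measurements: one 0.006 cm^3 drop of a 1-in-500 oleic acid
# solution, spreading into a 12 cm diameter circular patch on dusted water.
drop_volume_m3 = 6.0e-9          # 0.006 cm^3 of solution
acid_volume_m3 = drop_volume_m3 / 500
patch_diameter_m = 0.12

area = math.pi * (patch_diameter_m / 2) ** 2
thickness = acid_volume_m3 / area  # film is ~one molecule thick

print(f"estimated molecular length ≈ {thickness:.1e} m")  # ~1e-9 m, nanometer scale
```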
Biology and health sciences
Lycophytes
Plants
1038227
https://en.wikipedia.org/wiki/Forearc
Forearc
A forearc is a region in a subduction zone between an oceanic trench and the associated volcanic arc. Forearc regions are present along convergent margins and, as the name suggests, form 'in front of' the volcanic arcs that are characteristic of convergent plate margins. A back-arc region is the companion region behind the volcanic arc. Many forearcs have an accretionary wedge, which may form a topographic ridge known as an outer arc ridge that parallels the volcanic arc. A forearc basin between the accretionary wedge and the volcanic arc can accumulate thick deposits of sediment, sometimes referred to as an outer arc trough. Due to collisional stresses as one tectonic plate subducts under another, forearc regions are sources of powerful earthquakes. Formation During subduction, an oceanic plate is thrust below another tectonic plate, which can be oceanic or continental. Water and other volatiles in the subducting plate cause flux melting in the upper mantle, creating magma that rises and penetrates the overriding plate, forming a volcanic arc. The weight of the subducting slab flexes the overriding plate and creates an oceanic trench. The area between the trench and the arc is called the forearc region, with the area behind the arc and away from the trench known as the back-arc region. The mantle region between the overriding plate and the subducting slab experiences corner flow near the back-arc, driven by the down-dip motion of the subducting slab. At the same time, the temperature of the mantle wedge closer to the trench is dominated by the denser and colder subducting slab, resulting in a cold, stagnant portion of the mantle wedge. Initial theories proposed that the oceanic trenches and magmatic arcs were the primary suppliers of the accretionary sedimentation wedges in the forearc regions. More recent discoveries suggest that some of the accreted material in the forearc region is from a mantle source, along with trench turbidites derived from continental material. This theory holds due to evidence of pelagic sediments and continental crust being subducted, in processes known as sediment subduction and subduction erosion respectively. Over geological time there is constant recycling of the forearc deposits due to erosion, deformation and sedimentary subduction. The constant circulation of material in the forearc region (accretionary prism, forearc basin and trench) generates a mixture of igneous, metamorphic and sedimentary sequences. In general, there is an increase in metamorphic grade from trench to arc, where the highest grade (blueschist to eclogite) is structurally uplifted (in the prisms) compared to the younger deposits (basins). Forearc regions are also where ophiolites are emplaced should obduction occur, but such deposits are not continuous and can often be removed by erosion. As tectonic plates converge, the closing of an ocean will result in the convergence of two landmasses, each of which is either an island arc or a continental margin. When these two bodies collide, the result is orogenesis, at which time the underthrusting oceanic crust slows down. In the early stages of arc–continent collision, there is uplift and erosion of the accretionary prism and forearc basin. In the later stages of collision, the forearc region may be sutured, rotated and shortened, which can form syn-collisional folds and thrust belts. Structure At the surface, the forearc region can include a forearc basin (or basins), outer-arc high, accretionary prism and the trench itself.
The forearc subduction interface can include a seismogenic zone, where megathrust earthquakes can occur, a decoupled zone, and a viscously coupled zone. The accretionary prism is located at the slope of the trench break, where there is a significantly decreased slope angle. Between the break and the magmatic arc, a sedimentary basin filled with erosive material from the volcanic arc and substrate can accumulate into a forearc basin, which overlies the oldest thrust slices in the wedge of the forearc region. In general, the forearc topography (specifically in the trench region) tends toward an equilibrium between buoyancy and the tectonic forces caused by subduction. Upward motion of the forearc is related to buoyancy forces, and the downward motion is associated with the tectonic forcing which causes the oceanic lithosphere to descend. The relationship between surface slope and subduction thrust also plays a major role in the variation of forearc structure and deformation. A subduction wedge can be classified as either stable, with little deformation, or unstable, with pervasive internal deformation (see section on Models). Some common deformations in forearc sediments are synsedimentary deformation and olistostromes, such as those seen in the Magnitogorsk forearc region. Models There are two models which characterize forearc basin formation and deformation, dependent on sediment deposition and subsidence (see figure). The first model represents a forearc basin formed with little to no sediment supply. Conversely, the second model represents a basin with a healthy sediment supply. Basin depth depends on the supply of oceanic plate sediments, continentally derived clastic material and orthogonal convergence rates. The accretionary flux (sediment supply in and out) also determines the rate at which the sedimentation wedges grow within the forearc. The age of the oceanic crust, along with the convergence velocity, controls the coupling across the converging interface of the continental and oceanic crust. The strength of this coupling controls the deformation associated with the event and can be seen in the deformation signatures of the forearc region. Seismicity The intense interaction between the overriding and underthrusting plates in forearc regions has been shown to produce strong coupling mechanisms which result in megathrust earthquakes, such as the Tohoku-oki earthquake which occurred off the Pacific coast of northeast Japan (Tian and Liu, 2013). These megathrust earthquakes may be correlated with the low values of heat flow generally associated with forearc regions. Geothermal data show a heat flow of ~30–40 mW/m², which indicates cold, strong mantle. Examples One good example is the Mariana forearc, where scientists have done extensive research. In this setting there is an erosive margin and a forearc slope which features serpentinite mud volcanoes up to 2 km high and 30 km in diameter. The erosive properties of these volcanoes are consistent with the metamorphic grades (blueschists) expected for this region of the forearc. There is evidence from geothermal data and models which shows the slab–mantle interface, levels of friction and the cool oceanic lithosphere at the trench. Other examples are: Central Andean Forearc Banda Forearc Savu-Wetar Forearc Luzon arc-forearc Tohoku Forearc Between Western Cordillera and Peru-Chile Trench
Physical sciences
Tectonics
Earth science
1039075
https://en.wikipedia.org/wiki/Aroma%20compound
Aroma compound
An aroma compound, also known as an odorant, aroma, fragrance or flavoring, is a chemical compound that has a smell or odor. For an individual chemical or class of chemical compounds to impart a smell or fragrance, it must be sufficiently volatile for transmission via the air to the olfactory system in the upper part of the nose. As an example, various fragrant fruits have diverse aroma compounds; strawberries in particular are commercially cultivated to have appealing aromas and contain several hundred aroma compounds. Generally, molecules meeting this specification have molecular weights of less than 310. Flavors affect both the sense of taste and smell, whereas fragrances affect only smell. Flavors tend to be naturally occurring, while the term fragrance may also apply to synthetic compounds, such as those used in cosmetics. Aroma compounds can naturally be found in various foods, such as fruits and their peels, wine, spices, floral scent, perfumes, fragrance oils, and essential oils. For example, many form biochemically during the ripening of fruits and other crops. Wines have more than 100 aromas that form as byproducts of fermentation. Many aroma compounds also play a significant role in the production of compounds used in the food service industry to flavor, improve, and generally increase the appeal of products. An odorizer may add a detectable odor to a dangerous odorless substance, like propane, natural gas, or hydrogen, as a safety measure. Aroma compounds classified by structure
Esters
Linear terpenes
Cyclic terpenes (note: carvone, depending on its chirality, offers two different smells)
Aromatic
Amines
Other aroma compounds
Alcohols
Furaneol (strawberry)
1-Hexanol (herbaceous, woody)
cis-3-Hexen-1-ol (fresh cut grass)
Menthol (peppermint)
Aldehydes
High concentrations of aldehydes tend to be very pungent and overwhelming, but low concentrations can evoke a wide range of aromas.
Acetaldehyde (ethereal)
Hexanal (green, grassy)
cis-3-Hexenal (green tomatoes)
Furfural (burnt oats)
Hexyl cinnamaldehyde
Isovaleraldehyde – nutty, fruity, cocoa-like
Anisic aldehyde – floral, sweet, hawthorn; a crucial component of chocolate, vanilla, strawberry, raspberry, apricot, and others
Cuminaldehyde (4-propan-2-ylbenzaldehyde) – spicy, cumin-like, green
Esters
Fructone (fruity, apple-like)
Ethyl methylphenylglycidate (strawberry)
alpha-Methylbenzyl acetate (gardenia)
Ketones
Cyclopentadecanone (musk ketone)
Dihydrojasmone (fruity, woody, floral)
Oct-1-en-3-one (blood, metallic, mushroom-like)
2-Acetyl-1-pyrroline (fresh bread, jasmine rice)
6-Acetyl-2,3,4,5-tetrahydropyridine (fresh bread, tortillas, popcorn)
Lactones
gamma-Decalactone – intense peach flavor
gamma-Nonalactone – coconut odor, popular in suntan lotions
delta-Octalactone – creamy note
Jasmine lactone – powerful fatty-fruity peach and apricot
Massoia lactone – powerful creamy coconut
Wine lactone – sweet coconut odor
Sotolon (maple syrup, curry, fenugreek)
Thiols
Thioacetone (2-propanethione) – a little-studied organosulfur compound; its smell is so potent that it can be detected several hundred meters downwind mere seconds after a container is opened
Allyl thiol (2-propenethiol; allyl mercaptan; CH2=CHCH2SH) – garlic volatiles and garlic breath
(Methylthio)methanethiol (CH3SCH2SH) – the "mouse thiol", found in mouse urine, where it functions as a semiochemical for female mice
Ethanethiol, commonly called ethyl mercaptan – added to propane or other liquefied-petroleum gases used as fuel gases
2-Methyl-2-propanethiol, commonly called tert-butyl mercaptan – added as a blend with other components to natural gas used as fuel gas
Butane-1-thiol, commonly called butyl mercaptan – a chemical intermediate
Grapefruit mercaptan (grapefruit)
Methanethiol, commonly called methyl mercaptan (in urine after eating asparagus)
Furan-2-ylmethanethiol, also called furfuryl mercaptan (roasted coffee)
Benzyl mercaptan (leek- or garlic-like)
Miscellaneous compounds
Methylphosphine and dimethylphosphine (garlic-metallic; two of the most potent odorants known)
Phosphine (zinc phosphide poisoned bait)
Diacetyl (butter flavor)
Acetoin (butter flavor)
Nerolin (orange flowers)
Tetrahydrothiophene (added to natural gas)
2,4,6-Trichloroanisole (cork taint)
Substituted pyrazines
Aroma-compound receptors Animals that are capable of smell detect aroma compounds with their olfactory receptors. Olfactory receptors are cell-membrane receptors on the surface of sensory neurons in the olfactory system that detect airborne aroma compounds. Aroma compounds can be identified by gas chromatography–olfactometry, which involves a human operator sniffing the GC effluent. In mammals, olfactory receptors are expressed on the surface of the olfactory epithelium in the nasal cavity. Safety and regulation In 2005–06, fragrance mix was the third-most-prevalent allergen in patch tests (11.5%). 'Fragrance' was voted Allergen of the Year in 2007 by the American Contact Dermatitis Society. An academic study in the United States published in 2016 showed that "34.7% of the population reported health problems, such as migraine headaches and respiratory difficulties, when exposed to fragranced products". The composition of fragrances is usually not disclosed on the label of the products, hiding the actual chemicals of the formula, which raises concerns among some consumers. In the United States, this is because the law regulating cosmetics protects trade secrets. In the United States, fragrances are regulated by the Food and Drug Administration if present in cosmetics or drugs, and by the Consumer Product Safety Commission if present in consumer products. No pre-market approval is required, except for drugs. Fragrances are also generally regulated by the Toxic Substances Control Act of 1976, which "grandfathered" existing chemicals without further review or testing and put the burden of proving that a new substance is not safe on the EPA. The EPA, however, does not conduct independent safety testing but relies on data provided by the manufacturer. A 2019 study of the top-selling skin moisturizers found that 45% of those marketed as "fragrance-free" contained fragrance. List of chemicals used as fragrances In 2010, the International Fragrance Association published a list of 3,059 chemicals used in 2011, based on a voluntary survey of its members, identifying about 90% of the world's production volume of fragrances.
Physical sciences
Substance
Chemistry
614763
https://en.wikipedia.org/wiki/Stark%20effect
Stark effect
The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. It is the electric-field analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of a magnetic field. Although the term was initially coined for the static case, it is also used in the wider context to describe the effect of time-dependent electric fields. In particular, the Stark effect is responsible for the pressure broadening (Stark broadening) of spectral lines by charged particles in plasmas. For most spectral lines, the Stark effect is either linear (proportional to the applied electric field) or quadratic, to high accuracy. The Stark effect can be observed both for emission and absorption lines. The latter is sometimes called the inverse Stark effect, but this term is no longer used in the modern literature. History The effect is named after the German physicist Johannes Stark, who discovered it in 1913. It was independently discovered in the same year by the Italian physicist Antonino Lo Surdo. The discovery of this effect contributed importantly to the development of quantum theory, and Stark was awarded the Nobel Prize in Physics in 1919. Inspired by the magnetic Zeeman effect, and especially by Hendrik Lorentz's explanation of it, Woldemar Voigt performed classical mechanical calculations of quasi-elastically bound electrons in an electric field. Using experimental indices of refraction, he gave an estimate of the Stark splittings. This estimate was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings. By the use of the Bohr–Sommerfeld ("old") quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear and quadratic Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for the intensities of spectral transitions. Kramers also included the effect of fine structure, with corrections for relativistic kinetic energy and coupling between electron spin and orbital motion. The first quantum mechanical treatment (in the framework of Werner Heisenberg's matrix mechanics) was by Wolfgang Pauli. Erwin Schrödinger discussed the Stark effect at length in his third paper on quantum theory (in which he introduced his perturbation theory), once in the manner of the 1916 work of Epstein (but generalized from the old to the new quantum theory) and once by his (first-order) perturbation approach. Finally, Epstein reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory. He derived equations for the line intensities which were a decided improvement over Kramers's results obtained by the old quantum theory. While the first-order-perturbation (linear) Stark effect in hydrogen is in agreement with both the old Bohr–Sommerfeld model and the quantum-mechanical theory of the atom, higher-order corrections are not. Measurements of the Stark effect under high field strengths confirmed the correctness of the new quantum theory. Mechanism Overview Imagine an atom with occupied 2s and 2p electron states. In the Bohr model, these states are degenerate. However, in the presence of an external electric field, these electron orbitals will hybridize into eigenstates of the perturbed Hamiltonian (where each perturbed hybrid state can be written as a superposition of unperturbed states).
Since the 2s and 2p states have opposite parity, these hybrid states will lack inversion symmetry and will possess a time-averaged electric dipole moment. If this dipole moment is aligned with the electric field, the energy of the state will shift down; if it is anti-aligned with the electric field, the energy of the state will shift up. Thus, the Stark effect causes a splitting of the original degeneracy. Other things being equal, the effect of the electric field is greater for outer electron shells, because the electron is more distant from the nucleus, resulting in a larger electric dipole moment upon hybridization. Multipole expansion The Stark effect originates from the interaction between a charge distribution (atom or molecule) and an external electric field. The interaction energy of a continuous charge distribution $\rho(\mathbf{r})$, confined within a finite volume $\mathcal{V}$, with an external electrostatic potential $\phi(\mathbf{r})$ is $$E_{\mathrm{int}} = \int_{\mathcal{V}} \rho(\mathbf{r})\,\phi(\mathbf{r})\,d^3\mathbf{r}.$$ This expression is valid classically and quantum-mechanically alike. If the potential varies weakly over the charge distribution, the multipole expansion converges fast, so only the first few terms give an accurate approximation. Namely, keeping only the zero- and first-order terms, $$\phi(\mathbf{r}) \approx \phi(\mathbf{0}) - \mathbf{r}\cdot\mathbf{F},$$ where we introduced the electric field $\mathbf{F} = -\nabla\phi\big|_{\mathbf{0}}$ and assumed the origin 0 to be somewhere within $\mathcal{V}$. Therefore, the interaction becomes $$E_{\mathrm{int}} = q\,\phi(\mathbf{0}) - \boldsymbol{\mu}\cdot\mathbf{F},$$ where $q$ and $\boldsymbol{\mu} = \int_{\mathcal{V}} \mathbf{r}\,\rho(\mathbf{r})\,d^3\mathbf{r}$ are, respectively, the total charge (zeroth moment) and the dipole moment of the charge distribution. Classical macroscopic objects are usually neutral or quasi-neutral ($q \approx 0$), so the first, monopole, term in the expression above is identically zero. This is also the case for a neutral atom or molecule. However, for an ion this is no longer true. Nevertheless, it is often justified to omit it in this case, too. Indeed, the Stark effect is observed in spectral lines, which are emitted when an electron "jumps" between two bound states. Since such a transition only alters the internal degrees of freedom of the radiator but not its charge, the effects of the monopole interaction on the initial and final states exactly cancel each other. Perturbation theory Turning now to quantum mechanics, an atom or a molecule can be thought of as a collection of point charges (electrons and nuclei), so that the second definition of the dipole applies. The interaction of the atom or molecule with a uniform external field is described by the operator $$V_{\mathrm{int}} = -\hat{\boldsymbol{\mu}}\cdot\mathbf{F}.$$ This operator is used as a perturbation in first- and second-order perturbation theory to account for the first- and second-order Stark effect. First order Let the unperturbed atom or molecule be in a g-fold degenerate state with orthonormal zeroth-order state functions $\psi_1^0, \ldots, \psi_g^0$. (Non-degeneracy is the special case g = 1.) According to perturbation theory, the first-order energies are the eigenvalues of the g × g matrix with general element $$(\mathbf{V}_{\mathrm{int}})_{kl} = \langle \psi_k^0 | V_{\mathrm{int}} | \psi_l^0 \rangle, \qquad k,l = 1,\ldots,g.$$ If g = 1 (as is often the case for electronic states of molecules) the first-order energy becomes proportional to the expectation (average) value of the dipole operator $\hat{\boldsymbol{\mu}}$, $$E^{(1)} = -\mathbf{F}\cdot\langle \psi^0 | \hat{\boldsymbol{\mu}} | \psi^0 \rangle.$$ Since the electric dipole moment is a vector (tensor of the first rank), the diagonal elements of the perturbation matrix Vint vanish between states that have a definite parity. Atoms and molecules possessing inversion symmetry do not have a (permanent) dipole moment and hence do not show a linear Stark effect.
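The vanishing of these diagonal dipole matrix elements follows in one line; writing $\hat{P}$ for the inversion (parity) operator, with $\hat{P}\hat{\boldsymbol{\mu}}\hat{P}^{\dagger} = -\hat{\boldsymbol{\mu}}$ and $\hat{P}\psi = \pm\psi$ for a state of definite parity (a standard argument, spelled out here for completeness):

```latex
\langle \psi | \hat{\boldsymbol{\mu}} | \psi \rangle
= \langle \hat{P}\psi | \hat{P}\hat{\boldsymbol{\mu}}\hat{P}^{\dagger} | \hat{P}\psi \rangle
= (\pm 1)^2 \,\langle \psi | (-\hat{\boldsymbol{\mu}}) | \psi \rangle
= -\langle \psi | \hat{\boldsymbol{\mu}} | \psi \rangle
\quad\Longrightarrow\quad
\langle \psi | \hat{\boldsymbol{\mu}} | \psi \rangle = 0 .
```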
In order to obtain a non-zero matrix Vint for systems with an inversion center it is necessary that some of the unperturbed functions have opposite parity (obtain plus and minus under inversion), because only functions of opposite parity give non-vanishing matrix elements. Degenerate zeroth-order states of opposite parity occur for excited hydrogen-like (one-electron) atoms and for Rydberg states. Neglecting fine-structure effects, such a state with the principal quantum number n is n²-fold degenerate and contains states with azimuthal (angular momentum) quantum number ℓ = 0, 1, ..., n − 1. For instance, the excited n = 4 state contains the states 4s, 4p, 4d, and 4f. The one-electron states with even ℓ are even under parity, while those with odd ℓ are odd under parity. Hence hydrogen-like atoms with n > 1 show a first-order Stark effect. The first-order Stark effect occurs in rotational transitions of symmetric top molecules (but not for linear and asymmetric molecules). In a first approximation a molecule may be seen as a rigid rotor. A symmetric top rigid rotor has unperturbed eigenstates with 2(2J+1)-fold degenerate energy for |K| > 0 and (2J+1)-fold degenerate energy for K = 0. Here DJMK is an element of the Wigner D-matrix. The first-order perturbation matrix on the basis of the unperturbed rigid rotor functions is non-zero and can be diagonalized. This gives shifts and splittings in the rotational spectrum. Quantitative analysis of these Stark shifts yields the permanent electric dipole moment of the symmetric top molecule. Second order As stated, the quadratic Stark effect is described by second-order perturbation theory. The zeroth-order eigenproblem is assumed to be solved. The perturbation theory gives $$E^{(2)} = -\frac{1}{2}\sum_{i,j}\alpha_{ij} F_i F_j,$$ with the components of the polarizability tensor α defined by $$\alpha_{ij} = 2\sum_{k \neq 0}\frac{\langle 0|\mu_i|k\rangle\langle k|\mu_j|0\rangle}{E_k - E_0}.$$ The energy E(2) gives the quadratic Stark effect. Neglecting the hyperfine structure (which is often justified, unless extremely weak electric fields are considered), the polarizability tensor of atoms is isotropic, $\alpha_{ij} = \alpha_0\,\delta_{ij}$, so that $E^{(2)} = -\frac{1}{2}\alpha_0 F^2$. For some molecules this expression is a reasonable approximation, too. For the ground state, $\alpha_0$ is always positive, i.e., the quadratic Stark shift is always negative. Problems The perturbative treatment of the Stark effect has some problems. In the presence of an electric field, states of atoms and molecules that were previously bound (square-integrable) become, formally, (non-square-integrable) resonances of finite width. These resonances may decay in finite time via field ionization. For low-lying states and not too strong fields the decay times are so long, however, that for all practical purposes the system can be regarded as bound. For highly excited states and/or very strong fields, ionization may have to be accounted for.
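For a sense of scale, the two regimes can be evaluated numerically for hydrogen. The inputs below (the n = 2 linear splitting of ±3ea₀F and the ground-state polarizability α₀ = 4.5 atomic units) are standard textbook results assumed for illustration, and the field strength is arbitrary:

```python
e  = 1.602176634e-19   # elementary charge, C
a0 = 5.29177211e-11    # Bohr radius, m
Eh = 4.35974472e-18    # Hartree energy, J
F  = 1.0e8             # V/m, illustrative laboratory-scale field

# Linear Stark effect: the outer n = 2 components shift by +-3*e*a0*F.
linear_J = 3 * e * a0 * F

# Quadratic Stark effect: ground-state shift -(1/2)*alpha0*F^2,
# with alpha0 = 4.5 atomic units converted to SI (e^2 * a0^2 / Eh).
alpha0_SI = 4.5 * e**2 * a0**2 / Eh
quadratic_J = 0.5 * alpha0_SI * F**2

print(f"linear n=2 splitting: {linear_J / e * 1e3:.1f} meV")    # ~16 meV
print(f"quadratic 1s shift:   {quadratic_J / e * 1e6:.1f} ueV") # ~2 ueV, far smaller
```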
Physical sciences
Atomic physics
Physics
615222
https://en.wikipedia.org/wiki/Multivariable%20calculus
Multivariable calculus
Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one. Multivariable calculus may be thought of as an elementary part of calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus. Introduction In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariate calculus, these operations must be generalized to multiple variables, and the domain is therefore multi-dimensional. Care is required in these generalizations, because of two key differences between 1D and higher-dimensional spaces: There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and negative directions) in 1D; there are multiple extended objects associated with the dimension; for example, a 1D function must be represented as a curve on the 2D Cartesian plane, but a function with two variables is a surface in 3D, while curves can also live in 3D space. The consequence of the first difference is the difference in the definition of the limit and differentiation. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators. The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined. Limits A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions. A limit along a path may be defined by considering a parametrised path $s(t): \mathbb{R} \to \mathbb{R}^n$ in n-dimensional Euclidean space. Any function $f(\mathbf{x}): \mathbb{R}^n \to \mathbb{R}^m$ can then be projected on the path as a 1D function $f(s(t))$. The limit of $f$ to the point $s(t_0)$ along the path $s(t)$ can hence be defined as $\lim_{\mathbf{x} \to s(t_0)} f(\mathbf{x}) = \lim_{t \to t_0} f(s(t))$. Note that the value of this limit can be dependent on the form of $s(t)$, i.e. the path chosen, not just the point which the limit approaches. For example, consider the function $f(x, y) = \frac{x^2 y}{x^4 + y^2}$. If the point $(0, 0)$ is approached through the line $y = kx$, or in parametric form $s(t) = (t, kt)$, then the limit along the path will be $\lim_{t \to 0} f(t, kt) = \lim_{t \to 0} \frac{kt}{t^2 + k^2} = 0$. On the other hand, if the path $y = x^2$ (or parametrically, $s(t) = (t, t^2)$) is chosen, then the limit becomes $\lim_{t \to 0} f(t, t^2) = \frac{1}{2}$. Since taking different paths towards the same point yields different values, a general limit at the point cannot be defined for the function. A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function $f$ that the limit of $f$ to some point $\mathbf{x}_0$ is L, if and only if $\lim_{t \to t_0} f(s(t)) = L$ for all continuous functions $s(t)$ such that $s(t_0) = \mathbf{x}_0$. Continuity From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function $f$ that $f$ is continuous at the point $\mathbf{x}_0$, if and only if $\lim_{t \to t_0} f(s(t)) = f(\mathbf{x}_0)$ for all continuous functions $s(t)$ such that $s(t_0) = \mathbf{x}_0$. As with limits, being continuous along one path does not imply multivariate continuity. Continuity in each argument not being sufficient for multivariate continuity can also be seen from the following example.
For example, for a real-valued function with two real-valued parameters, , continuity of in for fixed and continuity of in for fixed does not imply continuity of . Consider It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle . Furthermore, the functions defined for constant and and by and are continuous. Specifically, for all and . Therefore, and moreover, along the coordinate axes, and . Therefore, the function is continuous in each individual argument. However, consider the parametric path . The parametric function becomes Therefore, It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates. Theorems regarding multivariate limits and continuity All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus. Composition: If $f$ and $g$ are multivariate continuous functions at the points $\mathbf{x}_0$ and $f(\mathbf{x}_0)$ respectively, then $g \circ f$ is also a multivariate continuous function at the point $\mathbf{x}_0$. Multiplication: If $f$ and $g$ are both continuous functions at the point $\mathbf{x}_0$, then $fg$ is continuous at $\mathbf{x}_0$, and $f/g$ is also continuous at $\mathbf{x}_0$ provided that $g(\mathbf{x}_0) \neq 0$. If $f$ is a continuous function at point $\mathbf{x}_0$, then $|f|$ is also continuous at the same point. If $f$ is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point $\mathbf{x}_0$, then $f$ is multivariate continuous at $\mathbf{x}_0$. From the Lipschitz continuity condition for $f$ we have $|f(\mathbf{x}) - f(\mathbf{x}_0)| \leq K \|\mathbf{x} - \mathbf{x}_0\|$, where $K$ is the Lipschitz constant. Note also that, as $s(t)$ is continuous at $t_0$, for every $\delta > 0$ there exists a $\zeta > 0$ such that $|t - t_0| < \zeta$ implies $\|s(t) - \mathbf{x}_0\| < \delta$. Hence, for every $\varepsilon > 0$, choose $\delta = \varepsilon / K$; there exists a $\zeta > 0$ such that for all $t$ satisfying $|t - t_0| < \zeta$, $\|s(t) - \mathbf{x}_0\| < \delta$, and $|f(s(t)) - f(\mathbf{x}_0)| \leq K \|s(t) - \mathbf{x}_0\| < \varepsilon$. Hence $f(s(t))$ converges to $f(\mathbf{x}_0)$ regardless of the precise form of $s(t)$. Differentiation Directional derivative The derivative of a single-variable function is defined as $\frac{df}{dx} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$. Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function $f$ along some path $s(t)$: $\frac{d(f \circ s)}{dt}(t_0) = \lim_{t \to t_0} \frac{f(s(t)) - f(s(t_0))}{t - t_0}$. Unlike limits, for which the value depends on the exact form of the path $s(t)$, it can be shown that the derivative along the path depends only on the tangent vector of the path at $s(t_0)$, i.e. $s'(t_0)$, provided that $f$ is Lipschitz continuous at $s(t_0)$, and that the limit exists for at least one such path. For continuous up to the first derivative (this statement is well defined as is a function of one variable), we can write the Taylor expansion of around using Taylor's theorem to construct the remainder: where . Substituting this into , where . Lipschitz continuity gives us for some finite , . It follows that . Note also that given the continuity of , as . Substituting these two conditions into , whose limit depends only on as the dominant term. It is therefore possible to state the definition of the directional derivative as follows: The directional derivative of a scalar-valued function $f(\mathbf{x})$ along the unit vector $\mathbf{n}$ at some point $\mathbf{x}_0$ is $\nabla_{\mathbf{n}} f(\mathbf{x}_0) = \lim_{h \to 0} \frac{f(\mathbf{x}_0 + h\mathbf{n}) - f(\mathbf{x}_0)}{h}$, or, when expressed in terms of ordinary differentiation, $\nabla_{\mathbf{n}} f(\mathbf{x}_0) = \left.\frac{d}{dh} f(\mathbf{x}_0 + h\mathbf{n})\right|_{h=0}$, which is a well-defined expression because $f(\mathbf{x}_0 + h\mathbf{n})$ is a scalar function with one variable in $h$. It is not possible to define a unique scalar derivative without a direction; for example, $\nabla_{\mathbf{n}} f(\mathbf{x}_0) = -\nabla_{-\mathbf{n}} f(\mathbf{x}_0)$. It is also possible for directional derivatives to exist for some directions but not for others. Partial derivative The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.
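The directional and partial derivatives just defined can be evaluated numerically by exactly the reduction to one variable described above. A minimal sketch (an illustration, not from the original article; the test function and step size are arbitrary choices):

import numpy as np

# Directional derivative via ordinary one-variable differentiation:
# restrict f to the line x0 + h*n and differentiate at h = 0
# (central difference). Partial derivatives are the special cases
# where n is a coordinate unit vector.
def directional(f, x0, n, h=1e-6):
    n = np.asarray(n, float) / np.linalg.norm(n)          # unit vector
    return (f(x0 + h * n) - f(x0 - h * n)) / (2 * h)

f = lambda v: v[0]**2 * v[1] + np.sin(v[1])
x0 = np.array([1.0, 2.0])
print(directional(f, x0, [1, 0]))   # partial in x: 2xy = 4.0
print(directional(f, x0, [0, 1]))   # partial in y: x^2 + cos y = 0.5838...
print(directional(f, x0, [1, 1]))   # along the diagonal unit vector

Because this f is differentiable, the last value equals the dot product of the gradient with the diagonal unit vector, illustrating that the derivative depends only on the direction of approach, not on the particular path.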
A partial derivative may be thought of as the directional derivative of the function along a coordinate axis. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($\nabla$) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable. Multiple integration The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. Fundamental theorem of calculus in multiple dimensions In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus: Gradient theorem Stokes' theorem Divergence theorem Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds. Applications and uses Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular, multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics. Multivariate calculus is used in the optimal control of continuous time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data. Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus. Non-deterministic, or stochastic, systems can be studied using a different kind of mathematics, such as stochastic calculus.
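Fubini's theorem, mentioned above, is straightforward to demonstrate numerically. A minimal sketch (illustrative; the integrand, domain, and grid size are arbitrary choices) evaluating a double integral as an iterated integral in both orders:

import numpy as np

# Fubini's theorem in practice: for a continuous integrand, the double
# integral over [0,1] x [0,2] can be evaluated as an iterated integral
# in either order. Midpoint rule; f(x, y) = x * y^2 has exact value 4/3.
f = lambda x, y: x * y**2
nx, ny = 400, 400
x = (np.arange(nx) + 0.5) / nx * 1.0        # midpoints on [0, 1]
y = (np.arange(ny) + 0.5) / ny * 2.0        # midpoints on [0, 2]
dx, dy = 1.0 / nx, 2.0 / ny

X, Y = np.meshgrid(x, y, indexing="ij")
inner_y_first = f(X, Y).sum(axis=1) * dy    # integrate over y for each x
inner_x_first = f(X, Y).sum(axis=0) * dx    # integrate over x for each y
print((inner_y_first * dx).sum())           # ~ 1.3333
print((inner_x_first * dy).sum())           # ~ 1.3333, same value

Both orders of iteration converge to the same value, 4/3, as the theorem guarantees for a continuous integrand.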
Mathematics
Calculus and analysis
null
615626
https://en.wikipedia.org/wiki/Xanthorrhoea
Xanthorrhoea
Xanthorrhoea () is a genus of about 30 species of succulent flowering plants in the family Asphodelaceae. They are endemic to Australia. Common names for the plants include grasstree, grass gum-tree (for resin-yielding species), kangaroo tail, balga (Western Australia), yakka (South Australia), yamina (Tasmania), and black boy (or "blackboy"). The most common species is Xanthorrhoea australis, and some of these names are applied specifically to this species. Description All species in the genus are perennials and have a secondary thickening meristem in the stem. Many, but not all, species develop an above-ground stem. The stem may take up to twenty years to emerge. Plants begin as a crown of rigid grass-like leaves, the caudex slowly growing beneath. The main stem or branches continue to develop beneath the crown. The stem is rough-surfaced, built from accumulated leaf-bases around the secondarily thickened trunk. The trunk is sometimes unbranched; some species will branch if the growing point is damaged, and others naturally grow numerous branches. Flowers are borne on a long spike above a bare section called a scape; the total length can reach three to four metres in some species. Flowering occurs in a distinct period, which varies for each species, and is often stimulated by bushfire. Fires will burn the leaves and blacken the trunk, but the plant survives as the dead leaves around the stem serve as insulation against the heat of a wildfire. Many Xanthorrhoeas bloom for the first time when they are one hundred or more years of age. The rate of growth of Xanthorrhoea is slow. Some species grow slowly ( in height per year), but increase their rate of growth in response to season and rainfall. After the initial establishment phase, the rate of growth varies widely from species to species. Thus, while a member of the fastest-growing Xanthorrhoea species may be 200 years old, an equally tall member of a more slowly growing species may be 600 years old. Systematics Taxonomy Xanthorrhoea is part of the family Asphodelaceae, containing related genera such as Aloe, Gasteria, Haworthia and Hemerocallis (to name a few), but is placed within its own monotypic subfamily, the Xanthorrhoeoideae. The Xanthorrhoeoideae are monocots, part of the larger order Asparagales. In a reference to its yellow resin, Xanthorrhoea literally means "yellow-flow" in Ancient Greek. Smith named it, in 1798, from xanthos ('yellow, golden') and rhoe ('flowing, flow'). The invalid Acoroides ('Acorus-like') was a temporary designation in Solander's manuscript from his voyage with Cook, originally not meant for publication. Kingia and Dasypogon are unrelated Australian plants with a similar growth habit to Xanthorrhoea. Both genera have, at times, been confused with xanthorrhoeas and misnamed as "grasstrees". Some plant classification systems, such as Cronquist, have included a wide range of other genera in the same family as Xanthorrhoea. However, later anatomical and phylogenetic research supported the views of Dahlgren, who regarded Xanthorrhoea as the sole taxon of the family Xanthorrhoeaceae sensu stricto, which is now treated as a subfamily, Xanthorrhoeoideae. Names Common names for Xanthorrhoea include grasstree, grass gum-tree (for its resin-yielding species), and kangaroo tail. The name grasstree is applied to many other plants. They are also known as balga grass plants, a name which derives from the word balga in the Noongar language of the south-west of Western Australia, particularly for X. preissii.
Its meaning is "black boy" or "blackboy", a name which was applied to the plant for many years. Some thought that Aboriginal peoples used the name balga because the trunk blackened after a bushfire resembles a child-like black figure. The name is now seen as racist, and Xanthorrhoea are more commonly known as grass tree. However a 2015 report written by Aboriginal Tasmanian authors, who refer to the plant as yamina, says "yamina forest on lungtalanana is important to the community. yamina are also commonly known as black boys. They are called this because the plant has a thick black trunk". In South Australia, Xanthorrhoea is commonly known as yakka, also spelled yacca and yacka, a name probably from the Kaurna language (Yakko, or alternatively Kurru). The Ngarrindjeri name is Bukkup. Some of the above names are applied specifically to Xanthorrhoea australis, the most common species. Diversity and distribution The genus is endemic to Australia, occurring in all national states and territories. Some species have a restricted range, others are widely distributed. According to the World Checklist of Selected Plant Families, the following species are accepted: Habitat Grasstrees grow in coastal heaths, and wet and dry forests of Australia. They are drought and frost tolerant. The grass tree mainly occurs in soils that are very free draining and consequently low in nutrients. It survives in the poorest soils, with a shallow root system, enabling it easily access nutrients from decaying litter, while storing all the food reserves in its stem. Ecology The grass tree has developed adaptations that help it better suit the environment where it occurs. If a fire breaks out, the grass tree has a special physiological adaptation called thermal insulation that helps protect the plant. The grass tree holds its thick, dead leaves around its stem which serves as insulation, and helps to protect the plant against the heat of the fire. They need fire to clear away dead leaves and promote flowering, as these slow-growing trees were among the first flowering plants to evolve. Grass trees have developed a structural adaptation which helps the grass tree take advantage of soil fertilized with ash after fire, producing a flowering stalk in the aftermath. The grass tree forms a mycorrhizal relationship with fungi deep in its root system, wherein fungi live in a mutually beneficial relationship with the grass tree roots. The fungus increases the tree root's access to water and nutrients and therefore increases tree growth especially in poor conditions. The grass tree also suffers from a condition known as phytophthora dieback. Phytophthora cinnamomi is a discrete soil borne pathogen that attacks and destroys vascular root systems, causing hosts to perish through lack of nutrients and water. It is spread through infected plants and the movement of contaminated soil and gravel. The leaves of the grass tree are hosts to another fungi, Pseudodactylaria xanthorrhoeae. Cultivation Xanthorrhoea may be cultivated, as seed is easily collected and germinated. While they do grow slowly, quite attractive plants with short trunks () and leaf crowns up to (to the top of the leaves) can be achieved in 10 years. The slow growth rate means that it can take 30 years to achieve a specimen with a significant trunk. Most Xanthorrhoea sold in nurseries are established plants taken from bushland. Nurseries charge high prices for the plants. 
However, nursery-purchased plants have a very low survival rate (mainly due to overwatering) and may take several years to die. The most successful examples of transplanting have been where a substantial amount of soil, greater than , has been taken with the plants. The genus Xanthorrhoea, more commonly known as the grass tree, is an iconic plant that epitomizes the Australian bush in its ability to live in nutrient-poor soils and respond to wildfire. Commonly grown species for the garden include Xanthorrhoea australis, X. malacophylla, and X. preissii. Uses Xanthorrhoea is important to Aboriginal peoples. It is a highly valued resource with many uses. The flowering spike may be utilised as the lightweight handle of a composite spear, with a sharp hardwood shaft inserted into the end. The spike may also be soaked in water, with the nectar from the flowers giving a sweet-tasting drink. In the bush the flowers could reveal directions, since flowers on the warmer, sunnier side – usually north – of the spike often open before the flowers on the cooler side facing away from the sun. The resin from Xanthorrhoea plants is used in spear-making and is an invaluable adhesive for Aboriginal people, often used to patch up leaky coolamons (water containers) and even yidaki (didgeridoos). The dried flower-stalk (the scape) was also used to generate fire by the hand-drill friction method. On the Tasmanian island of lungtalanana, Aboriginal people use the leaves for weaving. Resin collected from the plant was used in Australia until the mid-twentieth century for the following purposes: Burnt as an incense in churches A base component for a varnish used on furniture and in dwellings A polish and a coating used on metal surfaces including stoves, tin cans used for storing meat and brass instruments A component used in industrial processes, such as sizing paper, as well as making soap, perfumes, and early gramophone records
Biology and health sciences
Asparagales
Plants
616293
https://en.wikipedia.org/wiki/Soft%20matter
Soft matter
Soft matter or soft condensed matter is a type of matter that can be deformed or structurally altered by thermal or mechanical stress which is of similar magnitude to thermal fluctuations. The science of soft matter is a subfield of condensed matter physics. Soft materials include liquids, colloids, polymers, foams, gels, granular materials, liquid crystals, flesh, and a number of biomaterials. These materials share an important common feature in that predominant physical behaviors occur at an energy scale comparable with room temperature thermal energy (of order of kT), and that entropy is considered the dominant factor. At these temperatures, quantum aspects are generally unimportant. When soft materials interact favorably with surfaces, they become squashed without an external compressive force. Pierre-Gilles de Gennes, who has been called the "founding father of soft matter," received the Nobel Prize in Physics in 1991 for discovering that methods developed for studying order phenomena in simple systems can be generalized to the more complex cases found in soft matter, in particular, to the behaviors of liquid crystals and polymers. History The current understanding of soft matter grew from Albert Einstein's work on Brownian motion, which established that a particle suspended in a fluid must have a similar thermal energy to the fluid itself (of order of kT). This work built on established research into systems that would now be considered colloids. The crystalline optical properties of liquid crystals and their ability to flow were first described by Friedrich Reinitzer in 1888, and further characterized by Otto Lehmann in 1889. The experimental setup that Lehmann used to investigate the two melting points of cholesteryl benzoate is still used in liquid crystal research as of 2019. In 1920, Hermann Staudinger, recipient of the 1953 Nobel Prize in Chemistry, was the first person to suggest that polymers are formed through covalent bonds that link smaller molecules together. The idea of a macromolecule was unheard of at the time, with the scientific consensus being that the recorded high molecular weights of compounds like natural rubber were instead due to particle aggregation. The use of hydrogel in the biomedical field was pioneered in 1960 by Drahoslav Lím and Otto Wichterle. Together, they postulated that the chemical stability, ease of deformation, and permeability of certain polymer networks in aqueous environments would have a significant impact on medicine, and were the inventors of the soft contact lens. These seemingly separate fields were dramatically influenced and brought together by Pierre-Gilles de Gennes. The work of de Gennes across different forms of soft matter was key to understanding its universality: material properties derive not from the chemistry of the underlying structure but from the mesoscopic structures that chemistry creates. He extended the understanding of phase changes in liquid crystals, introduced the idea of reptation regarding the relaxation of polymer systems, and successfully mapped polymer behavior to that of the Ising model. Distinctive physics Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents. Materials termed soft matter exhibit this property due to a shared propensity to self-organize into mesoscopic physical structures.
The assembly of the mesoscale structures that form the macroscale material is governed by low energies, and these low-energy associations allow for the thermal and mechanical deformation of the material. By way of contrast, in hard condensed matter physics it is often possible to predict the overall behavior of a material because the molecules are organized into a crystalline lattice with no changes in the pattern at any mesoscopic scale. Unlike hard materials, where only small distortions occur from thermal or mechanical agitation, soft matter can undergo local rearrangements of the microscopic building blocks. A defining characteristic of soft matter is the mesoscopic scale of physical structures. The structures are much larger than the microscopic scale (the arrangement of atoms and molecules), and yet are much smaller than the macroscopic (overall) scale of the material. The properties and interactions of these mesoscopic structures may determine the macroscopic behavior of the material. The large number of constituents forming these mesoscopic structures, and the many degrees of freedom this causes, result in a general disorder between the large-scale structures. This disorder leads to the loss of long-range order that is characteristic of hard matter. For example, the turbulent vortices that naturally occur within a flowing liquid are much smaller than the overall quantity of liquid and yet much larger than its individual molecules, and the emergence of these vortices controls the overall flowing behavior of the material. Also, the bubbles that compose a foam are mesoscopic because they individually consist of a vast number of molecules, and yet the foam itself consists of a great number of these bubbles, and the overall mechanical stiffness of the foam emerges from the combined interactions of the bubbles. Typical bond energies in soft matter structures are of similar scale to thermal energies (a worked comparison of these scales is sketched at the end of this article). Therefore, the structures are constantly affected by thermal fluctuations and undergo Brownian motion. The ease of deformation and influence of low-energy interactions regularly result in slow dynamics of the mesoscopic structures, which allows some systems to remain out of equilibrium in metastable states. This characteristic can allow for recovery of the initial state through an external stimulus, which is often exploited in research. Self-assembly is an inherent characteristic of soft matter systems. The characteristic complex behavior and hierarchical structures arise spontaneously as a system evolves towards equilibrium. Self-assembly can be classified as static when the resulting structure is due to a free energy minimum, or dynamic when the system is caught in a metastable state. Dynamic self-assembly can be utilized in the functional design of soft materials with these metastable states through kinetic trapping. Soft materials often exhibit both elastic and viscous responses to external stimuli such as shear-induced flow or phase transitions. However, excessive external stimuli often result in nonlinear responses. Soft matter becomes highly deformed before crack propagation, which differs significantly from the general fracture mechanics formulation. Rheology, the study of deformation under stress, is often used to investigate the bulk properties of soft matter. Classes of soft matter Soft matter consists of a diverse range of interrelated systems and can be broadly categorized into certain classes.
These classes are by no means distinct, as often there are overlaps between two or more groups. Polymers Polymers are large molecules composed of repeating subunits whose characteristics are governed by their environment and composition. Polymers encompass synthetic plastics, natural fibers and rubbers, and biological proteins. Polymer research finds applications in nanotechnology, from materials science and drug delivery to protein crystallization. Foams Foams consist of a liquid or solid through which a gas has been dispersed to form cavities. This structure imparts a large surface-area-to-volume ratio to the system. Foams have found applications in insulation and textiles, and are undergoing active research in the biomedical field of drug delivery and tissue engineering. Foams are also used in the automotive industry for water and dust sealing and noise reduction. Gels Gels consist of non-solvent-soluble 3D polymer scaffolds, which are covalently or physically cross-linked, that have a high solvent content. Research into functionalizing gels that are sensitive to mechanical and thermal stress, as well as solvent choice, has given rise to diverse structures with characteristics such as shape-memory, or the ability to bind guest molecules selectively and reversibly. Colloids Colloids are insoluble particles suspended in a medium, such as proteins in an aqueous solution. Research into colloids is primarily focused on understanding the organization of matter, since colloidal structures are large enough, relative to individual molecules, to be readily observed. Liquid crystals Liquid crystals can consist of proteins, small molecules, or polymers that can be manipulated to form cohesive order in a specific direction. They exhibit liquid-like behavior in that they can flow, yet they can attain close-to-crystal alignment. One feature of liquid crystals is their ability to spontaneously break symmetry. Liquid crystals have found significant applications in optical devices such as liquid-crystal displays (LCD). Biological membranes Biological membranes consist of individual phospholipid molecules that have self-assembled into a bilayer structure due to non-covalent interactions. The localized, low energy associated with the forming of the membrane allows for the elastic deformation of the large-scale structure. Experimental characterization Due to the importance of mesoscale structures in the overarching properties of soft matter, experimental work is primarily focused on the bulk properties of the materials. Rheology is often used to investigate the physical changes of the material under stress. Biological systems, such as protein crystallization, are often investigated through X-ray and neutron crystallography, while nuclear magnetic resonance spectroscopy can be used in understanding the average structure and lipid mobility of membranes. Scattering Scattering techniques, such as wide-angle X-ray scattering, small-angle X-ray scattering, neutron scattering, and dynamic light scattering can also be used for materials when probing for the average properties of the constituents. These methods can determine particle-size distribution, shape, crystallinity and diffusion of the constituents in the system. There are limitations in the application of scattering techniques to some systems, as these methods are better suited to isotropic and dilute samples.
Computational Computational methods are often employed to model and understand soft matter systems, as they have the ability to strictly control the composition and environment of the structures being investigated, as well as span from microscopic to macroscopic length scales. Computational methods are limited, however, by their suitability to the system and must be regularly validated against experimental results to ensure accuracy. The use of informatics in the prediction of soft matter properties is also a growing field in computer science thanks to the large amount of data available for soft matter systems. Microscopy Optical microscopy can be used in the study of colloidal systems, but more advanced methods like transmission electron microscopy (TEM) and atomic force microscopy (AFM) are often used to characterize forms of soft matter due to their applicability to mapping systems at the nanoscale. These imaging techniques are not universally appropriate to all classes of soft matter and some systems may be more suited to one kind of analysis than another. For example, there are limited applications in imaging hydrogels with TEM due to the processes required for imaging. However, fluorescence microscopy can be readily applied. Liquid crystals are often probed using polarized light microscopy to determine the ordering of the material under various conditions, such as temperature or electric field. Applications Soft materials are important in a wide range of technological applications, and each soft material can often be associated with multiple disciplines. Liquid crystals, for example, were originally discovered in the biological sciences when the botanist and chemist Friedrich Reinitzer was investigating cholesterols. Now, however, liquid crystals have also found applications as liquid-crystal displays, liquid crystal tunable filters, and liquid crystal thermometers. Active liquid crystals are another example of soft materials, where the constituent elements in liquid crystals can self-propel. Polymers have found diverse applications, from the natural rubber found in latex gloves to the vulcanized rubber found in tires. Polymers encompass a large range of soft matter, with applications in material science. An example of this is hydrogel. With the ability to undergo shear thinning, hydrogels are well suited for the development of 3D printing. Due to their stimuli responsive behavior, 3D printing of hydrogels has found applications in a diverse range of fields, such as soft robotics, tissue engineering, and flexible electronics. Polymers also encompass biological molecules such as proteins, where research insights from soft matter research have been applied to better understand topics like protein crystallization. Foams can naturally occur, such as the head on a beer, or be created intentionally, such as by fire extinguishers. The physical properties available to foams have resulted in applications which can be based on their viscosity, with more rigid and self-supporting forms of foams being used as insulation or cushions, and foams that exhibit the ability to flow being used in the cosmetic industry as shampoos or makeup. Foams have also found biomedical applications in tissue engineering as scaffolds and biosensors. Historically the problems considered in the early days of soft matter science were those pertaining to the biological sciences. 
As such, an important application of soft matter research is biophysics, with a major goal of the discipline being the reduction of the field of cell biology to the concepts of soft matter physics. Applications of soft matter characteristics are used to understand biologically relevant topics such as membrane mobility, as well as the rheology of blood.
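The room-temperature energy scale (kT) invoked throughout this article is easy to quantify. A back-of-envelope sketch (illustrative values, not from the original article; the bond energy used for comparison is a rough textbook figure):

from scipy.constants import k, eV    # Boltzmann constant (J/K), electron volt (J)

T = 298.0                            # room temperature, kelvin
kT = k * T
print(f"kT = {kT:.2e} J = {kT / eV * 1000:.1f} meV")   # ~4.1e-21 J, ~26 meV

# A typical covalent bond (~3-4 eV) sits two orders of magnitude above kT,
# so hard-matter lattices are untouched by thermal agitation, whereas the
# weak associations holding mesoscopic soft-matter structures together are
# themselves of order kT and are constantly remade by thermal fluctuations.
bond = 3.6 * eV                      # rough C-C bond energy, for comparison
print(f"bond / kT = {bond / kT:.0f}")   # ~140

This ratio of roughly two orders of magnitude is the quantitative content of the statement that soft-matter bond energies are of similar scale to thermal energies while hard-matter bonds are not.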
Physical sciences
Basics_2
Physics
616351
https://en.wikipedia.org/wiki/Molniya%20orbit
Molniya orbit
A Molniya orbit (, "Lightning") is a type of satellite orbit designed to provide communications and remote sensing coverage over high latitudes. It is a highly elliptical orbit with an inclination of 63.4 degrees, an argument of perigee of 270 degrees, and an orbital period of approximately half a sidereal day. The name comes from the Molniya satellites, a series of Soviet/Russian civilian and military communications satellites which have used this type of orbit since the mid-1960s. A variation on the Molniya orbit is the so-called Three Apogee (TAP) orbit, whose period is a third of a sidereal day. The Molniya orbit has a long dwell time over the hemisphere of interest, while moving very quickly over the other. In practice, this places it over either Russia or Canada for the majority of its orbit, providing a high angle of view to communications and monitoring satellites covering these high-latitude areas. Geostationary orbits, which are necessarily inclined over the equator, can only view these regions from a low angle, hampering performance. In practice, a satellite in a Molniya orbit serves the same purpose for high latitudes as a geostationary satellite does for equatorial regions, except that multiple satellites are required for continuous coverage. Satellites placed in Molniya orbits have been used for television broadcasting, telecommunications, military communications, relaying, weather monitoring, early warning systems and classified surveillance purposes. History The Molniya orbit was discovered by Soviet scientists in the 1960s as a high-latitude communications alternative to geostationary orbits, which require large launch energies to achieve a high perigee and to change inclination to orbit over the equator (especially when launched from Russian latitudes). As a result, OKB-1 sought a less energy-demanding orbit. Studies found that this could be achieved using a highly elliptical orbit with an apogee over Russian territory. The orbit's name refers to the "lightning" speed with which the satellite passes through the perigee. The first use of the Molniya orbit was by the communications satellite series of the same name. After two launch failures, and one satellite failure in 1964, the first successful satellite to use this orbit, Molniya 1-1, launched on 23 April 1965. The early Molniya-1 satellites were used for civilian television, telecommunication and long-range military communications, but they were also fitted with cameras used for weather monitoring, and possibly for assessing clear areas for Zenit spy satellites. The original Molniya satellites had a lifespan of approximately 1.5 years, as their orbits were disrupted by perturbations, and they had to be constantly replaced. The succeeding series, the Molniya-2, provided both military and civilian broadcasting and was used to create the Orbita television network, spanning the Soviet Union. These were in turn replaced by the Molniya-3 design. A satellite called Mayak was designed to supplement and replace the Molniya satellites in 1997, but the project was cancelled, and the Molniya-3 was replaced by the Meridian satellites, the first of which launched in 2006. The Soviet US-K early warning satellites, which watch for American rocket launches, were launched in Molniya orbits from 1967, as part of the Oko system. From 1971, the American Jumpseat and Trumpet military satellites were launched into Molniya orbits (and possibly used to intercept Soviet communications from the Molniya satellites). 
Detailed information about both projects remains classified. This was followed by the American SDS constellation, which operates with a mixture of Molniya and geostationary orbits. These satellites are used to relay signals from lower-flying satellites back to ground stations in the United States and have been active in some capacity since 1976. A Russian satellite constellation called Tyulpan was designed in 1994 to support communications at high latitudes, but it did not progress past the planning phase. In 2015 and 2017, Russia launched two Tundra satellites into a Molniya orbit, despite their name, as part of its EKS early warning system. Uses Much of the area of the former Soviet Union, and Russia in particular, is located at high northern latitudes. To broadcast to these latitudes from a geostationary orbit (above the Earth's equator) requires considerable power due to the low elevation angles, and the extra distance and atmospheric attenuation that comes with it. Sites located above 81° latitude are unable to view geostationary satellites at all, and as a rule of thumb, elevation angles of less than 10° can cause problems, depending on the communications frequency. A satellite in a Molniya orbit is better suited to communications in these regions, because it looks more directly down on them during large portions of its orbit. With an apogee altitude as high as about 40,000 km and an apogee sub-satellite point of 63.4 degrees north, it spends a considerable portion of its orbit with excellent visibility in the northern hemisphere, from Russia as well as from northern Europe, Greenland and Canada. While satellites in Molniya orbits require considerably less launch energy than those in geostationary orbits (especially launching from high latitudes), their ground stations need steerable antennas to track the spacecraft, links must be switched between satellites in a constellation, and range changes cause variations in signal amplitude. Additionally, there is a greater need for station-keeping, and the spacecraft will pass through the Van Allen radiation belt four times per day. Southern hemisphere proposals Similar orbits with an argument of perigee of 90° could allow high-latitude coverage in the southern hemisphere. A proposed constellation, the Antarctic Broadband Program, would have used satellites in an inverted Molniya orbit to provide broadband internet service to facilities in Antarctica. Initially funded by the now-defunct Australian Space Research Programme, it did not progress beyond initial development. Molniya constellations Permanent high-latitude coverage of a large area of Earth (like the whole of Russia, where the southern parts are about 45°N) requires a constellation of at least three spacecraft in Molniya orbits. If three spacecraft are used, then each spacecraft will be active for a period of eight hours per orbit, centered around apogee, as illustrated in figure 4. Figure 5 shows the satellite's field of view around the apogee. The Earth completes half a rotation in twelve hours, so the apogees of successive Molniya orbits will alternate between one half of the northern hemisphere and the other. For the original Molniya orbit, the apogees were placed over Russia and North America, but by changing the right ascension of the ascending node this can be varied. The coverage from a satellite in a Molniya orbit over Russia is shown in figures 6 to 8, and over North America in figures 9 to 11.
The orbits of the three spacecraft should then have the same orbital parameters, but different right ascensions of the ascending nodes, with their passes over the apogees separated by 7.97 hours. Since each satellite has an operational period of approximately eight hours, when one spacecraft is four hours past its apogee passage (see figure 8 or figure 11), the next satellite will enter its operational period, with the view of the earth shown in figure 6 (or figure 9), and the switch-over can take place. Note that the two spacecraft at the time of switch-over are separated by about , so that the ground stations only have to move their antennas a few degrees to acquire the new spacecraft. Properties A typical Molniya orbit has the following properties: Argument of perigee: 270° Inclination: 63.4° Period: 718 minutes Eccentricity: 0.74 Semi-major axis: approximately 26,600 km Argument of perigee The argument of perigee is set at 270°, causing the satellite to experience apogee at the most northerly point of its orbit. For any future applications over the southern hemisphere, it would instead be set at 90°. Orbital inclination In general, the oblateness of the Earth perturbs the argument of perigee ($\omega$), so that it gradually changes with time. If we only consider the first-order coefficient $J_2$, the perigee will change according to $\dot{\omega} = \frac{3}{4} n J_2 \left(\frac{R_E}{a(1-e^2)}\right)^2 \left(5\cos^2 i - 1\right)$, unless it is constantly corrected with station-keeping thruster burns, where $i$ is the orbital inclination, $e$ is the eccentricity, $n$ is the mean motion in degrees per day, $J_2$ is the perturbing factor, $R_E$ is the radius of the earth, $a$ is the semi-major axis, and $\dot{\omega}$ is in degrees per day. To avoid this expenditure of fuel, the Molniya orbit uses an inclination of 63.4°, for which the factor $5\cos^2 i - 1$ is zero, so that there is no change in the position of perigee over time. An orbit designed in this manner is called a frozen orbit. Orbital period To ensure the geometry relative to the ground stations repeats every 24 hours, the period should be about half a sidereal day, keeping the longitudes of the apogees constant. However, the oblateness of the Earth also perturbs the right ascension of the ascending node ($\Omega$), changing the nodal period and causing the ground track to drift over time at the rate $\dot{\Omega} = -\frac{3}{2} n J_2 \left(\frac{R_E}{a(1-e^2)}\right)^2 \cos i$, where $\dot{\Omega}$ is in degrees per day. Since the inclination of a Molniya orbit is fixed (as above), this perturbation is roughly −0.15 degrees per day. To compensate, the orbital period is adjusted so that the longitude of the apogee changes enough to cancel out this effect. Eccentricity The eccentricity of the orbit is based on the difference in altitude between its apogee and perigee. To maximise the amount of time that the satellite spends over the apogee, the eccentricity should be set as high as possible. However, the perigee needs to be high enough to keep the satellite substantially above the atmosphere to minimize drag (~600 km), and the orbital period needs to be kept to approximately half a sidereal day (as above). These two factors constrain the eccentricity, which becomes approximately 0.737. Semi-major axis The exact height of a satellite in a Molniya orbit varies between missions, but a typical orbit will have a perigee altitude of approximately 600 km and an apogee altitude of approximately 39,700 km, for a semi-major axis of about 26,600 km. Modelling To track satellites using Molniya orbits, scientists use the SDP4 simplified perturbations model, which calculates the location of a satellite based on orbital shape, drag, radiation, gravitational effects from the Sun and Moon, and Earth resonance terms.
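The two perturbation formulas above can be evaluated directly. The sketch below (illustrative, using textbook constants and representative Molniya elements rather than mission data) confirms that the apsidal drift vanishes at the critical inclination while the node still regresses slowly:

import numpy as np

# First-order J2 secular drift rates for a Molniya-type orbit.
J2, R_E, mu = 1.08263e-3, 6378.137, 398600.4418      # -, km, km^3/s^2
a, e = 26600.0, 0.74                                 # semi-major axis (km), eccentricity
n = np.degrees(np.sqrt(mu / a**3)) * 86400           # mean motion, deg/day (~721)
p = a * (1.0 - e**2)                                 # semi-latus rectum, km

for i_deg in (60.0, 63.4349, 65.0):
    i = np.radians(i_deg)
    w_dot = 0.75 * n * J2 * (R_E / p)**2 * (5 * np.cos(i)**2 - 1)  # perigee drift
    O_dot = -1.5 * n * J2 * (R_E / p)**2 * np.cos(i)               # node regression
    print(f"i = {i_deg:7.4f} deg  dw/dt = {w_dot:+.4f}  dOmega/dt = {O_dot:+.4f} deg/day")

# At i = 63.4349 deg the factor 5*cos(i)^2 - 1 vanishes, freezing the perigee;
# the node regresses at roughly -0.15 deg/day, which the small period
# adjustment described above is chosen to cancel.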
Physical sciences
Orbital mechanics
Astronomy
616450
https://en.wikipedia.org/wiki/Theaceae
Theaceae
Theaceae (), the tea family, is a family of flowering plants comprising shrubs and trees, including the economically important tea plant, and the ornamental camellias. It can be described as having from seven to 40 genera, depending on the source and the method of circumscription used. The family Ternstroemiaceae has been included within Theaceae; however, the APG III system of 2009 places it instead in Pentaphylacaceae. Most but not all species are native to China and East Asia. Family traits Plants in this family are characterized by simple leaves that are alternate spiral to distichous, serrated, and usually glossy. Most of the genera have evergreen foliage, but Stewartia and Franklinia are deciduous. The toothed margins are generally associated with a characteristic Theoid leaf tooth, which is crowned by a glandular, deciduous tip. The flowers in this family are usually pink or white and large and showy, often with a strong scent. The calyx consists of five or more sepals, which are often persistent in the fruiting stage, and the corolla is five-merous, rarely numerous. Plants in Theaceae are multistaminate, usually with 20-100+ stamens either free or adnate to the base of the corolla, and are also distinctive because of the presence of pseudopollen. The pseudopollen is produced from connective cells, and has either rib-like or circular thickenings. The ovary is often hairy and narrows gradually into the style, which may be branched or cleft. The carpels are typically opposite from the petals, or the sepals in the case of Camellia. The fruits are loculicidal capsules, indehiscent baccate fruits or sometimes pome-like. The seeds are few and sometimes winged, or in some genera covered by fleshy tissue or unwinged and nude. Genera Eight genera are currently accepted: Apterosperma Camellia , including Dankia , Piquetia (Pierre) H.Hallier, Thea L., Yunnanea Hu Franklinia Gordonia , including Laplacea Polyspora Pyrenaria , including Dubardella H.J.Lam, Glyptocarpa Hu, Parapyrenaria H.T.Chang, Sinopyrenaria Hu, Tutcheria Dunn Schima Stewartia , including Hartia Dunn The fossil Pentapetalum trifasciculandricus, about 91 million years old, may belong to the Theaceae or the Pentaphylacaceae. Distribution Members of the family are found in Southeast Asia and Malesia, tropical South America and the Southeast United States. Three genera (Franklinia, Gordonia and Stewartia) have species native to the Southeast United States, with Franklinia being endemic there, and under some interpretations, also Gordonia with the Asian species formerly included in that genus being transferred to Polyspora. Biochemistry There is distinctive chemistry within the family Theaceae. Sometimes, single crystals of calcium oxalate are present in Theaceous plants. Ellagic acid and common polyphenols including flavonols, flavones and proanthocyanins are widely distributed throughout the family. Gallic acid and catechins only occur in Camellia sect. Thea (C. sinensis, C. taliensis and C. irrawadiensis.) Caffeine and its precursors theobromine and theophylline are only found in sect. Thea and are not found in other species of Camellia or other Theaceae. Caffeine content in the tea bush makes up 2.5-4% of the leaf's dry weight, and this high content of catechins and caffeine in the tea bush is the result of artificial selection by humans for these characters. Triterpenes and their glycosides (saponins) are found widely throughout the family in the seeds, leaves, wood and bark. 
Plants in this family are also known to accumulate aluminum and fluoride. Economic importance The best known genus is Camellia, which includes the plant whose leaves are used to produce tea (Camellia sinensis). In parts of Asia, other species are used as a beverage, including C. taliensis, C. grandibractiata, C. kwangsiensis, C. gymnogyna, C. crassicolumna, C. tachangensis, C. ptilophylla, and C. irrawadiensis. Several species are grown widely as ornamentals for their flowers and handsome foliage.
Biology and health sciences
Ericales
null
616670
https://en.wikipedia.org/wiki/Biochemical%20engineering
Biochemical engineering
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate them into a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and they were used to make industrial products. Up to this point, biochemical engineering had not yet developed as a field. It was not until Alexander Fleming's discovery of penicillin in 1928 that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production, which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Applications Biotechnology Biotechnology and biochemical engineering are closely related to each other, as biochemical engineering can be considered a sub-branch of biotechnology. One of the primary focuses of biotechnology is in the medical field, where biochemical engineers work to design pharmaceuticals, artificial organs, biomedical devices, chemical sensors, and drug delivery systems. Biochemical engineers use their knowledge of chemical processes in biological systems in order to create tangible products that improve people's health. Specific areas of study include metabolic, enzyme, and tissue engineering. The study of cell cultures is widely used in biochemical engineering and biotechnology due to its many applications in developing natural fuels, improving the efficiency of drug production and pharmaceutical processes, and creating cures for disease. Other medical applications of biochemical engineering within biotechnology are genetic testing and pharmacogenomics. Food Industry Biochemical engineers primarily focus on designing systems that will improve the production, processing, packaging, storage, and distribution of food. Some commonly processed foods include wheat, fruits, and milk, which undergo processes such as milling, dehydration, and pasteurization in order to become products that can be sold. There are three levels of food processing: primary, secondary, and tertiary. Primary food processing involves turning agricultural products into other products that can be made into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready-to-eat or heat-and-serve foods.
Drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria from causing spoilage. Methods for preserving food have evolved to meet current standards of food safety but still use the same processes as in the past. Biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin A deficiency in certain areas where this was an issue. Efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. Packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. Packaging can also make it easier to transport and serve food. A common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. Responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities. Pharmaceuticals In the pharmaceutical industry, bioprocess engineering plays a crucial role in the large-scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. The development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. For example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. The bioprocess engineer's role is to optimize variables like temperature, pH, nutrient availability, and oxygen levels to maximize the efficiency of these systems. The growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. This involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. As the demand for biopharmaceutical products increases, advancements in bioprocess engineering continue to enable more sustainable and cost-effective manufacturing methods. Education Auburn University University of Georgia (Biochemical Engineering) Michigan Technological University McMaster University Technical University of Munich University of Natural Resources and Life Sciences, Vienna Keck Graduate Institute of Applied Life Sciences (KGI Amgen Bioprocessing Center) Kungliga Tekniska högskolan (KTH) – Royal Institute of Technology (Dept.
of Industrial Biotechnology) Queensland University of Technology (QUT) University of Cape Town (Centre for Bioprocess Engineering Research) SUNY-ESF (Bioprocess Engineering Program) Université de Sherbrooke University of British Columbia UC Berkeley UC Davis Savannah Technical College University of Illinois Urbana-Champaign (Integrated Bioprocessing Research Laboratory) University of Iowa (Chemical and Biochemical Engineering) University of Minnesota (Bioproducts and Biosystems Engineering) East Carolina University Jacob School of Biotechnology and Bioengineering, Allahabad, India Indian Institute of Technology, Varanasi Indian Institute of Technology Kharagpur Institute of Chemical Technology, Mumbai Jadavpur University Universidade Federal de Itajubá (UNIFEI) Universiti Malaysia Kelantan (UMK) Universidade Federal de São João del Rei-UFSJ Federal University of Technology – Paraná Universidade Federal do Paraná-UFPR São Paulo State University Universidade Federal do Pará-UFPA University of Louvain (UCLouvain) University of Stellenbosch North Carolina Agricultural and Technical State University North Carolina State University Virginia Tech Ege University/Turkey (Department of Bioengineering) National University of Costa Rica University of Brawijaya (Department of Agricultural Engineering) University of Indonesia University College London (Department of Biochemical Engineering) Universiti Teknologi Malaysia Universiti Kuala Lumpur Malaysian Institute of Chemical and Bioengineering Technology University of Zagreb, Faculty of food technology and biotechnology, Croatia Villanova University Wageningen University University College Dublin Obafemi Awolowo University University of Birmingham Universidad Autónoma de Coahuila (Facultad de Ciencias Biológicas) Silpakorn University Thailand Universiti Malaysia Perlis (UniMAP), School of Bioprocess Engineering (SBE) Technische Universität Berlin, Chair of Bioprocess Engineering University of Queensland Technical University of Denmark, Department of Chemical and Biochemical Engineering, BioEng Research Centre South Dakota School of Mines and Technology National Institute of Applied Science and Technology Tunis (Industrial Biology Engineering Program) Technical University Hamburg (TUHH) Mapua University Biochemical engineering is not a major offered by many universities and is instead an area of interest under chemical engineering. The following universities are known to offer degrees in biochemical engineering: Brown University – Providence, RI Christian Brothers University – Memphis, TN Colorado School of Mines – Golden, CO Rowan University – Glassboro, NJ University of Colorado Boulder – Boulder, CO University of Georgia – Athens, GA University of California, Davis – Davis, CA University College London – London, United Kingdom University of Southern California – Los Angeles, CA University of Western Ontario – Ontario, Canada Indian Institute of Technology (BHU) Varanasi – Varanasi, UP Indian Institute of Technology Delhi – Delhi Institute of Technology Tijuana – México University of Baghdad, College of Engineering, Al-Khwarizmi Biochemical
Borage
Borage (Borago officinalis), also known as starflower, is an annual herb in the flowering plant family Boraginaceae native to the Mediterranean region. Although the plant contains small amounts of pyrrolizidine alkaloids, some parts are edible and its seeds provide oil.

Description

B. officinalis grows to a height of , and is bristly or hairy all over the stems and leaves; the leaves are alternate, simple, and long. The flowers are complete and perfect, with five narrow, triangular-pointed petals. Flowers are most often blue, although pink flowers are sometimes observed. White-flowered types are also cultivated. The blue flower is genetically dominant over the white flower. The flowers arise along scorpioid cymes to form large floral displays with multiple flowers blooming simultaneously, suggesting that borage has a high degree of geitonogamy (intraplant pollination). It has an indeterminate growth habit. In temperate climates such as in the UK, its flowering season is relatively long, from June to September. In milder climates, borage blooms continuously for most of the year. It can be invasive.

Chemistry

The seeds consist of 26–38% borage seed oil, of which 17–28% is gamma-linolenic acid (GLA, an omega-6 fatty acid), making borage seed oil the richest known source of GLA. The oil also contains the fatty acids palmitic acid (10–11%), stearic acid (3.5–4.5%), oleic acid (16–20%), linoleic acid (35–38%), eicosenoic acid (3.5–5.5%), erucic acid (1.5–3.5%), and nervonic acid (1.5%). Healthy adults typically produce ample GLA from dietary linoleic acid, but borage seed oil is often marketed as a GLA supplement under the names "starflower oil" or "borage oil". The leaves contain small amounts (2–10 ppm in the dried herb) of the liver-toxic pyrrolizidine alkaloids (PAs) intermedine, lycopsamine, amabiline, and supinine, and the nontoxic saturated PA thesinine. PAs are also present in borage seed oil, but may be removed by processing.

Distribution and habitat

It is native to the Mediterranean region, and has naturalized in many other locales. It grows satisfactorily in gardens in most of Europe, such as Denmark, France, Germany, the United Kingdom, and Ireland. It is not a perennial, but it remains in the garden from year to year by self-seeding.

Toxicity

In addition to the liver-toxic pyrrolizidine alkaloids found in the leaves and seed oil, the German Federal Institute for Risk Assessment has advised that honey from borage contains PAs, transferred to the honey through pollen collected at borage plants, and that commercial honey production could select for raw honey with limited PA content to prevent contamination.

Uses

Traditionally, borage was cultivated for culinary and medicinal uses, although today commercial cultivation is mainly as an oilseed. Borage is used as either a fresh vegetable or a dried herb. As a fresh vegetable, borage, with a cucumber-like taste, is often used in salads or as a garnish. The flower has a sweet, honey-like taste and is often used to decorate desserts and cocktails, sometimes frozen in ice cubes. Vegetable use of borage is common in Germany, in the Spanish regions of Aragón and Navarre, on the Greek island of Crete, and in the northern Italian region of Liguria. Although often used in soups, one of the better-known German borage recipes is the Frankfurt speciality grüne Soße ("green sauce"). In Liguria, Italy, borage (in Italian, borragine) is commonly used as a filling for the traditional pastas ravioli and pansoti.
It is used to flavour pickled gherkins in Poland and Russia. The flowers produce copious nectar, which is used by honeybees to make a light and delicate honey.

Beverage

Borage is traditionally used as a garnish in the Pimm's Cup cocktail, but is nowadays often replaced by a long sliver of cucumber peel or by mint. It is also one of the key botanicals in Gilpin's Westmorland Extra Dry Gin. The author of Cups and their Customs notes that a sprig or two of borage "communicates a peculiar refreshing flavour" to any cool drink. In Persian cuisine, borage tea (using the dried purple flowers) is called گل گاوزبان : gol gâvzabân, "cow's-tongue-flower".

Herbal medicine

Traditionally, Borago officinalis has been used in hyperactive gastrointestinal, respiratory and cardiovascular disorders: gastrointestinal (colic, cramps, diarrhea), airway (asthma, bronchitis), cardiovascular (as a cardiotonic, antihypertensive and blood purifier), and urinary (as a diuretic and for kidney/bladder disorders). One case of status epilepticus has been reported that was associated with borage oil ingestion. A methanol extract of borage has shown strong amoebicidal activity in vitro: the 50% inhibitory concentration (IC50) of the extract against Entamoeba histolytica was 33 μg/mL.

Companion planting

Borage is used in companion planting. It is said to protect or nurse legumes, spinach, brassicas, and strawberries. It is also said to be a good companion plant to tomatoes because it confuses the mother moths of tomato hornworms (Manduca species) looking for a place to lay their eggs. Claims that it improves tomato growth and makes them taste better remain unsubstantiated.

In culture

Pliny the Elder and Dioscorides said that borage was the nepenthe (νηπενθές : nēpenthés) mentioned in Homer, which caused forgetfulness when mixed with wine. King Henry VIII's last wife, Catherine Parr, used borage in a concoction to treat melancholy. Francis Bacon thought that borage had "an excellent spirit to repress the fuliginous vapour of dusky melancholie". John Gerard's Herball mentions an old verse concerning the plant: "Ego Borago, Gaudia semper ago (I, Borage, bring always joys)". He asserts:
Local Bubble
The Local Bubble, or Local Cavity, is a relative cavity in the interstellar medium (ISM) of the Orion Arm in the Milky Way. It contains, among others, the closest of our celestial neighbours: the Local Interstellar Cloud (which contains the Solar System), the neighbouring G-Cloud, the Ursa Major moving group (the closest stellar moving group) and the Hyades (the nearest open cluster). It is estimated to be at least 1,000 light years across, and is defined by its neutral-hydrogen density of about 0.05 atoms/cm3, approximately one tenth of the average for the ISM in the Milky Way (0.5 atoms/cm3) and one sixth that of the Local Interstellar Cloud (0.3 atoms/cm3). The exceptionally sparse gas of the Local Bubble is the result of supernovae that exploded within the past ten to twenty million years. Geminga, a pulsar in the constellation Gemini, was once thought to be the remnant of a single supernova that created the Local Bubble, but multiple supernovae in subgroup B1 of the Pleiades moving group are now thought to have been responsible, leaving the bubble as a remnant supershell. Other research suggests that the Lower Centaurus–Crux (LCC) and Upper Centaurus–Lupus (UCL) subgroups of the Scorpius–Centaurus association created both the Local Bubble and the Loop I Bubble, with LCC responsible for the Local Bubble and UCL for the Loop I Bubble. An estimated 14 to 20 supernovae originated from LCC and UCL, which could have formed these bubbles.

Description

The Solar System has been traveling through the region currently occupied by the Local Bubble for the last five to ten million years. Its current location lies in the Local Interstellar Cloud (LIC), a minor region of denser material within the Bubble. The LIC formed where the Local Bubble and the Loop I Bubble met. The gas within the LIC has a density of approximately 0.3 atoms per cubic centimeter. The Local Bubble is not spherical, but seems to be narrower in the galactic plane, becoming somewhat egg-shaped or elliptical, and may widen above and below the galactic plane, becoming shaped like an hourglass. It abuts other bubbles of less dense interstellar medium (ISM), including, in particular, the Loop I Bubble. The Loop I Bubble was cleared, heated and maintained by supernovae and stellar winds in the Scorpius–Centaurus association, some 500 light years from the Sun, and contains the star Antares (also known as α Sco, or Alpha Scorpii). Several tunnels connect the cavities of the Local Bubble with those of the Loop I Bubble, including one called the "Lupus Tunnel". Other bubbles adjacent to the Local Bubble are the Loop II Bubble and the Loop III Bubble. In 2019, researchers found interstellar iron in Antarctica, which they attribute to the Local Interstellar Cloud and which might be connected to the formation of the Local Bubble.

Observation

Launched in February 2003 and active until April 2008, a small space observatory called the Cosmic Hot Interstellar Plasma Spectrometer (CHIPS or CHIPSat) examined the hot gas within the Local Bubble. The Local Bubble was also the region of interest for the Extreme Ultraviolet Explorer mission (1992–2001), which examined hot EUV sources within the bubble. Sources beyond the edge of the bubble were identified but attenuated by the denser interstellar medium. In 2019, the first 3D map of the Local Bubble was reported, using observations of diffuse interstellar bands.
In 2020, the shape of the dusty envelope surrounding the Local Bubble was retrieved and modeled from 3D maps of the dust density obtained from stellar extinction data.

Impact on star formation

In January 2022, a paper in the journal Nature reported that observations and modelling had determined that the expanding surface of the bubble had swept up gas and debris and was responsible for the formation of all young, nearby stars. These new stars are typically in molecular clouds like the Taurus molecular cloud and the open star cluster Pleiades.

Connection to radioactive isotopes on Earth

On Earth, several radioactive isotopes have been linked to supernovae that occurred relatively near the Solar System. The most common source is deep-sea ferromanganese crusts, which grow continuously, depositing iron, manganese and other elements. Samples are divided into layers, which are dated, for example, with beryllium-10. Some of these layers have higher concentrations of radioactive isotopes. The isotope most commonly associated with supernovae on Earth is iron-60, found in deep-sea sediments, Antarctic snow, and lunar soil. Other isotopes are manganese-53 and plutonium-244 from deep-sea materials. Supernova-originated aluminium-26, which was expected from cosmic ray studies, was not confirmed. Iron-60 and manganese-53 show a peak 1.7–3.2 million years ago, and iron-60 shows a second peak 6.5–8.7 million years ago. The older peak likely originated when the Solar System moved through the Orion–Eridanus superbubble, and the younger peak was generated when the Solar System entered the Local Bubble 4.5 million years ago. One of the supernovae creating the younger peak might have created the pulsar PSR B1706-16 and turned Zeta Ophiuchi into a runaway star. Both originated from UCL and were released by a supernova 1.78 ± 0.21 million years ago. Another explanation for the older peak is that it was produced by one supernova in the Tucana–Horologium association 7–9 million years ago.
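The dating logic behind these isotope peaks can be illustrated with a simple decay correction: once a crust layer's age is known (for example from beryllium-10 dating), the measured iron-60 abundance can be extrapolated back to its value at deposition. The sketch below is illustrative only; the half-life used (about 2.62 million years for iron-60) is a published value worth re-checking against current literature, and the example numbers are invented.

    # Decay-correct a measured 60Fe abundance back to its deposition value.
    T_HALF_FE60 = 2.62  # Myr; assumed half-life of iron-60

    def deposition_abundance(measured_atoms, layer_age_myr, t_half=T_HALF_FE60):
        """Abundance at deposition, undoing exponential decay over the layer age."""
        return measured_atoms * 2 ** (layer_age_myr / t_half)

    # Hypothetical layer dated to 2.5 Myr that retains 1,000 atoms of 60Fe:
    print(f"{deposition_abundance(1000, 2.5):,.0f} atoms at deposition")  # ~1,938

Because roughly one half-life has elapsed for material from the younger (1.7–3.2 million year) peak, close to half of the originally deposited iron-60 should still be present, which is part of why it remains detectable.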
Zone of Avoidance
The Zone of Avoidance (ZOA, ZoA), or Zone of Galactic Obscuration (ZGO), is the area of the sky that is obscured by the Milky Way. The Zone of Avoidance was originally called the Zone of Few Nebulae in an 1878 paper by English astronomer Richard Proctor that referred to the distribution of "nebulae" in John Herschel's General Catalogue of Nebulae.

Background

When viewing space from Earth, attenuation by interstellar dust, together with the stars in the plane of the Milky Way (the galactic plane), obstructs the view of around 20% of the extragalactic sky at visible wavelengths. As a result, optical galaxy catalogues are usually incomplete close to the galactic plane.

Modern developments

Many projects have attempted to bridge the gap in knowledge caused by the Zone of Avoidance. The dust and gas in the Milky Way cause extinction at optical wavelengths, and foreground stars can be confused with background galaxies. However, the effect of extinction drops at longer wavelengths, such as the infrared, and the Milky Way is effectively transparent at radio wavelengths (illustrated below). Surveys in the infrared, such as IRAS and 2MASS, have given a more complete picture of the extragalactic sky. Two very large nearby galaxies, Maffei 1 and Maffei 2, were discovered in the Zone of Avoidance by Paolo Maffei by their infrared emission in 1968. Even so, approximately 10% of the sky remains difficult to survey, as extragalactic objects can be confused with stars in the Milky Way. Projects to survey the Zone of Avoidance at radio wavelengths, particularly using the 21 cm spin-flip emission line of neutral atomic hydrogen (known in astronomical parlance as the H I line), have detected many galaxies that could not be detected in the infrared. Examples of galaxies detected through their H I emission include Dwingeloo 1 and Dwingeloo 2, discovered in 1994 and 1996, respectively. Recent astronomical studies revealed a supercluster of galaxies, termed the Vela Supercluster, in the Great Attractor's theorized location.
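To give a rough sense of why longer wavelengths penetrate the Zone, the transmitted fraction of flux falls off as 10^(-A/2.5) for an extinction of A magnitudes, and extinction in the near-infrared K band is roughly a tenth of that in the visual V band. The sketch below uses an assumed sightline extinction of A_V = 30 magnitudes and the commonly quoted ratio A_K ≈ 0.11 A_V for diffuse interstellar dust; both numbers are illustrative, not measurements of any particular sightline.

    # How much flux survives a dusty sightline at visible vs. infrared bands.
    def transmitted_fraction(extinction_mag):
        """Fraction of flux transmitted given extinction in magnitudes."""
        return 10 ** (-extinction_mag / 2.5)

    A_V = 30.0        # mag; assumed visual extinction through the galactic plane
    A_K = 0.11 * A_V  # mag; approximate K-band extinction for the same dust

    print(f"V band: {transmitted_fraction(A_V):.1e} of flux survives")  # 1.0e-12
    print(f"K band: {transmitted_fraction(A_K):.1e} of flux survives")  # ~4.8e-02

At such an extinction a galaxy dims by a factor of a trillion in V but by only a factor of about twenty in K, which is why infrared surveys such as 2MASS and radio H I surveys recover galaxies that optical catalogues miss.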
Inflammatory bowel disease
Inflammatory bowel disease (IBD) is a group of inflammatory conditions of the colon and small intestine, with Crohn's disease and ulcerative colitis (UC) being the principal types. Crohn's disease affects the small intestine and large intestine, as well as the mouth, esophagus, stomach and the anus, whereas UC primarily affects the colon and the rectum.

Signs and symptoms

Although Crohn's disease and UC are very different diseases, both may present with any of the following symptoms: abdominal pain, diarrhea, rectal bleeding, severe internal cramps/muscle spasms in the region of the pelvis, and weight loss. Anemia is the most prevalent extraintestinal complication of inflammatory bowel disease (IBD). Associated complaints or diseases include arthritis, pyoderma gangrenosum, primary sclerosing cholangitis, and non-thyroidal illness syndrome (NTIS). Associations with deep vein thrombosis (DVT) and bronchiolitis obliterans organizing pneumonia (BOOP) have also been reported. Diagnosis is generally by assessment of inflammatory markers in stool, followed by colonoscopy with biopsy of pathological lesions.

Causes

IBD is a complex disease which arises as a result of the interaction of environmental and genetic factors leading to immunological responses and inflammation in the intestine.

Diet

People living with IBD are very interested in diet, but little is known about the impact of diet on these patients. Recent reviews have underlined the important role of nutritional counselling in IBD patients. Patients should be encouraged to adopt diets that are best supported by evidence and involve monitoring for the objective resolution of inflammation. A 2022 study found that diets with increased intake of fruits and vegetables, reduction of processed meats and refined carbohydrates, and a preference for water for hydration were associated with lower risk of active symptoms with IBD, although increased intake of fruits and vegetables alone did not reduce the risk of symptoms with Crohn's disease. A 2022 scientific review also found generally positive outcomes for IBD patients who adhered to the Mediterranean diet (high fruit and vegetable intake). Dietary patterns are associated with a risk for ulcerative colitis; in particular, subjects who were in the highest tertile of the healthy dietary pattern had a 79% lower risk of ulcerative colitis. Gluten sensitivity is common in IBD and is associated with flare-ups; it was reported in 23.6% of Crohn's disease patients and 27.3% of ulcerative colitis patients. A diet high in protein, particularly animal protein, and/or high in sugar may be associated with increased risk of IBD and relapses.

Bile acids

Emerging evidence indicates that bile acids are important etiological agents in IBD pathogenesis. IBD patients show a consistent pattern of an increased abundance of primary bile acids such as cholic acid and chenodeoxycholic acid (and their conjugated forms), and a decreased abundance of secondary bile acids such as lithocholic acid and deoxycholic acid.

Microbiota

The human microbiota consists of 10–100 trillion microorganisms. Several studies have confirmed that the microbiota composition differs between patients with IBD and healthy individuals. This difference is more pronounced in patients with Crohn's disease than in those with ulcerative colitis.
In IBD patients, there is a decrease or absence of beneficial bacteria such as Bifidobacterium longum, Eubacterium rectale, Faecalibacterium prausnitzii, and Roseburia intestinalis, while harmful species like Bacteroides fragilis, Ruminococcus torques, and Ruminococcus gnavus are more abundant. The generation of reactive oxygen species and reactive nitrogen species leads to oxidative stress for both host cells and the gut microbiome. Consequently, in IBD there is a microbial imbalance, known as dysbiosis, characterized by an increase in functional pathways involved in the microbial response to oxidative stress. This oxidative stress can promote the growth of certain species such as R. gnavus. Another opportunistic bacterium, Akkermansia muciniphila, contributes to IBD development and is more prevalent in individuals lacking NOD-like receptor 6 (NLRP6). Both R. gnavus and A. muciniphila are bacterial species that are more abundant in IBD. Patients with IBD often exhibit stronger antibody and T-cell responses to microbial antigens. The gut microbiome employs various approaches to interact with the host immune system. For instance, B. fragilis, which is symbiotic in humans, can transfer immune regulatory molecules to immune cells through the secretion of outer membrane vesicles. This mechanism plays a protective role in IBD by activating the non-classical autophagy pathway, dependent on the Atg16L1 and NOD2 genes. B. thetaiotaomicron induces the differentiation of T regulatory cells (Tregs) to modulate gut immunity, thus increasing the expression of the Gata3 and FoxP3 genes. The colonization of Clostridium spp. can enhance the aggregation of RORγT+ FOXP3 Treg cells, which inhibit the development of Th2 and Th17 cells; ultimately, this colonization could decrease the response of colonic Th2 and Th17 cells. F. prausnitzii also attracts CD4 and CD8a (DP8α) regulatory T cells. E. coli Nissle 1917 can inhibit the growth of Salmonella and other harmful bacteria. It prevents these pathogens from adhering to and invading intestinal epithelial cells, which significantly reduces the likelihood of inflammation in the gut and may also prevent the onset of IBD.

Breach of intestinal barrier

Loss of integrity of the intestinal epithelium plays a key pathogenic role in IBD. Dysfunction of the innate immune system as a result of abnormal signaling through immune receptors called toll-like receptors (TLRs), which activate an immune response to molecules broadly shared by multiple pathogens, contributes to acute and chronic inflammatory processes in IBD colitis and associated cancer. Changes in the composition of the intestinal microbiota are an important environmental factor in the development of IBD. Detrimental changes in the intestinal microbiota induce an inappropriate (uncontrolled) immune response that results in damage to the intestinal epithelium. Breaches in this critical barrier (the intestinal epithelium) allow further infiltration of microbiota that, in turn, elicits further immune responses. IBD is a multifactorial disease that is nonetheless driven in part by an exaggerated immune response to gut microbiota that causes defects in epithelial barrier function.

Oxidative stress and DNA damage

Oxidative stress and DNA damage likely have a role in the pathophysiology of IBD. Oxidative DNA damage, as measured by 8-OHdG levels, was found to be significantly increased in people with IBD compared to healthy controls, and in inflamed mucosa compared with noninflamed mucosa.
Antioxidant capacity, as measured by the total action of all antioxidants detected in blood plasma or body fluids, was found to be significantly decreased in people with IBD compared to healthy controls, and in inflamed mucosa compared with noninflamed mucosa.

Genetics

A genetic component to IBD has been recognized for over a century. Research that has contributed to understanding of the genetics includes studies of ethnic groups (e.g., Ashkenazi Jews, Irish), familial clustering, epidemiological studies, and twin studies. With the advent of molecular genetics, understanding of the genetic basis has expanded considerably, particularly in the past decade. The first gene linked to IBD was NOD2 in 2001. Genome-wide association studies have since added to understanding of the genomics and pathogenesis of the disease. More than 200 single nucleotide polymorphisms (SNPs or "snips") are now known to be associated with susceptibility to IBD. One of the largest genetic studies of IBD was published in 2012. The analysis explained more of the variance in Crohn's disease and ulcerative colitis than previously reported. The results suggested that commensal microbiota are altered in such a way that they act as pathogens in inflammatory bowel diseases. Other studies show that mutations in IBD-associated genes might interfere with cellular activity and with interactions with the microbiome that promote normal immune responses. Many studies have found that dysregulation of microRNAs is involved in IBD and can promote colorectal cancer. By 2020, single-cell RNA sequencing analysis had been launched by a small consortium using IBD patient biopsy material in a search for therapeutic targets. According to an article published in Nature, the ETS2 gene plays a vital role in the development of the disease.

Diagnosis

The diagnosis is usually confirmed by biopsies on colonoscopy. Fecal calprotectin is useful as an initial investigation, which may suggest the possibility of IBD, as this test is sensitive but not specific for IBD.

Classification

Inflammatory bowel diseases are autoimmune diseases, in which the body's own immune system attacks elements of the digestive system. The chief types of IBD are Crohn's disease (CD) and ulcerative colitis (UC). Several other conditions are variously referred to either as being inflammatory bowel diseases or as being similar to but distinct from inflammatory bowel diseases. These conditions include microscopic colitis (with the subtypes collagenous colitis and lymphocytic colitis), diversion colitis, and Behçet's disease.

Differential diagnosis

Crohn's disease and ulcerative colitis are both common differential diagnoses for the other, and confidently diagnosing a patient with one of the two diseases may sometimes not be possible. No disease-specific markers are currently known in the blood that would enable the reliable separation of patients with Crohn's disease from those with ulcerative colitis. Physicians tell the difference between Crohn's disease and UC by the location and nature of the inflammatory changes. Crohn's can affect any part of the gastrointestinal tract, from mouth to anus (skip lesions), although a majority of cases start in the terminal ileum. Ulcerative colitis, in contrast, is restricted to the colon and the rectum. Microscopically, ulcerative colitis is restricted to the mucosa (epithelial lining of the gut), while Crohn's disease affects the full thickness of the bowel wall ("transmural lesions").
Lastly, Crohn's disease and ulcerative colitis present with extra-intestinal manifestations (such as liver problems, arthritis, skin manifestations and eye problems) in different proportions. In 10–15% of cases, a definitive diagnosis of neither Crohn's disease nor ulcerative colitis can be made because of idiosyncrasies in the presentation. In these cases, a diagnosis of indeterminate colitis may be made. Irritable bowel syndrome can present with similar symptoms to either disease, as can nonsteroidal anti-inflammatory drug (NSAID) enteritis and intestinal tuberculosis. Conditions that can be mistaken particularly for Crohn's disease include Behçet's disease and coeliac disease, while conditions that can be symptomatically similar to ulcerative colitis in particular include acute self-limiting colitis, amebic colitis, schistosomiasis and colon cancer. Other diseases may cause an increased excretion of fecal calprotectin, such as infectious diarrhea, untreated celiac disease, necrotizing enterocolitis, intestinal cystic fibrosis and neoplastic pediatric tumor cells. Liver function test results are often mildly elevated in IBD and generally return spontaneously to normal levels. The most relevant mechanisms of elevated liver function tests in IBD are drug-induced hepatotoxicity and fatty liver.

Treatment

Surgery

CD and UC are chronic inflammatory diseases and are not medically curable. However, ulcerative colitis can in most cases be cured by proctocolectomy, although this may not eliminate extra-intestinal symptoms. An ileostomy will collect feces in a bag. Alternatively, a pouch can be created from the small intestine; this serves as the rectum and prevents the need for a permanent ileostomy. Between one-quarter and one-half of patients with ileo-anal pouches do have to manage occasional or chronic pouchitis. Surgery cannot cure Crohn's disease but may be needed to treat complications such as abscesses, strictures or fistulae. Severe cases may require surgery, such as bowel resection, strictureplasty or a temporary or permanent colostomy or ileostomy. In Crohn's disease, surgery involves removing the worst inflamed segments of the intestine and connecting the healthy regions, but unfortunately, it does not cure Crohn's or eliminate the disease. At some point after the first surgery, Crohn's disease can recur in the healthy parts of the intestine, usually at the resection site. (For example, if a patient with Crohn's disease has an ileocecal anastomosis, in which the caecum and terminal ileum are removed and the ileum is joined to the ascending colon, their Crohn's will nearly always flare up near the anastomosis or in the rest of the ascending colon.)

Medical therapies

Medical treatment of IBD is individualised to each patient. The choice of which drugs to use and by which route to administer them (oral, rectal, injection, infusion) depends on factors including the type, distribution, and severity of the patient's disease, as well as other historical and biochemical prognostic factors and patient preferences. For example, mesalazine is more useful in ulcerative colitis than in Crohn's disease. Generally, depending on the level of severity, IBD may require immunosuppression to control the symptoms, with drugs such as prednisone, tumor necrosis factor inhibitors (TNF inhibitors), azathioprine, methotrexate, or 6-mercaptopurine. Steroids, such as the glucocorticoid prednisone, are frequently used to control disease flares and were once acceptable as a maintenance drug.
Biological therapies for inflammatory bowel disease, especially the TNF inhibitors, are used in people with more severe or resistant Crohn's disease and sometimes in ulcerative colitis. Treatment is usually started by administering drugs with high anti-inflammatory effects, such as prednisone. Once the inflammation is successfully controlled, the main treatment becomes another drug that keeps the disease in remission, such as mesalazine in UC. If further treatment is required, a combination of an immunosuppressive drug (such as azathioprine) with mesalazine (which may also have an anti-inflammatory effect) may be needed, depending on the patient. Controlled-release budesonide is used for mild ileal Crohn's disease.

Nutritional and dietetic therapies

Exclusive enteral nutrition is a first-line therapy in pediatric Crohn's disease, with weaker data in adults. Evidence supporting exclusive enteral nutrition in ulcerative colitis is lacking. Nutritional deficiencies play a prominent role in IBD. Malabsorption, diarrhea, and GI blood loss are common features of IBD. Deficiencies of B vitamins, fat-soluble vitamins, essential fatty acids, and key minerals such as magnesium, zinc, and selenium are extremely common and benefit from replacement therapy. Dietary interventions, including certain exclusion diets like the specific carbohydrate diet (SCD), can be beneficial for symptom management. Dietary fiber interventions, such as psyllium supplementation (a mixture of soluble and insoluble fibers), may relieve symptoms as well as induce or maintain remission by altering the microbiome composition of the GI tract, thereby improving regulation of immune function, reducing inflammation, and helping to restore the intestinal mucosal lining. Low serum levels of alanine transaminase can be a marker of sarcopenia, which is underdiagnosed in patients with IBD and associated with higher disease activity. Anemia is commonly present in both ulcerative colitis and Crohn's disease. Because raised levels of inflammatory cytokines lead to increased expression of hepcidin, parenteral iron is the preferred treatment option, as it bypasses the gastrointestinal system, has a lower incidence of adverse events and enables quicker treatment. Hepcidin itself is also an anti-inflammatory agent. In murine models, very low levels of iron restrict hepcidin synthesis, worsening the inflammation that is present. Enteral nutrition has been found to be effective in improving hemoglobin levels in patients with IBD, especially when combined with erythropoietin. Gastrointestinal bleeding, occurring especially during ulcerative colitis relapse, can contribute to anemia when chronic, and may be life-threatening when acute. To limit the possible risk of dietary intake disturbing hemostasis in acute gastrointestinal bleeding, temporary fasting is often considered necessary in hospital settings. The effectiveness of this approach is unknown; a Cochrane review in 2016 found no published clinical trials including children. Low levels of vitamin D are associated with Crohn's disease and ulcerative colitis, and people with more severe cases of inflammatory bowel disease often have lower vitamin D levels. It is not clear if vitamin D deficiency causes inflammatory bowel disease or is a symptom of the disease. There is some evidence that vitamin D supplementation therapy may be associated with improvements in scores for clinical inflammatory bowel disease activity and biochemical markers.
Vitamin D treatment may be associated with less recurrence of inflammatory bowel disease symptoms (relapse). It is not clear if this treatment improves the person's quality of life, or what the clinical response to vitamin D treatment is. The ideal regimen and dose of vitamin D therapy have not been well enough studied.

Microbiome

There is preliminary evidence of an infectious contribution to IBD in some patients that may benefit from antibiotic therapy, such as with rifaximin. The evidence for a benefit of rifaximin is mostly limited to Crohn's disease, with less convincing evidence supporting use in ulcerative colitis. The use of oral probiotic supplements to modify the composition and behaviour of the microbiome has been considered as a possible therapy for both induction and maintenance of remission in people with Crohn's disease and ulcerative colitis. A Cochrane review in 2020 did not find clear evidence of improved remission likelihood, nor lower adverse events, in people with Crohn's disease following probiotic treatment. For ulcerative colitis, there is low-certainty evidence that probiotic supplements may increase the probability of clinical remission. People receiving probiotics were 73% more likely to experience disease remission and more than twice as likely to report improvement in symptoms compared to those receiving a placebo, with no clear difference in minor or serious adverse effects. Although there was no clear evidence of greater remission when probiotic supplements were compared with 5-aminosalicylic acid treatment as a monotherapy, the likelihood of remission was 22% higher if probiotics were used in combination with 5-aminosalicylic acid therapy. In people who are already in remission, it is unclear whether probiotics help to prevent future relapse, either as a monotherapy or as combination therapy. Fecal microbiota transplant (FMT) is a relatively new treatment option for IBD which has attracted attention since 2010. Some preliminary studies have suggested benefits similar to those in Clostridioides difficile infection, but a review of its use in IBD shows that FMT is safe but of variable efficacy. Systematic reviews showed that 33% of ulcerative colitis patients and 50% of Crohn's disease patients reach clinical remission after fecal microbiota transplant.

Alternative medicine

Complementary and alternative medicine approaches have been used in inflammatory bowel disorders. Evidence from controlled studies of these therapies has been reviewed; risk of bias was quite heterogeneous. The best supportive evidence was found for herbal therapy, with Plantago ovata and curcumin in UC maintenance therapy, wormwood in CD, mind/body therapy and self-intervention in UC, and acupuncture in UC and CD.

Novel approaches

Stem cell therapy is undergoing research as a possible treatment for IBD. A review of studies suggests a promising role, although there are substantial challenges, including cost and characterization of effects, which limit its current use in clinical practice.

Psychological interventions

Patients with IBD have a higher prevalence of depressive and anxiety disorders compared to the general population; women with IBD are more likely than men to develop affective disorders, since up to 65% of them may have depression and anxiety disorder. Currently, there is no evidence to recommend psychological treatment, such as psychotherapy, stress management and patient education, to all adults with IBD in general.
These treatments had no effect on quality of life, emotional well-being or disease activity. The need for these approaches should be individually assessed and further researched to identify subgroups and to determine the types of therapy that may benefit individuals with IBD. In adolescents, such treatments may be beneficial for quality of life and depression, although only short-term effects have been found, which also calls for further research. A meta-analysis of interventions to improve mood (including talking therapy, antidepressants, and exercise) in people with IBD found that they reduced inflammatory markers such as C-reactive protein and faecal calprotectin. Psychological therapies reduced inflammation more than antidepressants or exercise.

Treatment standards

Crohn's and Colitis Australia, the peak body for IBD in Australia, where prevalence is one of the highest in the world, reviewed the quality of care for patients admitted to Australian hospitals. They found that only one hospital met accepted standards for multidisciplinary care, but that care was improved with the availability of even minimal specialised services.

Prognosis

While IBD can limit quality of life because of pain, vomiting, and diarrhea, it is rarely fatal on its own. Fatalities due to complications such as toxic megacolon, bowel perforation and surgical complications are also rare. Fatigue is a common symptom of IBD and can be a burden. Around one-third of individuals with IBD experience persistent gastrointestinal symptoms similar to irritable bowel syndrome (IBS) in the absence of objective evidence of disease activity. Despite enduring the side-effects of long-term therapies, this cohort has a quality of life that is not significantly different from that of individuals with uncontrolled, objectively active disease, and escalation of therapy to biological agents is typically ineffective in resolving their symptoms. The cause of these IBS-like symptoms is unclear, but it has been suggested that changes in the gut-brain axis, epithelial barrier dysfunction, and the gut flora may be partially responsible. While patients with IBD do have an increased risk of colorectal cancer, it is usually caught much earlier than in the general population through routine surveillance of the colon by colonoscopy, and therefore patients are much more likely to survive. New evidence suggests that patients with IBD may have an elevated risk of endothelial dysfunction and coronary artery disease. The goal of treatment is to achieve remission, after which the patient is usually switched to a lighter drug with fewer potential side effects. Every so often, an acute resurgence of the original symptoms may appear; this is known as a "flare-up". Depending on the circumstances, it may go away on its own or require medication. The time between flare-ups may be anywhere from weeks to years, and varies widely between patients; a few have never experienced a flare-up. Life with IBD can be challenging; however, many with the condition lead relatively normal lives. IBD carries a psychological burden due to the stigmatization of being diagnosed, leading to high levels of anxiety, depression, and a general reduction in quality of life. Although living with IBD can be difficult, there are numerous resources available to help families navigate the ins and outs of IBD, such as the Crohn's and Colitis Foundation of America (CCFA).

Epidemiology

IBD resulted in a global total of 51,000 deaths in 2013 and 55,000 deaths in 1990.
The increased incidence of IBD since World War II has been correlated with the increase in meat consumption worldwide, supporting the claim that animal protein intake is associated with IBD. However, many environmental risk factors have been linked to increased or decreased risk of IBD, such as smoking, air pollution and greenspace, urbanization and Westernization. Inflammatory bowel diseases are increasing in Europe. Incidence and prevalence of IBD have risen steadily over the last decades in Asia, which could be related to changes in diet and other environmental factors. Around 0.8% of people in the UK have IBD. Similarly, around 270,000 (0.7%) of people in Canada have IBD, with that number expected to rise to 400,000 (1%) by 2030.

Research

The following treatment strategies are not used routinely, but appear promising in some forms of IBD. Initial reports suggest that helminthic therapy may not only prevent but even control IBD: a drink with roughly 2,500 ova of the Trichuris suis helminth taken twice monthly decreased symptoms markedly in many patients. It is even speculated that an effective "immunization" procedure could be developed by ingesting the cocktail at an early age. Prebiotics and probiotics are attracting increasing interest as treatments for IBD. Currently, there is evidence to support the use of certain probiotics in addition to standard treatments in people with ulcerative colitis, but the data are insufficient to recommend probiotics in people with Crohn's disease. Both single-strain and multi-strain probiotics have been researched for mild to moderate cases of ulcerative colitis. The most clinically researched multi-strain probiotic, with over 70 human trials, is the De Simone Formulation. Further research is required to identify specific probiotic strains or their combinations and prebiotic substances for therapies of intestinal inflammation. Currently, the probiotic strain, frequency, dose and duration of probiotic therapy are not established. In severely ill people with IBD there is a risk of the passage of viable bacteria from the gastrointestinal tract to the internal organs (bacterial translocation) and subsequent bacteremia, which can cause serious adverse health consequences. Live bacteria might not be essential, because the beneficial effects of probiotics seem to be mediated by their DNA and by secreted soluble factors, and their therapeutic effects may be obtained by systemic rather than oral administration. In 2005, New Scientist published a joint study by Bristol University and the University of Bath on the apparent healing power of cannabis on IBD. Reports that cannabis eased IBD symptoms indicated the possible existence of cannabinoid receptors in the intestinal lining, which respond to molecules in the plant-derived chemicals. CB1 cannabinoid receptors, which are known to be present in the brain, also exist in the endothelial cells which line the gut, and it is thought that they are involved in repairing the lining of the gut when damaged. The team deliberately damaged the cells to cause inflammation of the gut lining and then added synthetically produced cannabinoids; the result was that the gut started to heal: the broken cells were repaired and brought back closer together to mend the tears. It is believed that in a healthy gut, natural endogenous cannabinoids are released from endothelial cells when they are injured, which then bind to the CB1 receptors.
The process appears to set off a wound-healing reaction, and when people use cannabis, the cannabinoids bind to these receptors in the same way. Previous studies have shown that CB1 receptors located on the nerve cells in the gut respond to cannabinoids by slowing gut motility, therefore reducing the painful muscle contractions associated with diarrhea. CB2, another cannabinoid receptor predominantly expressed by immune cells, was detected in the gut of people with IBD at a higher concentration. These receptors, which also respond to chemicals in cannabis, appear to be associated with apoptosis (programmed cell death) and may have a role in suppressing the overactive immune system and reducing inflammation by mopping up excess cells. Activation of the endocannabinoid system was found effective in ameliorating colitis, increasing the survival rate of mice, and reducing remote organ changes induced by colitis, further suggesting that modulation of this system is a potential therapeutic approach for IBDs and the associated remote organ lesions. Alicaforsen is a first-generation antisense oligodeoxynucleotide designed to bind specifically to the human ICAM-1 messenger RNA through Watson-Crick base pair interactions in order to subdue the expression of ICAM-1. ICAM-1 propagates an inflammatory response, promoting the extravasation and activation of leukocytes (white blood cells) into inflamed tissue. Increased expression of ICAM-1 has been observed within the inflamed intestinal mucosa of people with ulcerative colitis, pouchitis and Crohn's, where ICAM-1 overproduction correlated with disease activity. This suggests that ICAM-1 is a potential therapeutic target in the treatment of these diseases. Cannabinoid CB2 receptor agonists have been found to decrease the induction of ICAM-1 and VCAM-1 surface expression in human brain tissues and primary human brain endothelial cells (BMVEC) exposed to various pro-inflammatory mediators. In 2014, an alliance among the Broad Institute, Amgen and Massachusetts General Hospital formed with the intention to "collect and analyze patient DNA samples to identify and further validate genetic targets." In a 2015 meta-analysis of 938 IBD patients and 953 controls, IBD was significantly associated with higher odds of vitamin D deficiency. Gram-positive bacteria present in the lumen could be associated with extending the time of relapse for ulcerative colitis. Bidirectional pathways between depression and IBD have been suggested, and psychological processes have been demonstrated to influence self-perceived physical and psychological health over time. IBD disease activity may impact quality of life, and over time may significantly affect an individual's mental well-being, which may be related to the increased risk of developing anxiety and/or depression. On the other hand, psychological distress may also influence IBD activity. Higher rates of anxiety and depression are observed among those with IBD compared to healthy individuals, and these correlate with disease severity. Part of this phenotypic correlation is due to a shared genetic overlap between IBD and psychiatric comorbidities. Moreover, anxiety and depression rates increase during active disease compared with inactive phases. Flu vaccines are recommended for people with IBD in the UK; however, research suggests that vaccine uptake is low. Researchers analysed data on 13,631 adults with IBD on immune-suppressing drugs during the 2018–2019 flu season.
Only half of this population received a vaccine during this period, and few (32%) were vaccinated before the flu circulated in the community. This could be due to the belief that flu vaccines cause IBD flares; however, the same study did not find a link between vaccination and IBD flares.

In other species

IBD also occurs in dogs and is thought to arise from a combination of host genetics, the intestinal microenvironment, environmental components and the immune system. There is an ongoing discussion, however, that the term "chronic enteropathy" might be better than "inflammatory bowel disease" in dogs, because the condition differs from IBD in humans in how dogs respond to treatment. For example, many dogs respond to dietary changes alone, whereas humans with IBD often need immunosuppressive treatment. Some dogs may also need immunosuppressant or antibiotic treatment when dietary changes are not enough. After other diseases that can lead to vomiting, diarrhea, and abdominal pain in dogs have been excluded, intestinal biopsies are often performed to investigate what kind of inflammation is occurring (lymphoplasmacytic, eosinophilic, or granulomatous). In dogs, low levels of cobalamin in the blood have been shown to be a risk factor for a negative outcome.
Asian black bear
The Asian black bear (Ursus thibetanus), also known as the Asiatic black bear, moon bear and white-chested bear, is a medium-sized bear species native to Asia that is largely adapted to an arboreal lifestyle. It lives in the Himalayas, southeastern Iran, the northern parts of the Indian subcontinent, Mainland Southeast Asia, the Korean Peninsula, China, the Russian Far East, the islands of Honshū and Shikoku in Japan, and Taiwan. It is listed as vulnerable on the IUCN Red List, and is threatened by deforestation and poaching for its body parts, which are used in traditional medicine.

Taxonomy

Ancestral and sister taxa

Biologically and morphologically, Asian black bears represent the beginning of the arboreal specializations attained by sloth bears and sun bears. Asian black bears have karyotypes nearly identical to those of the five other ursine bears, and, as is typical in the genus, they have 74 chromosomes. From an evolutionary perspective, Asian black bears are the least changed of the Old World bears, with certain scientists arguing that it is likely that all other lineages of ursine bear stem from this species. Scientists have proposed that Asian black bears are either a surviving, albeit modified, form of Ursus etruscus, specifically the early, small variety of the Middle Villafranchian (Upper Pliocene to Lower Pleistocene), or a larger form of Ursus minimus, an extinct species that arose 4,000,000 years ago. With the exception of the age of the bones, it is often difficult to distinguish the remains of Ursus minimus from those of modern Asian black bears. Asian black bears are close relatives of American black bears, with which they share a common European ancestor; the two species are thought to have diverged 3,000,000 years ago, though genetic evidence is inconclusive. The American and Asian black species are considered sister taxa and are more closely related to each other than to the other species of bear. The earliest known specimens of Asian black bears are from the Early Pliocene of Moldova. The earliest American black bear fossils, which were located in Port Kennedy, Pennsylvania, greatly resemble the Asian black species. The first mtDNA study undertaken on Asian black bears suggested that the species arose after the American black bears, while a second study could not statistically resolve the branching order of sloth bears and the two black species, suggesting that these three species underwent a rapid radiation event. A third study suggested that American black bears and Asian black bears diverged as sister taxa after the sloth bear lineage and before the sun bear lineage. Further investigations of the entire mitochondrial cytochrome b sequence indicate that the divergence of continental Asian and Japanese black bear populations might have occurred when bears crossed the land bridge between the Korean peninsula and Japan 500,000 years ago, which is consistent with paleontological evidence.

Subspecies

Until the Late Pleistocene, two further subspecies ranged across Europe and West Asia. These were U. t. mediterraneus from Western Europe and the Caucasus, and U. t. permjak from Eastern Europe, particularly the Ural Mountains.

Hybrids

Asian black bears are reproductively compatible with several other bear species, and have on occasion produced hybrid offspring.
According to Jack Hanna's Monkeys on the Interstate, a bear captured in Sanford, Florida, was thought to have been the offspring of an escaped female Asian black bear and a male American black bear, and Scherren's Some notes on hybrid bears, published in 1907, mentioned a successful mating between an Asian black bear and a sloth bear. In 1975, within Venezuela's "Las Delicias" Zoo, a female Asian black bear shared its enclosure with a male spectacled bear, and produced several hybrid descendants. In 2005, a possible Asian black bear–sun bear hybrid cub was captured in the Mekong River watershed of eastern Cambodia. An Asian black bear/brown bear hybrid, taken from a bile farm, is housed at the Animals Asia Foundation's China Moon Bear Rescue.

Characteristics

The Asian black bear has black fur, a light brown muzzle, and a distinct whitish or creamy patch on the chest, which is sometimes V-shaped. Its ears are bell-shaped, proportionately longer than those of other bears, and stick out sideways from the head. Its tail is short, around long. Adults measure at the shoulder, and in length. Adult males weigh with an average weight of about . Adult females weigh , and large ones up to . The Asian black bear is similar in general build to the brown bear (Ursus arctos), but is lighter and smaller. The lips and nose are larger and more mobile than those of brown bears. The skull of the Asian black bear is relatively small but massive, particularly in the lower jaw. Adult males have skulls measuring in length and in width, while female skulls are long and wide. Compared to other bears of the genus Ursus, the projections of the skull are weakly developed; the sagittal crest is low and short, even in old specimens, and does not exceed 19–20% of the total length of the skull, unlike in the brown bear, which has a sagittal crest comprising up to 41% of the skull's length. Although the species is mostly herbivorous, the jaw structure of Asian black bears is not as specialized for plant eating as that of giant pandas: Asian black bears have much narrower zygomatic arches, and the weight ratio of the two pterygoid muscles is also much smaller in Asian black bears. The lateral slips of the temporal muscles are thicker and stronger in Asian black bears. In contrast to the polar bear, the Asian black bear has a powerful upper body for climbing trees, and relatively weak hind legs, which are shorter than those of the brown bear and American black bear. An Asian black bear with broken hind legs can still climb effectively. It is the most bipedal of all bears, and can walk upright for over . The heel pads on the forefeet are larger than those of most other bear species. Their claws, which are primarily used for climbing and digging, are slightly longer on the fore foot at than the back foot at , and are larger and more hooked than those of the American black bear. On average, adult Asian black bears are slightly smaller than American black bears, though large males can exceed the size of several other bear species. The famed British sportsman known as the "Old Shekarry" wrote of how an Asian black bear he shot in India probably weighed no less than , based on how many people it took to lift its body. The largest Asian black bear on record allegedly weighed . Zoo-kept specimens can weigh up to . Although their senses are more acute than those of brown bears, their eyesight is poor and their hearing range is moderate, the upper limit being 30 kHz.
Distribution and habitat

The Asian black bear once ranged as far west as Western Europe during the Middle Pleistocene and early Late Pleistocene, though it now occurs very patchily throughout its former range, which is limited to Asia. Today, it occurs from southeastern Iran eastward through Afghanistan and Pakistan, across the foothills of the Himalayas in India and Myanmar to mainland Southeast Asia, except Malaysia. Its range in northeastern and southern China is patchy, and it is absent from much of east-central China. Other population clusters exist in the southern Russian Far East and in North Korea. A small remnant population survives in South Korea. It also occurs on the Japanese islands of Honshu and Shikoku, as well as on Taiwan and the Chinese island of Hainan. It typically inhabits deciduous forests, mixed forests and thornbrush forests. In the summer, it usually inhabits altitudes of around in the Himalayas, but rarely above . In winter, it descends to altitudes below . In Japan, it also occurs at sea level. There is no definitive estimate of the number of Asian black bears: Japan posed estimates of 8,000–14,000 bears living on Honshū, though the reliability of this is now doubted. Although their reliability is unclear, rangewide estimates of 5,000–6,000 bears have been presented by Russian biologists. In 2012, the Japanese Ministry of the Environment estimated the population at 15,000–20,000. Rough density estimates without corroborating methodology or data have been made in India and Pakistan, resulting in estimates of 7,000–9,000 in India and 1,000 in Pakistan. Unsubstantiated estimates from China vary between 15,000 and 46,000, with a government estimate of 28,000.

Bangladesh

The Wildlife Trust of Bangladesh conducted a field survey of bears in Bangladesh from 2008 to 2010 that included Asian black bears. The survey was done in 87 different places, mostly in the north-central, northeastern and southeastern areas of Bangladesh that had a historical presence of bears. The survey results indicate that most of these areas still have small, isolated bear populations, mainly of Asian black bears. According to the survey, most of the evidence found relating to bears, including nests, footprints and local sightings, was of Asian black bears. There are many reports on the presence of Asian black bears in the central, north-central, northeastern and southeastern parts of Bangladesh. Although Asian black bears still occur in different parts of Bangladesh, mainly in the Chittagong Hill Tracts, the population is very small. Conservationists fear that the species will soon be extinct in the country if the necessary steps to protect it are not taken in the near future.

China

Three subspecies of the Asian black bear occur in China: the Tibetan subspecies (U. thibetanus thibetanus), the Indochinese subspecies (U. thibetanus mupinensis), and the northeastern subspecies (U. thibetanus ussuricus), which is the only subspecies of bear in northeastern China. Asian black bears are mainly distributed in the conifer forests of the cold and temperate zones of northeast China, the main areas being the Changbai, Zhang Guangcai, Lao Ye, and Lesser Xingan Mountains. Within Liaoning province, there are about 100 Asian black bears, which inhabit only the five counties of Xinbin, Huanren, Benxi, Kuandian, and Fencheng. Within Jilin province, Asian black bears occur mainly in the counties of Hunchun, Dunhua, Wangqing, Antu, Changbai, Fusong, Jiaohe, Huadian, Panshi, and Shulan.
In Heilongjiang province, Asian black bears occur in the counties of Ningan, Bayan, Wuchang, Tonghe, Baoqing, Fuyuan, Yichun, Taoshan, Lanxi, Tieli, Sunwu, Aihui, Dedu, Beian, and Nenjiang. This population has a northern boundary of about 50° N, and its southern boundary, in Fengcheng, is about 40°30′ N.

Korea

By the 1990s, poaching, habitat destruction, and eradication during the Japanese occupation had led to the extirpation of the species from South Korea. In 2004, the South Korean government initiated a reintroduction program in Jiri Mountain National Park. The effort has been successful, with bears now inhabiting the park and dispersing into northern forests. In 2021, the park's bear population appeared to have reached its carrying capacity. In Korea, most Asian black bears live in the broad-leaved forest of the alpine region, at elevations above 1,500 meters, north of Jirisan. As of April 2018, there were 56 bears living in the wild on Jirisan.

Siberia

In Siberia, the Asian black bear's northern range runs from Innokenti Bay on the coast of the Sea of Japan southwest to the elevated areas of Sikhote-Alin, crossing it at the sources of the Samarga River. At this point, the boundary turns north, through the middle courses of the Khor, Anyui and Khungari rivers, and comes to the shore of the Amur, crossing it at the level of the mouth of the Gorin River. Along the Amur river, the species' presence has been noted as far as 51° N. From there, the territorial boundary runs southwest of the river's left bank, passing through the northern part of Lake Bolon and the juncture point of the Kur and Tunguska. Asian black bears are encountered in the Urmi's lower course. Within the Ussuri krai, the species is restricted to broad-leaved Manchurian-type forests.

Taiwan

In Taiwan, the endemic subspecies of the Asian black bear, the Formosan black bear (Ursus thibetanus formosanus), is chiefly confined to the mountain ranges in the central regions of the island. It can be found along the Central and Snow mountain ranges, with populations in the latter being more common. The largest populations of bears seem to be in the Lala Mountain area of the Chatienshan Reserve, the (Snow) Mountain area in Sheipa National Park, and Taroko National Park. Individuals from these populations can be found as far south as the Tawushan Reserve, through Yushan National Park. Typically, they are found in rugged areas at elevations of . The estimated number of individuals in these regions is some 200 to 600 bears.

Behavior and ecology

Asian black bears are diurnal, though they become nocturnal near human habitations. They will walk in a procession from largest to smallest. They are good climbers of rocks and trees, and will climb to feed, rest, sun themselves, elude enemies and hibernate. Some older bears may become too heavy to climb. Half of their life is spent in trees, and they are one of the largest arboreal mammals. In the Ussuri territory in the Russian Far East, Asian black bears can spend up to 15% of their time in trees. Asian black bears break branches and twigs to place under themselves when feeding in trees, thus causing many trees in their home ranges to have nest-like structures on their tops. Asian black bears will rest for short periods in nests in trees standing fifteen feet or higher. Asian black bears do not hibernate over most of their range. They may hibernate in their colder, northern ranges, though some bears simply move to lower elevations. Nearly all pregnant sows hibernate.
Asian black bears prepare their dens for hibernation in mid-October, and will sleep from November until March. Their dens can be dug-out hollow trees (up to 60 feet above ground), caves or holes in the ground, hollow logs, or steep, mountainous and sunny slopes. They may also den in abandoned brown bear dens. Asian black bears tend to den at lower elevations and on less steep slopes than brown bears. Female Asian black bears emerge from dens later than males do, and females with cubs emerge later than barren females. Asian black bears tend to be less mobile than brown bears: with sufficient food, they can remain in an area of roughly , and sometimes even as little as .

Asian black bears have a wide range of vocalizations, including grunts, whines, roars, slurping sounds (sometimes made when feeding) and "an appalling row" when wounded, alarmed or angry. They emit loud hisses when issuing warnings or threats, and scream when fighting. When approaching other bears, they produce "tut tut" sounds, thought to be made by bears snapping their tongue against the roof of their mouth. When courting, they emit clucking sounds.

Reproduction and life cycle
Within Sikhote-Alin, the breeding season of Asian black bears occurs earlier than in brown bears, running from mid-June to mid-August. Birth also occurs earlier, in mid-January. By October, the uterine horns of pregnant females grow to . By late December, the embryos weigh 75 grams. Sows generally have their first litter at the age of three years, and pregnant females generally make up 14% of populations. Like brown bears, Asian black bears have delayed implantation. Sows usually give birth in caves or hollow trees in winter or early spring, after a gestation period of 200–240 days. Cubs weigh 13 ounces at birth, begin walking at four days of age, and open their eyes three days later. The skulls of newborn Asian black bear cubs bear a great resemblance to those of adult sun bears. Litters can consist of 1–4 cubs, with 2 being the average. Cubs have a slow growth rate, reaching only 2.5 kg by May. They nurse for 104–130 weeks and become independent at 24–36 months. There is usually an interval of 2–3 years before females produce a subsequent litter. The average lifespan in the wild is 25 years, while the oldest Asian black bear in captivity died at the age of 44.

Feeding
Asian black bears are omnivorous, and will feed on insects, beetle larvae, invertebrates, termites, grubs, carrion, bees, eggs, garbage, mushrooms, grasses, bark, roots, tubers, fruits, nuts, seeds, honey, herbs, acorns, cherries, dogwood, and grain. Although herbivorous to a greater degree than brown bears, and more carnivorous than American black bears, Asian black bears are not as specialized in their diet as giant pandas: while giant pandas depend on a constant supply of low-calorie yet abundant foodstuffs, Asian black bears are more opportunistic and have opted for a nutritional boom-or-bust economy. They thus gorge themselves on a variety of seasonal high-calorie foods, storing the excess calories as fat, and then hibernate during times of scarcity. Asian black bears eat pine nuts and acorns of the previous year in the April–May period. In times of scarcity, they enter river valleys to gain access to hazelnuts and insect larvae in rotting logs. From mid-May through late June, they supplement their diet with green vegetation and fruit.
Through July to September, they climb trees to eat bird cherries, pine cones, vines and grapes. On rare occasions they eat dead fish during the spawning season, though this constitutes a much smaller portion of their diet than in brown bears. In the 1970s, Asian black bears were reported to kill and eat Hanuman langurs in Nepal. They appear to be more carnivorous than most other bears, including American black bears, and will kill ungulates with some regularity, including domestic livestock. Wild ungulate prey can include muntjacs, serow, takin, Malayan tapir, wild boar and adult water buffaloes, which they kill by breaking their necks.

Interspecific predatory relationships
The Asian black bear's range overlaps with that of the sloth bear in central and southern India, the sun bear in Southeast Asia and the brown bear in the southern part of the Russian Far East. Asian black bears seem to intimidate Himalayan brown bears in direct encounters; the brown bears eat the fruit dropped from trees by Asian black bears, as they themselves are too large and cumbersome to climb. Ussuri brown bears, however, may attack Asian black bears. Asian black bears are occasionally attacked by tigers and brown bears, while leopards are known to prey on bear cubs younger than two years old. Packs of wolves and Eurasian lynxes are potential predators of bear cubs as well. Asian black bears usually dominate Amur leopards in physical confrontations in heavily vegetated areas, while leopards have the advantage in open areas, though the outcome of such encounters depends largely on the size of the individual animals.

Tigers occasionally attack and consume Asian black bears: Russian hunters have found bear remains in tiger scats, as well as Asian black bear carcasses showing evidence of tiger predation. To escape tigers, Asian black bears rush up a tree and wait for the tiger to leave, though some tigers will pretend to leave and wait for the bear to descend. Tigers prey foremost on young bears, and Asian black bears are usually safe from tiger attacks once they reach five years of age. Some are very tenacious when attacked: Jim Corbett observed a fight between a tiger and the largest Asian black bear he had ever seen, in which the bear managed to chase off the tiger despite having half its nose and scalp torn off. He twice saw Asian black bears carry off tiger kills when the tiger was absent. One fatal attack by a tiger on a juvenile Asian black bear has been recorded in Jigme Dorji National Park, and one Siberian tiger was reported to have lured an Asian black bear by imitating its mating call. However, Asian black bears are probably less vulnerable to tiger attacks than brown bears, due to their habit of living in hollows or among close-set rocks.

Conservation
The Asian black bear is listed as a protected animal in China's National Protection Wildlife Law, which stipulates that anyone hunting or catching bears without permits will be subject to severe punishment. Although the Asian black bear is protected in India, being listed as vulnerable in the Red Data Book, in Appendix I of CITES, and in Schedule I of the Indian Wildlife (Protection) Act and its 1991 amendment, it has been difficult to prosecute those accused of poaching Asian black bears, due to a lack of witnesses and a lack of wildlife forensic laboratories able to verify the origin of confiscated animal parts or products.
Moreover, India's long borders with other nations such as Pakistan, Tibet, China, Nepal, Bhutan, Bangladesh and Myanmar, often running through mountainous terrain, are difficult to police.

Five Asian black bear populations, occurring in the Kyushu, Shikoku, West-Chugoku, East-Chugoku and Kii areas, were listed as endangered by the Environmental Agency in the Japanese Red Data Book in 1991, and small isolated populations in the Tanzawa and Shimokita areas of mainland Honshū were listed as endangered in 1995. Beyond recognizing these populations as endangered, however, there is still a lack of efficient conservation methods for Japanese black bears.

Asian black bears occur as an infrequent species in the Red Data Book of Russia, and thus fall under special protection; hunting them is prohibited. There is currently a strong movement to legalize the hunting of Russian black bears, which is supported by most of the local scientific community.

On January 30, 1989, Taiwan's Formosan black bears were listed as an endangered species under the Natural and Cultural Heritage Act, and were later listed as a Conserved Species Category I. The Vietnamese government issued Decision 276/QD (1989), which prohibits the hunting and exporting of Asian black bears, and the Red Book of Vietnam lists Vietnamese black bears as endangered. The Korean government designated the Asian black bear as Natural Monument No. 329 and considers it to be facing an extinction crisis. At present, the Endangered Species Restoration Center of the Korea National Park Service is carrying out a species restoration program.

Threats
The main habitat threat to Asian black bears is the overcutting of forests, mainly due to human populations increasing to over 430,000 in the regions where bears are distributed in the Shaanxi, Gansu, and Sichuan provinces. Twenty-seven forestry enterprises were built in these areas between 1950 and 1985 (excluding the lumbering units belonging to the counties). By the early 1990s, the Asian black bear's distribution area had been reduced to only one-fifth of the area that existed before the 1940s, and isolated bear populations face environmental and genetic stress in these circumstances. However, one of the most important reasons for their decline is overhunting, as Asian black bear paws, gall bladders and cubs have great economic value. Asian black bear harvests are maintained at a high level due to the harm the bears cause to crops, orchards and bee farms. During the 1950s and 1960s, 1,000 Asian black bears were harvested annually in Heilongjiang Province, but the number of furs purchased yearly fell by four-fifths, in some years nine-tenths, from the late 1970s to the early 1980s. Asian black bear numbers have also been declining annually in Dehong Dai and Jingpo Autonomous Prefecture and in Yunnan Province.

Poaching for gall bladders and skins is the main threat faced by Asian black bears in India. Although the poaching of Asian black bears is well known throughout Japan, authorities have done little to remedy the situation; the killing of nuisance bears is practiced year-round, and harvest numbers have been on the increase. Box traps have been widely used since 1970 to capture nuisance bears. It is estimated that the number of shot bears will decrease over time, due to the decline of the old traditional hunters and the rise of a younger generation less inclined to hunt. Logging is also considered a threat.
Although Asian black bears have been afforded protection in Russia since 1983, illegal poaching, fueled by a growing demand for bear parts in the Asian market, is still a major threat to the Russian population. Many workers of Chinese and Korean origin, supposedly employed in the timber industry, are actually involved in the illegal trade, and some Russian sailors reportedly purchase bear parts from local hunters to sell to Japanese and Southeast Asian clients. Russia's rapidly growing timber industry has been a serious threat to the Asian black bear's home range for three decades: the cutting of trees containing cavities deprives the bears of their main source of dens and forces them to den on the ground or among rocks, making them more vulnerable to tigers, brown bears and hunters.

In Taiwan, Asian black bears are not actively pursued, though steel traps set out for wild boars have been responsible for unintentional bear trappings. Timber harvesting has largely ceased to be a major threat to Taiwan's Asian black bear population, though a new policy concerning the transfer of ownership of hill land from the government to private interests has the potential to affect some lowland habitat, particularly in the eastern part of the country. The building of new cross-island highways through bear habitat is also potentially threatening.

Vietnamese black bear populations have declined rapidly due to the pressures of human population growth and unstable settlement. Vietnamese forests have been shrinking: of the of natural forests, about disappear every year. Hunting pressures have also increased alongside a decline in environmental awareness.

South Korea remains one of two countries to allow bear bile farming to continue legally. As reported in 2009, approximately 1,374 Asian black bears resided in an estimated 74 bear farms, where they are kept for slaughter to fuel the demands of traditional Asian medicine. In sharp contrast, fewer than 20 Asian black bears could be found at the Jirisan Restoration Center, located in Korea's Jirisan National Park.

Relationships with humans
In folklore and literature
In Japanese culture, the Asian black bear is traditionally associated with the mountain spirit (yama no kami) and is characterized variously as "mountain man", "mountain uncle", "mountain father", a loving mother, and a child. Being a largely solitary creature, the Asian black bear is also viewed as a "lonely person" (sabishigariya). Asian black bears feature very little in lowland Japanese folklore, but are prominent in upland Japan, a fact thought to reflect the bear's greater economic value in upland areas. According to local folklore in Kituarahara-gun in Niigata, the Asian black bear received its white mark after being given a silk-wrapped amulet by , which left the mark after being removed.

In Hindu mythology, the bear Jambavantha (also known as Jambavan or Jamvanta) is believed to have lived from the Treta Yuga to the Dvapara Yuga. In the epic Ramayana, Jambavantha assists Rama in finding his wife Sita and battling her abductor, Ravana.

The Asian black bear in Thailand is called mi khwai, meaning "buffalo bear", a reference not to its appearance but to a v-shaped patch of fur under its neck similar to that of a buffalo. There is also a Thai idiom, bon meuan mi kin pheung, literally "grumbling like a bear eating honey", referring to people who grumble and mumble to express dissatisfaction.
This idiom comes from the behavior of this species when it climbs trees in search of honey and young bees: while eating honey, it makes a murmuring sound. Asian black bears are briefly mentioned in Yann Martel's novel Life of Pi, in which the protagonist's father describes them as being among the most dangerous animals in his zoo.

Attacks on humans
Although usually shy and cautious animals, Asian black bears are more aggressive towards humans than the brown bears of Eurasia and American black bears. David W. Macdonald theorizes that this greater aggression is an adaptation to being sympatric with tigers. According to Brigadier General R. G. Burton:

In response to a chapter on Asian black bears written by Robert Armitage Sterndale in his Natural History of the Mammalia of India and Ceylon, which argued that Asian black bears were no more dangerous than other animals in India, a reader responded with a letter to The Asian on May 11, 1880:

At the turn of the 20th century, a hospital in Srinagar, Kashmir received dozens of Asian black bear victims annually. When Asian black bears attack humans, they rear up on their hind legs and knock victims over with their front paws, then bite them on an arm or leg and snap at the victim's head, this being the most dangerous part of the attack. Asian black bear attacks have been increasing in Kashmir since the start of the Kashmir conflict. In November 2009, in the Kulgam district of Indian-administered Kashmir, an Asian black bear attacked four insurgents after discovering them in its den, killing two of them. In India, attacks on humans have been increasing yearly, occurring largely in the northwestern and western Himalayan region. In the Chamba District of Himachal Pradesh, the number of Asian black bear attacks on humans gradually increased from 10 in 1988–89 to 21 in 1991–92. There are no records of predation on humans by Asian black bears in Russia, and no conflicts have been documented in Taiwan. Asian black bear attacks on humans were reported from Junbesi in Langtang National Park, Nepal in 2005, occurring in villages as well as in the surrounding forest.

Nine people were killed by Asian black bears in Japan between 1979 and 1989. In September 2009, an Asian black bear attacked a group of nine tourists at a bus station in the built-up area of Takayama, Gifu, seriously injuring four of them. The majority of attacks tend to occur when Asian black bears are encountered suddenly and at close quarters; because of this, Asian black bears are generally considered more dangerous than brown bears, which live in more open spaces and are thus less likely to be surprised by approaching humans. They are also likely to attack when protecting food. 2016 saw several attacks by Asian black bears in Japan: in May and June, four people were killed in Akita Prefecture while picking bamboo shoots, and in August, a female safari park worker in Gunma Prefecture was killed when an Asian black bear climbed into her car and attacked her.

Livestock predation and crop damage
In the past, the farmers of the Himalayan lowlands feared Asian black bears more than any other pest, and would erect platforms in the fields where watchmen were posted at night to beat drums and frighten off any interlopers. However, some Asian black bears grew accustomed to the sound and encroached anyway. Of 1,375 livestock kills examined in Bhutan, Asian black bears accounted for 8% of attacks.
Livestock predation overall was greatest in the summer and autumn periods, which corresponded with a peak in cropping agriculture: livestock are turned out to pasture and forest during the cropping season and, consequently, are less well guarded than at other times. The number of livestock killed by Asian black bears in Himachal Pradesh, India increased from 29 in 1988–1989 to 45 in 1992–1993.

In the remoter areas of Japan, Asian black bears can be serious crop predators: the bears feed on cultivated bamboo shoots in spring; on plums, watermelons and corn in the summer; and on persimmons, sweet potatoes and rice in the autumn. Japanese black bears are estimated to damage 3,000 beehives annually. When feeding on large crops such as watermelons or pumpkins, Asian black bears ignore the flesh and eat the seeds, thus adversely affecting future harvests. Asian black bears can also girdle and kill trees by stripping their bark for the sap, which can cause serious economic problems in Asia's valuable timber forests. By the late 1970s, 400–1,200 hectares of land had been affected by Asian black bears stripping the bark of Japanese conifers, and there is evidence that 70-year-old conifers (commanding the highest market values) may also have been bark-stripped. Asian black bears will prey on livestock if their natural food is in poor supply, and have been known to attack bullocks, either killing them outright or eating them alive.

Tameability and trainability
Along with sun bears, Asian black bears are the species most typically used in areas where bears are kept either for performances or as pets. Asian black bears have an outstanding learning ability in captivity, and are among the most common species used in circus acts. According to Gary Brown:

Asian black bears are easily tamed, and can be fed with rice, maize, sweet potatoes, cassavas, pumpkins, ripe fruit, animal fat and sweet foods. Keeping captive Asian black bears is popular in China, especially due to the belief that milking the bear's gall bladder leads to quick prosperity. Asian black bears are also popular as pets in Vietnam.

Hunting and exploitation
Hunting
According to The Great and Small Game of India, Burma, and Tibet, regarding the hunting of Asian black bears in British India:

The book also describes a second method of black bear hunting, involving the beating of small patches of forest, from which the bears march out in single file. However, black bears were rarely hunted for sport, because of the poor quality of their fur and the ease with which they could be shot in trees or stalked, their hearing being poor. Although easy to track and shoot, Asian black bears were known by British sportsmen to be extremely dangerous when injured; Brigadier General R. G. Burton wrote of how many sportsmen had been killed by Asian black bears after failing to make direct hits. Today, Asian black bears are legally hunted for sport only in Japan and Russia. In Russia, 75–100 Asian black bears are legally harvested annually, though around 500 a year are reportedly harvested illegally.

After the introduction of Buddhism in Japan, which prohibited the killing of animals, the Japanese compromised by devising different strategies for hunting bears. Some communities, such as the inhabitants of the Kiso area in Nagano Prefecture, prohibited the practice altogether, while others developed rituals intended to placate the spirits of killed bears. In some Japanese hunting communities, Asian black bears lacking the white chest mark are considered sacred.
In Akita Prefecture, bears lacking the mark were known by matagi huntsmen by names meaning "all-black" or "black-chested", and were also considered messengers of the mountain spirit. If such a bear was shot, the huntsman would offer it to the spirit and give up hunting from that time on. Similar beliefs were held in Nagano, where completely black Asian black bears were termed "cat-bears". Matagi communities believed that killing an Asian black bear in the mountains would result in a bad storm, a belief linked to the notion that bear spirits could affect the weather. The matagi would generally hunt Asian black bears in spring or from late autumn to early winter, before the bears hibernated. In mountain regions, Asian black bears were hunted by driving them uphill to a waiting hunter, who would then shoot them. Bear hunting expeditions were preceded by rituals and could last up to two weeks. After killing a bear, the matagi would pray for the bear's soul. Asian black bear hunts in Japan are often given a name meaning "bear conquest", a term otherwise used in Japanese folklore to describe the slaying of monsters and demons.

Traditionally, the Atayal, Taroko, and Bunun people of Taiwan consider Asian black bears to be almost human in their behavior, so the unjust killing of a bear is equated with murder and is believed to cause misfortunes such as disease, death, or crop failure. The Bunun people call the Asian black bear Aguman or Duman, meaning "devil". Traditionally, a Bunun hunter who accidentally trapped an Asian black bear had to build a cottage in the mountains and cremate the bear within it; the hunter then had to stay in the cottage alone, away from the village, until the end of the millet harvest, as it was believed that the killing of an Asian black bear would cause the millet crop to burn black. In the Tungpu area, Asian black bears are considered animals of the "third category": animals with the most remote relationship to humans, whose activity is restricted to areas outside human settlements. Therefore, when Asian black bears encroach upon human settlements, they are considered ill omens, and the community may either destroy the trespassing bears or settle somewhere else. The Rukai and Paiwan people are permitted to hunt Asian black bears, though they believe that doing so will curse the hunters involved: Rukai people believe that hunting Asian black bears can result in disease. Children are forbidden from eating bear meat, which itself may not be taken into homes.

Products
Asian black bears have been hunted for their body parts in China since the Stone Age. In the 19th century, their fur was considered of low value; grease was the only practical use for their carcasses in British India, and bears living near villages were considered ideal, as they were almost invariably fatter than forest-dwelling ones. In the former USSR, the Asian black bear yielded fur, meat and fat of greater quality than those of the brown bear. Today, bile is in demand, as it supposedly cures various diseases, treats the accumulation of blood below the skin, and counters toxic effects. Products also include bone "glue" and fat, both used in traditional medicine and consumed as a tonic. Asian black bear meat is also eaten.
Controlled burn
A controlled burn or prescribed burn (Rx burn) is the practice of intentionally setting a fire to change the assemblage of vegetation and decaying material in a landscape. The purpose may be forest management, ecological restoration, land clearing or wildfire fuel management. A controlled burn may also refer to the intentional burning of slash and fuels in burn piles. Controlled burns are also referred to as hazard reduction burning, backfire, swailing or a burn-off. In industrialized countries, controlled burning regulations and permits are usually overseen by fire control authorities. Controlled burns are conducted during the cooler months to reduce fuel buildup and decrease the likelihood of more dangerous, hotter fires.

Controlled burning stimulates the germination of some trees and exposes soil mineral layers, which increases seedling vitality. In grasslands, controlled burns shift the species assemblage towards primarily native grassland species. Some seeds, such as those of lodgepole pine, sequoia and many chaparral shrubs, are pyriscent, meaning heat from fire causes the cone or woody husk to open and disperse seeds.

Fire is a natural part of both forest and grassland ecology and has been used by indigenous people across the world for millennia to promote biodiversity and cultivate wild crops, as in the fire-stick farming of Aboriginal Australians. Colonial law in North America and Australia displaced indigenous people from lands that had been managed with fire and prohibited them from conducting traditional controlled burns. After wildfires began increasing in scale and intensity in the 20th century, fire control authorities began reintroducing controlled burns and indigenous leadership into land management.

Uses
Forestry
Controlled burning reduces fuels, improves wildlife habitat, controls competing vegetation, helps control tree disease and pests, perpetuates fire-dependent species and improves accessibility. To improve the application of prescribed burns for conservation goals, which may involve mimicking historical or natural fire regimes, scientists assess the impact of variation in fire attributes; the parameters measured are fire frequency, intensity, severity, patchiness, spatial scale and phenology. Controlled fire can also be used for site preparation where terrain prevents equipment access and mechanized treatments are not possible. Species variation and competition can increase drastically a few years after fuel treatments, because of the increase in soil nutrients and the availability of space and sunlight.

Many trees depend on fire as a way to clear out other plant species and release their seeds. The giant sequoia, among other fire-adapted conifer species, depends on fire to reproduce: its cones are pyriscent, opening only after exposure to a certain temperature, and the fire that opens them also clears out non-fire-adapted competitors, reducing competition for the seedlings. Pyriscent species benefit from moderate-intensity fires in older stands; however, climate change is causing more frequent high-intensity fires in North America. Controlled burns can manage the fire cycle and the intensity of regenerating fires in forests with pyriscent species, such as the boreal forest of Canada. Eucalyptus regnans, the mountain ash of Australia, also shows a distinctive adaptation to fire, quickly replacing damaged buds or stems, and it carries its seeds in capsules which can be deposited at any time of the year.
During a wildfire, the capsules drop nearly all of their seeds and the fire consumes the adult eucalypts, but most of the seeds survive, using the ash as a source of nutrients. At their rate of growth, the seedlings quickly dominate the land and a new, even-aged eucalyptus forest grows. Other tree species, such as poplar, can easily regenerate after a fire into an even-aged stand from a vast root system that is protected from fire because it is underground.

Grassland restoration
Native grassland species in North America and Australia are adapted to survive occasional low-intensity fires. Controlled burns in prairie ecosystems mimic low-intensity fires that shift the composition of plants from non-native species to native species. These controlled burns take place in the early spring, before native plants begin actively growing, when soil moisture is higher and the fuel load on the ground is low, to ensure that the burn remains of low intensity.

Wildfire management
Controlled burns reduce the amount of understory fuel, so when a wildfire enters the area, a previously burned site can reduce the intensity of the fire or stop it from crossing the area entirely. A controlled burn conducted before the wildfire season can protect infrastructure and communities, or mitigate the risks associated with large numbers of dead standing trees, such as after a pest infestation, when forest fuels are high.

Agriculture
In the developing world, the use of controlled burns in agriculture is often referred to as slash and burn. In industrialized nations, it is seen as one component of shifting cultivation, as part of field preparation for planting. Often called field burning, this technique is used to clear the land of any existing crop residue as well as to kill weeds and weed seeds. Field burning is less expensive than most other methods, such as herbicides or tillage, but because it produces smoke and other fire-related pollutants, its use is unpopular in agricultural areas bounded by residential housing. Prescribed fires are also broadly used in the context of woody plant encroachment, with the aim of improving the balance of woody plants and grasses in shrublands and grasslands. In northern India, especially in Punjab, Haryana, and Uttar Pradesh, unregulated burning of agricultural waste is a major problem: smoke from these fires degrades environmental quality in these states and the surrounding area. In East Africa, bird densities have been found to increase in the months after controlled burning.

Greenhouse gas abatement
Controlled burns on Australian savannas can result in a long-term cumulative reduction in greenhouse gas emissions. One working example is the West Arnhem Fire Management Agreement, started to bring "strategic fire management across of Western Arnhem Land" to partially offset greenhouse gas emissions from a liquefied natural gas plant in Darwin, Australia. Deliberately starting controlled burns early in the dry season results in a mosaic of burnt and unburnt country, which reduces the area burned by stronger, late dry season fires; this is also known as "patch burning".

Procedure
Health and safety, protecting personnel, preventing the fire from escaping and reducing the impact of smoke are the most important considerations when planning a controlled burn. While the most common driver of fuel treatment is the prevention of loss of human life and structures, certain parameters can also be varied to promote biodiversity and to rearrange the age of a stand or the assemblage of species.
To minimize the impact of smoke, burning should be restricted to daylight hours whenever possible. Furthermore, in temperate climates, it is important to burn grasslands and prairies before native species begin growing for the season, so that only non-native species, which send up shoots earlier in the spring, are affected by the fire.

Ground ignition
Back burning, or a back fire, is the process of lighting vegetation in such a way that it has to burn against the prevailing wind. This produces a slower-moving and more controllable fire. Controlled burns use back burning during planned fire events to create a "black line" through which fire cannot burn. Back burning or backfiring is also done to stop a wildfire that is already in progress. Firebreaks are also used as anchor points from which to start a line of fires along natural or man-made features such as a river, a road or a bulldozed clearing. Head fires, which burn with the prevailing wind, are used between two firebreaks because they burn more intensely and move faster than a back burn; they are used when a back burn would move too slowly through the fuel, either because the fuel moisture is high or because the wind speed is low. Another way to increase the speed of a burn is to use a flank fire, which is lit at right angles to the prevailing wind and spreads in the same direction.

Grassland or prairie burning
In Ontario, Canada, controlled burns are regulated by the Ministry of Natural Resources, and only trained personnel can plan and ignite controlled burns within Ontario's fire regions or wherever the Ministry is involved in any aspect of planning a controlled burn. The team performing the prescribed burn is divided into several roles: the Burn Boss, Communications, Suppression and Ignition. The planning process begins by submitting an application to a local fire management office; after approval, applicants must submit a burn plan several weeks prior to ignition. On the day of the controlled burn, personnel meet with the Burn Boss and discuss the tactics to be used for ignition and suppression, health and safety precautions, fuel moisture levels and the weather (wind direction, wind speed, temperature and precipitation) for the day. On site, local fire control authorities are notified by telephone about the controlled burn while the rest of the team fill drip torches with pre-mixed fuel, fill suppression packs with water and put up barricades and signage to prevent pedestrian access to the burn. Drip torches are canisters filled with fuel, with a wick at the end, that are used to ignite the lines of fire. Safe zones are established so that personnel know where the fire cannot cross, whether because of natural barriers such as bodies of water or human-made barriers such as tilled earth.

During ignition, the Burn Boss relays information about the fire (flame length, flame height and the percentage of ground that has been blackened) to the Communications Officer, who documents it. The Communications Officer in turn relays information about the wind speed and wind direction, so the Burn Boss can anticipate the direction of both flames and smoke and plan the lines of fire accordingly. Once the ignition phase has ended in a section, the suppression team "mops up", using suppression packs to extinguish smoldering material. Other tools used for suppression are RTVs equipped with a water tank, and a pump and hose set up in a nearby body of water. Finally, once the mop-up has finished, the Burn Boss declares the controlled burn over and local fire authorities are notified.
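The burn-day briefing described above amounts to checking a handful of measurable conditions against a pre-agreed prescription window. As a minimal sketch of that kind of go/no-go check (an illustration only, not any agency's actual procedure; the threshold values and field names here are hypothetical placeholders, since real burn plans specify their own site-specific limits):

```python
# Hypothetical go/no-go check for a prescribed burn day.
# All threshold values are illustrative placeholders, not agency guidance.
from dataclasses import dataclass

@dataclass
class BurnDayConditions:
    wind_speed_kmh: float        # sustained wind speed
    wind_direction_deg: float    # direction the wind is blowing from
    temperature_c: float
    precip_last_24h_mm: float
    fuel_moisture_pct: float

def smoke_would_reach_sensitive_area(wind_direction_deg: float) -> bool:
    # Placeholder: a real plan maps wind direction against the nearby
    # roads, homes and hospitals identified during planning.
    return 90 <= wind_direction_deg <= 180

def within_prescription(c: BurnDayConditions) -> tuple[bool, list[str]]:
    """Return (go, reasons) for a burn-day decision."""
    problems = []
    if not (5 <= c.wind_speed_kmh <= 20):
        problems.append("wind speed outside prescription window")
    if smoke_would_reach_sensitive_area(c.wind_direction_deg):
        problems.append("wind would carry smoke toward a sensitive area")
    if c.temperature_c > 30:
        problems.append("temperature too high for a low-intensity burn")
    if c.precip_last_24h_mm > 10:
        problems.append("fuels too wet to carry fire")
    if not (8 <= c.fuel_moisture_pct <= 20):
        problems.append("fuel moisture outside prescription window")
    return (len(problems) == 0, problems)

go, reasons = within_prescription(BurnDayConditions(12, 45, 22, 0, 12))
print("ignite" if go else f"postpone: {reasons}")
```

In practice, the prescription window is fixed during the planning stage, and the day-of-burn measurements simply decide whether ignition proceeds or is postponed.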
Slash pile burning
There are several different methods of burning the piles of slash left by forestry operations. Broadcast burning is the burning of scattered slash over a wide area; pile burning is gathering the slash into piles before burning, and these burning piles may be referred to as bonfires. High temperatures can harm the soil, damaging it physically or chemically, or sterilizing it. Broadcast burns have lower temperatures and do not harm the soil as much as pile burning does, though steps can be taken to treat the soil after a burn. In lop-and-scatter burning, slash is left to compact over time, or is compacted with machinery; this produces a lower-intensity fire, as long as the slash is not packed too tightly. The risk of fatal fires stemming from burning slash can also be reduced by proactively reducing ground fuels before they can create a fuel ladder and feed an active crown fire. Predictions show that thinned forests experience lower fire intensity and shorter flame lengths during forest fires than untouched or fire-proofed areas.

Aerial ignition
Aerial ignition is a type of controlled burn in which incendiary devices are released from aircraft.

History
There are two basic causes of wildfires: natural causes, mainly lightning, and human activity. Controlled burns have a long history in wildland management; fire has been used by humans to clear land since the Neolithic period. Fire history studies have documented regular wildland fires ignited by indigenous peoples in North America and Australia prior to the establishment of colonial law and fire suppression. Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife in forests and grasslands, starting low-intensity fires that released nutrients for plants, reduced competition for cultivated species, and consumed excess flammable material that would otherwise eventually fuel high-intensity, catastrophic fires.

North America
The use of controlled burns in North America largely ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. Since 1995, the US Forest Service has slowly incorporated burning practices into its forest management policies. Fire suppression has changed the composition and ecology of North American habitats, including highly fire-dependent ecosystems such as oak savannas and canebrakes, which are now critically endangered habitats on the brink of disappearance. In the Eastern United States, fire-sensitive trees such as the red maple are increasing in number at the expense of fire-tolerant species such as oaks.

Canada
In the Anishinaabeg Nation around the Great Lakes, fire is a living being with the power to change landscapes through both destruction and the regrowth and return of life following a fire; human beings, in turn, are inexorably tied to the land they live on as stewards who maintain the ecosystems around them. Because fire can reveal dormant seedlings, it is a land management tool. Fire was a part of the landscapes of Ontario until early colonial rule restricted indigenous culture across Canada. During colonization, large-scale forest fires were caused by sparks from railroads, and fire was used to clear land for agricultural use. The public perception of forest fires was positive, because to an urban populace the cleared land represented the taming of the wilderness.
The conservation movement, spearheaded in Ontario by Edmund Zavitz, led to a ban on all fires, both natural wildfires and intentional fires. In the 1970s, Parks Canada began implementing small prescribed burns; however, the scale of wildfires each year outpaces the acreage of land that is intentionally burnt. In the late 1980s, the Ministry of Natural Resources in Ontario began conducting prescribed burns on forested land, which led to the creation of a prescribed burn program as well as training and regulation for controlled burns in Ontario. In British Columbia, the intensity and scale of wildfires increased after local bylaws restricted the use of controlled burns. In 2017, following one of the worst years for wildfire in the province's history, indigenous leaders and public service members wrote an independent report that recommended returning to the traditional use of prescribed burns to manage understory fuel. The government of British Columbia responded by committing to using controlled burns as a wildfire management tool.

United States
The Oregon Department of Environmental Quality began requiring a permit for farmers to burn their fields in 1981, but the requirements became stricter in 1988 after smoke from field burning near Albany, Oregon, obscured the vision of drivers on Interstate 5, leading to a 23-car collision in which 7 people died and 37 were injured. This resulted in more scrutiny of field burning and proposals to ban the practice in the state altogether. With controlled burns, there is also a risk that the fires get out of control. For example, the Calf Canyon/Hermits Peak Fire, the largest wildfire in the history of New Mexico, was started by two separate controlled burns, both set by the US Forest Service, which escaped control and merged.

The conflict over controlled burn policy in the United States has its roots in historical campaigns to combat wildfires and in the eventual acceptance of fire as a necessary ecological phenomenon. Following the colonization of North America, the US used fire suppression laws to eradicate the indigenous practice of prescribed fire, against scientific evidence that supported prescribed burns as a natural process. At a cost to the local environment, colonies used fire suppression to benefit the logging industry. The notion of fire as a tool had somewhat evolved by the late 1970s, when the National Park Service authorized and administered controlled burns. Following the reintroduction of prescribed fire, the Yellowstone fires of 1988 occurred, significantly politicizing fire management. The ensuing media coverage was a spectacle vulnerable to misinformation: reports drastically inflated the scale of the fires, disposing politicians in Wyoming, Idaho, and Montana to believe that all fires represented a loss of tourism revenue. Paramount in the resulting action plans is the suppression of fires that threaten human life, with leniency toward areas of historic, scientific, or special ecological interest. There is still debate among policy makers about how to deal with wildfires: Senators Ron Wyden of Oregon and Mike Crapo of Idaho have been moving to reduce the shifting of capital from fire prevention to fire suppression following the harsh fires of 2017 in both states. Tensions around fire prevention continue to rise with the increasing prevalence of climate change.
As drought conditions worsen, North America has been facing an abundance of destructive wildfires. Since 1988, many states have made progress toward controlled burns; in 2021, California increased the number of personnel trained to perform controlled burns and made the practice more accessible to landowners.

Europe
In the European Union, burning crop stubble after harvest is used by farmers for plant health reasons, under several restrictions in cross-compliance regulations. In the north of Great Britain, large areas of grouse moor are managed by burning in a practice known as muirburn. This kills trees and grasses, preventing natural succession, and generates the mosaic of ling (heather) of different ages that allows very large populations of red grouse to be reared for shooting. The peatlands are some of the largest carbon sinks in the UK, providing an immensely important ecological service. The government has restricted burning in the area, but hunters have continued to set the moors ablaze, releasing large amounts of carbon into the atmosphere and destroying native habitat.

Africa
The Maasai ethnic group conduct traditional burning in savanna ecosystems before the rainy season, to provide varied grazing land for livestock and to prevent larger fires when the grass is drier and the weather is hotter. In the past few decades, the practice of burning savanna has decreased, because rain has become inadequate and unpredictable, large accidental fires have become more frequent, and Tanzanian government policies discourage the burning of savanna.
Steamroller
A steamroller (or steam roller) is a form of road roller – a type of heavy construction machinery used for leveling surfaces, such as roads or airfields – that is powered by a steam engine. The leveling/flattening action is achieved through a combination of the size and weight of the vehicle and the rolls: the smooth wheels and the large cylinder or drum fitted in place of treaded road wheels.

The majority of steam rollers are outwardly similar to traction engines, as many traction engine manufacturers later produced rollers based on their existing designs, and the patents owned by certain roller manufacturers tended to influence the general arrangements used by others. The key difference between the two vehicles is that on a roller the main roll replaces the front wheels and axle that would be fitted to a traction engine, and the driving wheels are smooth-tired. The word steamroller frequently refers to road rollers in general, regardless of the method of propulsion.

History
Before about 1850, the word steamroller meant a fixed machine for rolling and curving steel plates for boilers and ships. From then on, it also meant a mobile device for flattening ground. An early steamroller was patented by Louis Lemoine in France in 1859 and demonstrated some time before February 1861. In Britain, a 30-ton steamroller was designed in 1863 by William Clark and his partner W. F. Batho; having failed to impress the British municipal road authorities, it was transferred to Kolkata, where it continued to work.

The company Aveling and Porter was the first to sell the product commercially, and it subsequently became the largest manufacturer in Britain. In 1866, the company produced a prototype with rollers fitted to the rear of a standard 12 nominal horsepower traction engine. This experimental machine was described by local papers as 'the world's first steamroller' and caused a public spectacle. In 1867, the steam road roller was patented and the company began production of the first practical steam roller; the new machine's rollers were mounted at the front instead of the back, and it weighed in excess of 30 tons. It was tested on the Military Road in Chatham, on Star Hill in Rochester and in Hyde Park, London, and proved a huge success. Within a year, steamrollers were being exported around the world, including to France, India and the United States. A New York City chief engineer said of one of these that "in one day's rolling at a cost of 10 dollars, as much work was accomplished as in two days' rolling with a 7 ton roller drawn by eight horses at a cost of 20 dollars a day." The heavier rollers were found hard to handle, and the weight of the machines was reduced to around 10 tons.

Aveling and Porter refined their product continuously over the following decades, introducing fully steerable front rollers and compound steam engines at the 1881 Royal Agricultural Show. The move to asphalt for road construction created demand for steamrollers that could reverse rapidly, so that they could roll the tar while it was still hot; machines that could do this were introduced in the first decade of the 20th century. Production ended around 1950.

Configurations
The majority of rollers were of the same basic 3-roll configuration, gear-driven, with two large smooth wheels (rolls) at the back and a single wide roll at the front (in actuality, the wide roll usually consisted of two narrower rolls on the same axle, to make steering easier).
However, there was also a distinctive variant, the "tandem", which had two wide rolls, one at the front and one at the rear. Those made by Robey & Co used their standard steam wagon engine and pistol boiler fitted in a girder frame with rolls and a chain drive, producing a quick-reversing roller suitable for modern road surfaces such as tarmacadam and bituminous asphalt. A number of Robey & Co tandem rollers were modified to make a further variant, the tri-tandem, which was a tandem with a third roll mounted directly behind the rear one. Robey supplied the parts, but the modification was undertaken by Goodes of Royston. Ten tandem and two tri-tandem Robey rollers survive in preservation, and one of the tri-tandems is known to have been used to construct parts of the M1 motorway.

A variation of the basic configuration was the "convertible": an engine which could be either a steam roller or a traction engine, and could be changed from one form to the other in a relatively short time – less than half a day. Convertible engines were liked by local authorities, since the same machine could be used for haulage in the winter and road-mending in the summer.

Design features
Although most steam roller designs are derived from traction engines, and were manufactured by the same companies, a number of features set them apart.

Wheels
The most obvious difference is in the wheels. Traction engines were generally built with large fabricated spoked steel wheels with wide rims. Those intended for road use would have continuous solid rubber tyres bolted around the rims to improve traction on tarmac. Engines intended for agricultural use would have a series of strakes bolted diagonally across the rims, like the tread on a modern pneumatic tractor tyre, and the wheels were typically wider, to spread the load more evenly. Steam rollers, on the other hand, had smooth rear wheels and a roller at the front. The roller consisted of a pair of adjacent wide cylinders supported at both ends, replacing the separate wheels and axle of a traction engine.

Smokebox
In the conventional arrangement, the front roller is mounted centrally, forward of the chimney. In order to allow enough clearance from the boiler (and hence a larger front roll), the smokebox is extended forward substantially at the top to incorporate a support plate on which to mount the bearing for the roller assembly. This gives the distinctive, hooded look to the front of a steam roller. It also necessitates a different design of smokebox door, which has to hinge up or down rather than opening sideways, due to the limited access available. Access to the boiler tubes for cleaning is likewise limited, and the brush usually has to be inserted through the small gap between the top of the roll and the fork.

Special equipment
The front and rear rolls were usually fitted with scraper bars. As the vehicle moved along, these removed any surface material that had stuck to the roll, preventing a build-up of material and ensuring a flat finish. Some steam rollers were fitted with a scarifier mounted on the tender box at the rear; this could be swung down to road level and used to rip up the old surface before a road was remade. Another, less common, accessory was a tar sprayer: a bar mounted on the back of the roller.

Manufacturers
Britain was a major exporter of steam rollers over the years, with the firm of Aveling and Porter probably being the most famous and the most prolific.
Many other traction engine manufacturers built steam rollers, but after Aveling and Porter, the most popular were Marshall, Sons & Co., John Fowler & Co., and Wallis & Steevens. In America, the Buffalo Springfield Roller Company was a large builder, while J. I. Case made a roller variant of its farm engines but had a small market share. Manufacturers in other nations, including the Czechs, Swiss, Swedes, Germans (notably Kemna) and Dutch, also produced steam rollers.

Usage
In the UK, a number of companies owned fleets of steam rollers and contracted them out to local authorities. Many were still in use into the 1960s, and part of the M1 motorway was made using steam rollers. A few steam rollers were still being used for road maintenance in the early 1970s, which may go some way towards explaining why diesel-powered rollers are still colloquially known as steamrollers today.

Preservation
Many steam rollers are preserved in working order and can be seen in operation during special live steam festivals, where operating scale models may also be displayed. At some UK steam fairs and rallies, demonstrations of road building using the old techniques, tools and machines are re-enacted by 'road gangs' in authentic dress, and steam rollers feature prominently in these demonstrations. The annual Great Dorset Steam Fair has a section dedicated to road-making machinery, including a line-up of working steam rollers. A number of steamrollers ended their working lives in children's playgrounds, providing something for children to play on.

Popular culture
Two popular American bands were named after steamrollers: Buffalo Springfield and Mannheim Steamroller. Parni Valjak (translated, "Steamroller") is the name of a popular Croatian and Yugoslav rock band, and the group has used the name Steam Roller on its English-language releases. Two different steamrollers appear as prominent characters in the Thomas & Friends television series: George and Buster, both of whom are based on the Aveling-Barford R class design.

The British steeplejack and engineering enthusiast Fred Dibnah was regarded as a national institution in Great Britain for his conservation of steam rollers and traction engines. The first engine he restored to working order was an Aveling & Porter steam roller, registration no. DM3079. Built in 1912, it was a 10-ton slide-valve, single-cylinder, 4-shaft road roller. Originally named "Allison" after his first wife, the engine was renamed "Betsy" (his mother's name) following Fred's divorce, his view being that "wives may change but your mother remains your mother!" This roller was featured in many of Fred's early television programmes. It may still be seen at steam rallies in Britain and was in steam at the Great Dorset Steam Fair in 2011.

The author Terry Pratchett instructed his collaborator Neil Gaiman that anything Pratchett had been working on at the time of his death should be destroyed by a steamroller. Pratchett's daughter and literary executor, Rhianna Pratchett, also stated that she had no desire to try to finish her father's work or to continue the Discworld franchise without him. Accordingly, Pratchett's assistant Rob Wilkins brought Pratchett's computer hard drive to the Great Dorset Steam Fair, where a steamroller was driven over it.

As a symbol
The steamroller carries a strong symbolism of an irresistible, onward-pushing force. The Imperial Russian Army was nicknamed the "steamroller" during World War One, as it was huge in size and Russia opened the war with an offensive.
The "Russian Steamroller" is one of the personifications of Russia, along with the Russian bear, double-headed eagle and Mat Zemlya.
Reverse zoonosis
A reverse zoonosis, also known as a zooanthroponosis (from Greek roots meaning "animal", "man" and "disease") or anthroponosis, is a pathogen reservoired in humans that is capable of being transmitted to non-human animals.

Terminology
Anthroponosis refers to pathogens sourced from humans, and can include human-to-non-human-animal transmission as well as human-to-human transmission. The term zoonosis technically refers to disease transferred between any animal and any other animal, human or non-human, without discretion, but it has also been defined as disease transmitted from animals to humans and vice versa. Because of human-centered medical biases, however, zoonosis tends to be used in the same manner as anthropozoonosis, which specifically refers to pathogens reservoired in non-human animals that are transmissible to humans. Additional confusion, due to scientists frequently using "anthropozoonosis" and "zooanthroponosis" interchangeably, was resolved during a 1967 joint Food and Agriculture Organization and World Health Organization committee meeting, which recommended the use of "zoonosis" to describe the bidirectional interchange of infectious pathogens between animals and humans.

Furthermore, because humans are rarely in direct contact with wild animals and instead acquire pathogens through "soft contact", the term "sapronotic agents" must be introduced. Sapronoses (from a Greek root meaning "decaying") are human diseases whose agents have the capacity to grow and replicate (not just survive or contaminate) in abiotic environments such as soil, water, decaying plants, animal corpses, excreta and other substrata. Sapro-zoonoses, additionally, can be characterized as having both a live host and a non-animal developmental site in organic matter, soil or plants. Obligate intracellular parasites, which cannot replicate outside of cells and are entirely reliant on entering a cell to use its intracellular resources, such as viruses, rickettsiae, chlamydiae, and Cryptosporidium parvum, cannot be sapronotic agents.

Etymological pitfalls
Categorizing disease into epidemiologic classes by the infection's supposed source, or by the direction of transmission, raises a number of contradictions that could be resolved by the use of cyclical models, as the following scenarios show.

Zoonosis vs reverse zoonosis vs anthroponosis
In the case of diseases transferred by arthropod vectors, such as urban yellow fever, dengue, epidemic typhus, tickborne relapsing fever, Zika fever, and malaria, the differentiation between the terms becomes ever hazier. For example, a human infected with malaria is bitten by a mosquito that is subsequently infected as well. This is a case of reverse zoonosis (human to animal). However, the newly infected mosquito then infects another human. This could be a case of zoonosis (animal to human) if the mosquito is considered the original source, or of anthroponosis (human to human) if the human is considered the original source. If this infected mosquito instead infected a non-human primate, it could be considered a case of reverse zoonosis/zooanthroponosis (human to animal) if the human is considered the primary source, or simply zoonosis (animal to animal) if the mosquito is considered the primary source.
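The labeling logic in these scenarios turns on two judgments: which host is treated as the original source, and which host ends up infected. As a minimal sketch of the terminology above (an illustration of the definitions, not an epidemiological tool; the function and label strings are hypothetical), the mapping can be written out explicitly:

```python
# Hypothetical classifier illustrating how the epidemiologic label depends
# on which host is designated the "original source" of the pathogen.

def classify_transmission(source: str, target: str) -> str:
    """Label a transmission event. `source` is the host judged to be the
    original reservoir; `target` is the newly infected host. Each is
    'human' or 'animal' (non-human, including arthropod vectors)."""
    if source == "human" and target == "animal":
        return "reverse zoonosis / zooanthroponosis"
    if source == "animal" and target == "human":
        return "zoonosis (anthropozoonosis)"
    if source == "human" and target == "human":
        return "anthroponosis"
    return "zoonosis (animal to animal)"

# The malaria scenario from the text: the same chain of bites earns
# different labels depending on which host is called the source.
print(classify_transmission("human", "animal"))  # infected human -> mosquito
print(classify_transmission("animal", "human"))  # mosquito as source -> human
print(classify_transmission("human", "human"))   # human as original source
```

The contradiction the text describes is visible here: a single mosquito-to-human bite maps to two different labels depending solely on the source judgment, which is why cyclical models are proposed instead.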
Zoonosis vs anthroponosis Similarly, HIV, which originated in simians (crossing over when humans consumed wild chimpanzee bushmeat), and influenza A viruses, which originated in avians (crossing over through an antigenic shift), could initially have been considered zoonotic transferences, as the infections first came from vertebrate animals, but can now be regarded as anthroponoses because of their potential to transfer from human to human. Sapronosis vs sapro-zoonosis Typical examples of sapronotic agents are fungal, as in coccidioidomycosis, histoplasmosis, aspergillosis, cryptococcosis, and Microsporum gypseum infection. Some are bacterial, ranging from the sporulating Clostridium and Bacillus to Rhodococcus equi, Burkholderia pseudomallei, Listeria, Erysipelothrix, Yersinia pseudotuberculosis, and the agents of legionellosis, Pontiac fever, and nontuberculous mycobacterioses. Other sapronotic agents are amebic, as in primary amebic meningoencephalitis. Yet again, difficulties in classification arise in the case of sporulating bacteria whose infectious spores are only produced after a significant period of inactive vegetative growth within an abiotic environment; this is nonetheless still considered a sapronosis. However, cases of zoo-sapronoses involving Listeria, Erysipelothrix, Yersinia pseudotuberculosis, Burkholderia pseudomallei, and Rhodococcus equi can be transferred by an animal or an abiotic substrate but usually occur via a fecal-oral route between humans and other animals. Cases with modes of transmission Arthropod vectors Malaria Malaria involves the cyclical infection of animals (human and non-human) and mosquitoes of the genus Anopheles with a number of Plasmodium species. The Plasmodium parasite is transferred to the mosquito as it feeds on the blood of an infected animal, whereupon the parasite begins a sporogonic cycle in the gut of the mosquito and infects another animal at the next blood meal. There do not seem to be any deleterious effects on the mosquito as a result of the parasitic infection. The Plasmodium brasilianum parasite normally found in primates is morphologically similar to the malaria-inducing Plasmodium malariae more commonly found in humans, and it is contested whether the two are actually different species. Nevertheless, 12 reports of malaria arose in the remotely located indigenous Yanomami communities of the Venezuelan Amazon, where the infections were surprisingly found to be caused by a strain of P. brasilianum with sequences 100% identical to those found in Alouatta seniculus monkeys. This suggests a definite zoonosis and a high possibility of spillback into non-human primate bands as a reverse zoonosis. African sleeping sickness Trypanosoma brucei gambiense (T. b. gambiense) is a species of African trypanosomes, protozoan hemoflagellates responsible for trypanosomiasis (more commonly known as African sleeping sickness) in humans and other animals. The protozoa are transferred via tsetse flies, in which they multiply and from which they can be transferred to yet another animal host during the fly's blood meal. Outbreaks of sleeping sickness in certain human communities have been eliminated, but only temporarily, as constant re-introduction from unknown sources statistically suggests the presence of a non-human reservoir in which spillback of the pathogen is maintained in a sylvatic cycle and re-introduced into the urban cycle. The presence of T. b. gambiense has been found separately in humans and livestock.
This spurred a molecular study comparing the serum reactivity of pigs, goats, and cows to that of humans, which found notable similarities in all samples, especially in pig samples. Combined, these findings implicate a reverse zoonotic human-to-animal transmission. Arboviruses Yellow fever virus, dengue fever viruses, and Zika virus belong to the genus Flavivirus, and chikungunya virus to the genus Alphavirus. All are considered arboviruses, a term denoting their transmission through arthropod vectors. Sylvatic transmission cycles for arboviruses within non-human primate communities have the potential to spill over into an urban cycle within humans, where humans could be dead-end hosts in scenarios in which further intermingling is eliminated; much more probable, however, is a reemergence of these viruses into either cycle due to spillback. Apparently the maintenance of an arboviral urban cycle between humans requires a rare or understudied conjunction of factors, in which one of the following situations occurs: An infected human in an urban environment is fed on by a sylvatic (typically remotely located) mosquito such as Haemagogus (which has a relatively long lifespan compared to other mosquitoes and can transmit the virus for extended periods), which then infects another human or non-human animal that will serve as a reservoir. An urban Aedes mosquito (more commonly found in urban areas) feeds and transmits the virus to another human or non-human animal that will serve as a reservoir. Sufficient numbers of the sylvatic vector mosquito and the animal reservoir inhabit the same ecologic niche in close enough contact to promote and sustain the zoonotic cycle of the virus. The animal reservoir of the virus maintains a suitable virus level in the blood to allow the infection of a vector mosquito. A bridge-vector mosquito such as Aedes albopictus, which can survive in an urban area and spread to rural, semi-rural, and forest areas, could carry the virus to a sylvatic environment. Zika fever: Zika fever is caused by a single-stranded RNA flavivirus that uses Aedes mosquitoes as vectors to infect other human and animal hosts. A 2015 Zika virus strain isolated from a human in Brazil was used to infect pregnant rhesus macaques intravenously and intraamniotically. Both the dams and the placentas were infected, with Zika-positive tissue samples recorded for up to 105 days. This confirms a reverse zoonotic transference potential between humans and non-human primates. Yellow fever: Yellow fever virus is likewise transmitted by the bite of infected Aedes or Haemagogus mosquitoes that have fed on an infected animal. The historical course of the American slave trade is a prime example of the introduction of a pathogen creating a completely new sylvatic cycle. Previous hypotheses of a "New World YFV" were laid to rest in a 2007 study that examined rates of nucleotide substitution and divergence to determine that yellow fever was introduced into the Americas approximately 400 years ago from West Africa. It was also around the 17th century that yellow fever was documented by Europeans complicit in slave trafficking. The actual mode of introduction could have played out in a number of scenarios: a viremic Old World human, an infected Old World mosquito, eggs laid by an infected Old World mosquito, or all three could have been transported to the Americas, given that yellow fever transmission was not uncommon on sailing vessels. Amidst more recent yellow fever outbreaks in southeastern Brazil, the spillback potential was strongly indicated.
Molecular comparisons showed that non-human primate outbreak strains were more closely related to human strains than to strains derived from other non-human primates, suggesting a continuing reverse zoonosis. Chikungunya: The chikungunya virus is a single-stranded RNA alphavirus typically transmitted by Aedes mosquitoes to another animal host. There is no evidence to suggest a barrier to chikungunya switching hosts between humans and non-human primates, because it shows no preference for any given primate species. It has a high potential to spill over or spill back into sylvatic cycles, as was the case with the similar arboviruses imported to the Americas during the slave trade. Studies have proven chikungunya's potential to orally infect sylvatic types of mosquitoes, including Haemagogus leucocelaenus and Aedes terrens. Moreover, in a serologic survey carried out in non-human primates of urban and peri-urban areas of Bahia State, 11 animals showed chikungunya-neutralizing antibodies. Dengue fever: The dengue virus is a flavivirus, also transmissible by Aedes mosquito vectors to other animal hosts. Dengue was likewise introduced to the Americas by the slave trade, along with Aedes aegypti. A 2009 study in French Guiana found that infections of dengue virus types 1 through 4 were present in various neotropical forest mammals other than primates, such as rodents, marsupials, and bats. Sequence analyses revealed that the 4 non-human mammalian strains had an 89% to 99% similarity index to human strains circulating at the same time. This confirms that other mammals in the vicinity have the potential to be infected by human sources and indicates the presence of an urban cycle. Evidence that the arthropod vectors themselves are capable of being infected comes from Brazil, where Aedes albopictus (which frequents the backyards of human houses but easily spreads into rural, semi-rural, and wild environments) was found infected with dengue virus 3 in São Paulo State. Meanwhile, in the State of Bahia, the sylvatic vector Haemagogus leucocelaenus was found to be infected with dengue virus 1. In another study carried out in the Atlantic Forest of Bahia, primates (Leontopithecus chrysomelas and Sapajus xanthosternos) were found with antibodies against dengue viruses 1 and 2, while sloths (Bradypus torquatus) had antibodies against dengue virus 3, suggesting the possible presence of an established sylvatic cycle. Wild animals A large number of wild animals with habitats that have yet to be encroached upon by humans are still affected by sapronotic agents through contaminated water. Giardia Beavers: Giardia was introduced to beavers through runoff of human sewage upstream of a beaver colony. Influenza A virus subtype H1N1 Seals: In 1999, wild seals were admitted to a Dutch seal rehabilitation center with flu-like symptoms, and it was found that they were in fact infected with a human influenza B-like virus that had circulated in humans in 1995 and had undergone antigenic drift since adapting to its new seal host. Tuberculosis Red deer, wild boar: In areas of intensive game management that included big-game fencing, supplementary feeding locations, and grazing livestock, cases of tuberculosis lesions appeared in wild red deer and wild boar.
Some boars and deer shared the same strains of tuberculosis, which were similar to those found in livestock and humans, suggesting possible sapronotic or sapro-zoonotic contamination of shared water sources or supplemental feed, or direct contact with humans or livestock or their excretions. Domesticated companionship animals E. coli Dogs, horses: Evidence of infection by human E. coli strains was found in several dogs and horses across Europe, implicating the possibility of zoonotic inter-species transmission of multiresistant strains from humans to companion animals and vice versa. Tuberculosis Dog: A Yorkshire terrier was admitted to a veterinary clinic with a chronic cough, poor weight retention, and vomiting that had been reported for months. The owner, it was revealed, had recovered from tuberculosis; however, the dog initially tested negative for tuberculosis in two different molecular assays and was discharged. Eight days later the dog was euthanized because of a urethral obstruction. A necropsy was performed in which liver and tracheobronchial lymph node samples in fact tested positive for the exact same strain of tuberculosis the owner had previously had. This is a very clear case of reverse zoonosis. Influenza A virus subtype H1N1 Ferrets: Ferrets are often used in human clinical studies, so the potential for human influenza to infect them had previously been confirmed. Confirmation of natural transmission of a human H1N1 strain from the 2009 outbreak to household pet ferrets further implicates human-to-animal transference. COVID-19 Amidst the 2020 global pandemic of COVID-19, the susceptibility of cats, ferrets, dogs, chickens, pigs, and ducks to the SARS-CoV-2 coronavirus was examined, and it was found that the virus can replicate in cats and ferrets, with lethal results. Cats: The virus can be transmitted through the air between cats. Viral RNA was detected in feces within 3–5 days of infection, and pathological studies detected viral RNA in the soft palate, tonsils, and trachea. Kittens acquired massive lesions in the lungs and in the nasal and tracheal mucosal epithelium. Surveillance for SARS-CoV-2 in cats should be considered as an adjunct to the elimination of COVID-19 in humans. Ferrets: Ferrets were inoculated with viral strains from the environment of the Huanan Seafood Market in Wuhan, China, as well as human isolates from Wuhan. It was found that with both isolates the virus can replicate in the upper respiratory tract of ferrets for up to 8 days without causing disease or death, and viral RNA was detected in rectal swabs. Pathological studies performed after 13 days of infection revealed mild peribronchitis in the lungs and severe lymphoplasmacytic perivasculitis and vasculitis, among other ailments, with antibody production against SARS-CoV-2 detected in all ferrets. The fact that SARS-CoV-2 replicates efficiently in the upper respiratory tract of ferrets makes them a candidate animal model for evaluating antiviral drugs or vaccine candidates against COVID-19. Dogs: Of the Beagle dogs tested, viral RNA was detected in fecal matter, and 50% of the inoculated Beagles seroconverted after 14 days while the other 50% remained seronegative, demonstrating a low susceptibility to SARS-CoV-2 in dogs. Chicken, duck, pig: There was no evidence of susceptibility in chickens, ducks, or pigs, with all viral RNA swabs returning negative results and all animals remaining seronegative 14 days post inoculation.
Domesticated livestock animals Influenza A virus subtype H1N1 Turkeys: A Norwegian turkey breeder's flock exhibited a decrease in egg production, with no other clinical signs, after a farmhand reported having H1N1. A study revealed that the turkeys also had H1N1 and were seropositive to its antigens. Maternally derived H1N1 antibodies were detected in egg yolks, and further genetic analyses revealed that the turkeys carried an H1N1 strain identical to that of the farm worker, who likely infected the turkeys during artificial insemination. Pigs: Human-to-pig H1N1 transmission was reported in Canada and Korea, and during the 2009 outbreak it eventually came to include every continent save Antarctica. The virus has also been known to spread between humans and pigs during seasonal epidemics in France. Methicillin-resistant Staphylococcus aureus Horses: Eleven equine patients from different farms, admitted to a veterinary hospital for various reasons over the span of approximately one year, later exhibited MRSA infections. Considering that MRSA isolates are extremely rare in horses, it was suggested that the outbreak was due to nosocomial infection derived from a human during the horses' stays at the hospital. Cows, turkeys, pigs: A case of reverse zoonosis was proposed to explain how a particular human methicillin-sensitive Staphylococcus aureus strain was found in livestock (pigs, turkeys, cows) with not only a loss of human virulence genes (which could decrease the zoonotic potential for human colonization) but also the addition of methicillin and tetracycline resistance (which will increase the occurrence of MRSA infections). The concern is that excessive antibiotic use in livestock production exacerbates the creation of novel antibiotic-resistant zoonotic pathogens. Wild animals in captivity Tuberculosis Elephants: In 1996, the Hawthorne Circus Corporation reported 4 of its elephants and 11 of its keepers as harboring M. tuberculosis infections. Unfortunately, these elephants had been sub-leased to different circus acts and zoological gardens all over America. This spurred a nationwide epidemic, but because tuberculosis is not a disease typically transmitted from animals to humans, it was suggested that the epidemic was due to transference from a human handler to a captive elephant. Coronavirus Alpacas: A 2007 outbreak of alpaca coronavirus, attributed to the intermingling of animals at a national alpaca exhibition, led to a comparison between human and alpaca coronaviruses in an attempt to deduce the source of the outbreak. It was found that the alpaca coronavirus is most evolutionarily similar to a human coronavirus strain isolated in the 1960s, suggesting that an alpaca coronavirus could well have been circulating for decades, causing respiratory illness in herds that went undetected for lack of diagnostic capabilities. It also suggests a human-to-alpaca mode of transmission. Measles Non-human primates: In 1996, a measles outbreak occurred among 94 non-human primates in a sanctuary. Although the source of the outbreak was never determined, serum and urine testing proved that the virus was definitively associated with recent human cases of measles in the U.S. Helicobacter pylori Marsupials: The stripe-faced dunnart is an Australian marsupial that has faced multiple outbreaks of Helicobacter pylori in captivity. Stomach sampling from the marsupials revealed that the H. pylori strain responsible for the outbreaks aligned 100% with a strain originating from the human gastrointestinal tract.
Thus, it can be assumed that the outbreak was caused by the handlers. Wild animals in conservation areas Coronaviruses Chimpanzees: The transmission of the human coronavirus HCoV-OC43 to wild chimpanzees (Pan troglodytes verus) living in the Taï National Park, Côte d'Ivoire, was reported in 2016 to 2017. These chimpanzees were accustomed to the presence of humans, who had been studying these particular communities since the 1980s. HCoV-OC43, belonging to the species Betacoronavirus 1 (BetaCoV1), normally causes episodes of the common cold in humans (this excludes SARS and MERS), but has also been detected in ungulates, carnivores, and lagomorphs. It is therefore completely plausible that researchers or poachers inadvertently spread the virus to the chimpanzees, revealing yet another interface in coronavirus host switching. Rhinovirus C Chimpanzees: Though previously considered a uniquely human pathogen, human rhinovirus C was determined to be the cause of a 2013 outbreak of respiratory infections in chimpanzees in Uganda. Examination of chimpanzees from all over Africa found that they show universal homozygosity for the CDHR3-Y529 allele (a cadherin-related family member), a receptor variant that drastically increases susceptibility to rhinovirus C infection and asthma in humans. If respiratory viruses of human origin are capable of maintaining circulation in non-human primates, this would prove harmful should the infection spill back into human communities. Tuberculosis Elephants: A necropsy of a free-ranging African elephant (Loxodonta africana) in Kruger National Park, South Africa, found significant lung damage due to a human strain of M. tuberculosis. Elephants explore their environment with their trunks; it was therefore very likely that aerosolized pathogens from domestic waste, contaminated water from a human community upstream, human excrement, or contaminated food from tourists were the source of the infection. Pneumoviruses Chimpanzees: In Uganda, respiratory viruses of human origin infected two chimpanzee (Pan troglodytes schweinfurthii) communities in the same forest. The outbreaks were later discovered to have been caused by a human metapneumovirus (also known as MPV; Pneumoviridae, Metapneumovirus) and a human respirovirus 3 (also known as HRV3; Paramyxoviridae, Respirovirus; formerly known as parainfluenza virus 3). Reverse zoonosis in gorillas Gorillas: In conservation areas subject to ecotourism in Uganda, Rwanda, and the Democratic Republic of the Congo, free-ranging gorillas have become increasingly accustomed to the presence of humans, whether ranger guides, tourists, trackers, veterinarians, poachers, or researchers. Iodamoeba buetschlii, Giardia lamblia, Chilomastix sp., Endolimax nana, Entamoeba coli, and Entamoeba histolytica have been found both in the feces of gorillas and in indiscriminate defecations left behind by humans encroaching on the habitat. Additionally, increased numbers of Cryptosporidium sp. and Capillaria infections were found in gorillas that maintained more frequent contact with humans than in those that did not. Together these findings suggest the occurrence of reverse zoonoses.
Biology and health sciences
Concepts
Health
174241
https://en.wikipedia.org/wiki/Terracotta
Terracotta
Terracotta, also known as terra cotta or terra-cotta, is a clay-based non-vitreous ceramic fired at relatively low temperatures. It is therefore a term used for earthenware objects of certain types, as set out below. Usage and definitions of the term vary, such as: In art, pottery, applied art, and craft, "terracotta" is a term often used for red-coloured earthenware sculptures or functional articles such as flower pots, water and waste water pipes, and tableware. In archaeology and art history, "terracotta" is often used to describe objects such as figurines and loom weights not made on a potter's wheel, with vessels and other objects made on a wheel from the same material referred to as earthenware; the choice of term depends on the type of object rather than the material or shaping technique. Terracotta is also used to refer to the natural brownish-orange color of most terracotta. In architecture, the term encompasses many building materials made out of fired ceramic for exterior covering. Architectural terracotta can also refer to ornate decorative ceramic elements such as antefixes and revetments, which had a large impact on the appearance of temples and other buildings in the classical architecture of Europe, as well as in the Ancient Near East. This article covers the sense of terracotta as a medium in sculpture, as in the Terracotta Army and Greek terracotta figurines, and in architectural decoration. Neither pottery such as utilitarian earthenware nor East Asian and European sculpture in porcelain is covered. In art history Asia and the Middle East Terracotta female figurines were uncovered by archaeologists in excavations of Mohenjo-daro, Pakistan (3000–1500 BCE). Along with phallus-shaped stones, these suggest some sort of fertility cult. The Burney Relief is an outstanding terracotta plaque from Ancient Mesopotamia of about 1950 BCE. In Mesoamerica, the great majority of Olmec figurines were in terracotta. Many ushabti mortuary statuettes were also made of terracotta in Ancient Egypt. India Terracotta has been a medium for art since the Harappan civilization, although the techniques used differed in each time period. In Mauryan times, the figures were mainly of mother goddesses, indicating a fertility cult. Moulds were used for the face, whereas the body was hand-modelled. In Shunga times, a single mould was used to make the entire figure, and depending upon the baking time, the colour varied from red to light orange. The Satavahanas used two different moulds, one for the front and the other for the back, placing a piece of clay in each mould and joining the two together, which made some artefacts hollow within. Some Satavahana terracotta artefacts also seem to have a thin strip of clay joining the two moulds. This technique may have been imported from the Romans and is seen nowhere else in the country. Contemporary centres for terracotta figurines include West Bengal, Bihar, Jharkhand, Rajasthan and Tamil Nadu. In Bishnupur, West Bengal, the terracotta pattern-panels on the temples are known for their intricate details. The Bankura Horse is also very famous and belongs to the Bengal school of terracotta. Madhya Pradesh is one of the most prominent production centres of terracotta art today. The tribes of the Bastar region have a rich tradition. They make intricate designs and statues of animals and birds. Hand-painted clay and terracotta products are produced in Gujarat. The Aiyanar cult in Tamil Nadu is associated with life-size terracotta statues.
Traditional terracotta sculptures, mainly religious, also continue to be made. The demand for this craft is seasonal, reaching its peak during the harvest festival, when new pottery and votive idols are required. During the rest of the year, the makers rely on agriculture or some other means of income. The designs are often repetitive, as crafters apply similar reliefs and techniques to different subjects. Customers suggest subjects and uses for each piece. To sustain the legacy, the Indian Government has established the Sanskriti Museum of Indian Terracotta in New Delhi. The initiative encourages ongoing work in this medium through displays of terracotta from different sub-continental regions and periods. In 2010, India Post issued a stamp commemorating the craft, which shows a terracotta doll from the craft museum. China Chinese sculpture made great use of terracotta, with and without glazing and color, from a very early date. The famous Terracotta Army of Emperor Qin Shi Huang, 210–209 BCE, was somewhat untypical, and two thousand years ago reliefs were more common, in tombs and elsewhere. Later Buddhist figures were often made in painted and glazed terracotta, with the Yixian glazed pottery luohans, probably of 1150–1250, now in various Western museums, among the most prominent examples. Brick-built tombs from the Han dynasty were often finished on the interior wall with bricks decorated on one face; the techniques included molded reliefs. Later tombs contained many figures of protective spirits and animals and servants for the afterlife, including the famous horses of the Tang dynasty; as an arbitrary matter of terminology, these tend not to be referred to as terracottas. Africa Precolonial West African sculpture also made extensive use of terracotta. The regions most recognized for producing terracotta art in that part of the world include the Nok culture of central and north-central Nigeria, the Ife-Benin cultural axis in western and southern Nigeria (also noted for its exceptionally naturalistic sculpture), and the Igbo culture area of eastern Nigeria, which excelled in terracotta pottery. These related, but separate, traditions also gave birth to elaborate schools of bronze and brass sculpture in the area. Europe The Ancient Greeks' Tanagra figurines were mass-produced, mold-cast and fired terracotta figurines that seem to have been widely affordable in the Hellenistic period, and often purely decorative in function. They were part of a wide range of Greek terracotta figurines, which included larger and higher-quality works such as the Aphrodite Heyl; the Romans too made great numbers of small figurines, which were often used in a religious context as cult statues or temple decorations. Etruscan art often used terracotta in preference to stone even for larger statues, such as the near life-size Apollo of Veii and the Sarcophagus of the Spouses. Campana reliefs are Ancient Roman terracotta reliefs, originally mostly used to make friezes for the outside of buildings, as a cheaper substitute for stone. European medieval art made little use of terracotta sculpture until the late 14th century, when it came into use in advanced International Gothic workshops in parts of Germany. The Virgin illustrated at the start of the article, from Bohemia, is the only known example from there. A few decades later there was a revival in the Italian Renaissance, inspired by excavated classical terracottas as well as the German examples, which gradually spread to the rest of Europe.
In Florence, Luca della Robbia (1399/1400–1482) was a sculptor who founded a family dynasty specializing in glazed and painted terracotta, especially large roundels, which were used to decorate the exterior of churches and other buildings. These used the same techniques as contemporary maiolica and other tin-glazed pottery. Other sculptors included Pietro Torrigiano (1472–1528), who produced statues and, in England, busts of the Tudor royal family. The unglazed busts of the Roman Emperors adorning Hampton Court Palace, by Giovanni da Maiano, 1521, were another example of Italian work in England. They were originally painted, but the paint has been lost to weathering. In the 18th century, unglazed terracotta, which had long been used for preliminary clay models or maquettes that were then fired, became fashionable as a material for small sculptures, including portrait busts. It was much easier to work than carved materials and allowed a more spontaneous approach by the artist. Claude Michel (1738–1814), known as Clodion, was an influential pioneer in France. John Michael Rysbrack (1694–1770), a Flemish portrait sculptor working in England, sold his terracotta modelli for larger works in stone and produced busts only in terracotta. In the next century the French sculptor Albert-Ernest Carrier-Belleuse made many terracotta pieces; possibly the most famous is The Abduction of Hippodameia, depicting the Greek mythological scene of a centaur kidnapping Hippodameia on her wedding day. Architecture History Architectural terracotta is a broad term encompassing a wide-ranging variety of clay-based architectural elements such as wall reliefs, decorative roof elements, and architectural sculpture. Many ancient and traditional roofing styles included more elaborate sculptural elements than the plain roof tiles, such as Chinese Imperial roof decoration and the antefixes of western classical architecture. In India, West Bengal made a speciality of terracotta temples, with the sculpted decoration made from the same material as the main brick construction. Architectural terracotta experienced a resurgence in western architecture starting in the mid-19th century. Starting in Europe, architects designed elaborate buildings relying on terracotta detailing for their facades. James Taylor was one of the first producers of architectural terracotta to find success in the United States, using his experience manufacturing the material in England to guide his work in North America. The Great Chicago Fire of 1871 led to increased demand for fireproof materials in urban settings and helped drive the subsequent push for architectural terracotta throughout North America. The material remained popular through the early 1900s, its versatility allowing it to support a variety of architectural styles such as Renaissance Revival, neo-Gothic, and Art Deco. Emerging trends in Modernist architecture favoring the use of concrete and glass significantly reduced demand for architectural terracotta starting in the 1930s. In the time since, the material has experienced a resurgence of interest, favored for work in postmodern and revivalist architectural styles. Differences from non-architectural terracotta Unlike art and pottery terracotta, clays used for architectural terracotta can range from dark-bodied stonewares to light-bodied whitewares, depending on what is required for the particular application.
The clays are usually fired to or near vitrification in order to survive continued exposure to harsh outdoor conditions such as freeze-thaw cycles and salt intrusion. Contrary to popular belief, glazing does not seal terracotta from water penetration, and a non-porous clay body is necessary to prevent failure from these issues. Production Prior to firing, terracotta clays are easy to shape. Shaping techniques include throwing and slip casting, as well as others. After drying, the piece is placed in a kiln or, more traditionally, in a pit covered with combustible material, then fired. The typical firing temperature is around , though it may be as low as in historic and archaeological examples. During this process, the iron oxides in the body react with oxygen, often resulting in the reddish colour known as terracotta. However, the color can vary widely, including shades of yellow, orange, buff, red, pink, grey or brown. A final method is to carve fired bricks or other terracotta shapes. This technique is less common, but examples can be found in the architecture of Bengal, on Hindu temples and mosques. Properties Terracotta is not watertight, but its porousness decreases when the body is surface-burnished before firing. Glazes can be used to decrease permeability and hence increase watertightness. Unglazed terracotta is suitable for use below ground to carry pressurized water (an archaic use), for garden pots and irrigation or building decoration in many environments, and for oil containers, oil lamps, or ovens. Most other uses require the material to be glazed, such as tableware, sanitary piping, or building decorations intended for freezing environments. Terracotta will also ring if lightly struck, as long as it is not cracked. Painted (polychrome) terracotta is typically first covered with a thin coat of gesso, then painted. It is widely used, but only suitable for indoor positions and much less durable than fired colors in or under a ceramic glaze. Terracotta sculptures in the West were rarely left in their "raw" fired state until the 18th century. Advantages in sculpture Compared to bronze sculpture, terracotta uses a far simpler and quicker process for creating the finished work, with much lower material costs. The easier task of modelling, typically with a limited range of knives and wooden shaping tools but mainly using the fingers, allows the artist to take a freer and more flexible approach. Small details that might be impractical to carve in stone, of hair or costume for example, can easily be accomplished in terracotta, and drapery can sometimes be made up of thin sheets of clay that make it much easier to achieve a realistic effect. Reusable mold-making techniques may be used for the production of many identical pieces. Compared to marble sculpture and other stonework, the finished product is far lighter and may be further painted and glazed to produce objects with color or durable simulations of metal patina. Robust, durable works for outdoor use require greater thickness and so will be heavier, with more care needed in the drying of the unfinished piece to prevent cracking as the material shrinks. Structural considerations are similar to those required for stone sculpture; there is a limit on the stress that can be imposed on terracotta, and terracotta statues of unsupported standing figures are limited to well under life-size unless extra structural support is added. This is also because large figures are extremely difficult to fire, and surviving examples often show sagging or cracks.
The Yixian figures were fired in several pieces, and have iron rods inside to hold the structure together.
Technology
Materials
null
174396
https://en.wikipedia.org/wiki/Bohr%20radius
Bohr radius
The Bohr radius (a₀) is a physical constant, approximately equal to the most probable distance between the nucleus and the electron in a hydrogen atom in its ground state. It is named after Niels Bohr, due to its role in the Bohr model of an atom. Its value is 5.29177210903(80) × 10⁻¹¹ m. Definition and value The Bohr radius is defined as $$a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m_e e^2} = \frac{\hbar}{m_e c \alpha},$$ where ε₀ is the permittivity of free space, ħ is the reduced Planck constant, mₑ is the mass of an electron, e is the elementary charge, c is the speed of light in vacuum, and α is the fine-structure constant. The CODATA value of the Bohr radius (in SI units) is 5.29177210903(80) × 10⁻¹¹ m. History In the Bohr model for atomic structure, put forward by Niels Bohr in 1913, electrons orbit a central nucleus under electrostatic attraction. The original derivation posited that electrons have orbital angular momentum in integer multiples of the reduced Planck constant, which successfully matched the observation of discrete energy levels in emission spectra, along with predicting a fixed radius for each of these levels. In the simplest atom, hydrogen, a single electron orbits the nucleus, and its smallest possible orbit, with the lowest energy, has an orbital radius almost equal to the Bohr radius. (It is not exactly the Bohr radius due to the reduced mass effect. They differ by about 0.05%.) The Bohr model of the atom was superseded by an electron probability cloud adhering to the Schrödinger equation as published in 1926. This is further complicated by spin and quantum vacuum effects to produce fine structure and hyperfine structure. Nevertheless, the Bohr radius formula remains central in atomic physics calculations, due to its simple relationship with fundamental constants (this is why it is defined using the true electron mass rather than the reduced mass, as mentioned above). As such, it became the unit of length in atomic units. In Schrödinger's quantum-mechanical theory of the hydrogen atom, the Bohr radius is the value of the radial coordinate for which the radial probability density of the electron position is highest. The expected value of the radial distance of the electron, by contrast, is 3a₀/2. Related constants The Bohr radius is one of a trio of related units of length, the other two being the Compton wavelength of the electron (λₑ) and the classical electron radius (rₑ). Any one of these constants can be written in terms of any of the others using the fine-structure constant α: $$r_e = \frac{\alpha \lambda_e}{2\pi} = \alpha^2 a_0.$$ Hydrogen atom and similar systems The Bohr radius including the effect of reduced mass in the hydrogen atom is given by $$a_0^* = \frac{m_e}{\mu}\, a_0,$$ where $\mu = \frac{m_e m_p}{m_e + m_p}$ is the reduced mass of the electron–proton system (with mₚ being the mass of the proton). The use of reduced mass is a generalization of the two-body problem from classical physics beyond the case in which the mass of the orbiting body is approximated as negligible compared to the mass of the body being orbited. Since the reduced mass of the electron–proton system is a little bit smaller than the electron mass, the "reduced" Bohr radius is slightly larger than the Bohr radius (approximately 5.2946 × 10⁻¹¹ m). This result can be generalized to other systems, such as positronium (an electron orbiting a positron) and muonium (an electron orbiting an anti-muon), by using the reduced mass of the system and considering the possible change in charge. Typically, Bohr model relations (radius, energy, etc.) can be easily modified for these exotic systems (up to lowest order) by simply replacing the electron mass with the reduced mass of the system (as well as adjusting the charge when appropriate).
For example, the radius of positronium is approximately 2a₀, since the reduced mass of the positronium system is half the electron mass (μ = mₑ/2). A hydrogen-like atom will have a Bohr radius which primarily scales as a₀/Z, with Z the number of protons in the nucleus. Meanwhile, the reduced mass (μ) only becomes better approximated by mₑ in the limit of increasing nuclear mass. These results are summarized in the equation $$a^* = \frac{m_e}{\mu}\,\frac{a_0}{Z}.$$ A table of approximate relationships is given below.
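These relationships are easy to check numerically. Below is a minimal sketch (assuming Python with SciPy is available; the variable names are illustrative, not taken from any source) that computes the Bohr radius from the defining constants and applies the reduced-mass correction for hydrogen and positronium:

```python
from scipy.constants import physical_constants, epsilon_0, hbar, m_e, m_p, e, pi

# Bohr radius from its definition: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0 = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)
print(f"a0 (computed) = {a0:.8e} m")
print(f"a0 (CODATA)   = {physical_constants['Bohr radius'][0]:.8e} m")

# Reduced-mass correction for hydrogen: mu = m_e*m_p/(m_e + m_p),
# giving a "reduced" Bohr radius about 0.05% larger than a0.
mu_h = m_e * m_p / (m_e + m_p)
print(f"reduced Bohr radius = {a0 * m_e / mu_h:.8e} m")

# Positronium: the reduced mass is m_e/2, so the radius is about 2*a0.
mu_ps = m_e / 2
print(f"positronium radius  = {m_e / mu_ps:.1f} * a0")
```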
Physical sciences
Physical constants
Physics
174412
https://en.wikipedia.org/wiki/Birefringence
Birefringence
Birefringence means double refraction. It is the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. These optically anisotropic materials are described as birefringent or birefractive. The birefringence is often quantified as the maximum difference between refractive indices exhibited by the material. Crystals with non-cubic crystal structures are often birefringent, as are plastics under mechanical stress. Birefringence is responsible for the phenomenon of double refraction whereby a ray of light, when incident upon a birefringent material, is split by polarization into two rays taking slightly different paths. This effect was first described by Danish scientist Rasmus Bartholin in 1669, who observed it in Iceland spar (calcite) crystals, which have one of the strongest birefringences. In the 19th century Augustin-Jean Fresnel described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization (perpendicular to the direction of the wave vector). Explanation A mathematical description of wave propagation in a birefringent medium is presented below. Following is a qualitative explanation of the phenomenon. Uniaxial materials The simplest type of birefringence is described as uniaxial, meaning that there is a single direction governing the optical anisotropy, whereby all directions perpendicular to it (or at a given angle to it) are optically equivalent. Thus rotating the material around this axis does not change its optical behaviour. This special direction is known as the optic axis of the material. Light propagating parallel to the optic axis (whose polarization is always perpendicular to the optic axis) is governed by a refractive index n_o (for "ordinary") regardless of its specific polarization. For rays with any other propagation direction, there is one linear polarization that is perpendicular to the optic axis, and a ray with that polarization is called an ordinary ray and is governed by the same refractive index value n_o. For a ray propagating in the same direction but with a polarization perpendicular to that of the ordinary ray, the polarization direction will be partly in the direction of (parallel to) the optic axis, and this extraordinary ray will be governed by a different, direction-dependent refractive index n_e. Because the index of refraction depends on the polarization, when unpolarized light enters a uniaxial birefringent material it is split into two beams travelling in different directions, one having the polarization of the ordinary ray and the other the polarization of the extraordinary ray. The ordinary ray will always experience a refractive index of n_o, whereas the refractive index of the extraordinary ray will be in between n_o and n_e, depending on the ray direction as described by the index ellipsoid. The magnitude of the difference is quantified by the birefringence Δn = n_e − n_o. The propagation (as well as the reflection coefficient) of the ordinary ray is simply described by n_o, as if there were no birefringence involved. The extraordinary ray, as its name suggests, propagates unlike any wave in an isotropic optical material. Its refraction (and reflection) at a surface can be understood using the effective refractive index (a value in between n_o and n_e). Its power flow (given by the Poynting vector) is not exactly in the direction of the wave vector.
This causes an additional shift in that beam, even when launched at normal incidence, as is popularly observed using a crystal of calcite as photographed above. Rotating the calcite crystal will cause one of the two images, that of the extraordinary ray, to rotate slightly around that of the ordinary ray, which remains fixed. When the light propagates either along or orthogonal to the optic axis, such a lateral shift does not occur. In the first case, both polarizations are perpendicular to the optic axis and see the same effective refractive index, so there is no extraordinary ray. In the second case the extraordinary ray propagates at a different phase velocity (corresponding to n_e) but still has its power flow in the direction of the wave vector. A crystal with its optic axis in this orientation, parallel to the optical surface, may be used to create a waveplate, in which there is no distortion of the image but an intentional modification of the state of polarization of the incident wave. For instance, a quarter-wave plate is commonly used to create circular polarization from a linearly polarized source. Biaxial materials The case of so-called biaxial crystals is substantially more complex. These are characterized by three refractive indices corresponding to three principal axes of the crystal. For most ray directions, both polarizations would be classified as extraordinary rays, but with different effective refractive indices. Being extraordinary waves, the direction of power flow is not identical to the direction of the wave vector in either case. The two refractive indices can be determined using the index ellipsoids for given directions of the polarization. Note that for biaxial crystals the index ellipsoid will not be an ellipsoid of revolution ("spheroid") but is described by three unequal principal refractive indices n_α, n_β and n_γ. Thus there is no axis around which a rotation leaves the optical properties invariant (as there is with uniaxial crystals, whose index ellipsoid is a spheroid). Although there is no axis of symmetry, there are two optical axes or binormals, defined as directions along which light may propagate without birefringence, i.e., directions along which the wavelength is independent of polarization. For this reason, birefringent materials with three distinct refractive indices are called biaxial. Additionally, there are two distinct axes known as optical ray axes or biradials, along which the group velocity of the light is independent of polarization.
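To make the direction dependence of the extraordinary index concrete, here is a brief sketch using the textbook index-ellipsoid relation for uniaxial crystals (the relation and the calcite values are standard approximations, not taken from this article's references). It evaluates the effective refractive index seen by the extraordinary ray as a function of the angle θ between the wave vector and the optic axis:

```python
import numpy as np

def n_eff(theta_rad, n_o, n_e):
    # Index-ellipsoid relation for a uniaxial crystal:
    #   1/n_eff^2 = cos^2(theta)/n_o^2 + sin^2(theta)/n_e^2
    return 1.0 / np.sqrt(np.cos(theta_rad)**2 / n_o**2 +
                         np.sin(theta_rad)**2 / n_e**2)

n_o, n_e = 1.658, 1.486   # approximate indices of calcite near 590 nm
for deg in (0, 30, 60, 90):
    print(f"theta = {deg:2d} deg -> n_eff = {n_eff(np.radians(deg), n_o, n_e):.4f}")
# Along the optic axis (0 deg) the extraordinary ray sees n_o, so there is
# no birefringence; at 90 deg it sees the full extraordinary index n_e.
```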
The different angles of refraction for the two polarization components are shown in the figure at the top of this page, with the optic axis along the surface (and perpendicular to the plane of incidence), so that the angle of refraction is different for the p polarization (the "ordinary ray" in this case, having its electric vector perpendicular to the optic axis) and the s polarization (the "extraordinary ray" in this case, whose electric field polarization includes a component in the direction of the optic axis). In addition, a distinct form of double refraction occurs, even with normal incidence, in cases where the optic axis is not along the refracting surface (nor exactly normal to it); in this case, the dielectric polarization of the birefringent material is not exactly in the direction of the wave's electric field for the extraordinary ray. The direction of power flow (given by the Poynting vector) for this inhomogeneous wave is at a finite angle from the direction of the wave vector, resulting in an additional separation between these beams. So even in the case of normal incidence, where one would compute the angle of refraction as zero (according to Snell's law, regardless of the effective index of refraction), the energy of the extraordinary ray is propagated at an angle. If it exits the crystal through a face parallel to the incoming face, the directions of both rays will be restored, but with a lateral shift between the two beams. This is commonly observed using a piece of calcite cut along its natural cleavage, placed above a paper with writing, as in the above photographs. On the contrary, waveplates specifically have their optic axis along the surface of the plate, so that with (approximately) normal incidence there will be no shift in the image from light of either polarization, simply a relative phase shift between the two light waves. Terminology Much of the work involving polarization preceded the understanding of light as a transverse electromagnetic wave, and this has affected some terminology in use. Isotropic materials have symmetry in all directions, and the refractive index is the same for any polarization direction. An anisotropic material is called "birefringent" because it will generally refract a single incoming ray in two directions, which we now understand correspond to the two different polarizations. This is true of either a uniaxial or biaxial material. In a uniaxial material, one ray behaves according to the normal law of refraction (corresponding to the ordinary refractive index), so an incoming ray at normal incidence remains normal to the refracting surface. As explained above, the other polarization can deviate from normal incidence, which cannot be described using the law of refraction. This thus became known as the extraordinary ray. The terms "ordinary" and "extraordinary" are still applied to the polarization components perpendicular to and not perpendicular to the optic axis respectively, even in cases where no double refraction is involved. A material is termed uniaxial when it has a single direction of symmetry in its optical behavior, which we term the optic axis. It also happens to be the axis of symmetry of the index ellipsoid (a spheroid in this case). The index ellipsoid could still be described according to the refractive indices n_x, n_y and n_z along three coordinate axes; in this case two are equal. So if n_x = n_y = n_o, corresponding to the x and y axes, then the extraordinary index is n_z = n_e, corresponding to the z axis, which is also called the optic axis in this case.
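Returning to the double refraction discussed at the start of this section, the angular splitting is easy to estimate. The following is a simplified sketch, assuming a geometry in which both rays obey Snell's law with the fixed indices n_o and n_e (exact only for particular optic-axis orientations; the calcite values are approximate):

```python
import numpy as np

def snell_angle(theta_i_deg, n, n_outside=1.0):
    # Snell's law: n_outside * sin(theta_i) = n * sin(theta_t)
    return np.degrees(np.arcsin(n_outside * np.sin(np.radians(theta_i_deg)) / n))

n_o, n_e = 1.658, 1.486   # approximate indices of calcite
theta_i = 45.0            # angle of incidence in degrees
print(f"ordinary ray:      {snell_angle(theta_i, n_o):.2f} deg")   # ~25.2 deg
print(f"extraordinary ray: {snell_angle(theta_i, n_e):.2f} deg")   # ~28.4 deg
# The ray governed by the larger index (here the ordinary ray) is bent more
# strongly towards the normal, so the two polarizations separate spatially.
```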
Materials in which all three refractive indices are different are termed biaxial, and the origin of this term is more complicated and frequently misunderstood. In a uniaxial crystal, different polarization components of a beam will travel at different phase velocities, except for rays in the direction of what we call the optic axis. Thus the optic axis has the particular property that rays in that direction do not exhibit birefringence, with all polarizations in such a beam experiencing the same index of refraction. It is very different when the three principal refractive indices are all different; then an incoming ray in any of those principal directions will still encounter two different refractive indices. But it turns out that there are two special directions (at an angle to all of the three axes) where the refractive indices for different polarizations are again equal. For this reason, these crystals were designated as biaxial, with the two "axes" in this case referring to ray directions in which propagation does not experience birefringence. Fast and slow rays In a birefringent material, a wave consists of two polarization components which generally are governed by different effective refractive indices. The so-called slow ray is the component for which the material has the higher effective refractive index (slower phase velocity), while the fast ray is the one with a lower effective refractive index. When a beam is incident on such a material from air (or any material with a lower refractive index), the slow ray is thus refracted more towards the normal than the fast ray. In the example figure at the top of this page, it can be seen that the refracted ray with s polarization (with its electric vibration along the direction of the optic axis, thus called the extraordinary ray) is the slow ray in the given scenario. Using a thin slab of that material at normal incidence, one would implement a waveplate. In this case, there is essentially no spatial separation between the polarizations; however, the phase of the wave in the parallel polarization (the slow ray) will be retarded with respect to the perpendicular polarization. These directions are thus known as the slow axis and fast axis of the waveplate. Positive or negative Uniaxial birefringence is classified as positive when the extraordinary index of refraction n_e is greater than the ordinary index n_o. Negative birefringence means that Δn = n_e − n_o is less than zero. In other words, the polarization of the fast (or slow) wave is perpendicular to the optic axis when the birefringence of the crystal is positive (or negative, respectively). In the case of biaxial crystals, all three of the principal axes have different refractive indices, so this designation does not apply. But for any defined ray direction one can just as well designate the fast and slow ray polarizations. Sources of optical birefringence While the best-known source of birefringence is the entrance of light into an anisotropic crystal, it can arise in otherwise optically isotropic materials in a few ways: Stress birefringence results when a normally isotropic solid is stressed and deformed (i.e., stretched or bent), causing a loss of physical isotropy and consequently a loss of isotropy in the material's permittivity tensor; Form birefringence, whereby structure elements such as rods, having one refractive index, are suspended in a medium with a different refractive index.
When the lattice spacing is much smaller than a wavelength, such a structure is described as a metamaterial; By the Pockels or Kerr effect, whereby an applied electric field induces birefringence due to nonlinear optics; By the self- or forced alignment into thin films of amphiphilic molecules such as lipids, some surfactants or liquid crystals; Circular birefringence takes place generally not in materials which are anisotropic but rather in ones which are chiral. This can include liquids where there is an enantiomeric excess of a chiral molecule, that is, one that has stereoisomers; By the Faraday effect, where a longitudinal magnetic field causes some materials to become circularly birefringent (having slightly different indices of refraction for left- and right-handed circular polarizations), similar to optical activity while the field is applied. Common birefringent materials The best-characterized birefringent materials are crystals. Due to their specific crystal structures their refractive indices are well defined. Depending on the symmetry of a crystal structure (as determined by one of the 32 possible crystallographic point groups), crystals in that group may be forced to be isotropic (not birefringent), to have uniaxial symmetry, or neither, in which case it is a biaxial crystal. The crystal structures permitting uniaxial and biaxial birefringence are noted in the two tables below, listing the two or three principal refractive indices (at wavelength 590 nm) of some better-known crystals. In addition to induced birefringence while under stress, many plastics obtain permanent birefringence during manufacture due to stresses which are "frozen in" by mechanical forces present when the plastic is molded or extruded. For example, ordinary cellophane is birefringent. Polarizers are routinely used to detect stress, either applied or frozen-in, in plastics such as polystyrene and polycarbonate. Cotton fiber is birefringent because of the high levels of cellulosic material in the fibre's secondary cell wall, which is directionally aligned with the cotton fibers. Polarized light microscopy is commonly used in biological tissue, as many biological materials are linearly or circularly birefringent. Collagen, found in cartilage, tendon, bone, corneas, and several other areas in the body, is birefringent and commonly studied with polarized light microscopy. Some proteins are also birefringent, exhibiting form birefringence. Inevitable manufacturing imperfections in optical fiber lead to birefringence, which is one cause of pulse broadening in fiber-optic communications. Such imperfections can be geometrical (lack of circular symmetry), or due to unequal lateral stress applied to the optical fibre. Birefringence is intentionally introduced (for instance, by making the cross-section elliptical) in order to produce polarization-maintaining optical fibers. Birefringence can be induced (or corrected) in optical fibers by bending them, which causes anisotropy in form and stress given the axis around which the fiber is bent and the radius of curvature. In addition to anisotropy in the electric polarizability that we have been discussing, anisotropy in the magnetic permeability could be a source of birefringence. At optical frequencies, there is no measurable magnetic polarizability (μ = μ₀) of natural materials, so this is not an actual source of birefringence.
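The fast-axis/slow-axis picture above leads directly to waveplate design. Here is a minimal sketch, assuming the standard relation that the optical path difference between the slow and fast axes is |Δn|·d for a plate of thickness d (the quartz birefringence value is an approximate assumption, not a figure from this article):

```python
def retardance_waves(delta_n, thickness_m, wavelength_m):
    # Optical path difference between slow and fast axes, in waves (cycles)
    return abs(delta_n) * thickness_m / wavelength_m

delta_n = 0.0091        # approximate birefringence of crystalline quartz
wavelength = 589e-9     # sodium D line, in metres

# Thickness of a zero-order quarter-wave plate: |dn| * d = lambda / 4
d_quarter = wavelength / (4 * abs(delta_n))
print(f"quarter-wave plate thickness ~ {d_quarter * 1e6:.1f} micrometres")
print(f"retardance check: {retardance_waves(delta_n, d_quarter, wavelength):.2f} waves")
```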
Measurement Birefringence and other polarization-based optical effects (such as optical rotation and linear or circular dichroism) can be observed by measuring any change in the polarization of light passing through the material. These measurements are known as polarimetry. Polarized light microscopes, which contain two polarizers that are at 90° to each other on either side of the sample, are used to visualize birefringence, since light that has not been affected by birefringence remains in a polarization that is totally rejected by the second polarizer ("analyzer"). The addition of quarter-wave plates permits examination using circularly polarized light. Determination of the change in polarization state using such an apparatus is the basis of ellipsometry, by which the optical properties of specular surfaces can be gauged through reflection. Birefringence measurements have been made with phase-modulated systems for examining the transient flow behaviour of fluids. Birefringence of lipid bilayers can be measured using dual-polarization interferometry. This provides a measure of the degree of order within these fluid layers and how this order is disrupted when the layer interacts with other biomolecules. For the 3D measurement of birefringence, a technique based on holographic tomography can be used. Applications Optical devices Birefringence is used in many optical devices. Liquid-crystal displays, the most common sort of flat-panel display, cause their pixels to become lighter or darker through rotation of the polarization (circular birefringence) of linearly polarized light as viewed through a sheet polarizer at the screen's surface. Similarly, light modulators modulate the intensity of light through electrically induced birefringence of polarized light followed by a polarizer. The Lyot filter is a specialized narrowband spectral filter employing the wavelength dependence of birefringence. Waveplates are thin birefringent sheets widely used in certain optical equipment for modifying the polarization state of light passing through them. To manufacture polarizers with high transmittance, birefringent crystals are used in devices such as the Glan–Thompson prism, Glan–Taylor prism and other variants. Layered birefringent polymer sheets can also be used for this purpose. Birefringence also plays an important role in second-harmonic generation and other nonlinear optical processes. The crystals used for these purposes are almost always birefringent. By adjusting the angle of incidence, the effective refractive index of the extraordinary ray can be tuned in order to achieve phase matching, which is required for the efficient operation of these devices. Medicine Birefringence is utilized in medical diagnostics. One powerful accessory used with optical microscopes is a pair of crossed polarizing filters. Light from the source is polarized in the x direction after passing through the first polarizer, but above the specimen is a polarizer (a so-called analyzer) oriented in the y direction. Therefore, no light from the source will be accepted by the analyzer, and the field will appear dark. Areas of the sample possessing birefringence will generally couple some of the x-polarized light into the y polarization; these areas will then appear bright against the dark background. Modifications to this basic principle can differentiate between positive and negative birefringence. For instance, needle aspiration of fluid from a gouty joint will reveal negatively birefringent monosodium urate crystals.
Calcium pyrophosphate crystals, in contrast, show weak positive birefringence. Urate crystals appear yellow, and calcium pyrophosphate crystals appear blue, when their long axes are aligned parallel to that of a red compensator filter, or when a crystal of known birefringence is added to the sample for comparison. The birefringence of tissue inside a living human thigh was measured using polarization-sensitive optical coherence tomography at 1310 nm and a single-mode fiber in a needle. Skeletal muscle birefringence was Δn = (1.79 ± 0.18) × 10⁻³, adipose tissue Δn = (0.07 ± 0.50) × 10⁻³, superficial aponeurosis Δn = (5.08 ± 0.73) × 10⁻³ and interstitial tissue Δn = (0.65 ± 0.39) × 10⁻³. These measurements may be important for the development of a less invasive method to diagnose Duchenne muscular dystrophy. Birefringence can be observed in amyloid plaques such as are found in the brains of Alzheimer's patients when stained with a dye such as Congo red. Modified proteins such as immunoglobulin light chains abnormally accumulate between cells, forming fibrils. Multiple folds of these fibers line up and take on a beta-pleated-sheet conformation. Congo red dye intercalates between the folds and, when observed under polarized light, causes birefringence. In ophthalmology, binocular retinal birefringence screening of the Henle fibers (photoreceptor axons that go radially outward from the fovea) provides reliable detection of strabismus and possibly also of anisometropic amblyopia. In healthy subjects, the maximum retardation induced by the Henle fiber layer is approximately 22 degrees at 840 nm. Furthermore, scanning laser polarimetry uses the birefringence of the optic nerve fiber layer to indirectly quantify its thickness, which is of use in the assessment and monitoring of glaucoma. Polarization-sensitive optical coherence tomography measurements obtained from healthy human subjects have demonstrated a change in birefringence of the retinal nerve fiber layer as a function of location around the optic nerve head. The same technology was recently applied in the living human retina to quantify the polarization properties of vessel walls near the optic nerve. While retinal vessel walls become thicker and less birefringent in patients who suffer from hypertension, hinting at a decrease in vessel wall condition, the vessel walls of diabetic patients do not change in thickness but do show an increase in birefringence, presumably due to fibrosis or inflammation. Birefringence characteristics in sperm heads allow the selection of spermatozoa for intracytoplasmic sperm injection. Likewise, zona imaging uses birefringence on oocytes to select the ones with the highest chances of successful pregnancy. Birefringence of particles biopsied from pulmonary nodules indicates silicosis. Dermatologists use dermatoscopes to view skin lesions. Dermatoscopes use polarized light, allowing the user to view crystalline structures corresponding to dermal collagen in the skin. These structures may appear as shiny white lines or rosette shapes and are only visible under polarized dermoscopy. Stress-induced birefringence Isotropic solids do not exhibit birefringence; however, when they are under mechanical stress, birefringence results. The stress can be applied externally, or it can be "frozen in" when a plastic part is cooled after being manufactured by injection molding.
When such a sample is placed between two crossed polarizers, colour patterns can be observed, because the polarization of a light ray is rotated after passing through a birefringent material and the amount of rotation is dependent on wavelength. The experimental method called photoelasticity, used for analyzing stress distribution in solids, is based on the same principle. There has been recent research on using stress-induced birefringence in a glass plate to generate an optical vortex and full Poincaré beams (optical beams that have every possible polarization state across a cross-section). Other cases of birefringence Birefringence is observed in anisotropic elastic materials. In these materials, the two polarizations split according to their effective refractive indices, which are also sensitive to stress. The study of birefringence in shear waves traveling through the solid Earth (the Earth's liquid core does not support shear waves) is widely used in seismology. Birefringence is widely used in mineralogy to identify rocks, minerals, and gemstones. Theory In an isotropic medium (including free space) the so-called electric displacement ($\mathbf{D}$) is just proportional to the electric field ($\mathbf{E}$) according to $\mathbf{D} = \varepsilon \mathbf{E}$, where the material's permittivity $\varepsilon$ is just a scalar (and equal to $n^2 \varepsilon_0$, where $n$ is the index of refraction). In an anisotropic material exhibiting birefringence, the relationship between $\mathbf{D}$ and $\mathbf{E}$ must now be described using a tensor equation: $$\mathbf{D} = \boldsymbol{\varepsilon} \mathbf{E},$$ where $\boldsymbol{\varepsilon}$ is now a 3 × 3 permittivity tensor. We assume linearity and no magnetic permeability in the medium: $\mathbf{B} = \mu_0 \mathbf{H}$. The electric field of a plane wave of angular frequency $\omega$ can be written in the general form $$\mathbf{E} = \mathbf{E}_0 e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)},$$ where $\mathbf{r}$ is the position vector, $t$ is time, and $\mathbf{E}_0$ is a vector describing the electric field at $\mathbf{r} = 0$, $t = 0$. Then we shall find the possible wave vectors $\mathbf{k}$. By combining Maxwell's equations for $\nabla \times \mathbf{E}$ and $\nabla \times \mathbf{H}$, we can eliminate $\mathbf{H}$ to obtain $$\nabla \times \left(\nabla \times \mathbf{E}\right) = -\mu_0 \frac{\partial^2 \mathbf{D}}{\partial t^2}.$$ With no free charges, Maxwell's equation for the divergence of $\mathbf{D}$ vanishes: $$\nabla \cdot \mathbf{D} = 0.$$ We can apply the vector identity $\nabla \times (\nabla \times \mathbf{A}) = \nabla (\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A}$ to the left hand side of the wave equation, and use the spatial dependence in which each differentiation in $x$ (for instance) results in multiplication by $ik_x$, to find $$\nabla \times \left(\nabla \times \mathbf{E}\right) = -\mathbf{k} \left(\mathbf{k} \cdot \mathbf{E}\right) + |\mathbf{k}|^2 \mathbf{E}.$$ The right hand side can be expressed in terms of $\mathbf{E}$ through application of the permittivity tensor and noting that differentiation in time results in multiplication by $-i\omega$; the wave equation then becomes $$-\mathbf{k} \left(\mathbf{k} \cdot \mathbf{E}\right) + |\mathbf{k}|^2 \mathbf{E} = \frac{\omega^2}{c^2} \boldsymbol{\varepsilon}_r \mathbf{E},$$ where $\boldsymbol{\varepsilon}_r = \boldsymbol{\varepsilon} / \varepsilon_0$ is the relative permittivity tensor. Applying the differentiation rule to the divergence equation, we find $$\mathbf{k} \cdot \mathbf{D} = 0,$$ which indicates that $\mathbf{D}$ is orthogonal to the direction of the wavevector $\mathbf{k}$, even though that is no longer generally true for $\mathbf{E}$, as would be the case in an isotropic medium. $\mathbf{D}$ will not be needed for the further steps in the following derivation. Finding the allowed values of $\mathbf{k}$ for a given $\omega$ is easiest done by using Cartesian coordinates with the $x$, $y$ and $z$ axes chosen in the directions of the symmetry axes of the crystal (or simply choosing $z$ in the direction of the optic axis of a uniaxial crystal), resulting in a diagonal matrix for the relative permittivity tensor: $$\boldsymbol{\varepsilon}_r = \begin{pmatrix} n_x^2 & 0 & 0 \\ 0 & n_y^2 & 0 \\ 0 & 0 & n_z^2 \end{pmatrix},$$ where the diagonal values are squares of the refractive indices for polarizations along the three principal axes $x$, $y$ and $z$. With $\boldsymbol{\varepsilon}_r$ in this form, and substituting in the speed of light $c$ using $c^2 = 1/(\mu_0 \varepsilon_0)$, the $x$ component of the vector wave equation becomes $$\left(\frac{\omega^2}{c^2} n_x^2 - k_y^2 - k_z^2\right) E_x + k_x k_y E_y + k_x k_z E_z = 0,$$ where $E_x$, $E_y$, $E_z$ are the components of $\mathbf{E}$ (at any given position in space and time) and $k_x$, $k_y$, $k_z$ are the components of $\mathbf{k}$; similar equations hold for the $y$ and $z$ components. This is a set of linear equations in $E_x$, $E_y$, $E_z$, so it can have a nontrivial solution (that is, one other than $\mathbf{E} = 0$) as long as the following determinant is zero: $$\det \begin{pmatrix} \frac{\omega^2}{c^2} n_x^2 - k_y^2 - k_z^2 & k_x k_y & k_x k_z \\ k_x k_y & \frac{\omega^2}{c^2} n_y^2 - k_x^2 - k_z^2 & k_y k_z \\ k_x k_z & k_y k_z & \frac{\omega^2}{c^2} n_z^2 - k_x^2 - k_y^2 \end{pmatrix} = 0.$$ Evaluating the determinant and rearranging the terms according to the powers of $\omega^2/c^2$, the constant terms cancel.
After eliminating the common factor $\omega^2/c^2$ from the remaining terms, we obtain $$\frac{\omega^4}{c^4} n_x^2 n_y^2 n_z^2 - \frac{\omega^2}{c^2} \left( k_x^2 n_x^2 \left(n_y^2 + n_z^2\right) + k_y^2 n_y^2 \left(n_x^2 + n_z^2\right) + k_z^2 n_z^2 \left(n_x^2 + n_y^2\right) \right) + \left(k_x^2 + k_y^2 + k_z^2\right) \left(k_x^2 n_x^2 + k_y^2 n_y^2 + k_z^2 n_z^2\right) = 0.$$ In the case of a uniaxial material, choosing the optic axis to be in the $z$ direction so that $n_x = n_y = n_o$ and $n_z = n_e$, this expression can be factored into $$\left(\frac{k_x^2 + k_y^2 + k_z^2}{n_o^2} - \frac{\omega^2}{c^2}\right) \left(\frac{k_x^2 + k_y^2}{n_e^2} + \frac{k_z^2}{n_o^2} - \frac{\omega^2}{c^2}\right) = 0.$$ Setting either of the factors to zero will define an ellipsoidal surface in the space of wavevectors that are allowed for a given $\omega$. The first factor being zero defines a sphere; this is the solution for so-called ordinary rays, in which the effective refractive index is exactly $n_o$ regardless of the direction of $\mathbf{k}$. The second defines a spheroid symmetric about the $z$ axis. This solution corresponds to the so-called extraordinary rays, in which the effective refractive index is in between $n_o$ and $n_e$, depending on the direction of $\mathbf{k}$. Therefore, for any arbitrary direction of propagation (other than in the direction of the optic axis), two distinct wavevectors are allowed, corresponding to the polarizations of the ordinary and extraordinary rays. For a biaxial material a similar but more complicated condition on the two waves can be described; the locus of allowed $\mathbf{k}$ vectors (the wavevector surface) is a 4th-degree two-sheeted surface, so that in a given direction there are generally two permitted $\mathbf{k}$ vectors (and their opposites). By inspection one can see that the quartic equation above is generally satisfied for two positive values of $\omega$. Or, for a specified optical frequency $\omega$ and direction normal to the wavefronts $\mathbf{k}/|\mathbf{k}|$, it is satisfied for two wavenumbers (or propagation constants) $|\mathbf{k}|$ (and thus effective refractive indices) corresponding to the propagation of two linear polarizations in that direction. When those two propagation constants are equal, then the effective refractive index is independent of polarization, and there is consequently no birefringence encountered by a wave traveling in that particular direction. For a uniaxial crystal, this is the optic axis, the ±z direction according to the above construction. But when all three refractive indices (or permittivities) $n_x$, $n_y$ and $n_z$ are distinct, it can be shown that there are exactly two such directions, where the two sheets of the wave-vector surface touch; these directions are not at all obvious and do not lie along any of the three principal axes ($x$, $y$, $z$ according to the above convention). Historically that accounts for the use of the term "biaxial" for such crystals, as the existence of exactly two such special directions (considered "axes") was discovered well before polarization and birefringence were understood physically. These two special directions are generally not of particular interest; biaxial crystals are rather specified by their three refractive indices corresponding to the three axes of symmetry. A general state of polarization launched into the medium can always be decomposed into two waves, one in each of those two polarizations, which will then propagate with different wavenumbers $|\mathbf{k}|$. Applying the different phase of propagation to those two waves over a specified propagation distance will result in a generally different net polarization state at that point; this is the principle of the waveplate, for instance. With a waveplate, there is no spatial displacement between the two rays, as their $\mathbf{k}$ vectors are still in the same direction. That is true when each of the two polarizations is either normal to the optic axis (the ordinary ray) or parallel to it (the extraordinary ray). In the more general case, there is a difference not only in the magnitude but also in the direction of the two rays.
For instance, the photograph through a calcite crystal (top of page) shows a shifted image in the two polarizations; this is due to the optic axis being neither parallel nor normal to the crystal surface. And even when the optic axis is parallel to the surface, this will occur for waves launched at non-normal incidence (as depicted in the explanatory figure). In these cases the two $\mathbf{k}$ vectors can be found by solving the quartic constraint above, subject to the boundary condition which requires that the components of the two transmitted waves' $\mathbf{k}$ vectors, and the $\mathbf{k}$ vector of the incident wave, as projected onto the surface of the interface, must all be identical. For a uniaxial crystal it will be found that there is not a spatial shift for the ordinary ray (hence its name), which will refract as if the material were non-birefringent, with an index the same as that of the two axes which are not the optic axis. For a biaxial crystal neither ray is deemed "ordinary", nor would either generally be refracted according to a refractive index equal to that of one of the principal axes.
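To make the ordinary and extraordinary solutions above concrete, the following Python sketch evaluates the two effective indices for a wave normal at angle θ to the optic axis. It is an illustration only, not part of the original text; the calcite values n_o = 1.658 and n_e = 1.486 (at 590 nm) are assumed as representative figures.

```python
import numpy as np

# Sketch: the two allowed refractive indices in a uniaxial crystal for a
# wave normal at angle theta to the optic axis, from the sphere/spheroid
# factorization derived above. Calcite-like values are assumed.

n_o, n_e = 1.658, 1.486

def ordinary_index(theta):
    return n_o  # sphere solution: independent of direction

def extraordinary_index(theta):
    # spheroid solution: 1/n(theta)^2 = cos^2(theta)/n_o^2 + sin^2(theta)/n_e^2
    return 1.0 / np.sqrt(np.cos(theta) ** 2 / n_o ** 2
                         + np.sin(theta) ** 2 / n_e ** 2)

for deg in (0, 30, 60, 90):
    th = np.radians(deg)
    print(f"theta = {deg:2d} deg: n_ord = {ordinary_index(th):.3f}, "
          f"n_ext = {extraordinary_index(th):.3f}")
```

At θ = 0 the two indices coincide, reproducing the absence of birefringence along the optic axis; at θ = 90° the extraordinary index reaches n_e.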
Physical sciences
Optics
Physics
174431
https://en.wikipedia.org/wiki/Fiberglass
Fiberglass
Fiberglass (American English) or fibreglass (Commonwealth English) is a common type of fiber-reinforced plastic using glass fiber. The fibers may be randomly arranged, flattened into a sheet called a chopped strand mat, or woven into glass cloth. The plastic matrix may be a thermoset polymer (most often epoxy, polyester resin, or vinyl ester resin) or a thermoplastic. Cheaper and more flexible than carbon fiber, it is stronger than many metals by weight, non-magnetic, non-conductive, transparent to electromagnetic radiation, can be molded into complex shapes, and is chemically inert under many circumstances. Applications include aircraft, boats, automobiles, bathtubs and enclosures, swimming pools, hot tubs, septic tanks, water tanks, roofing, pipes, cladding, orthopedic casts, surfboards, and external door skins. Other common names for fiberglass are glass-reinforced plastic (GRP), glass-fiber reinforced plastic (GFRP) or GFK (from the German glasfaserverstärkter Kunststoff). Because glass fiber itself is sometimes referred to as "fiberglass", the composite is also called fiberglass-reinforced plastic (FRP). This article uses "fiberglass" to refer to the complete fiber-reinforced composite material, rather than only to the glass fiber within it. History Glass fibers have been produced for centuries, but the earliest patent was awarded to the Prussian inventor Hermann Hammesfahr (1845–1914) in the U.S. in 1880. Mass production of glass strands was accidentally discovered in 1932 when Games Slayter, a researcher at Owens-Illinois, directed a jet of compressed air at a stream of molten glass and produced fibers. A patent for this method of producing glass wool was first applied for in 1933. Owens joined with the Corning company in 1935, and the method was adapted by Owens Corning to produce its patented "Fiberglas" (spelled with one "s") in 1936. Originally, Fiberglas was a glass wool with fibers entrapping a great deal of gas, making it useful as an insulator, especially at high temperatures. A suitable resin for combining the fiberglass with a plastic to produce a composite material was developed in 1936 by DuPont. The first ancestor of modern polyester resins is Cyanamid's resin of 1942. Peroxide curing systems were used by then. With the combination of fiberglass and resin, the gas content of the material was replaced by plastic. This reduced the insulation properties to values typical of the plastic, but now for the first time the composite showed great strength and promise as a structural and building material. Many glass fiber composites continued to be called "fiberglass" (as a generic name), and the name was also used for the low-density glass wool product containing gas instead of plastic. Ray Greene of Owens Corning is credited with producing the first composite boat in 1937, but did not proceed further at the time because of the brittle nature of the plastic used. In 1939 Russia was reported to have constructed a passenger boat of plastic materials, and the United States a fuselage and wings of an aircraft. The first car to have a fiberglass body was a 1946 prototype of the Stout Scarab, but the model did not enter production. Fiber Unlike the glass fibers used for insulation, for the final structure to be strong, the fiber's surfaces must be almost entirely free of defects, as this permits the fibers to reach gigapascal tensile strengths.
If a bulk piece of glass were defect-free, it would be as strong as glass fibers; however, it is generally impractical to produce and maintain bulk material in a defect-free state outside of laboratory conditions. Production The manufacturing process for glass fibers suitable for reinforcement (not to be confused with pultrusion, a downstream composite-forming process described under Construction methods below) uses large furnaces to gradually melt the silica sand, limestone, kaolin clay, fluorspar, colemanite, dolomite and other minerals until a liquid forms. It is then extruded through bushings (spinnerets), which are bundles of very small orifices (typically 5–25 micrometres in diameter for E-glass, 9 micrometres for S-glass). These filaments are then sized (coated) with a chemical solution. The individual filaments are now bundled in large numbers to provide a roving. The diameter of the filaments, and the number of filaments in the roving, determine its weight, typically expressed in one of two measurement systems: yield, or yards per pound (the number of yards of fiber in one pound of material; thus a smaller number means a heavier roving; examples of standard yields are 225 yield, 450 yield and 675 yield), and tex, or grams per km (how many grams 1 km of roving weighs, the inverse of yield; thus a smaller number means a lighter roving; examples of standard tex values are 750 tex, 1100 tex and 2200 tex). These rovings are then either used directly in a composite application such as pultrusion, filament winding (pipe), or gun roving (where an automated gun chops the glass into short lengths and drops it into a jet of resin, projected onto the surface of a mold), or used in an intermediary step to manufacture fabrics such as chopped strand mat (CSM) (made of randomly oriented small cut lengths of fiber all bonded together), woven fabrics, knit fabrics or unidirectional fabrics. Chopped strand mat Chopped strand mat (CSM) is a form of reinforcement used in fiberglass. It consists of glass fibers laid randomly across each other and held together by a binder. It is typically processed using the hand lay-up technique, where sheets of material are placed on a mold and brushed with resin. Because the binder dissolves in resin, the material easily conforms to different shapes when wetted out. After the resin cures, the hardened product can be taken from the mold and finished. Using chopped strand mat gives the fiberglass isotropic in-plane material properties. Sizing A coating or primer is applied to the roving to help protect the glass filaments for processing and manipulation and to ensure proper bonding to the resin matrix, thus allowing for the transfer of shear loads from the glass fibers to the thermoset plastic. Without this bonding, the fibers can 'slip' in the matrix, causing localized failure. Properties An individual structural glass fiber is both stiff and strong in tension and compression—that is, along its axis. Although it might be assumed that the fiber is weak in compression, it is actually only the long aspect ratio of the fiber which makes it seem so; i.e., because a typical fiber is long and narrow, it buckles easily. On the other hand, the glass fiber is weak in shear—that is, across its axis. Therefore, if a collection of fibers can be arranged permanently in a preferred direction within a material, and if they can be prevented from buckling in compression, the material will be preferentially strong in that direction.
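This directional strengthening can be roughly estimated with the classical rule of mixtures. The short Python sketch below is purely illustrative and not from the original text: the fiber and matrix moduli and the fiber volume fraction are assumed, representative values.

```python
# Sketch: rule-of-mixtures estimate of unidirectional fiberglass stiffness.
# Illustrative inputs (assumptions, not from the article): E-glass fiber
# modulus ~72 GPa, cured polyester matrix ~3.5 GPa, 60% fiber volume fraction.

E_fiber = 72.0    # GPa, along the fiber axis (assumed)
E_matrix = 3.5    # GPa (assumed)
V_f = 0.60        # fiber volume fraction (assumed)

# Parallel to the fibers: Voigt (equal-strain) upper-bound estimate.
E_parallel = V_f * E_fiber + (1 - V_f) * E_matrix

# Transverse to the fibers: Reuss (equal-stress) lower-bound estimate.
E_transverse = 1.0 / (V_f / E_fiber + (1 - V_f) / E_matrix)

print(f"E parallel   ~ {E_parallel:.1f} GPa")    # ~44.6 GPa
print(f"E transverse ~ {E_transverse:.1f} GPa")  # ~8.2 GPa
```

The large gap between the two estimates is exactly why the stacking of differently oriented layers matters, which the discussion below takes up.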
Furthermore, by laying multiple layers of fiber on top of one another, with each layer oriented in various preferred directions, the material's overall stiffness and strength can be efficiently controlled. In fiberglass, it is the plastic matrix which permanently constrains the structural glass fibers to directions chosen by the designer. With chopped strand mat, this directionality is essentially an entire two-dimensional plane; with woven fabrics or unidirectional layers, directionality of stiffness and strength can be more precisely controlled within the plane. A fiberglass component is typically of a thin "shell" construction, sometimes filled on the inside with structural foam, as in the case of surfboards. The component may be of nearly arbitrary shape, limited only by the complexity and tolerances of the mold used for manufacturing the shell. The mechanical functionality of materials is heavily reliant on the combined performances of both the resin (also called the matrix) and the fibers. For example, in severe temperature conditions (over 180 °C), the resin component of the composite may lose its functionality, partially due to bond deterioration between resin and fiber. However, GFRPs can still show significant residual strength after experiencing high temperatures (200 °C). One notable feature of fiberglass is that the resins used are subject to contraction during the curing process. For polyester this contraction is often 5–6%; for epoxy, about 2%. Because the fibers do not contract, this differential can create changes in the shape of the part during curing. Distortions can appear hours, days, or weeks after the resin has set. While this distortion can be minimized by symmetric use of the fibers in the design, a certain amount of internal stress is created; and if it becomes too great, cracks form. Types The most common type of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (alkali-lime glass with little or no boron oxide), E-CR-glass (electrical/chemical resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low dielectric constant), R-glass (alumino silicate glass without MgO and CaO, with high mechanical requirements as reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content, with high tensile strength). Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass but has the drawback that it must be worked at very high temperatures. In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime"), or soda lime glass, crushed and ready to be remelted as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial electrical application) is alkali-free, and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and is also the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications.
S-glass ("S" for "stiff") is used when tensile strength (high modulus) is important and is thus an important building and aircraft epoxy composite (it is called R-glass, "R" for "reinforcement" in Europe). C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator"—a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass. Table of some common fiberglass types Applications Fiberglass is versatile because it is lightweight, strong, weather-resistant, and can have a variety of surface textures. During World War II, fiberglass was developed as a replacement for the molded plywood used in aircraft radomes (fiberglass being transparent to microwaves). Its first main civilian application was for the building of boats and sports car bodies, where it gained acceptance in the 1950s. Its use has broadened to the automotive and sport equipment sectors. In the production of some products, such as aircraft, carbon fiber is now used instead of fiberglass, which is stronger by volume and weight. Advanced manufacturing techniques such as pre-pregs and fiber rovings extend fiberglass's applications and the tensile strength possible with fiber-reinforced plastics. Fiberglass is also used in the telecommunications industry for shrouding antennas, due to its RF permeability and low signal attenuation properties. It may also be used to conceal other equipment where no signal permeability is required, such as equipment cabinets and steel support structures, due to the ease with which it can be molded and painted to blend with existing structures and surfaces. Other uses include sheet-form electrical insulators and structural components commonly found in power-industry products. Because of fiberglass's lightweight and durability, it is often used in protective equipment such as helmets. Many sports use fiberglass protective gear, such as goaltenders' and catchers' masks. Storage tanks Storage tanks can be made of fiberglass with capacities up to about 300 tonnes. Smaller tanks can be made with chopped strand mat cast over a thermoplastic inner tank which acts as a preform during construction. Much more reliable tanks are made using woven mat or filament wound fiber, with the fiber orientation at right angles to the hoop stress imposed in the sidewall by the contents. Such tanks tend to be used for chemical storage because the plastic liner (often polypropylene) is resistant to a wide range of corrosive chemicals. Fiberglass is also used for septic tanks. House building Glass-reinforced plastics are also used to produce house building components such as roofing laminate, door surrounds, over-door canopies, window canopies and dormers, chimneys, coping systems, and heads with keystones and sills. The material's reduced weight and easier handling, compared to wood or metal, allows faster installation. Mass-produced fiberglass brick-effect panels can be used in the construction of composite housing, and can include insulation to reduce heat loss. Oil and gas artificial lift systems In rod pumping applications, fiberglass rods are often used for their high tensile strength to weight ratio. Fiberglass rods provide an advantage over steel rods because they stretch more elastically (lower Young's modulus) than steel for a given weight, meaning more oil can be lifted from the hydrocarbon reservoir to the surface with each stroke, all while reducing the load on the pumping unit. 
Fiberglass rods must be kept in tension, however, as they frequently part if placed in even a small amount of compression. The buoyancy of the rods within a fluid amplifies this tendency. Piping GRP and GRE pipe can be used in a variety of above- and below-ground systems, including those for desalination, water treatment, water distribution networks, chemical process plants, water used for firefighting, hot and cold drinking water, wastewater/sewage, municipal waste and liquified petroleum gas. Boating Fiberglass composite boats have been made since the early 1940s, and many sailing vessels made after 1950 were built using the fiberglass lay-up process. As of 2022, boats continue to be made with fiberglass, though more advanced techniques such as vacuum bag moulding are used in the construction process. Armour Though most bullet-resistant armours are made using different textiles, fiberglass composites have been shown to be effective as ballistic armor. Construction methods Filament winding Filament winding is a fabrication technique mainly used for manufacturing open (cylinders) or closed-end structures (pressure vessels or tanks). The process involves winding filaments under tension over a male mandrel. The mandrel rotates while a wind eye on a carriage moves horizontally, laying down fibers in the desired pattern. The most common filaments are carbon or glass fiber, and they are coated with synthetic resin as they are wound. Once the mandrel is completely covered to the desired thickness, the resin is cured; often the mandrel is placed in an oven to achieve this, though sometimes radiant heaters are used with the mandrel still turning in the machine. Once the resin has cured, the mandrel is removed, leaving the hollow final product. For some products such as gas bottles, the 'mandrel' is a permanent part of the finished product, forming a liner to prevent gas leakage or a barrier to protect the composite from the fluid to be stored. Filament winding is well suited to automation, and there are many applications, such as pipes and small pressure vessels, that are wound and cured without any human intervention. The controlled variables for winding are fiber type, resin content, wind angle, tow or bandwidth, and thickness of the fiber bundle. The angle at which the fiber is laid down has an effect on the properties of the final product. A high angle "hoop" will provide circumferential or "burst" strength, while lower-angle patterns (polar or helical) will provide greater longitudinal tensile strength. Products currently being produced using this technique range from pipes, golf clubs, reverse osmosis membrane housings, oars, bicycle forks, bicycle rims, power and transmission poles, and pressure vessels to missile casings, aircraft fuselages, lamp posts and yacht masts. Fiberglass hand lay-up operation A release agent, usually in either wax or liquid form, is applied to the chosen mold to allow the finished product to be cleanly removed from the mold. Resin—typically a 2-part thermoset polyester, vinyl ester, or epoxy—is mixed with its hardener and applied to the surface. Sheets of fiberglass matting are laid into the mold, then more resin mixture is added using a brush or roller. The material must conform to the mold, and air must not be trapped between the fiberglass and the mold. Additional resin is applied, and possibly additional sheets of fiberglass. Hand pressure, vacuum or rollers are used to ensure the resin saturates and fully wets all layers and that any air pockets are removed.
The work must be done quickly before the resin starts to cure, unless high-temperature resins are used, which will not cure until the part is warmed in an oven. In some cases, the work is covered with plastic sheets and vacuum is drawn on the work to remove air bubbles and press the fiberglass to the shape of the mold. Fiberglass spray lay-up operation The fiberglass spray lay-up process is similar to the hand lay-up process but differs in the application of the fiber and resin to the mold. Spray-up is an open-molding composites fabrication process where resin and reinforcements are sprayed onto a mold. The resin and glass may be applied separately or simultaneously "chopped" in a combined stream from a chopper gun. Workers roll out the spray-up to compact the laminate. Wood, foam or other core material may then be added, and a secondary spray-up layer embeds the core between the laminates. The part is then cured, cooled, and removed from the reusable mold. Pultrusion operation Pultrusion is a manufacturing method used to make strong, lightweight composite materials. In pultrusion, material is pulled through forming machinery using either a hand-over-hand method or a continuous-roller method (as opposed to extrusion, where the material is pushed through dies). In fiberglass pultrusion, fibers (the glass material) are pulled from spools through a device that coats them with a resin. They are then typically heat-treated and cut to length. Fiberglass produced this way can be made in a variety of shapes and cross-sections, such as W or S cross-sections. Health hazards Exposure People can be exposed to fiberglass in the workplace during its fabrication, installation or removal, by breathing it in, by skin contact, or by eye contact. Furthermore, in the manufacturing process of fiberglass, styrene vapors are released while the resins are cured. These are also irritating to mucous membranes and the respiratory tract. The general population can get exposed to fiberglass from insulation and building materials, or from fibers in the air near manufacturing facilities, or when they are near building fires or implosions. The American Lung Association advises that fiberglass insulation should never be left exposed in an occupied area. Since work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied, people can get exposed. No readily usable biological or clinical indices of exposure exist. Symptoms and signs, health effects Fiberglass will irritate the eyes, skin, and the respiratory system. Hence, symptoms can include itching of the eyes, skin and nose, sore throat, hoarseness, dyspnea (breathing difficulty) and cough. Peak alveolar deposition was observed in rodents and humans for fibers with diameters of 1 to 2 μm. In animal experiments, adverse lung effects such as lung inflammation and lung fibrosis have occurred, and increased incidences of mesothelioma, pleural sarcoma, and lung carcinoma have been found with intrapleural or intratracheal instillations in rats. As of 2001, in humans only the more biopersistent materials, like ceramic fibres (which are used industrially as insulation in high-temperature environments such as blast furnaces) and certain special-purpose glass wools not used as insulating materials, remain classified as possible carcinogens (IARC Group 2B). The more commonly used glass fibre wools, including insulation glass wool, rock wool and slag wool, are considered not classifiable as to carcinogenicity to humans (IARC Group 3).
In October 2001, all fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) as "not classifiable as to carcinogenicity to humans" (IARC group 3). "Epidemiologic studies published during the 15 years since the previous IARC monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during the manufacture of these materials, and inadequate evidence overall of any cancer risk." In June 2011, the US National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. However, NTP still considers fibrous glass dust to be "reasonably anticipated [as] a human carcinogen (Certain Glass Wool Fibers (Inhalable))". Similarly, California's Office of Environmental Health Hazard Assessment (OEHHA) published a November 2011 modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." Therefore, a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under federal or California law. As of 2012, the North American Insulation Manufacturers Association stated that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. As of 2012, the European Union and Germany have classified synthetic glass fibers as possibly or probably carcinogenic, but fibers can be exempt from this classification if they pass specific tests. A 2012 health hazard review for the European Commission stated that inhalation of fiberglass at concentrations of 3, 16 and 30 mg/m³ "did not induce fibrosis nor tumours except transient lung inflammation that disappeared after a post-exposure recovery period." Historic reviews of the epidemiology studies had been conducted by Harvard's Medical and Public Health Schools in 1995, the National Academy of Sciences in 2000, the Agency for Toxic Substances and Disease Registry ("ATSDR") in 2004, and the National Toxicology Program in 2011, which reached the same conclusion as IARC that there is no evidence of increased risk from occupational exposure to glass wool fibers. Pathophysiology Genetic and toxic effects are exerted through production of reactive oxygen species, which can damage DNA and cause chromosomal aberrations, nuclear abnormalities, mutations, gene amplification in proto-oncogenes, and cell transformation in mammalian cells. There is also indirect, inflammation-driven genotoxicity through reactive oxygen species produced by inflammatory cells. The longer, thinner and more durable (biopersistent) the fibers were, the more potent they were in causing damage. Regulation, exposure limits In the US, fine mineral fiber emissions have been regulated by the EPA, but respirable fibers ("particulates not otherwise regulated") are regulated by the Occupational Safety and Health Administration (OSHA); OSHA has set the legal limit (permissible exposure limit) for fiberglass exposure in the workplace as 15 mg/m³ total and 5 mg/m³ in respiratory exposure over an 8-hour workday.
The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 3 fibers/cm³ (for fibers less than 3.5 micrometers in diameter and greater than 10 micrometers in length) as a time-weighted average over an 8-hour workday, and a 5 mg/m³ total limit. As of 2001, the Hazardous Substances Ordinance in Germany dictates a maximum occupational exposure limit of 86 mg/m³. Further manufacture of GRP components (grinding, cutting, sawing) creates fine dust and chips containing glass filaments, as well as tacky dust, in quantities high enough to affect health and the functionality of machines and equipment; in certain concentrations, such dust can form a potentially explosive mixture. The installation of effective extraction and filtration equipment is required to ensure safety and efficiency.
Technology
Materials
null
174455
https://en.wikipedia.org/wiki/Lentil
Lentil
The lentil (Vicia lens or Lens culinaris) is a legume; it is an annual plant grown for its lens-shaped edible seeds, also called lentils. The plant is short, and the seeds grow in pods, usually with two seeds in each. Lentil seeds are used around the world for culinary purposes. In cuisines of the Indian subcontinent, where lentils are a staple, split lentils (often with their hulls removed) known as dal are often cooked into a thick curry that is usually eaten with rice or roti. Lentils are commonly used in stews and soups. Botanical description Name Many different names in different parts of the world are used for the crop lentil. The first use of the word lens to designate a specific genus was in the 17th century by the botanist Tournefort. The word "lens" for the lentil is of classical Roman or Latin origin, possibly from a prominent Roman family named Lentulus, just as the family name "Cicero" was derived from the chickpea, Cicer arietinum, and "Fabia" (as in Quintus Fabius Maximus) from the fava bean (Vicia faba). Systematics The genus Vicia is part of the subfamily Faboideae, which is contained in the flowering plant family Fabaceae, commonly known as the legume or bean family, of the order Fabales in the kingdom Plantae. The former genus Lens consisted of the cultivated L. culinaris and six related wild taxa. Among the different taxa of wild lentils, L. orientalis was considered to be the progenitor of the cultivated lentil and was generally classified as L. culinaris subsp. orientalis. Lentil is hypogeal, which means the cotyledons of the germinating seed stay in the ground and inside the seed coat. Therefore, it is less vulnerable to frost, wind erosion, or insect attack. The plant is a diploid, annual, bushy herb of erect, semierect, or spreading and compact growth, normally short in height. It has many hairy branches, and its stem is slender and angular. The rachis bears 10 to 15 leaflets in five to eight pairs. The leaves are alternate, of oblong-linear and obtuse shape, and from yellowish green to dark bluish green in colour. In general, the upper leaves are converted into tendrils, whereas the lower leaves are mucronate. If stipules are present, they are small. The flowers, one to four in number, are small and white, pink, purple, pale purple, or pale blue in colour. They arise from the axils of the leaves, on a slender footstalk almost as long as the leaves. The pods are oblong and slightly inflated. Normally, each of them contains two seeds in the characteristic lens shape. The seeds can also be mottled and speckled. The several cultivated varieties of lentil differ in size, hairiness, and colour of the leaves, flowers, and seeds. Lentils are self-pollinating. The flowering begins from the lowermost buds and gradually moves upward, so-called acropetal flowering. About two weeks are needed for all the flowers to open on a single branch. At the end of the second day and on the third day after the opening of the flowers, they close completely and the colour begins to fade. After three to four days, the setting of the pods takes place. Types Types can be classified according to their size, whether they are split or whole, or shelled or unshelled. Seed coats can range from light green to deep purple, as well as being tan, grey, brown, black or mottled. Shelled lentils show the colour of the cotyledon, which can be yellow, orange, red, or green.
Red-cotyledon types include Nipper, Northfield, Cobber, Digger, Nugget and Aldinga (all Australia); masoor dal (unshelled lentils with a brown seed coat and an orange-red cotyledon); petite crimson (shelled masoor lentils); and Red Chief (light tan seed coat and red cotyledon). Small green/brown-seed-coat types include Eston Green, Pardina (Spain) and Verdina (Spain). Medium green/brown-seed-coat types include Avondale (United States), Matilda (Australia) and Richlea. Large green/brown-seed-coat types include Boomer (Australia); Brewer's, a large brown lentil which is often considered the "regular" lentil in the United States; Castellana (Spain); Laird, the commercial standard for large green lentils in western Canada; Mason; Merrit; Mosa (Spain); Naslada (Bulgaria); Pennell (United States); and Riveland (United States). Other types include Beluga, a black, bead-like, lens-shaped, almost spherical lentil named for its resemblance to beluga caviar (called Indianhead in Canada); Macachiados, big yellow Mexican lentils; Puy lentils (var. puyensis), a small dark speckled blue-green lentil from France with a Protected Designation of Origin name; and Alb-Leisa, three traditional genotypes of lentils native to the Swabian Jura in Germany and protected by the producers' association Öko-Erzeugergemeinschaft Alb-Leisa ("Eco-producer association Alb-Leisa"). Production In 2022, global production of lentils was 6.7 million tonnes. Canada produced the largest share, 2.2 million tonnes, or roughly 34% of the world's total output (table), nearly all (95%) of it in Saskatchewan. India was the world's second-largest producer, led by the states of Madhya Pradesh and Uttar Pradesh, which together account for roughly 70 percent of the national lentil production. Cultivation History The cultivated lentil Lens culinaris subsp. culinaris was derived from its wild subspecies L. culinaris subsp. orientalis, although other species may also have contributed some genes, according to Jonathan Sauer (Historical Geography of Crop Plants, 2017). Unlike their wild ancestors, domesticated lentil crops have indehiscent pods and non-dormant seeds. Lentil was domesticated in the Fertile Crescent of the Near East and then spread to Europe and North Africa and the Indo-Gangetic plain. The primary center of diversity for the domestic Lens culinaris, as well as its wild progenitor L. culinaris subsp. orientalis, is considered to be the Middle East. The oldest known carbonized remains of lentil, from Greece's Franchthi Cave, are dated to 11,000 BC. In archaeobotanical excavations, carbonized remains of lentil seeds have been recovered from widely dispersed places such as Tell Ramad in Syria (6250–5950 BC), Aceramic Beidha in Jordan, Hacilar in Turkey (5800–5000 BC), Tepe Sabz in Iran (5500–5000 BC) and Argissa-Magula in Thessaly, Greece (6000–5000 BC), among other places. Soil requirements Lentils can grow on various soil types, from sand to clay loam, growing best in deep sandy loam soils with moderate fertility. A soil pH around 7 is best. Lentils do not tolerate flooding or water-logged conditions. Lentils improve the physical properties of soils and increase the yield of succeeding cereal crops. Biological nitrogen fixation or other rotational effects could be the reason for higher yields after lentils. Climate requirements The conditions under which lentils are grown differ across different growing regions.
In temperate climates lentils are planted in the winter and spring under low temperatures, and vegetative growth occurs in the later spring and the summer. Rainfall during this time is not limited. In the subtropics, lentils are planted under relatively high temperatures at the end of the rainy season, and vegetative growth occurs on the residual soil moisture in the summer season. Rainfall during this time is limited. In West Asia and North Africa, some lentils are planted as a winter crop before snowfall. Plant growth occurs during the time of snow melting. Under such cultivation, seed yields are often much higher. Seedbed requirements and sowing The lentil requires a firm, smooth seedbed with most of the previous crop residues incorporated. For seed placement and for later harvesting, it is important that the surface is not uneven with large clods, stones, or protruding crop residue. It is also important that the soil be made friable and weed-free, so that seeding can be done at a uniform depth. The plant densities for lentils vary between genotypes, seed sizes, planting times and growing conditions, and also from region to region. In South Asia, a relatively low seed rate is recommended. In West Asian countries, a higher seed rate is recommended, and also leads to a higher yield. The seeds should be sown deep. In agriculturally mechanized countries, lentils are planted using grain drills, but many other areas still sow by hand broadcasting. Cultivation management, fertilization In intercropping systems – a practice commonly used in lentil cultivation – herbicides may be needed to ensure crop health. Like many other legume crops, lentils can fix atmospheric nitrogen in the soil with specific rhizobia. Lentils grow well under low fertilizer input conditions, although phosphorus, nitrogen, potassium, and sulfur may be used for nutrient-poor soils. Diseases The most common lentil diseases fall into three groups: fungal diseases, parasitic nematodes, and viral diseases. Use by humans Processing A combination of gravity, screens and air flow is used to clean and sort lentils by shape and density. After destoning, they may be separated by a color sorter and then packaged. A major part of the world's red lentil production undergoes a secondary processing step. These lentils are dehulled, split and polished. In the Indian subcontinent, this process is called dal milling. The moisture content of the lentils prior to dehulling is crucial to guarantee good dehulling efficiency. The hull of lentils usually accounts for 6 to 7 percent of the total seed weight, which is lower than in most legumes. Lentil flour can be produced by milling the seeds, as with cereals. Culinary use Lentils can be eaten soaked, germinated, fried, baked or boiled – the most common preparation method. The seeds require a cooking time of 10 to 40 minutes, depending on the variety; small varieties with the husk removed, such as the common red lentil, require shorter cooking times (and, unlike most legumes, don't require soaking). Most varieties have a distinctive, earthy flavor. Lentils with husks remain whole with moderate cooking, while those without husks tend to disintegrate into a thick purée, which lends itself to a different range of dishes. The composition of lentils gives them a high emulsifying capacity, which can be further increased by dough fermentation in bread making. Lentil dishes Lentils are used worldwide in many different dishes. Lentil dishes are most widespread throughout South Asia, the Mediterranean regions, West Asia, and Latin America.
In the Indian subcontinent, Fiji, Mauritius, Singapore and the Caribbean, lentil curry is part of the everyday diet, eaten with both rice and roti. Boiled lentils and lentil stock are used to thicken most vegetarian curries. They are also used as stuffing in dal parathas and puri for breakfast or snacks. Lentils are also used in many regional varieties of sweets. Lentil flour is used to prepare several different bread varieties, such as papadam. They are frequently combined with rice, which has a similar cooking time. A lentil and rice dish is referred to in Levantine countries as mujaddara or mejadra. In Iran, rice and lentils are served with fried raisins; this dish is called adas polo. Rice and lentils are also cooked together in khichdi, a popular dish in the Indian subcontinent (India and Pakistan); a similar dish, kushari, made in Egypt, is considered one of that country's two national dishes. Lentils are used to prepare an inexpensive and nutritious soup throughout Europe and North and South America, sometimes combined with chicken or pork. In Western countries, cooked lentils are often used in salads. In Italy, the traditional dish for New Year's Eve is cotechino served with lentils. Lentils are commonly eaten in Ethiopia in a stew-like dish called kik, or kik wot, one of the dishes people eat with Ethiopia's national food, injera flatbread. Yellow lentils are used to make a non-spicy stew, which is one of the first solid foods Ethiopians feed their babies. Lentils were a chief part of the diet of ancient Iranians, who consumed lentils daily in the form of a stew poured over rice. Nutritional value Boiled lentils are 70% water, 20% carbohydrates, 9% protein, and 0.4% fat (table). In a reference amount, cooked lentils (boiled; variety unspecified) provide 114 calories and are a rich source (20% or more of the Daily Value, DV) of folate (45% DV), iron (25% DV), manganese (24% DV), and phosphorus (26% DV). They are a good source (10% DV or more) of thiamine (15% DV), pantothenic acid (13% DV), vitamin B6 (14% DV), magnesium (10% DV), copper (13% DV), and zinc (13% DV) (table). Lentils contain carotenoids, lutein and zeaxanthin, and polyunsaturated fatty acids. Digestive effects The low levels of readily digestible starch (5 percent) and high levels of slowly digested starch (30 percent) make lentils of potential value to people with diabetes. The remaining 65% of the starch is a resistant starch classified as RS1. A minimum of 10% of the starch from lentils escapes digestion and absorption in the small intestine (and is therefore called "resistant starch"). Additional resistant starch is synthesized from gelatinized starch, during cooling, after lentils are cooked. Lentils also have antinutrient factors, such as trypsin inhibitors and a relatively high phytate content. Trypsin is an enzyme involved in digestion, and phytates reduce the bioavailability of dietary minerals. The phytates can be reduced by prolonged soaking and fermentation or sprouting. Cooking nearly completely removes the trypsin inhibitor activity; sprouting is also effective. Breeding Although lentils have been an important crop for centuries, lentil breeding and genetic research have a relatively short history compared to those of many other crops. Since the inception of the International Center for Agricultural Research in the Dry Areas (ICARDA) breeding programme in 1977, significant gains have been made. It supplies landraces and breeding lines for countries around the world, supplemented by other programmes in both developing (e.g. India) and developed (e.g.
Australia and Canada) countries. In recent years, such collaborations among breeders and agronomists have become increasingly important. The focus lies on high-yielding and stable cultivars for diverse environments, to match the demand of a growing population. In particular, progress in quantity and quality, as well as in resistance to disease and abiotic stresses, are the major breeding aims. Several varieties have been developed applying conventional breeding methodologies. Serious genetic improvement for yield has been made; however, the full potential of production and productivity has not yet been tapped due to several biotic and abiotic stresses. Wild Lens species are a significant source of genetic variation for improving the relatively narrow genetic base of this crop. The wild species possess many diverse traits, including disease resistances and abiotic stress tolerances. The above-mentioned L. nigricans and L. orientalis possess morphological similarities to the cultivated L. culinaris, but only L. culinaris and L. culinaris subsp. orientalis are crossable and produce fully fertile seed. Between the different related species, hybridisation barriers exist. According to their inter-crossability, Lens species can be divided into three gene pools: Primary gene pool: L. culinaris (and L. culinaris subsp. orientalis) and L. odemensis. Secondary gene pool: L. ervoides and L. nigricans. Tertiary gene pool: L. lamottei and L. tomentosus. Crosses generally fail between members of different gene pools. However, plant growth regulators and/or embryo rescue allow the growth of viable hybrids between groups. Even if crosses are successful, many undesired genes may be introduced as well, in addition to the desired ones. This can be resolved by using a backcrossing programme. Mutagenesis is another important route to creating new and desirable varieties. According to Yadav et al., other biotechnology techniques which may have an impact on lentil breeding are micro-propagation using meristematic explants, callus culture and regeneration, protoplast culture and doubled haploid production. There is a proposed revision of the gene pools using SNP phylogeny.
Biology and health sciences
Fabales
null
174475
https://en.wikipedia.org/wiki/Modularity%20theorem
Modularity theorem
The modularity theorem (formerly called the Taniyama–Shimura conjecture, Taniyama–Shimura–Weil conjecture or modularity conjecture for elliptic curves) states that elliptic curves over the field of rational numbers are related to modular forms in a particular way. Andrew Wiles and Richard Taylor proved the modularity theorem for semistable elliptic curves, which was enough to imply Fermat's Last Theorem. Later, a series of papers by Wiles's former students Brian Conrad, Fred Diamond and Richard Taylor, culminating in a joint paper with Christophe Breuil, extended Wiles's techniques to prove the full modularity theorem in 2001. Statement The theorem states that any elliptic curve over $\mathbb{Q}$ can be obtained via a rational map with integer coefficients from the classical modular curve $X_0(N)$ for some integer $N$; this is a curve with integer coefficients with an explicit definition. This mapping is called a modular parametrization of level $N$. If $N$ is the smallest integer for which such a parametrization can be found (which by the modularity theorem itself is now known to be a number called the conductor), then the parametrization may be defined in terms of a mapping generated by a particular kind of modular form of weight two and level $N$, a normalized newform with integer $q$-expansion, followed if need be by an isogeny. Related statements The modularity theorem implies a closely related analytic statement: to each elliptic curve $E$ over $\mathbb{Q}$ we may attach a corresponding $L$-series. The $L$-series is a Dirichlet series, commonly written $$L(E, s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}.$$ The generating function of the coefficients $a_n$ is then $$f(E, q) = \sum_{n=1}^{\infty} a_n q^n.$$ If we make the substitution $q = e^{2\pi i \tau}$, we see that we have written the Fourier expansion of a function $f(E, \tau)$ of the complex variable $\tau$, so the coefficients of the $q$-series are also thought of as the Fourier coefficients of $f$. The function obtained in this way is, remarkably, a cusp form of weight two and level $N$ and is also an eigenform (an eigenvector of all Hecke operators); this is the Hasse–Weil conjecture, which follows from the modularity theorem. Some modular forms of weight two, in turn, correspond to holomorphic differentials for an elliptic curve. The Jacobian of the modular curve can (up to isogeny) be written as a product of irreducible Abelian varieties, corresponding to Hecke eigenforms of weight 2. The 1-dimensional factors are elliptic curves (there can also be higher-dimensional factors, so not all Hecke eigenforms correspond to rational elliptic curves). The curve obtained by finding the corresponding cusp form, and then constructing a curve from it, is isogenous to the original curve (but not, in general, isomorphic to it). History Yutaka Taniyama stated a preliminary (slightly incorrect) version of the conjecture at the 1955 international symposium on algebraic number theory in Tokyo and Nikkō as the twelfth of his set of 36 unsolved problems. Goro Shimura and Taniyama worked on improving its rigor until 1957. André Weil rediscovered the conjecture, and showed in 1967 that it would follow from the (conjectured) functional equations for some twisted $L$-series of the elliptic curve; this was the first serious evidence that the conjecture might be true. Weil also showed that the conductor of the elliptic curve should be the level of the corresponding modular form. The Taniyama–Shimura–Weil conjecture became a part of the Langlands program. The conjecture attracted considerable interest when Gerhard Frey suggested in 1986 that it implies Fermat's Last Theorem.
He did this by attempting to show that any counterexample to Fermat's Last Theorem would imply the existence of at least one non-modular elliptic curve. This argument was completed in 1987 when Jean-Pierre Serre identified a missing link (now known as the epsilon conjecture or Ribet's theorem) in Frey's original work, followed two years later by Ken Ribet's completion of a proof of the epsilon conjecture. Even after gaining serious attention, the Taniyama–Shimura–Weil conjecture was seen by contemporary mathematicians as extraordinarily difficult to prove or perhaps even inaccessible to prove. For example, Wiles's Ph.D. supervisor John Coates states that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible". In 1995, Andrew Wiles, with some help from Richard Taylor, proved the Taniyama–Shimura–Weil conjecture for all semistable elliptic curves. Wiles used this to prove Fermat's Last Theorem, and the full Taniyama–Shimura–Weil conjecture was finally proved by Diamond; Conrad, Diamond & Taylor; and Breuil, Conrad, Diamond & Taylor; building on Wiles's work, they incrementally chipped away at the remaining cases until the full result was proved in 1999. Once fully proven, the conjecture became known as the modularity theorem. Several theorems in number theory similar to Fermat's Last Theorem follow from the modularity theorem. For example: no cube can be written as a sum of two coprime $n$th powers with $n \geq 3$. Generalizations The modularity theorem is a special case of more general conjectures due to Robert Langlands. The Langlands program seeks to attach an automorphic form or automorphic representation (a suitable generalization of a modular form) to more general objects of arithmetic algebraic geometry, such as to every elliptic curve over a number field. Most cases of these extended conjectures have not yet been proved. In 2013, Freitas, Le Hung, and Siksek proved that elliptic curves defined over real quadratic fields are modular. Example For example, the elliptic curve $y^2 - y = x^3 - x$, with discriminant (and conductor) 37, is associated to the form $$f(z) = q - 2q^2 - 3q^3 + 2q^4 - 2q^5 + 6q^6 - q^7 + \cdots, \qquad q = e^{2\pi i z}.$$ For prime numbers $p$ not equal to 37, one can verify the property that $a_p$ equals $p$ minus the number of solutions of the curve's equation modulo $p$. Thus, for $p = 3$, there are 6 solutions of the equation modulo 3: $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$, $(2, 0)$, $(2, 1)$; thus $a_3 = 3 - 6 = -3$. The conjecture, going back to the 1950s, was completely proven by 1999 using the ideas of Andrew Wiles, who proved it in 1994 for a large family of elliptic curves. There are several formulations of the conjecture. Showing that they are equivalent was a main challenge of number theory in the second half of the 20th century. The modularity of an elliptic curve $E$ of conductor $N$ can be expressed also by saying that there is a non-constant rational map defined over $\mathbb{Q}$ from the modular curve $X_0(N)$ to $E$. In particular, the points of $E$ can be parametrized by modular functions. For example, a modular parametrization of the curve $y^2 - y = x^3 - x$ is given by a pair of explicit $q$-series $x(z)$ and $y(z)$, where, as above, $q = e^{2\pi i z}$. The functions $x(z)$ and $y(z)$ are modular of weight 0 and level 37; in other words, they are meromorphic, defined on the upper half-plane, and satisfy $$x\!\left(\frac{az + b}{cz + d}\right) = x(z),$$ and likewise for $y(z)$, for all integers $a, b, c, d$ with $ad - bc = 1$ and $37 \mid c$. Another formulation depends on the comparison of Galois representations attached on the one hand to elliptic curves, and on the other hand to modular forms. The latter formulation has been used in the proof of the conjecture. Dealing with the level of the forms (and the connection to the conductor of the curve) is particularly delicate.
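The coefficient property in the example above is easy to check by brute force. The following Python sketch is an illustration not present in the original text; it counts affine solutions of $y^2 - y \equiv x^3 - x \pmod{p}$ for small primes $p$ of good reduction and prints $a_p = p - N_p$.

```python
# Sketch: verifying a_p = p - (number of solutions mod p) for the
# conductor-37 curve y^2 - y = x^3 - x at small good primes.

def a_p(p):
    solutions = sum(1 for x in range(p) for y in range(p)
                    if (y * y - y) % p == (x ** 3 - x) % p)
    return p - solutions

for p in (2, 3, 5, 7, 11):
    print(f"p = {p:2d}: a_p = {a_p(p)}")

# Prints -2, -3, -2, -1, -5, matching the q-expansion coefficients of
# the associated weight-2 newform of level 37 quoted above.
```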
The most spectacular application of the conjecture is the proof of Fermat's Last Theorem (FLT). Suppose that for a prime p ≥ 5, the Fermat equation a^p + b^p = c^p has a solution with non-zero integers, hence a counter-example to FLT. Then, as Yves Hellegouarch was the first to notice, the elliptic curve y² = x(x − a^p)(x + b^p), of discriminant Δ = (abc)^{2p}/2⁸, cannot be modular. Thus, the proof of the Taniyama–Shimura–Weil conjecture for this family of elliptic curves (called Hellegouarch–Frey curves) implies FLT. The proof of the link between these two statements, based on an idea of Gerhard Frey (1985), is difficult and technical. It was established by Kenneth Ribet in 1987.
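The computation behind this observation is short; the following worked derivation is added here for illustration (the exact power of 2 depends on the normalization used). For a cubic with rational roots, y² = (x − e₁)(x − e₂)(x − e₃), the discriminant is

Δ = 16 (e₁ − e₂)² (e₂ − e₃)² (e₃ − e₁)².

Taking e₁ = 0, e₂ = a^p, e₃ = −b^p and using a^p + b^p = c^p gives

Δ = 16 (a^p)² (b^p)² (a^p + b^p)² = 16 (abc)^{2p},

so the discriminant is, up to a bounded power of 2, a perfect 2p-th power; passing to a minimal model replaces the factor 16 by 2⁻⁸, giving the value quoted above. It is this implausibly smooth discriminant that Ribet's theorem turns into a contradiction with modularity.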
Mathematics
Diophantine equations
null
174482
https://en.wikipedia.org/wiki/Common%20logarithm
Common logarithm
In mathematics, the common logarithm (also known as the standard logarithm) is the logarithm with base 10. It is also known as the decadic logarithm, the decimal logarithm and the Briggsian logarithm. The name "Briggsian logarithm" is in honor of the British mathematician Henry Briggs, who conceived of and developed the values for the common logarithm. Historically, the common logarithm was known by its Latin name logarithmus decimalis or logarithmus decadis. The mathematical notation for the common logarithm is log(x), log₁₀(x), or sometimes Log(x) with a capital L; on calculators, it is printed as "log", but mathematicians usually mean natural logarithm (logarithm with base e ≈ 2.71828) rather than common logarithm when writing "log". To mitigate this ambiguity, the ISO 80000 specification recommends that log₁₀(x) should be written lg(x), and log_e(x) should be ln(x).

Before the early 1970s, handheld electronic calculators were not available, and mechanical calculators capable of multiplication were bulky, expensive and not widely available. Instead, tables of base-10 logarithms were used in science, engineering and navigation—when calculations required greater accuracy than could be achieved with a slide rule. By turning multiplication and division into addition and subtraction, use of logarithms avoided laborious and error-prone paper-and-pencil multiplications and divisions. Because logarithms were so useful, tables of base-10 logarithms were given in appendices of many textbooks. Mathematical and navigation handbooks included tables of the logarithms of trigonometric functions as well. For the history of such tables, see log table.

Mantissa and characteristic
An important property of base-10 logarithms, which makes them so useful in calculations, is that the logarithms of numbers greater than 1 that differ by a factor of a power of 10 all have the same fractional part. The fractional part is known as the mantissa. Thus, log tables need only show the fractional part. Tables of common logarithms typically listed the mantissa, to four or five decimal places or more, of each number in a range, e.g. 1000 to 9999. The integer part, called the characteristic, can be computed by simply counting how many places the decimal point must be moved, so that it is just to the right of the first significant digit. For example, the logarithm of 120 is given by the following calculation:

log₁₀(120) = log₁₀(10² × 1.2) = 2 + log₁₀(1.2) ≈ 2 + 0.07918.

The last number (0.07918)—the fractional part or the mantissa of the common logarithm of 120—can be found in the table shown. The location of the decimal point in 120 tells us that the integer part of the common logarithm of 120, the characteristic, is 2.

Negative logarithms
Positive numbers less than 1 have negative logarithms. For example,

log₁₀(0.012) = log₁₀(10⁻² × 1.2) = −2 + log₁₀(1.2) ≈ −2 + 0.07918 = −1.92082.

To avoid the need for separate tables to convert positive and negative logarithms back to their original numbers, one can express a negative logarithm as a negative integer characteristic plus a positive mantissa. To facilitate this, a special notation, called bar notation, is used:

log₁₀(0.012) ≈ −2 + 0.07918 = 2̄.07918.

The bar over the characteristic indicates that it is negative, while the mantissa remains positive. When reading a number in bar notation out loud, the symbol n̄ is read as "bar n", so that 2̄.07918 is read as "bar 2 point 07918…". An alternative convention is to express the logarithm modulo 10, in which case −1.92082 ≡ 8.07918 (mod 10), with the actual value of the result of a calculation determined by knowledge of the reasonable range of the result.
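The split into characteristic and mantissa, including the negative-characteristic case just described, is easy to mimic programmatically. The following minimal sketch (added for illustration; the helper name is hypothetical) separates the common logarithm of any positive number into the two parts the way a printed log table does.

import math

def characteristic_and_mantissa(x):
    """Split log10(x) into an integer characteristic and a
    non-negative fractional mantissa, as in printed log tables."""
    log = math.log10(x)
    characteristic = math.floor(log)
    mantissa = log - characteristic  # always in [0, 1)
    return characteristic, mantissa

print(characteristic_and_mantissa(120))    # (2, 0.07918...)
print(characteristic_and_mantissa(0.012))  # (-2, 0.07918...), i.e. bar-2.07918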
The following example uses the bar notation to calculate 0.012 × 0.85 = 0.0102:

log₁₀(0.012) ≈ 2̄.07918
log₁₀(0.85) ≈ 1̄.92942
log₁₀(0.012 × 0.85) = log₁₀(0.012) + log₁₀(0.85) ≈ 2̄.07918 + 1̄.92942 = 2̄.00860 *
0.012 × 0.85 ≈ 10⁻² × 10^0.00860 ≈ 10⁻² × 1.02 = 0.0102

* This step makes the mantissa between 0 and 1, so that its antilog (10^mantissa) can be looked up.

The following table shows how the same mantissa can be used for a range of numbers differing by powers of ten:

number    logarithm    characteristic    mantissa
0.5       1̄.698970     −1 (i.e. 1̄)       0.698970
5         0.698970     0                 0.698970
50        1.698970     1                 0.698970
500       2.698970     2                 0.698970

Note that the mantissa is common to all of the 5 × 10^i. This holds for any positive real number x because

log₁₀(x · 10^i) = log₁₀(x) + i.

Since i is a constant, the mantissa comes from log₁₀(x), which is constant for given x. This allows a table of logarithms to include only one entry for each mantissa. In the example of 5 × 10^i, 0.698970(004336018…) will be listed once, indexed by 5 (or 0.5, or 500, etc.).

History
Common logarithms are sometimes also called "Briggsian logarithms" after Henry Briggs, a 17th-century British mathematician. In 1616 and 1617, Briggs visited John Napier, the inventor of what are now called natural (base-e) logarithms, at Edinburgh in order to suggest a change to Napier's logarithms. During these conferences, the alteration proposed by Briggs was agreed upon; and after his return from his second visit, he published the first chiliad of his logarithms. Because base-10 logarithms were most useful for computations, engineers generally simply wrote "log(x)" when they meant log₁₀(x). Mathematicians, on the other hand, wrote "log(x)" when they meant log_e(x), the natural logarithm. Today, both notations are found. Since hand-held electronic calculators are designed by engineers rather than mathematicians, it became customary that they follow engineers' notation. So the notation according to which one writes "ln(x)" when the natural logarithm is intended may have been further popularized by the very invention that made the use of "common logarithms" far less common: electronic calculators.

Numeric value
The numerical value for logarithm to the base 10 can be calculated with the following identities:

log₁₀(x) = ln(x) / ln(10)  or  log₁₀(x) = log₂(x) / log₂(10)  or  log₁₀(x) = log_b(x) / log_b(10),

using logarithms of any available base b, as procedures exist for determining the numerical value for logarithm base e (see Natural logarithm) and logarithm base 2 (see Algorithms for computing binary logarithms).

Derivative
The derivative of a logarithm with base b is such that d/dx log_b(x) = 1/(x ln(b)), so d/dx log₁₀(x) = 1/(x ln(10)).
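As a programmatic check on the bar-notation multiplication worked earlier, here is a small sketch (added for illustration; the helper names are hypothetical) that performs the same computation by adding characteristics and mantissas separately, the way a table user would.

import math

def table_log(x):
    """Return (characteristic, mantissa) of log10(x)."""
    log = math.log10(x)
    c = math.floor(log)
    return c, log - c

def multiply_via_logs(a, b):
    ca, ma = table_log(a)
    cb, mb = table_log(b)
    c, m = ca + cb, ma + mb
    if m >= 1:              # renormalize so the mantissa stays in [0, 1)
        c, m = c + 1, m - 1
    return (10 ** m) * (10 ** c)  # antilog of the mantissa, then shift

print(multiply_via_logs(0.012, 0.85))  # approximately 0.0102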
Mathematics
Specific functions
null
174492
https://en.wikipedia.org/wiki/Online%20chat
Online chat
Online chat is any direct text-, audio- or video-based (webcams), one-on-one or one-to-many (group) chat (formally also known as synchronous conferencing), using tools such as instant messengers, Internet Relay Chat (IRC), talkers and possibly MUDs or other online games. Online chat includes web-based applications that allow communication – often directly addressed, but anonymous – between users in a multi-user environment. Web conferencing is a more specific online service that is often sold as a service, hosted on a web server controlled by the vendor. Online chat may address point-to-point communications as well as multicast communications from one sender to many receivers and voice and video chat, or may be a feature of a web conferencing service. Online chat in a narrower sense is any kind of communication over the Internet that offers a real-time transmission of text messages from sender to receiver. Chat messages are generally short in order to enable other participants to respond quickly. Thereby, a feeling similar to a spoken conversation is created, which distinguishes chatting from other text-based online communication forms such as Internet forums and email. The expression online chat comes from the word chat, which means "informal conversation". Synchronous conferencing or synchronous computer-mediated communication (SCMC) is any form of computer-mediated communication that occurs in real-time; that is, there is no significant delay between sending and receiving messages. SCMC includes real-time forms of text, audio, and video communication. SCMC has been highly studied in the context of e-learning.

History
The first online chat system was called Talkomatic, created by Doug Brown and David R. Woolley in 1973 on the PLATO System at the University of Illinois. It offered several channels, each of which could accommodate up to five people, with messages appearing on all users' screens character-by-character as they were typed. Talkomatic was very popular among PLATO users into the mid-1980s. In 2014, Brown and Woolley released a web-based version of Talkomatic. The first online system to use the actual command "chat" was created for The Source in 1979 by Tom Walker and Fritz Thane of Dialcom, Inc. Other chat platforms flourished during the 1980s. Among the earliest with a GUI was BroadCast, a Macintosh extension that became especially popular on university campuses in America and Germany. The first transatlantic Internet chat took place between Oulu, Finland and Corvallis, Oregon in February 1989. The first dedicated online chat service that was widely available to the public was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio. Ancestors include network chat software such as UNIX "talk" used in the 1970s. Chat is implemented in many video-conferencing tools. A study of chat use during work-related videoconferencing found that chat during meetings allows participants to communicate without interrupting the meeting, plan action around common resources, and enables greater inclusion. The study also found that chat can cause distractions and information asymmetries between participants.

Types
According to the type of media used, synchronous conferencing can be divided into:
audio conferencing: only audio is used
video conferencing: both audio (voice) and video and pictures are used
According to the number of access points used, synchronous conferencing can be divided into:
point-to-point: only two computers are connected end to end
multi-point: two or more computers are connected

Methods
Some of the methods used in synchronous conferencing are:
Chat (text only): multiple participants can be logged into the conference and can interactively share resources and ideas. There is also an option to save the chat and archive it for later review.
Voice (telephone or voice-over IP): a conference call between the instructor and the participating students, who can speak through a built-in microphone or a headset.
Video conferencing: this may or may not require the participants to have their webcams running. Usually, a video conference involves a live feed from a classroom or elsewhere, or content.
Web conferencing: this includes webinars (web-based seminars) as well. Unlike in video conferencing, participants in web conferencing can access a wider variety of media elements. Web conferences are comparatively more interactive and usually incorporate chat sessions as well.
Virtual worlds: in this setup, students can meet in the virtual world and speak with each other through headsets and VoIP. This can make learning more productive and engaging, as the students can navigate the worlds and operate through their avatars.

Synchronous vs asynchronous conferencing
Both synchronous and asynchronous conferencing are online conferencing where the participants can interact while being physically located at different places in the world. Asynchronous conferencing allows the students to access the learning material at their convenience, while synchronous conferencing requires that all participants, including the instructor and the students, be online at the time of the conference. While synchronous conferencing enables real-time interaction of the participants, asynchronous conferencing allows participants to post messages that others can respond to at any convenient time. Sometimes a combination of both synchronous and asynchronous conferencing is used. Both methods give a permanent record of the conference.

Critical factors for effective implementation
There are four critical factors identified for implementing synchronous conferencing for effective instruction:
Video and audio quality, which depends on technical factors like higher bandwidth and the processing capabilities of the system.
Training time, which depends on the familiarity and proficiency of the instructors and the students with the technology.
Teaching strategies, which depend on the adaptability of the instructors to the new methods, preparing appropriate and effective training materials, and motivating students.
Direct meeting of the instructor and the students.

Synchronous conferencing in higher education
Synchronous conferencing in education helps in the delivery of content through digital media. Since this is real-time teaching, it also brings the benefits of face-to-face teaching to distance learning. Many higher education institutions offer well-designed, quality e-learning opportunities. Some of the advantages of synchronous conferencing in education are:
Helps the students to connect not only with their teachers and peers but also with recognized experts in the field, regardless of geographical distance and different time zones.
Provides opportunities for both the teachers and the students to expand their knowledge outside the classroom.
Helps students who are home-bound or have limited mobility to connect with their classrooms and participate in learning.
Helps the faculty to conduct classes when they are not able to come to class due to an emergency.
Supports real-time collaboration, interaction, and immediate feedback.
Encourages students to learn together and, in turn, develop cultural understanding.
Personalized learning experience for the students.
Real-time discussion opportunities for students, promoting student engagement.
Active interaction can lead to an associated community of like-minded students.
Saves travel expenses and time.

Implementation of educational technology
The tools for implementing synchronous conferencing depend on the type of educational problem addressed. This in turn decides the method of synchronous conferencing to be used and the tool to be used in the learning context. The tool selected addresses the problem of improving learning outcomes that cannot be solved with an asynchronous environment. There are many tools and platforms available for synchronous conferencing:
Smartphone applications
Web conferencing tools
Video conferencing tools
Video and hangout platforms
Shared whiteboards
The selection of tools and platforms also depends on the group size, which depends on the activity for the course design.

Chatiquette
The term chatiquette (chat etiquette) is a variation of netiquette (Internet etiquette) and describes basic rules of online communication. These conventions or guidelines have been created to avoid misunderstandings and to simplify the communication between users. Chatiquette varies from community to community and generally describes basic courtesy. As an example, it is considered rude to write only in upper case, because it appears as if the user is shouting. The word "chatiquette" has been used in connection with various chat systems (e.g. Internet Relay Chat) since 1995. Chatrooms can produce a strong sense of online identity, leading to the impression of a subculture. Chats are valuable sources of various types of information, the automatic processing of which is the object of chat/text mining technologies.

Limitations
Some limitations of synchronous conferencing in learning are:
Disjointed discussions, not connected in time
Lack of effective moderation and/or clear guidelines for learners
Difficulty in collaborating on online projects
Lack of proper communication between the instructor and students
Technical issues may arise if not analysed and planned for in advance
Lack of familiarity with the tools
Limited time to complete the learning activity and to incorporate interactions with the learners

Social criticism
Criticism of online chatting and text messaging includes concern that they replace proper English with shorthand or with an almost completely new hybrid language. Writing is changing as it takes on some of the functions and features of speech. Internet chat rooms and rapid real-time teleconferencing allow users to interact with whoever happens to coexist in cyberspace. These virtual interactions involve us in 'talking' more freely and more widely than ever before. With chatrooms replacing many face-to-face conversations, it is necessary to be able to have quick conversation as if the person were present, so many people learn to type as quickly as they would normally speak. Some critics are wary that this casual form of speech is being used so much that it will slowly take over common grammar; however, such a change has yet to be seen.
With the increasing population of online chatrooms there has been a massive growth of newly created words and slang words, many of them documented on the website Urban Dictionary. Sven Birkerts wrote: "as new electronic modes of communication provoke similar anxieties amongst critics who express concern that young people are at risk, endangered by a rising tide of information over which the traditional controls of print media and the guardians of knowledge have no control". In Guy Merchant's journal article "Teenagers in Cyberspace: An Investigation of Language Use and Language Change in Internet Chatrooms", Merchant says "that teenagers and young people are leading the movement of change as they take advantage of the possibilities of digital technology, drastically changing the face of literacy in a variety of media through their uses of mobile phone text messages, e-mails, web-pages and on-line chatrooms. This new literacy develops skills that may well be important to the labor market but are currently viewed with suspicion in the media and by educationalists." Merchant also says "younger people tend to be more adaptable than other sectors of society and, in general, quicker to adapt to new technology. To some extent they are the innovators, the forces of change in the new communication landscape." In this article he is saying that young people are merely adapting to what they were given.

Synchronous conferencing protocols
Synchronous conferencing protocols include:
IRC (Internet Relay Chat)
PSYC (Protocol for Synchronous Conferencing)
SILC (Secure Internet Live Conferencing protocol)
XMPP (Extensible Messaging and Presence Protocol)
SIMPLE (Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions)
A minimal IRC client sketch, for illustration, follows the lists below.

Software and protocols
The following are common chat programs and protocols: AIM (no longer available), Camfrog, Campfire, Discord, XMPP, Flock, Gadu-Gadu, Google Talk (no longer available), I2P-Messenger (anonymous, end-to-end encrypted IM for the I2P network), ICQ (OSCAR), ICB, IRC, Line, Mattermost, Apple Messages, Teams, Paltalk, RetroShare (encrypted, decentralized), Signal (encrypted messaging protocol and software), SILC, Skype, Slack, Talk, Talker, TeamSpeak (TS), Telegram, QQ, The Palace (encrypted, decentralized), WebChat Broadcasting System (WBS), WeChat, WhatsApp, Windows Live Messenger, and Yahoo! Messenger (no longer available).

Chat programs supporting multiple protocols: Adium, Google+ Hangouts, IBM Sametime, Kopete, Miranda NG, Pidgin, Quiet Internet Pager, Trillian, and Windows Live Messenger.

Web sites with browser-based chat services: Chat-Avenue, Convore (no longer available), Cryptocat, eBuddy, Facebook, FilmOn, Gmail, Google+ (no longer available), Chat Television (no longer available), MeBeam, Meebo (no longer available), Mibbit (no longer available), Omegle (no longer available), Talkomatic, Tinychat, Tokbox (no longer available), Trillian, Userplane (no longer available), Woo Media (no longer available), and Zumbl (no longer available).
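Of the protocols listed above, IRC is simple enough to show in a few lines. The sketch below (an illustration added here, not from the article; the server name, nickname, and channel are placeholders) opens a raw TCP connection, registers a nickname, answers the server's PING keep-alives, and sends one message to a channel — the bare mechanics of synchronous text chat.

import socket

HOST, PORT = "irc.example.net", 6667   # placeholder server and standard IRC port
NICK, CHANNEL = "demo_nick", "#test"   # placeholder identity and channel

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode("utf-8"))  # IRC lines end in CRLF

send(f"NICK {NICK}")                   # register a nickname ...
send(f"USER {NICK} 0 * :Demo User")    # ... and a username/realname
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :hello from a minimal client")

while True:
    data = sock.recv(4096).decode("utf-8", errors="replace")
    if not data:
        break                          # server closed the connection
    for line in data.splitlines():
        if line.startswith("PING"):    # keep-alive: reply or be disconnected
            send("PONG " + line.split(" ", 1)[1])
        print(line)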
Technology
Internet
null
174515
https://en.wikipedia.org/wiki/Dirichlet%20character
Dirichlet character
In analytic number theory and related branches of mathematics, a complex-valued arithmetic function χ: ℤ → ℂ is a Dirichlet character of modulus m (where m is a positive integer) if for all integers a and b:
1) χ(ab) = χ(a)χ(b); that is, χ is completely multiplicative.
2) χ(a) = 0 if gcd(a, m) > 1 and χ(a) ≠ 0 if gcd(a, m) = 1 (gcd is the greatest common divisor).
3) χ(a + m) = χ(a); that is, χ is periodic with period m.
The simplest possible character, called the principal character, usually denoted χ₀ (see Notation below), exists for all moduli: χ₀(a) = 0 if gcd(a, m) > 1 and χ₀(a) = 1 if gcd(a, m) = 1. The German mathematician Peter Gustav Lejeune Dirichlet—for whom the character is named—introduced these functions in his 1837 paper on primes in arithmetic progressions.

Notation
φ(n) is Euler's totient function.
ω_n = e^{2πi/n} is a complex primitive n-th root of unity: ω_nⁿ = 1, but ω_n^k ≠ 1 for 0 < k < n.
(ℤ/mℤ)× is the group of units mod m. It has order φ(m). The Dirichlet characters mod m form a group, described in the section The group of characters below.
p, p_k, etc. are prime numbers.
(m, n) is a standard abbreviation for gcd(m, n).
χ(a), χ′(a), χ_r(a), etc. are Dirichlet characters (χ is the lowercase Greek letter chi, for "character").
There is no standard notation for Dirichlet characters that includes the modulus. In many contexts (such as in the proof of Dirichlet's theorem) the modulus is fixed. In other contexts, such as this article, characters of different moduli appear. Where appropriate this article employs a variation of Conrey labeling (introduced by Brian Conrey and used by the LMFDB). In this labeling characters for modulus m are denoted χ_{m,t}(a), where the index t is described in the section The group of characters below. In this labeling, χ_{m,_}(a) denotes an unspecified character and χ_{m,1}(a) denotes the principal character mod m.

Relation to group characters
The word "character" is used several ways in mathematics. In this section it refers to a homomorphism η from a group G (written multiplicatively) to the multiplicative group of the field of complex numbers: η: G → ℂ×. The set of characters is denoted Ĝ. If the product of two characters is defined by pointwise multiplication, the identity by the trivial character, and the inverse by complex inversion, then Ĝ becomes an abelian group. If A is a finite abelian group then there is an isomorphism A ≅ Â, and the orthogonality relations:
Σ_{a∈A} η(a) = |A| if η is the trivial character, and 0 otherwise;  and  Σ_{η∈Â} η(a) = |A| if a = e, and 0 otherwise.
The elements of the finite abelian group (ℤ/mℤ)× are the residue classes [a] where gcd(a, m) = 1. A group character ρ: (ℤ/mℤ)× → ℂ× can be extended to a Dirichlet character χ mod m by defining χ(a) = 0 if gcd(a, m) > 1 and χ(a) = ρ([a]) if gcd(a, m) = 1, and conversely, a Dirichlet character mod m defines a group character on (ℤ/mℤ)×. Paraphrasing Davenport, Dirichlet characters can be regarded as a particular case of Abelian group characters. But this article follows Dirichlet in giving a direct and constructive account of them. This is partly for historical reasons, in that Dirichlet's work preceded by several decades the development of group theory, and partly for a mathematical reason, namely that the group in question has a simple and interesting structure which is obscured if one treats it as one treats the general Abelian group.

Elementary facts
4) Since gcd(1, m) = 1, property 2) says χ(1) ≠ 0, so it can be canceled from both sides of χ(1)χ(1) = χ(1 · 1) = χ(1): χ(1) = 1.
5) Property 3) is equivalent to: if a ≡ b (mod m) then χ(a) = χ(b).
6) Property 1) implies that, for any positive integer n, χ(aⁿ) = χ(a)ⁿ.
7) Euler's theorem states that if gcd(a, m) = 1 then a^{φ(m)} ≡ 1 (mod m). Therefore, χ(a)^{φ(m)} = χ(a^{φ(m)}) = χ(1) = 1. That is, the nonzero values of χ(a) are φ(m)-th roots of unity: χ(a) = ω_{φ(m)}^r for some integer r which depends on χ and a. This implies there are only a finite number of characters for a given modulus.
8) If χ and χ′ are two characters for the same modulus, so is their product χχ′, defined by pointwise multiplication: χχ′(a) = χ(a)χ′(a) (χχ′ obviously satisfies 1–3). The principal character is an identity: χχ₀(a) = χ(a)χ₀(a) = χ(a).
9) Let a* denote the inverse of a in (ℤ/mℤ)×. Then χ(a)χ(a*) = χ(aa*) = χ(1) = 1, so χ(a*) = χ(a)⁻¹, which extends 6) to all integers. The complex conjugate of a root of unity is also its inverse, so for gcd(a, m) = 1, χ̄(a) = χ(a)⁻¹ = χ(a*) (χ̄ also obviously satisfies 1–3).
Thus for all integers a, χ(a)χ̄(a) = 1 if gcd(a, m) = 1 and 0 otherwise; in other words, χχ̄ = χ₀.
10) The multiplication and identity defined in 8) and the inversion defined in 9) turn the set of Dirichlet characters for a given modulus into a finite abelian group.

The group of characters
There are three different cases because the groups (ℤ/mℤ)× have different structures depending on whether m is a power of 2, a power of an odd prime, or the product of prime powers.

Powers of odd primes
If q = p^k is a power of an odd prime, (ℤ/qℤ)× is cyclic of order φ(q); a generator is called a primitive root mod q. Let g be a primitive root and for gcd(a, q) = 1 define the function ν(a) (the index of a) by a ≡ g^{ν(a)} (mod q). For gcd(ab, q) = 1, a ≡ b (mod q) if and only if ν(a) = ν(b). Since χ(a) = χ(g)^{ν(a)}, χ is determined by its value at g. Let ω = ω_{φ(q)} be a primitive φ(q)-th root of unity. From property 7) above the possible values of χ(g) are ω, ω², … ω^{φ(q)} = 1. These distinct values give rise to φ(q) Dirichlet characters mod q. For gcd(r, q) = 1 define χ_{q,r}(a) as 0 if gcd(a, q) > 1, and as ω^{ν(r)ν(a)} if gcd(a, q) = 1. Then for gcd(rs, q) = 1 and all a and b, χ_{q,r}(a)χ_{q,r}(b) = χ_{q,r}(ab), showing that χ_{q,r} is a character, and χ_{q,r}(a)χ_{q,s}(a) = χ_{q,rs}(a), which gives an explicit isomorphism between (ℤ/qℤ)× and its group of characters.

Examples m = 3, 5, 7, 9
2 is a primitive root mod 3 (2² ≡ 1), so ν(1) = 0 and ν(2) = 1. The nonzero values of the characters mod 3 are χ_{3,1}: χ(1) = χ(2) = 1, and χ_{3,2}: χ(1) = 1, χ(2) = −1.
2 is a primitive root mod 5 (2² = 4, 2³ ≡ 3, 2⁴ ≡ 1), so ν(1) = 0, ν(2) = 1, ν(3) = 3, ν(4) = 2. With ω = i, the nonzero values of the characters mod 5, at a = 1, 2, 3, 4, are χ_{5,1}: 1, 1, 1, 1; χ_{5,2}: 1, i, −i, −1; χ_{5,3}: 1, −i, i, −1; χ_{5,4}: 1, −1, −1, 1.
3 is a primitive root mod 7 (3² ≡ 2, 3³ ≡ 6, 3⁴ ≡ 4, 3⁵ ≡ 5, 3⁶ ≡ 1), so ν(1) = 0, ν(2) = 2, ν(3) = 1, ν(4) = 4, ν(5) = 5, ν(6) = 3. The nonzero values of the characters mod 7 are sixth roots of unity ±1, ±ω, ±ω², where ω = e^{πi/3}.
2 is a primitive root mod 9 (2² = 4, 2³ = 8, 2⁴ ≡ 7, 2⁵ ≡ 5, 2⁶ ≡ 1), so ν(1) = 0, ν(2) = 1, ν(4) = 2, ν(8) = 3, ν(7) = 4, ν(5) = 5. The nonzero values of the characters mod 9 are likewise sixth roots of unity.

Powers of 2
(ℤ/2ℤ)× is the trivial group with one element. (ℤ/4ℤ)× is cyclic of order 2. For 8, 16, and higher powers of 2, there is no primitive root; the powers of 5 are the units ≡ 1 (mod 4) and their negatives are the units ≡ 3 (mod 4). For example, 5¹ = 5, 5² ≡ 9, 5³ ≡ 13, 5⁴ ≡ 1 (mod 16). Let q = 2^k, k ≥ 3; then (ℤ/qℤ)× is the direct product of a cyclic group of order 2 (generated by −1) and a cyclic group of order q/4 (generated by 5). For odd numbers a define the functions ν₀(a) and ν₁(a) by a ≡ (−1)^{ν₀(a)} 5^{ν₁(a)} (mod q). For odd a and b, a ≡ b (mod q) if and only if ν₀(a) = ν₀(b) and ν₁(a) = ν₁(b). For odd a the value of χ(a) is determined by the values of χ(−1) and χ(5). Let ω = ω_{q/4} be a primitive q/4-th root of unity. The possible values of χ((−1)^{ν₀(a)} 5^{ν₁(a)}) are ±ω, ±ω², … ±ω^{q/4} = ±1. These distinct values give rise to φ(q) Dirichlet characters mod q. For odd r define χ_{q,r}(a) as 0 if a is even, and as (−1)^{ν₀(r)ν₀(a)} ω^{ν₁(r)ν₁(a)} if a is odd. Then for odd r and s and all a and b, χ_{q,r}(a)χ_{q,r}(b) = χ_{q,r}(ab), showing that χ_{q,r} is a character, and χ_{q,r}(a)χ_{q,s}(a) = χ_{q,rs}(a), showing an isomorphism between (ℤ/qℤ)× and its group of characters.

Examples m = 2, 4, 8, 16
The only character mod 2 is the principal character χ_{2,1}. −1 is a primitive root mod 4 ((−1)² ≡ 1). The nonzero values of the characters mod 4 are χ_{4,1}: χ(1) = χ(3) = 1, and χ_{4,3}: χ(1) = 1, χ(3) = −1. −1 and 5 generate the units mod 8 (5² ≡ 1 (mod 8)); the nonzero values of the four characters mod 8 are ±1. −1 and 5 generate the units mod 16 (5⁴ ≡ 1 (mod 16)); the nonzero values of the eight characters mod 16 are ±1 and ±i.

Products of prime powers
Let m = q₁q₂⋯q_k, where the q_i are powers of distinct primes, be the factorization of m into prime powers. The group of units mod m is isomorphic to the direct product of the groups mod the q_i:
(ℤ/mℤ)× ≅ (ℤ/q₁ℤ)× × (ℤ/q₂ℤ)× × ⋯ × (ℤ/q_kℤ)×.
This means that 1) there is a one-to-one correspondence between a ∈ (ℤ/mℤ)× and k-tuples (a₁, a₂, …, a_k), where a_i ≡ a (mod q_i), and 2) multiplication mod m corresponds to coordinate-wise multiplication of k-tuples: ab corresponds to (a₁b₁, a₂b₂, …, a_kb_k). The Chinese remainder theorem (CRT) implies that there are subgroups G_i < (ℤ/mℤ)× such that G_i ≅ (ℤ/q_iℤ)×, and every a ∈ (ℤ/mℤ)× corresponds to a k-tuple (a₁, …, a_k) with a_i ∈ G_i; every a can be uniquely factored as a = a₁a₂⋯a_k. If χ is a character mod m, on the subgroup G_i it must be identical to some character χ_{q_i,r_i} mod q_i. Then χ(a) = χ(a₁a₂⋯a_k) = χ_{q₁,r₁}(a₁)χ_{q₂,r₂}(a₂)⋯χ_{q_k,r_k}(a_k), showing that every character mod m is the product of characters mod the q_i. Defining χ_{m,r} = χ_{q₁,r}χ_{q₂,r}⋯χ_{q_k,r} for gcd(r, m) = 1 again yields characters satisfying χ_{m,r}(a)χ_{m,s}(a) = χ_{m,rs}(a), showing an isomorphism between (ℤ/mℤ)× and its group of characters.

Examples m = 15, 24, 40
Each character mod 15 factors as the product of a character mod 3 and a character mod 5; each character mod 24 as the product of characters mod 8 and mod 3; and each character mod 40 as the product of characters mod 8 and mod 5. The nonzero values in each case are the products of the corresponding values of the factors.
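The explicit construction above lends itself to direct computation. Below is a minimal sketch (added for illustration; the helper name characters_mod_odd_prime is hypothetical) that builds the six characters mod 7 from the primitive root 3 via χ_{q,r}(a) = ω^{ν(r)ν(a)}, prints their value table, and verifies numerically that Σ_a χ(a) vanishes for every non-principal character, as stated in the orthogonality relations below.

import cmath

def characters_mod_odd_prime(q, g):
    """All Dirichlet characters mod an odd prime q, built from a
    primitive root g via chi_{q,r}(a) = omega^(nu(r) * nu(a))."""
    phi = q - 1
    nu = {pow(g, k, q): k for k in range(phi)}  # discrete-log (index) table
    omega = cmath.exp(2j * cmath.pi / phi)      # primitive phi-th root of unity
    def chi(r, a):
        return 0 if a % q == 0 else omega ** (nu[r % q] * nu[a % q])
    return chi

chi = characters_mod_odd_prime(7, 3)
for r in range(1, 7):                           # value table, row = chi_{7,r}
    print(r, [format(chi(r, a), ".2f") for a in range(1, 7)])
for r in range(1, 7):                           # first orthogonality relation
    total = sum(chi(r, a) for a in range(1, 7))
    print(r, round(abs(total), 10))             # 6.0 for r = 1, else 0.0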
Summary
Let m = q₁q₂⋯q_k be the factorization of m into prime powers, and assume gcd(rs, m) = 1. There are φ(m) Dirichlet characters mod m. They are denoted by χ_{m,r}, where χ_{m,r} = χ_{m,s} is equivalent to r ≡ s (mod m). The identity χ_{m,r}(a)χ_{m,s}(a) = χ_{m,rs}(a) makes r ↦ χ_{m,r} an isomorphism between (ℤ/mℤ)× and its group of characters. Each character mod m has a unique factorization as the product of characters mod the prime powers dividing m: χ_{m,r} = χ_{q₁,r}χ_{q₂,r}⋯χ_{q_k,r}. If m = m₁m₂ with gcd(m₁, m₂) = 1, the product of characters χ_{m₁,r}χ_{m₂,s} is a character χ_{m,t}, where t is given by t ≡ r (mod m₁) and t ≡ s (mod m₂). Also, χ_{m,r}(s) = χ_{m,s}(r).

Orthogonality
The two orthogonality relations are
Σ_{a mod m} χ(a) = φ(m) if χ = χ₀, and 0 otherwise;  and  Σ_{χ mod m} χ(a) = φ(m) if a ≡ 1 (mod m), and 0 otherwise.
The relations can be written in the symmetric form
Σ_{a mod m} χ_{m,r}(a) = φ(m) if r ≡ 1, and 0 otherwise;  and  Σ_{r mod m} χ_{m,r}(a) = φ(m) if a ≡ 1, and 0 otherwise.
The first relation is easy to prove: if χ = χ₀ there are φ(m) non-zero summands, each equal to 1. If χ ≠ χ₀ there is some a*, gcd(a*, m) = 1, with χ(a*) ≠ 1. Then χ(a*) Σ_a χ(a) = Σ_a χ(a*a) = Σ_a χ(a), implying (χ(a*) − 1) Σ_a χ(a) = 0. Dividing by the first factor gives Σ_a χ(a) = 0, QED. The identity χ_{m,r}(s) = χ_{m,s}(r) shows that the relations are equivalent to each other. The second relation can be proven directly in the same way, but requires a lemma: given a ≢ 1 (mod m) with gcd(a, m) = 1, there is a character χ with χ(a) ≠ 1.
The second relation has an important corollary: if gcd(a, m) = 1, define the function
f_a(n) = (1/φ(m)) Σ_χ χ̄(a)χ(n).
Then f_a(n) = 1 if n ≡ a (mod m) and 0 otherwise: f_a is the indicator function of the residue class a mod m. It is basic in the proof of Dirichlet's theorem.

Classification of characters
Conductor; Primitive and induced characters
Any character mod a prime power is also a character mod every larger power. For example, among the characters mod 16, some have period 16 as functions, while others have period 8 or period 4. We say that a character χ of modulus q has a quasiperiod of d if χ(a) = χ(b) for all a, b coprime to q satisfying a ≡ b mod d. For example, χ_{2,1}, the only Dirichlet character of modulus 2, has a quasiperiod of 1, but not a period of 1 (it has a period of 2, though). The smallest positive integer d for which χ is quasiperiodic is the conductor of χ. So, for instance, χ_{2,1} has a conductor of 1. For the mod-16 characters just mentioned, the conductors are 16, 8 and 4 respectively. If the modulus and conductor are equal the character is primitive, otherwise imprimitive. An imprimitive character is induced by the character for the smallest modulus: the imprimitive characters mod 16 are induced from the corresponding characters mod 8 and mod 4.
A related phenomenon can happen with a character mod the product of primes: its nonzero values may be periodic with a smaller period. For example, mod 15, the nonzero values of one character may have period 15, while those of another have period 3 and those of a third have period 5. This is easier to see by juxtaposing them with characters mod 3 and 5. If a character mod m = q₁q₂ is defined as the product of a character mod q₁ and the principal character mod q₂, its nonzero values are determined by the character mod q₁ and have period q₁. The smallest period of the nonzero values is the conductor of the character; for the mod-15 examples just mentioned, the conductors are 15, 3, and 5. As in the prime-power case, if the conductor equals the modulus the character is primitive, otherwise imprimitive. If imprimitive it is induced from the character with the smaller modulus. The principal character is not primitive. The character χ_{m,r} = χ_{q₁,r}⋯χ_{q_k,r} is primitive if and only if each of the factors is primitive. Primitive characters often simplify (or make possible) formulas in the theories of L-functions and modular forms.

Parity
χ is even if χ(−1) = 1 and is odd if χ(−1) = −1. This distinction appears in the functional equation of the Dirichlet L-function.

Order
The order of a character is its order as an element of the group of characters, i.e. the smallest positive integer n such that χⁿ = χ₀. Because of the isomorphism above, the order of χ_{m,r} is the same as the order of r in (ℤ/mℤ)×. The principal character has order 1; other real characters have order 2, and imaginary characters have order 3 or greater. By Lagrange's theorem the order of a character divides the order of the group of characters, which is φ(m).

Real characters
χ is real or quadratic if all of its values are real (they must be 0 or ±1); otherwise it is complex or imaginary.
χ is real if and only if χ² = χ₀; χ_{m,r} is real if and only if r² ≡ 1 (mod m); in particular, χ_{m,m−1} is real and non-principal. Dirichlet's original proof that L(1, χ) ≠ 0 (which was only valid for prime moduli) took two different forms depending on whether χ was real or not. His later proof, valid for all moduli, was based on his class number formula. Real characters are Kronecker symbols; for example, the principal character mod m can be written as the Kronecker symbol (a | m²). The real characters in the examples above fall into three classes:
Principal: the principal characters.
Primitive: if the modulus is the absolute value of a fundamental discriminant there is a real primitive character (there are two if the modulus is a multiple of 8); otherwise if there are any primitive characters they are imaginary.
Imprimitive: real imprimitive characters, induced from real primitive characters of smaller modulus.

Applications
L-functions
The Dirichlet L-series for a character χ is
L(s, χ) = Σ_{n=1}^∞ χ(n) n^(−s).
This series only converges for Re(s) > 1; it can be analytically continued to a meromorphic function. Dirichlet introduced the L-function along with the characters in his 1837 paper.

Modular forms and functions
Dirichlet characters appear several places in the theory of modular forms and functions. A typical example is the twisting of a modular form by a primitive character: multiplying the form's Fourier coefficients term by term by the values of the character produces another modular form, and if the original form is a cusp form so is its twist. See theta series of a Dirichlet character for another example.

Gauss sum
The Gauss sum of a Dirichlet character modulo N is
G(χ) = Σ_{a=1}^{N} χ(a) e^{2πia/N}.
It appears in the functional equation of the Dirichlet L-function.

Jacobi sum
If χ and ψ are Dirichlet characters mod a prime p, their Jacobi sum is
J(χ, ψ) = Σ_{a mod p} χ(a)ψ(1 − a).
Jacobi sums can be factored into products of Gauss sums.

Kloosterman sum
If χ is a Dirichlet character mod q and ζ = e^{2πi/q}, the Kloosterman sum K(a, b, χ) is defined as
K(a, b, χ) = Σ_{gcd(r,q)=1} χ(r) ζ^{ar + br*},
where r* is the inverse of r mod q. If b = 0 it is a Gauss sum.

Sufficient conditions
It is not necessary to establish the defining properties 1) – 3) to show that a function is a Dirichlet character.

From Davenport's book
If χ: ℤ → ℂ is such that 1) χ(ab) = χ(a)χ(b), 2) χ(a + m) = χ(a), 3) χ(a) = 0 if gcd(a, m) > 1, but 4) χ(a) is not always 0, then χ is one of the φ(m) characters mod m.

Sárközy's condition
A Dirichlet character is a completely multiplicative function f that satisfies a linear recurrence relation: that is, if a₁f(n + b₁) + ⋯ + a_k f(n + b_k) = 0 for all positive integers n, where the a_i are not all zero and the b_i are distinct, then f is a Dirichlet character.

Chudakov's condition
A Dirichlet character is a completely multiplicative function f satisfying the following three properties: a) f takes only finitely many values; b) f vanishes at only finitely many primes; c) there is an α for which the remainder
Σ_{n ≤ x} f(n) − αx
is uniformly bounded, as x → ∞. This equivalent definition of Dirichlet characters was conjectured by Chudakov in 1956, and proved in 2017 by Klurman and Mangerel.
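To make the Gauss sum concrete, here is a short sketch (added for illustration; the helper name gauss_sum_quadratic is hypothetical) computing G(χ) = Σ χ(a)e^{2πia/p} for the quadratic character mod p (the Legendre symbol, evaluated via Euler's criterion) and checking Gauss's classical result that |G(χ)|² = p.

import cmath

def gauss_sum_quadratic(p):
    """Gauss sum G(chi) = sum chi(a) e^(2*pi*i*a/p) for the quadratic
    character (Legendre symbol) mod an odd prime p."""
    def chi(a):
        if a % p == 0:
            return 0
        # Euler's criterion: a^((p-1)/2) is 1 for squares, p-1 otherwise
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
    zeta = cmath.exp(2j * cmath.pi / p)  # primitive p-th root of unity
    return sum(chi(a) * zeta ** a for a in range(1, p))

for p in (5, 7, 11, 13):
    print(p, round(abs(gauss_sum_quadratic(p)) ** 2, 8))  # prints p each time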
Mathematics
Subdisciplines
null
174521
https://en.wikipedia.org/wiki/Infrastructure
Infrastructure
Infrastructure is the set of facilities and systems that serve a country, city, or other area, and encompasses the services and facilities necessary for its economy, households and firms to function. Infrastructure is composed of public and private physical structures such as roads, railways, bridges, airports, public transit systems, tunnels, water supply, sewers, electrical grids, and telecommunications (including Internet connectivity and broadband access). In general, infrastructure has been defined as "the physical components of interrelated systems providing commodities and services essential to enable, sustain, or enhance societal living conditions" and maintain the surrounding environment. Especially in light of the massive societal transformations needed to mitigate and adapt to climate change, contemporary infrastructure conversations frequently focus on sustainable development and green infrastructure. Acknowledging this importance, the international community has created policy focused on sustainable infrastructure through the Sustainable Development Goals, especially Sustainable Development Goal 9 "Industry, Innovation and Infrastructure". One way to describe different types of infrastructure is to classify them as two distinct kinds: hard infrastructure and soft infrastructure. Hard infrastructure is the physical networks necessary for the functioning of a modern industrial society or industry. This includes roads, bridges, and railways. Soft infrastructure is all the institutions that maintain the economic, health, social, environmental, and cultural standards of a country. This includes educational programs, official statistics, parks and recreational facilities, law enforcement agencies, and emergency services.

Classifications
A 1987 US National Research Council panel adopted the term "public works infrastructure", referring to: "... both specific functional modes – highways, streets, roads, and bridges; mass transit; airports and airways; water supply and water resources; wastewater management; solid-waste treatment and disposal; electric power generation and transmission; telecommunications; and hazardous waste management – and the combined system these modal elements comprise. A comprehension of infrastructure spans not only these public works facilities, but also the operating procedures, management practices, and development policies that interact together with societal demand and the physical world to facilitate the transport of people and goods, provision of water for drinking and a variety of other uses, safe disposal of society's waste products, provision of energy where it is needed, and transmission of information within and between communities." The American Society of Civil Engineers publishes an "Infrastructure Report Card", which represents the organization's opinion on the condition of various infrastructure, every 2–4 years. They grade 16 categories, namely aviation, bridges, dams, drinking water, energy, hazardous waste, inland waterways, levees, parks and recreation, ports, rail, roads, schools, solid waste, transit and wastewater. The United States has received a rating of "D+" on its infrastructure. This aging infrastructure is a result of governmental neglect and inadequate funding. As the United States looks to upgrade its existing infrastructure, sustainable measures could be a consideration of the design, build, and operation plans.
Public
Public infrastructure is infrastructure owned by or available for use by the public (represented by the government). It includes:
Transport infrastructure – vehicles, road, rail, cable and financing of transport
Aviation infrastructure – air traffic control technology in aviation
Rail transport – trackage, signals, electrification of rails
Road transport – roads, bridges, tunnels
Critical infrastructure – assets required to sustain human life
Energy infrastructure – transmission and storage of fossil fuels and renewable sources
Information and communication infrastructure – systems of information storage and distribution
Public capital – government-owned assets
Public works – municipal infrastructure, maintenance functions and agencies
Municipal solid waste – generation, collection, management of trash/garbage
Sustainable urban infrastructure – technology, architecture, policy for sustainable living
Water supply network – the distribution and maintenance of water supply
Wastewater infrastructure – disposal and treatment of wastewater
Infrastructure-based development

Personal
A way to embody personal infrastructure is to think of it in terms of human capital. Human capital is defined by the Encyclopædia Britannica as "intangible collective resources possessed by individuals and groups within a given population". The goal of personal infrastructure is to determine the quality of the economic agents' values. This results in three major tasks: the task of economic proxies in the economic process (teachers, unskilled and qualified labor, etc.); the importance of personal infrastructure for an individual (short- and long-term consumption of education); and the social relevance of personal infrastructure. Essentially, personal infrastructure maps the human impact on infrastructure as it is related to the economy, individual growth, and social impact.

Institutional
Institutional infrastructure branches from the term "economic constitution". According to Gianpiero Torrisi, institutional infrastructure is the object of economic and legal policy. It frames growth and sets norms. It refers to the degree of fair treatment of equal economic data and determines the framework within which economic agents may formulate their own economic plans and carry them out in co-operation with others.

Sustainable
Sustainable infrastructure refers to the processes of design and construction that take into consideration their environmental, economic, and social impact. Included in this section are several elements of sustainable schemes, including materials, water, energy, transportation, and waste management infrastructure. Although there are countless other factors of consideration, those will not be covered in this section.

Material
Material infrastructure is defined as "those immobile, non-circulating capital goods that essentially contribute to the production of infrastructure goods and services needed to satisfy basic physical and social requirements of economic agents". There are two distinct qualities of material infrastructures: 1) fulfillment of social needs and 2) mass production. The first characteristic deals with the basic needs of human life. The second characteristic is the non-availability of infrastructure goods and services. Today, there are various materials that can be used to build infrastructure. The most prevalent ones are asphalt, concrete, steel, masonry, wood, polymers and composites.
Economic
According to the business dictionary, economic infrastructure can be defined as "internal facilities of a country that make business activity possible, such as communication, transportation and distribution networks, financial institutions and related international markets, and energy supply systems". Economic infrastructure supports productive activities and events. This includes roads, highways, bridges, airports, cycling infrastructure, water distribution networks, sewer systems, and irrigation plants.

Social
Social infrastructure can be broadly defined as the construction and maintenance of facilities that support social services. Social infrastructures are created to increase social comfort and promote economic activity. These include schools, parks and playgrounds, structures for public safety, waste disposal plants, hospitals, and sports areas.

Core
Core assets provide essential services and have monopolistic characteristics. Investors seeking core infrastructure look for five different characteristics: income, low volatility of returns, diversification, inflation protection, and long-term liability matching. Core infrastructure incorporates all the main types of infrastructure, such as roads, highways, railways, public transportation, and water and gas supply.

Basic
Basic infrastructure refers to main railways, roads, canals, harbors and docks, the electromagnetic telegraph, drainage, dikes, and land reclamation. It consists of the more well-known and common features of infrastructure that we come across in our daily lives (buildings, roads, docks).

Complementary
Complementary infrastructure refers to things like light railways, tramways, and gas/electricity/water supply. To complement something means to bring it to perfection or complete it. Complementary infrastructure deals with the little parts of the engineering world that make life more convenient and efficient. They are needed to ensure successful usage and marketing of an already finished product, as in the case of road bridges. Other examples are lights on sidewalks, landscaping around buildings, and benches where pedestrians can rest.

Applications
Engineering and construction
Engineers generally limit the term "infrastructure" to describe fixed assets that are in the form of a large network; in other words, hard infrastructure. Efforts to devise more generic definitions of infrastructures have typically referred to the network aspects of most of the structures, and to the accumulated value of investments in the networks as assets. One such definition from 1998 defined infrastructure as the network of assets "where the system as a whole is intended to be maintained indefinitely at a specified standard of service by the continuing replacement and refurbishment of its components".

Civil defense and economic development
Civil defense planners and developmental economists generally refer to both hard and soft infrastructure, including public services such as schools and hospitals, emergency services such as police and fire fighting, and basic services in the economic sector. The notion of infrastructure-based development, combining long-term infrastructure investments by government agencies at central and regional levels with public-private partnerships, has proven popular among economists in Asia (notably Singapore and China), mainland Europe, and Latin America.
Military
Military infrastructure is the buildings and permanent installations necessary for the support of military forces, whether they are stationed in bases, being deployed or engaged in operations. Examples include barracks, headquarters, airfields, communications facilities, stores of military equipment, port installations, and maintenance stations.

Communications
Communications infrastructure is the informal and formal channels of communication, political and social networks, or beliefs held by members of particular groups, as well as information technology and software development tools. Still underlying these more conceptual uses is the idea that infrastructure provides organizing structure and support for the system or organization it serves, whether it is a city, a nation, a corporation, or a collection of people with common interests. Examples include IT infrastructure, research infrastructure, terrorist infrastructure, employment infrastructure, and tourism infrastructure.

Related concepts
The term "infrastructure" may be confused with the following overlapping or related concepts. Land improvement and land development are general terms that in some contexts may include infrastructure, but in the context of a discussion of infrastructure would refer only to smaller-scale systems or works that are not included in infrastructure, because they are typically limited to a single parcel of land, and are owned and operated by the landowner. For example, an irrigation canal that serves a region or district would be included with infrastructure, but the private irrigation systems on individual land parcels would be considered land improvements, not infrastructure. Service connections to municipal service and public utility networks would also be considered land improvements, not infrastructure. The term "public works" includes government-owned and operated infrastructure as well as public buildings, such as schools and courthouses. Public works generally refers to physical assets needed to deliver public services. Public services include both infrastructure and services generally provided by the government.

Ownership and financing
Infrastructure may be owned and managed by governments or by privately held companies, such as public utility or railway companies. Generally, most roads, major airports and other ports, water distribution systems, and sewage networks are publicly owned, whereas most energy and telecommunications networks are privately owned. Publicly owned infrastructure may be paid for from taxes, tolls, or metered user fees, whereas private infrastructure is generally paid for by metered user fees. Major investment projects are generally financed by the issuance of long-term bonds. Government-owned and operated infrastructure may be developed and operated in the private sector or in public-private partnerships, in addition to in the public sector. In the United States, for example, public spending on infrastructure has varied between 2.3% and 3.6% of GDP since 1950. Many financial institutions invest in infrastructure.

In the developing world
According to researchers at the Overseas Development Institute, the lack of infrastructure in many developing countries represents one of the most significant limitations to economic growth and achievement of the Millennium Development Goals (MDGs). Infrastructure investments and maintenance can be very expensive, especially in such areas as landlocked, rural and sparsely populated countries in Africa.
It has been argued that infrastructure investments contributed to more than half of Africa's improved growth performance between 1990 and 2005, and increased investment is necessary to maintain growth and tackle poverty. The returns to investment in infrastructure are very significant, with on average thirty to forty percent returns for telecommunications (ICT) investments, over forty percent for electricity generation, and eighty percent for roads.

Regional differences
The demand for infrastructure, both by consumers and by companies, is much higher than the amount invested. There are severe constraints on the supply side of the provision of infrastructure in Asia. The infrastructure financing gap between what is invested in Asia-Pacific (around US$48 billion) and what is needed (US$228 billion) is around US$180 billion every year. In Latin America, three percent of GDP (around US$71 billion) would need to be invested in infrastructure in order to satisfy demand, yet in 2005, for example, only around two percent was invested, leaving a financing gap of approximately US$24 billion. In Africa, reaching the seven percent annual growth calculated to be required to meet the MDGs by 2015 would require infrastructure investments of about fifteen percent of GDP, or around US$93 billion a year. In fragile states, over thirty-seven percent of GDP would be required.

Sources of funding for infrastructure
The source of financing for infrastructure varies significantly across sectors. Some sectors are dominated by government spending, others by overseas development aid (ODA), and yet others by private investors. In California, infrastructure financing districts are established by local governments to pay for physical facilities and services within a specified area by using property tax increases. In order to facilitate investment of the private sector in developing countries' infrastructure markets, it is necessary to design risk-allocation mechanisms more carefully, given the higher risks of their markets. Government spending on infrastructure has declined: from the 1930s to 2019, the United States went from spending 4.2% of GDP to 2.5% of GDP on infrastructure. These underinvestments have accrued; according to the 2017 ASCE Infrastructure Report Card, from 2016 to 2025 infrastructure will be underinvested by $2 trillion. Compared to the global GDP percentages, the United States is tied for second-to-last place, with an average percentage of 2.4%. This means that the government spends less money on repairing old infrastructure and on infrastructure as a whole. In Sub-Saharan Africa, governments spend around US$9.4 billion out of a total of US$24.9 billion. In irrigation, governments represent almost all spending. In transport and energy a majority of investment is government spending. In ICT and water supply and sanitation, the private sector represents the majority of capital expenditure. Overall, aid, the private sector, and non-OECD financiers between them exceed government spending. Private-sector spending alone equals state capital expenditure, though the majority is focused on ICT infrastructure investments. External financing increased in the 2000s (decade), and in Africa alone external infrastructure investments increased from US$7 billion in 2002 to US$27 billion in 2009. China, in particular, has emerged as an important investor.
Coronavirus implications
The 2020 COVID-19 pandemic has only exacerbated the underfunding of infrastructure globally that has been accumulating for decades. The pandemic has increased unemployment and has widely disrupted the economy. This has serious impacts on households, businesses, and federal, state and local governments. This is especially detrimental to infrastructure because it is so dependent on funding from government agencies, with state and local governments accounting for approximately 75% of spending on public infrastructure in the United States. Governments are facing enormous decreases in revenue, economic downturns, overworked health systems, and hesitant workforces, resulting in huge budget deficits across the board. However, they must also scale up public investment to ensure successful reopening, boost growth and employment, and green their economies. The unusually large scale of the packages needed for COVID-19 was accompanied by widespread calls for "greening" them to meet the dual goals of economic recovery and environmental sustainability. However, as of March 2021, only a small fraction of the G20's COVID-19-related fiscal measures was found to be climate friendly.

Sustainable infrastructure
Although it is readily apparent that much effort is needed to repair the economic damage inflicted by the coronavirus epidemic, an immediate return to business as usual could be environmentally harmful, as shown by the 2007–08 financial crisis in the United States. While the ensuing economic slowdown reduced global greenhouse gas emissions in 2009, emissions reached a record high in 2010, partially because governments implemented economic stimulus measures with minimal consideration of the environmental consequences. The concern is whether this same pattern will repeat itself. The post-COVID-19 period could determine whether the world meets or misses the emissions goals of the 2015 Paris Agreement and limits global warming to 1.5 to 2 degrees C. As a result of the COVID-19 epidemic, a host of factors could jeopardize a low-carbon recovery plan: these include reduced attention on the global political stage (the 2020 UN Climate Summit was postponed to 2021), the relaxing of environmental regulations in pursuit of economic growth, decreased oil prices preventing low-carbon technologies from being competitive, and finally, stimulus programs that take away funds that could have been used to further the process of decarbonization. Research suggests that a recovery plan based on lower carbon emissions could not only make the significant emissions reductions needed to battle climate change, but also create more economic growth and jobs than a high-carbon recovery plan would. In a study published in the Oxford Review of Economic Policy, more than 200 economists and economic officials reported that "green" economic-recovery initiatives performed at least as well as less "green" initiatives. There have also been calls for an independent body that could provide a comparable assessment of countries' fiscal policies, promoting transparency and accountability at the international level. In addition, in an econometric study published in the journal Economic Modelling, an analysis of government energy technology spending showed that spending on the renewable energy sector created five more jobs per million dollars invested than spending on fossil fuels. Since sustainable infrastructure is more beneficial in both an economic and environmental context, it represents the future of infrastructure.
Especially with increasing pressure from climate change and diminishing natural resources, infrastructure needs not only to maintain economic development, job development, and a high quality of life for residents, but also to protect the environment and its natural resources.

Sustainable energy
Sustainable energy infrastructure includes the types of renewable energy power plants as well as the means of exchange from the plant to the homes and businesses that use that energy. Renewable energy includes well-researched and widely implemented methods such as wind, solar, and hydraulic power, as well as newer and less commonly used types of power creation such as fusion energy. Sustainable energy infrastructure must maintain a strong supply relative to demand, and must also maintain sufficiently low prices for consumers so as not to decrease demand. Any type of renewable energy infrastructure that fails to meet these consumption and price requirements will ultimately be forced out of the market by prevailing non-renewable energy sources.

Sustainable water
Sustainable water infrastructure is focused on a community's sufficient access to clean, safe drinking water. Water is a public good along with electricity, which means that sustainable water catchment and distribution systems must remain affordable to all members of a population. "Sustainable water" may refer to a nation's or community's ability to be self-sustaining, with enough water to meet multiple needs including agriculture, industry, sanitation, and drinking water. It can also refer to the holistic and effective management of water resources. Increasingly, policy makers and regulators are incorporating nature-based solutions (NBS or NbS) into attempts to achieve sustainable water infrastructure.

Sustainable waste management
Sustainable waste management systems aim to minimize the amount of waste products produced by individuals and corporations. Commercial waste management plans have transitioned from simple waste removal plans into comprehensive plans focused on reducing the total amount of waste produced before removal. Sustainable waste management is beneficial environmentally, and can also cut costs for businesses that reduce their amount of disposed goods.

Sustainable transportation
Sustainable transportation includes a shift away from private, greenhouse-gas-emitting cars in favor of adopting methods of transportation that are either carbon neutral or reduce carbon emissions, such as bikes or electric bus systems. Additionally, cities must invest in the appropriate built environments for these ecologically preferable modes of transportation. Cities will need to invest in public transportation networks, as well as bike path networks, among other sustainable solutions that incentivize citizens to use these alternate transit options. Reducing the urban dependency on cars is a fundamental goal of developing sustainable transportation, and this cannot be accomplished without a coordinated focus on both creating the methods of transportation themselves and providing them with networks that are equally or more efficient than existing car networks such as aging highway systems.

Sustainable materials
Another solution for transitioning to more sustainable infrastructure is using more sustainable materials. A material is sustainable if the needed amount can be produced without depleting non-renewable resources. It should also have low environmental impacts, not disrupting the environment's established steady-state equilibrium.
The materials should also be resilient, renewable, reusable, and recyclable. Today, concrete is one of the most common materials used in infrastructure. Twice as much concrete is used in construction as all other building materials combined. It is the backbone of industrialization, as it is used in bridges, piers, pipelines, pavements, and buildings. However, while concrete structures connect cities, carry people and goods, and protect land against flooding and erosion, they last only 50 to 100 years. Many were built within the last 50 years, which means much of this infrastructure needs substantial maintenance to continue functioning. However, concrete is not sustainable. The production of concrete contributes up to 8% of the world's greenhouse gas emissions. A tenth of the world's industrial water usage goes to producing concrete. Even transporting the raw materials to concrete production sites adds to airborne pollution. Furthermore, the production sites and the infrastructure itself strip away agricultural land that could have been fertile soil or habitat vital to the ecosystem. Green infrastructure Green infrastructure is a type of sustainable infrastructure. Green infrastructure uses plant or soil systems to restore some of the natural processes needed to manage water, reduce the effects of disasters such as flooding, and create healthier urban environments. In a more practical sense, it refers to a decentralized network of stormwater management practices, which includes green roofs, trees, bioretention and infiltration, and permeable pavement. Green infrastructure has become an increasingly popular strategy in recent years due to its effectiveness in providing ecological, economic, and social benefits, including positively impacting energy consumption, air quality, and carbon reduction and sequestration. Green roofs A green roof is a rooftop that is partially or completely covered with growing vegetation planted over a membrane. It also includes additional layers, such as a root barrier and drainage and irrigation systems. There are several categories of green roofs, including extensive (with a growing-media depth ranging from two to six inches) and intensive (with a growing-media depth greater than six inches). One benefit of green roofs is that they reduce stormwater runoff: their ability to store water in their growing media reduces the runoff entering the sewer system and waterways, which also decreases the risk of combined sewer overflows. They reduce energy usage since the growing media provides additional insulation, reduces the amount of solar radiation on the roof's surface, and provides evaporative cooling from water in the plants, which reduces roof surface temperatures and heat influx. Green roofs also reduce atmospheric carbon dioxide, since the vegetation sequesters carbon and, by reducing energy usage and the urban heat island effect through lower roof temperatures, green roofs also lower carbon dioxide emissions from electricity generation. Tree planting Tree planting provides a host of ecological, social, and economic benefits. Trees can intercept rain, support infiltration and water storage in soil, diminish the impact of raindrops on barren surfaces, reduce soil moisture through transpiration, and help reduce stormwater runoff. Additionally, trees contribute to recharging local aquifers and improve the health of watershed systems.
Trees also reduce energy usage by providing shade and releasing water into the atmosphere, which cools the air and reduces the amount of heat absorbed by buildings. Finally, trees improve air quality by absorbing harmful air pollutants and reducing the amount of greenhouse gases. Bioretention and infiltration practices There are a variety of bioretention and infiltration practices, including rain gardens and bioswales. A rain garden is planted in a small depression or natural slope and includes native shrubs and flowers. Rain gardens temporarily hold and absorb rainwater and are effective in removing up to 90% of nutrients and chemicals and up to 80% of sediments from the runoff; they also soak up 30% more water than conventional gardens. Bioswales are planted in paved areas like parking lots or sidewalks and are designed to channel overflow toward the sewer system while trapping silt and other pollutants that normally wash off impermeable surfaces. Both rain gardens and bioswales mitigate flood impacts and prevent stormwater from polluting local waterways; increase the usable water supply by reducing the amount of water needed for outdoor irrigation; and improve air quality by minimizing the amount of water going into treatment facilities, which also reduces energy usage and, as a result, reduces air pollution since fewer greenhouse gases are emitted. Smart cities Smart cities use innovative methods of design and implementation in various sectors of infrastructure and planning to create communities that operate at a higher level of relative sustainability than their traditional counterparts. In a sustainable city, urban resilience as well as infrastructure reliability must both be present. Urban resilience is defined by a city's capacity to quickly adapt to or recover from infrastructure defects, and infrastructure reliability means that systems must work efficiently while continuing to maximize their output. When urban resilience and infrastructure reliability interact, cities are able to produce the same level of output at similarly reasonable costs as compared to non-sustainable communities, while still maintaining ease of operation and usage. Masdar City Masdar City is a planned zero-emission smart city to be constructed in the United Arab Emirates. Some have referred to this planned settlement as "utopia-like" because it will feature multiple sustainable infrastructure elements, including energy, water, waste management, and transportation. Masdar City will have a power infrastructure built on renewable energy methods, including solar energy. Masdar City is located in a desert region, meaning that sustainable collection and distribution of water depends on the city's ability to use water at innovative stages of the water cycle. The city will use groundwater, greywater, seawater, blackwater, and other water resources to obtain both drinking and landscaping water. Masdar City aims to be waste-free. Recycling and other waste management and waste reduction methods will be encouraged. Additionally, the city will implement a system to convert waste into fertilizer, which will decrease the amount of space needed for waste accumulation as well as provide an environmentally friendly alternative to traditional fertilizer production methods. No cars will be allowed in Masdar City, contributing to low carbon emissions within the city boundaries. Instead, alternative transportation options will be prioritized during infrastructure development.
This means that a bike lane network will be accessible and comprehensive, and other options will also be available.
Technology
Structures
null
174576
https://en.wikipedia.org/wiki/Transform%20fault
Transform fault
A transform fault, or transform boundary, is a fault along a plate boundary where the motion is predominantly horizontal. It ends abruptly where it connects to another plate boundary: another transform, a spreading ridge, or a subduction zone. A transform fault is a special case of a strike-slip fault that also forms a plate boundary. Most such faults are found in oceanic crust, where they accommodate the lateral offset between segments of divergent boundaries, forming a zigzag pattern. This results from oblique seafloor spreading, where the direction of motion is not perpendicular to the trend of the overall divergent boundary. A smaller number of such faults are found on land, although these are generally better known, such as the San Andreas Fault and the North Anatolian Fault. Nomenclature Transform boundaries are also known as conservative plate boundaries because they involve no addition or loss of lithosphere at the Earth's surface. Background Geophysicist and geologist John Tuzo Wilson recognized that the offsets of oceanic ridges by faults do not follow the classical pattern of an offset fence or geological marker in Reid's elastic rebound theory of faulting, from which the sense of slip is derived. The new class of faults, which he called transform faults, produces slip in the opposite direction from what one would surmise from the standard interpretation of an offset geological feature. Slip along transform faults does not increase the distance between the ridges they separate; the distance remains constant during earthquakes because the ridges are spreading centers. This hypothesis was confirmed in a study of fault plane solutions, which showed that slip on transform faults points in the opposite direction from what the classical interpretation would suggest. Difference between transform and transcurrent faults Transform faults are closely related to transcurrent faults and are commonly confused with them. Both types of fault are strike-slip, or side-to-side, in movement; nevertheless, transform faults always end at a junction with another plate boundary, while transcurrent faults may die out without a junction with another fault. Finally, transform faults form a tectonic plate boundary, while transcurrent faults do not. Mechanics Faults in general are focused areas of deformation or strain, which form in response to built-up stresses, compressional, tensional, or shear, in rock at the surface or deep in the Earth's subsurface. Transform faults specifically accommodate lateral strain by transferring displacement between mid-ocean ridges or subduction zones. They also act as planes of weakness, which may result in splitting in rift zones. Transform faults and divergent boundaries Transform faults are commonly found linking segments of divergent boundaries (mid-oceanic ridges or spreading centres). These mid-oceanic ridges are where new seafloor is constantly created through the upwelling of new basaltic magma. With new seafloor being pushed and pulled out, the older seafloor slowly slides away from the mid-oceanic ridges toward the continents. Although the ridge segments are separated by only tens of kilometers, this offset between segments causes portions of the seafloor between them to push past each other in opposing directions. This lateral movement of seafloor past seafloor is where transform faults are currently active. Transform faults at the mid-oceanic ridge move differently from ordinary strike-slip faults.
Instead of the ridges moving away from each other, as they would along an ordinary strike-slip fault, the ridges on either side of a transform fault remain in the same, fixed locations, and the new ocean seafloor created at the ridges is pushed away from them. Evidence of this motion can be found in paleomagnetic striping on the seafloor. A paper by geophysicist Taras Gerya theorizes that the creation of the transform faults between the segments of a mid-oceanic ridge is attributable to rotated and stretched sections of the ridge. This occurs over a long period of time, with the spreading center or ridge slowly deforming from a straight line to a curved line. Finally, fracturing along these planes forms transform faults. As this takes place, the fault changes from a normal fault with extensional stress to a strike-slip fault with lateral stress. In a study by Bonatti and Crane, peridotite and gabbro rocks were discovered at the edges of the transform ridges. These rocks are created deep inside the Earth's mantle and then rapidly exhumed to the surface. This evidence helps to prove that new seafloor is being created at the mid-oceanic ridges and further supports the theory of plate tectonics. Active transform faults lie between two tectonic structures or faults. Fracture zones represent the previously active transform-fault lines, which have since passed beyond the active transform zone and are being pushed toward the continents. These elevated ridges on the ocean floor can be traced for hundreds of miles, in some cases even from one continent across an ocean to the other continent. Types In his work on transform-fault systems, geologist Tuzo Wilson noted that transform faults must be connected to other faults or tectonic-plate boundaries on both ends; because of that requirement, transform faults can grow in length, keep a constant length, or decrease in length. These length changes depend on which type of fault or tectonic structure connects with the transform fault. Wilson described six types of transform faults: Growing length: In situations where a transform fault links a spreading center and the upper block of a subduction zone, or where two upper blocks of subduction zones are linked, the transform fault itself will grow in length. Constant length: In other cases, transform faults will remain at a constant length. This steadiness can be attributed to several different causes. In the case of ridge-to-ridge transforms, the constancy arises because both ridges grow outward continuously, canceling any change in length. The opposite occurs when a ridge is linked to a subducting plate: all the lithosphere (new seafloor) being created by the ridge is subducted, or swallowed up, by the subduction zone. Finally, when two upper subduction plates are linked, there is no change in length, because the plates move parallel to each other and no new lithosphere is being created to change that length. Decreasing length: In rare cases, transform faults can shrink in length. This occurs when two descending subduction plates are linked by a transform fault. Over time, as the plates are subducted, the transform fault will decrease in length until it disappears completely, leaving only two subduction zones facing in opposite directions. Examples The most prominent examples of mid-oceanic ridge transform zones are in the Atlantic Ocean between South America and Africa. Known as the St.
Paul, Romanche, Chain, and Ascension fracture zones, these areas have deep, easily identifiable transform faults and ridges. Other locations include the East Pacific Rise in the southeastern Pacific Ocean, which meets the San Andreas Fault to the north. Transform faults are not limited to oceanic crust and spreading centers; many of them are on continental margins. The best example is the San Andreas Fault on the Pacific coast of the United States. The San Andreas Fault links the East Pacific Rise off the west coast of Mexico (in the Gulf of California) to the Mendocino triple junction (part of the Juan de Fuca plate) off the coast of the northwestern United States, making it a ridge-to-transform-style fault. The San Andreas Fault system formed fairly recently, during the Oligocene Epoch, between 34 million and 24 million years ago. During this period, the Farallon plate, followed by the Pacific plate, collided with the North American plate. The collision led to the subduction of the Farallon plate underneath the North American plate. Once the spreading center separating the Pacific and the Farallon plates was subducted beneath the North American plate, the San Andreas continental transform-fault system was created. In New Zealand, the South Island's Alpine Fault is a transform fault for much of its length. This has resulted in the folded land of the Southland Syncline being split into an eastern and a western section several hundred kilometres apart. The majority of the syncline is found in Southland and The Catlins in the island's southeast, but a smaller section is also present in the Tasman District in the island's northwest. Other examples include: Middle East's Dead Sea Transform Fault Pakistan's Chaman Fault Turkey's North Anatolian Fault North America's Queen Charlotte Fault Myanmar's Sagaing Fault
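The constant-length behaviour of ridge-to-ridge transforms described in the Types section above can be illustrated with a back-of-the-envelope model. The following Python sketch is illustrative only: the half-spreading rate and offset are invented placeholder values, and the model simply encodes the fact that the ridge axes stay fixed while the fracture-zone traces beyond them lengthen.

# Toy model of a ridge-to-ridge transform fault. The ridge segments act as
# fixed spreading centers, so the active transform between them keeps a
# constant length; only the inactive fracture zones beyond the ridges grow.
HALF_SPREADING_RATE = 0.05   # m/yr (5 cm/yr, a typical order of magnitude)
TRANSFORM_LENGTH = 50_000.0  # m, lateral offset between the ridge segments

for years in (0, 1_000_000, 10_000_000):
    # Seafloor carried past each ridge end becomes fracture-zone trace.
    fracture_zone_growth = HALF_SPREADING_RATE * years
    print(f"t = {years:>10,} yr: active transform = {TRANSFORM_LENGTH:,.0f} m, "
          f"fracture zone lengthened by {fracture_zone_growth:,.0f} m")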
Physical sciences
Tectonics
Earth science
174705
https://en.wikipedia.org/wiki/Algebraic%20number%20theory
Algebraic number theory
Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. These properties, such as whether a ring admits unique factorization, the behavior of ideals, and the Galois groups of fields, can resolve questions of primary importance in number theory, like the existence of solutions to Diophantine equations. History Diophantus The beginnings of algebraic number theory can be traced to Diophantine equations, named after the 3rd-century Alexandrian mathematician Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively: x + y = A and x² + y² = B. Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x² + y² = z² are given by the Pythagorean triples, originally solved by the Babylonians. Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC). Diophantus's major work was the Arithmetica, of which only a portion has survived. Fermat Fermat's Last Theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica, where he claimed he had a proof that was too large to fit in the margin. No successful proof was published until 1995, despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century. Gauss One of the founding works of algebraic number theory, the Disquisitiones Arithmeticae (Latin: Arithmetical Investigations), is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798, when Gauss was 21, and first published in 1801, when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. Before the Disquisitiones was published, number theory consisted of a collection of isolated theorems and conjectures. Gauss brought the work of his predecessors together with his own original work into a systematic framework, filled in gaps, corrected unsound proofs, and extended the subject in numerous ways. The Disquisitiones was the starting point for the work of other nineteenth-century European mathematicians, including Ernst Kummer, Peter Gustav Lejeune Dirichlet and Richard Dedekind. Many of the annotations given by Gauss are in effect announcements of further research of his own, some of which remained unpublished. They must have appeared particularly cryptic to his contemporaries; we can now read them as containing the germs of the theories of L-functions and complex multiplication, in particular. Dirichlet In a pair of papers in 1838 and 1839, Peter Gustav Lejeune Dirichlet proved the first class number formula, for quadratic forms (later refined by his student Leopold Kronecker). The formula, which Jacobi called a result "touching the utmost of human acumen", opened the way for similar results regarding more general number fields.
Based on his research on the structure of the unit group of quadratic fields, he proved the Dirichlet unit theorem, a fundamental result in algebraic number theory. He first used the pigeonhole principle, a basic counting argument, in the proof of a theorem in Diophantine approximation, later named Dirichlet's approximation theorem after him. He made important contributions to Fermat's Last Theorem, for which he proved the cases n = 5 and n = 14, and to the biquadratic reciprocity law. The Dirichlet divisor problem, for which he found the first results, is still an unsolved problem in number theory despite later contributions by other researchers. Dedekind Richard Dedekind's study of Lejeune Dirichlet's work was what led him to his later study of algebraic number fields and ideals. In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory"). The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory. (The word "ring", introduced later by Hilbert, does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether. Ideals generalize Ernst Eduard Kummer's ideal numbers, devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem. Hilbert David Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof, showing that there must be solutions to the problem rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area. He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi. Artin Emil Artin established the Artin reciprocity law in a series of papers (1924; 1927; 1930). This law is a general theorem in number theory that forms a central part of global class field theory. The term "reciprocity law" refers to a long line of more concrete number-theoretic statements which it generalized, from the quadratic reciprocity law and the reciprocity laws of Eisenstein and Kummer to Hilbert's product formula for the norm symbol. Artin's result provided a partial solution to Hilbert's ninth problem. Modern theory Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama observed a possible link between two apparently completely distinct branches of mathematics, elliptic curves and modular forms. The resulting modularity theorem (at the time known as the Taniyama–Shimura conjecture) states that every elliptic curve is modular, meaning that it can be associated with a unique modular form.
It was initially dismissed as unlikely or highly speculative, but was taken more seriously when number theorist André Weil found evidence supporting it, though no proof; as a result the "astounding" conjecture was often known as the Taniyama–Shimura–Weil conjecture. It became a part of the Langlands program, a list of important conjectures needing proof or disproof. From 1993 to 1994, Andrew Wiles provided a proof of the modularity theorem for semistable elliptic curves, which, together with Ribet's theorem, provided a proof of Fermat's Last Theorem. Almost every mathematician at the time had previously considered both Fermat's Last Theorem and the modularity theorem either impossible or virtually impossible to prove, even given the most cutting-edge developments. Wiles first announced his proof in June 1993 in a version that was soon recognized as having a serious gap at a key point. The proof was corrected by Wiles, partly in collaboration with Richard Taylor, and the final, widely accepted version was released in September 1994 and formally published in 1995. The proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques not available to Fermat. Basic notions Failure of unique factorization An important property of the ring of integers is that it satisfies the fundamental theorem of arithmetic: every (positive) integer has a factorization into a product of prime numbers, and this factorization is unique up to the ordering of the factors. This may no longer be true in the ring of integers O of an algebraic number field K. A prime element is an element p of O such that if p divides a product ab, then it divides one of the factors a or b. This property is closely related to primality in the integers, because any positive integer satisfying this property is either 1 or a prime number. However, it is strictly weaker. For example, −2 is not a prime number because it is negative, but it is a prime element. If factorizations into prime elements are permitted, then, even in the integers, there are alternative factorizations such as 6 = 2 · 3 = (−2) · (−3). In general, if u is a unit, meaning a number with a multiplicative inverse in O, and if p is a prime element, then up is also a prime element. Numbers such as p and up are said to be associate. In the integers, the primes p and −p are associate, but only one of these is positive. Requiring that prime numbers be positive selects a unique element from among a set of associated prime elements. When K is not the rational numbers, however, there is no analog of positivity. For example, in the Gaussian integers Z[i], the numbers 1 + 2i and −2 + i are associate because the latter is the product of the former by i, but there is no way to single out one as being more canonical than the other. This leads to equations such as 5 = (1 + 2i)(1 − 2i) = (2 + i)(2 − i), which prove that in Z[i], it is not true that factorizations are unique up to the order of the factors. For this reason, one adopts the definition of unique factorization used in unique factorization domains (UFDs). In a UFD, the prime elements occurring in a factorization are only expected to be unique up to units and their ordering. However, even with this weaker definition, many rings of integers in algebraic number fields do not admit unique factorization. There is an algebraic obstruction called the ideal class group. When the ideal class group is trivial, the ring is a UFD.
When it is not, there is a distinction between a prime element and an irreducible element. An irreducible element x is an element such that if x = yz, then either y or z is a unit. These are the elements that cannot be factored any further. Every element in O admits a factorization into irreducible elements, but it may admit more than one. This is because, while all prime elements are irreducible, some irreducible elements may not be prime. For example, consider the ring Z[√−5]. In this ring, the numbers 3, 2 + √−5 and 2 − √−5 are irreducible. This means that the number 9 has two factorizations into irreducible elements: 9 = 3² = (2 + √−5)(2 − √−5). This equation shows that 3 divides the product (2 + √−5)(2 − √−5). If 3 were a prime element, then it would divide 2 + √−5 or 2 − √−5, but it does not, because all elements divisible by 3 are of the form 3a + 3b√−5. Similarly, 2 + √−5 and 2 − √−5 divide the product 3², but neither of these elements divides 3 itself, so neither of them is prime. As there is no sense in which the elements 3, 2 + √−5 and 2 − √−5 can be made equivalent, unique factorization fails in Z[√−5]. Unlike the situation with units, where uniqueness could be repaired by weakening the definition, overcoming this failure requires a new perspective. Factorization into prime ideals If I is an ideal in O, then there is always a factorization I = p₁^e₁ ⋯ p_t^e_t, where each p_i is a prime ideal, and where this expression is unique up to the order of the factors. In particular, this is true if I is the principal ideal generated by a single element. This is the strongest sense in which the ring of integers of a general number field admits unique factorization. In the language of ring theory, it says that rings of integers are Dedekind domains. When O is a UFD, every prime ideal is generated by a prime element. Otherwise, there are prime ideals that are not generated by prime elements. In Z[√−5], for instance, the ideal (2, 1 + √−5) is a prime ideal that cannot be generated by a single element. Historically, the idea of factoring ideals into prime ideals was preceded by Ernst Kummer's introduction of ideal numbers. These are numbers lying in an extension field E of K. This extension field is now known as the Hilbert class field. By the principal ideal theorem, every prime ideal of O generates a principal ideal of the ring of integers of E. A generator of this principal ideal is called an ideal number. Kummer used these as a substitute for the failure of unique factorization in cyclotomic fields. These eventually led Richard Dedekind to introduce a forerunner of ideals and to prove unique factorization of ideals. An ideal which is prime in the ring of integers in one number field may fail to be prime when extended to a larger number field. Consider, for example, the prime numbers. The corresponding ideals pZ are prime ideals of the ring Z. However, when such an ideal is extended to the Gaussian integers to obtain pZ[i], it may or may not be prime. For example, the factorization 2 = (1 + i)(1 − i) implies that 2Z[i] = ((1 + i)Z[i])²; note that because 1 + i = (1 − i) · i, the ideals generated by 1 + i and 1 − i are the same. A complete answer to the question of which ideals remain prime in the Gaussian integers is provided by Fermat's theorem on sums of two squares. It implies that for an odd prime number p, pZ[i] is a prime ideal if p ≡ 3 (mod 4) and is not a prime ideal if p ≡ 1 (mod 4). This, together with the observation that the ideal (1 + i)Z[i] is prime, provides a complete description of the prime ideals in the Gaussian integers. Generalizing this simple result to more general rings of integers is a basic problem in algebraic number theory. Class field theory accomplishes this goal when K is an abelian extension of Q (that is, a Galois extension with abelian Galois group).
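The failure of unique factorization in Z[√−5] can be checked mechanically with the norm N(a + b√−5) = a² + 5b². The short Python sketch below is a plain-arithmetic illustration of the example above; no number-theory library is assumed, and the representation of elements as coefficient pairs is our own convention.

# Elements a + b*sqrt(-5) of Z[sqrt(-5)] are represented as pairs (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    return (a*c - 5*b*d, a*d + b*c)

def norm(x):
    a, b = x
    return a*a + 5*b*b

# 9 = 3 * 3 = (2 + sqrt(-5)) * (2 - sqrt(-5)): two genuinely different
# factorizations into irreducible elements.
assert mul((3, 0), (3, 0)) == mul((2, 1), (2, -1)) == (9, 0)

# All three factors have norm 9. Refining one factorization into the other
# would require an element of norm 3, but a^2 + 5b^2 = 3 has no integer
# solutions, so each factor is irreducible and the factorizations differ.
print([norm(x) for x in [(3, 0), (2, 1), (2, -1)]])  # [9, 9, 9]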
Ideal class group Unique factorization fails if and only if there are prime ideals that fail to be principal. The object which measures the failure of prime ideals to be principal is called the ideal class group. Defining the ideal class group requires enlarging the set of ideals in a ring of algebraic integers so that they admit a group structure. This is done by generalizing ideals to fractional ideals. A fractional ideal is an additive subgroup J of K which is closed under multiplication by elements of O, meaning that xJ ⊆ J if x ∈ O. All ideals of O are also fractional ideals. If I and J are fractional ideals, then the set IJ of all products of an element in I and an element in J is also a fractional ideal. This operation makes the set of non-zero fractional ideals into a group. The group identity is the ideal (1) = O, and the inverse of J is a (generalized) ideal quotient: J⁻¹ = (O : J) = {x ∈ K : xJ ⊆ O}. The principal fractional ideals, meaning the ones of the form Ox where x ∈ K×, form a subgroup of the group of all non-zero fractional ideals. The quotient of the group of non-zero fractional ideals by this subgroup is the ideal class group. Two fractional ideals I and J represent the same element of the ideal class group if and only if there exists an element x ∈ K such that xI = J. Therefore, the ideal class group makes two fractional ideals equivalent if one is as close to being principal as the other is. The ideal class group is generally denoted Cl K, Cl O, or Pic O (with the last notation identifying it with the Picard group in algebraic geometry). The number of elements in the class group is called the class number of K. The class number of Q(√−5) is 2. This means that there are only two ideal classes: the class of principal fractional ideals, and the class of a non-principal fractional ideal such as (2, 1 + √−5). The ideal class group has another description in terms of divisors. These are formal objects which represent possible factorizations of numbers. The divisor group Div K is defined to be the free abelian group generated by the prime ideals of O. There is a group homomorphism div from K×, the non-zero elements of K up to multiplication, to Div K. Suppose that x ∈ K× generates the principal ideal (x) = p₁^e₁ ⋯ p_t^e_t. Then div x is defined to be the divisor div x = ∑ᵢ eᵢ[pᵢ]. The kernel of div is the group of units in O, while the cokernel is the ideal class group. In the language of homological algebra, this says that there is an exact sequence of abelian groups (written multiplicatively), 1 → O× → K× → Div K → Cl K → 1. Real and complex embeddings Some number fields, such as Q(√2), can be specified as subfields of the real numbers. Others, such as Q(√−1), cannot. Abstractly, such a specification corresponds to a field homomorphism K → R or K → C. These are called real embeddings and complex embeddings, respectively. A real quadratic field Q(√a), with a a positive rational number that is not a perfect square, is so-called because it admits two real embeddings but no complex embeddings. These are the field homomorphisms which send √a to √a and to −√a, respectively. Dually, an imaginary quadratic field Q(√−a) admits no real embeddings but admits a conjugate pair of complex embeddings. One of these embeddings sends √−a to i√a, while the other sends it to its complex conjugate, −i√a. Conventionally, the number of real embeddings of K is denoted r₁, while the number of conjugate pairs of complex embeddings is denoted r₂. The signature of K is the pair (r₁, r₂). It is a theorem that r₁ + 2r₂ = d, where d is the degree of K. Considering all embeddings at once determines a function K → R^r₁ ⊕ C^r₂, or equivalently a function K → R^d. This is called the Minkowski embedding. The subspace of the codomain fixed by complex conjugation is a real vector space of dimension d called Minkowski space.
Because the Minkowski embedding is defined by field homomorphisms, multiplication of elements of K by an element x ∈ K corresponds to multiplication by a diagonal matrix in the Minkowski embedding. The dot product on Minkowski space corresponds to the trace form ⟨x, y⟩ = Tr(xy). The image of O under the Minkowski embedding is a d-dimensional lattice. If B is a basis for this lattice, then det BᵀB is the discriminant of O. The discriminant is denoted Δ or D. The covolume of the image of O is √|Δ|. Places Real and complex embeddings can be put on the same footing as prime ideals by adopting a perspective based on valuations. Consider, for example, the integers. In addition to the usual absolute value function |·| : Q → R, there are p-adic absolute value functions |·|p : Q → R, defined for each prime number p, which measure divisibility by p. Ostrowski's theorem states that these are all possible absolute value functions on Q (up to equivalence). Therefore, absolute values are a common language to describe both the real embedding of Q and the prime numbers. A place of an algebraic number field K is an equivalence class of absolute value functions on K. There are two types of places. There is a p-adic absolute value for each prime ideal p of O, and, like the p-adic absolute values on Q, it measures divisibility. These are called finite places. The other type of place is specified using a real or complex embedding of K and the standard absolute value function on R or C. These are infinite places. Because absolute values are unable to distinguish between a complex embedding and its conjugate, a complex embedding and its conjugate determine the same place. Therefore, there are r₁ real places and r₂ complex places. Because places encompass the primes, places are sometimes referred to as primes. When this is done, finite places are called finite primes and infinite places are called infinite primes. If v is a valuation corresponding to an absolute value, then one frequently writes v | ∞ to mean that v is an infinite place and v ∤ ∞ to mean that it is a finite place. Considering all the places of the field together produces the adele ring of the number field. The adele ring allows one to simultaneously track all the data available using absolute values. This produces significant advantages in situations where the behavior at one place can affect the behavior at other places, as in the Artin reciprocity law. Places at infinity geometrically There is a geometric analogy for places at infinity which holds on the function fields of curves. For example, let X be a smooth, projective, algebraic curve over a field k. The function field K = k(X) has many absolute values, or places, and each corresponds to a point on the curve. If X is the projective completion of an affine curve, then the points added in the completion correspond to the places at infinity. Then, the completion of K at one of these points gives an analogue of the p-adics. For example, if X = P¹, then its function field is isomorphic to k(t), where t is an indeterminate and the field is the field of fractions of polynomials in t. Then, a place at a point p of the curve measures the order of vanishing or the order of a pole of a fraction of polynomials at that point: the valuation of a quotient f/g is the order of vanishing of f minus the order of vanishing of g at p. The function field of the completion at such a place is then the field of formal power series in a local variable t, so an element is of the form ∑_{n ≥ −k} aₙtⁿ for some non-negative integer k.
For the place at infinity, this corresponds to the function field k((1/t)), whose elements are power series in 1/t. Units The integers have only two units, 1 and −1. Other rings of integers may admit more units. The Gaussian integers have four units, the previous two as well as ±i. The Eisenstein integers have six units. The integers in real quadratic number fields have infinitely many units. For example, in Z[√2], every power of 1 + √2 is a unit, and all these powers are distinct. In general, the group of units of O, denoted O×, is a finitely generated abelian group. The fundamental theorem of finitely generated abelian groups therefore implies that it is a direct sum of a torsion part and a free part. Reinterpreting this in the context of a number field, the torsion part consists of the roots of unity that lie in O. This group is cyclic. The free part is described by Dirichlet's unit theorem. This theorem says that the rank of the free part is r₁ + r₂ − 1. Thus, for example, the only fields for which the rank of the free part is zero are Q and the imaginary quadratic fields. A more precise statement giving the structure of O× ⊗Z Q as a Galois module for the Galois group of K/Q is also possible. The free part of the unit group can be studied using the infinite places of K. Consider the function L(u) = (log |u|_v)_v, where v varies over the infinite places of K and |·|v is the absolute value associated with v. The function L is a homomorphism from O× to a real vector space. It can be shown that the image of L is a lattice that spans the hyperplane defined by the vanishing of the sum of the coordinates. The covolume of this lattice is the regulator of the number field. One of the simplifications made possible by working with the adele ring is that there is a single object, the idele class group, that describes both the quotient by this lattice and the ideal class group. Zeta function The Dedekind zeta function of a number field, analogous to the Riemann zeta function, is an analytic object which describes the behavior of prime ideals in O. When K is an abelian extension of Q, Dedekind zeta functions are products of Dirichlet L-functions, with one factor for each Dirichlet character. The trivial character corresponds to the Riemann zeta function. When K is a Galois extension, the Dedekind zeta function is the Artin L-function of the regular representation of the Galois group of K, and it has a factorization in terms of irreducible Artin representations of the Galois group. The zeta function is related to the other invariants described above by the class number formula. Local fields Completing a number field K at a place w gives a complete field. If the valuation is Archimedean, one obtains R or C; if it is non-Archimedean and lies over a prime p of the rationals, one obtains a finite extension of Q_p: a complete, discretely valued field with finite residue field. This process simplifies the arithmetic of the field and allows the local study of problems. For example, the Kronecker–Weber theorem can be deduced easily from the analogous local statement. The philosophy behind the study of local fields is largely motivated by geometric methods. In algebraic geometry, it is common to study varieties locally at a point by localizing to a maximal ideal. Global information can then be recovered by gluing together local data. This spirit is adopted in algebraic number theory. Given a prime in the ring of algebraic integers in a number field, it is desirable to study the field locally at that prime. Therefore, one localizes the ring of algebraic integers to that prime and then completes the fraction field, much in the spirit of geometry.
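The unit groups described above can be made concrete. For the real quadratic field Q(√2), Dirichlet's theorem gives unit rank r₁ + r₂ − 1 = 2 + 0 − 1 = 1, and 1 + √2 generates the free part. A minimal Python sketch, using the norm a² − 2b² and an ad-hoc pair representation of our own (not a library API):

# Powers of the fundamental unit 1 + sqrt(2) in Z[sqrt(2)], stored as (a, b)
# for a + b*sqrt(2). Each power has norm a^2 - 2b^2 = +-1, hence is a unit.
a, b = 1, 1  # represents 1 + sqrt(2)
for n in range(1, 6):
    print(f"(1 + sqrt(2))^{n} = {a} + {b}*sqrt(2), norm = {a*a - 2*b*b}")
    a, b = a + 2*b, a + b  # multiply (a + b*sqrt(2)) by (1 + sqrt(2))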
Major results Finiteness of the class group One of the classical results in algebraic number theory is that the ideal class group of an algebraic number field K is finite. This is a consequence of Minkowski's theorem, since there are only finitely many integral ideals with norm less than a fixed positive integer. The order of the class group is called the class number, and is often denoted by the letter h. Dirichlet's unit theorem Dirichlet's unit theorem provides a description of the structure of the multiplicative group of units O× of the ring of integers O. Specifically, it states that O× is isomorphic to G × Zr, where G is the finite cyclic group consisting of all the roots of unity in O, and r = r1 + r2 − 1 (where r1 (respectively, r2) denotes the number of real embeddings (respectively, pairs of conjugate non-real embeddings) of K). In other words, O× is a finitely generated abelian group of rank r1 + r2 − 1 whose torsion consists of the roots of unity in O. Reciprocity laws In terms of the Legendre symbol, the law of quadratic reciprocity for positive odd primes p and q states that (p/q)(q/p) = (−1)^(((p−1)/2)((q−1)/2)). A reciprocity law is a generalization of the law of quadratic reciprocity. There are several different ways to express reciprocity laws. The early reciprocity laws found in the 19th century were usually expressed in terms of a power residue symbol (p/q) generalizing the quadratic reciprocity symbol, which describes when a prime number is an nth power residue modulo another prime, and gave a relation between (p/q) and (q/p). Hilbert reformulated the reciprocity laws as saying that a product over p of Hilbert symbols (a,b/p), taking values in roots of unity, is equal to 1. Artin's reformulated reciprocity law states that the Artin symbol from ideals (or ideles) to elements of a Galois group is trivial on a certain subgroup. Several more recent generalizations express reciprocity laws using cohomology of groups or representations of adelic groups or algebraic K-groups, and their relationship with the original quadratic reciprocity law can be hard to see. Class number formula The class number formula relates many important invariants of a number field to a special value of its Dedekind zeta function. Related areas Algebraic number theory interacts with many other mathematical disciplines. It uses tools from homological algebra. Via the analogy of function fields vs. number fields, it relies on techniques and ideas from algebraic geometry. Moreover, the study of higher-dimensional schemes over Z instead of number rings is referred to as arithmetic geometry. Algebraic number theory is also used in the study of arithmetic hyperbolic 3-manifolds.
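The quadratic reciprocity law stated above is easy to spot-check numerically via Euler's criterion, which computes the Legendre symbol as a modular exponentiation. The following Python sketch is purely illustrative:

# Legendre symbol (a/p) via Euler's criterion: a^((p-1)/2) mod p is 1 or p-1
# for a coprime to p; map the latter to -1.
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Check (p/q)(q/p) = (-1)^(((p-1)/2)((q-1)/2)) on a few odd prime pairs.
for p, q in [(3, 5), (3, 7), (5, 7), (11, 13)]:
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    assert lhs == rhs
    print(p, q, lhs, rhs)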
Mathematics
Other
null
174706
https://en.wikipedia.org/wiki/Laplace%20operator
Laplace operator
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of the second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf(p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p). The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation Δf = 0 are called harmonic functions and represent the possible gravitational potentials in regions of vacuum. The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology. Definition The Laplace operator is a second-order differential operator in n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f). Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is the real-valued function defined by Δf = ∇²f = ∇·∇f, where the latter notations derive from formally writing ∇ = (∂/∂x₁, …, ∂/∂xₙ). Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates xᵢ: Δf = ∑ᵢ ∂²f/∂xᵢ². As a second-order differential operator, the Laplace operator maps Cᵏ functions to Cᵏ⁻² functions for k ≥ 2. Alternatively, the Laplace operator can be defined as Δf(p) = lim_{R→0} (2n/R²)(f̄_S(p, R) − f(p)), where n is the dimension of the space, f̄_S(p, R) is the average value of f on the surface of an n-sphere of radius R centered at p, given by the surface integral over the n-sphere divided by the hypervolume of its boundary (the hypervolume of the boundary of a unit n-sphere scaled by R^(n−1)). Motivation Diffusion In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium. Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V (also called S) of any smooth region V is zero, provided there is no source or sink within V: ∫_S ∇u · n dS = 0, where n is the outward unit normal to the boundary of V. By the divergence theorem, ∫_V div(∇u) dV = ∫_S ∇u · n dS = 0. Since this holds for all smooth regions V, one can show that it implies div(∇u) = Δu = 0. The left-hand side of this equation is the Laplace operator applied to u, and the equation Δu = 0 is known as Laplace's equation. Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion.
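The informal "average value" description of the Laplacian can be sanity-checked numerically: on a grid, the standard 5-point stencil in two dimensions is exactly a rescaled difference between the average of a point's four neighbours and the value at the point itself. A small Python sketch, with the test function and evaluation point chosen arbitrarily:

# 5-point-stencil Laplacian in 2D: 4*(average of neighbours - f(p)) / h^2.
def laplacian_5pt(f, x, y, h=1e-3):
    avg = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)) / 4.0
    return 4.0 * (avg - f(x, y)) / h**2

f = lambda x, y: x**2 + 3*y**2       # exact Laplacian: 2 + 6 = 8 everywhere
print(laplacian_5pt(f, 0.7, -1.2))   # ~8.0, up to floating-point rounding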
The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation. This interpretation of the Laplacian is also explained by the following fact about averages. Averages Given a twice continuously differentiable function f and a point p, the average value of f over the ball with radius h centered at p is: f̄_B(p, h) = f(p) + (Δf(p)/(2(n+2))) h² + o(h²) as h → 0. Similarly, the average value of f over the sphere (the boundary of a ball) with radius h centered at p is: f̄_S(p, h) = f(p) + (Δf(p)/(2n)) h² + o(h²) as h → 0. Density associated with a potential If φ denotes the electrostatic potential associated to a charge distribution q, then the charge distribution itself is given by the negative of the Laplacian of φ: q = −ε₀Δφ, where ε₀ is the electric constant. This is a consequence of Gauss's law. Indeed, if V is any smooth region with boundary ∂V, then by Gauss's law the flux of the electrostatic field E across the boundary is proportional to the charge enclosed: ∫_{∂V} E · n dS = ∫_V div E dV = (1/ε₀) ∫_V q dV, where the first equality is due to the divergence theorem. Since the electrostatic field is the (negative) gradient of the potential, this gives −∫_V div(grad φ) dV = (1/ε₀) ∫_V q dV. Since this holds for all regions V, we must have div(grad φ) = −q/ε₀. The same approach implies that the negative of the Laplacian of the gravitational potential is the mass distribution. Often the charge (or mass) distribution is given, and the associated potential is unknown. Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation. Energy minimization Another motivation for the Laplacian appearing in physics is that solutions to Δf = 0 in a region U are functions that make the Dirichlet energy functional E(f) = (1/2) ∫_U |∇f|² dx stationary. To see this, suppose f : U → R is a function, and u : U → R is a function that vanishes on the boundary of U. Then: d/dε|_{ε=0} E(f + εu) = ∫_U ∇f · ∇u dx = −∫_U u Δf dx, where the last equality follows using Green's first identity. This calculation shows that if Δf = 0, then E is stationary around f. Conversely, if E is stationary around f, then Δf = 0 by the fundamental lemma of calculus of variations. Coordinate expressions Two dimensions The Laplace operator in two dimensions is given by: In Cartesian coordinates, Δf = ∂²f/∂x² + ∂²f/∂y², where x and y are the standard Cartesian coordinates of the xy-plane. In polar coordinates, Δf = (1/r) ∂/∂r (r ∂f/∂r) + (1/r²) ∂²f/∂θ², where r represents the radial distance and θ the angle. Three dimensions In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems. In Cartesian coordinates, Δf = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z². In cylindrical coordinates, Δf = (1/ρ) ∂/∂ρ (ρ ∂f/∂ρ) + (1/ρ²) ∂²f/∂φ² + ∂²f/∂z², where ρ represents the radial distance, φ the azimuth angle and z the height. In spherical coordinates, Δf = (1/r²) ∂/∂r (r² ∂f/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂f/∂θ) + (1/(r² sin² θ)) ∂²f/∂φ², or, by expanding the radial term, Δf = ∂²f/∂r² + (2/r) ∂f/∂r + (1/(r² sin θ)) ∂/∂θ (sin θ ∂f/∂θ) + (1/(r² sin² θ)) ∂²f/∂φ², where φ represents the azimuthal angle and θ the zenith angle or co-latitude. In particular, the above is equivalent to Δf = (1/r²) ∂/∂r (r² ∂f/∂r) + (1/r²) Δ_{S²}f, where Δ_{S²} is the Laplace–Beltrami operator on the unit sphere. In general curvilinear coordinates (ξ¹, …, ξⁿ): Δf = g^{mn} ∂²f/∂ξ^m∂ξ^n − g^{mn} Γ^l_{mn} ∂f/∂ξ^l, where summation over the repeated indices is implied, g^{mn} is the inverse metric tensor and Γ^l_{mn} are the Christoffel symbols for the selected coordinates. n dimensions In arbitrary curvilinear coordinates in n dimensions (ξ¹, …, ξⁿ), we can write the Laplacian in terms of the inverse metric tensor g^{ij}: Δf = (1/√|g|) ∂/∂ξ^i (√|g| g^{ij} ∂f/∂ξ^j), from the Voss–Weyl formula for the divergence. In spherical coordinates in n dimensions, with the parametrization x = rθ, with r representing a positive real radius and θ an element of the unit sphere S^{n−1}, Δf = ∂²f/∂r² + ((n−1)/r) ∂f/∂r + (1/r²) Δ_{S^{n−1}}f, where Δ_{S^{n−1}} is the Laplace–Beltrami operator on the (n−1)-sphere, known as the spherical Laplacian.
The two radial derivative terms can be equivalently rewritten as (1/r^{n−1}) ∂/∂r (r^{n−1} ∂f/∂r). As a consequence, the spherical Laplacian of a function defined on the unit sphere can be computed as the ordinary Laplacian of the function extended to the ambient space so that it is constant along rays, i.e., homogeneous of degree zero. Euclidean invariance The Laplacian is invariant under all Euclidean transformations: rotations and translations. In two dimensions, for example, this means that Δ(f(x cos θ − y sin θ + a, x sin θ + y cos θ + b)) = (Δf)(x cos θ − y sin θ + a, x sin θ + y cos θ + b) for all θ, a, and b. In arbitrary dimensions, Δ(f ∘ ρ) = (Δf) ∘ ρ whenever ρ is a rotation, and likewise Δ(f ∘ τ) = (Δf) ∘ τ whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection.) In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator. Spectral theory The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction f with Δf = −λf. This is known as the Helmholtz equation. If Ω is a bounded domain in Rⁿ, then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L²(Ω). This result essentially follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem). It can also be shown that the eigenfunctions are infinitely differentiable functions. More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When Ω is the n-sphere, the eigenfunctions of the Laplacian are the spherical harmonics. Vector Laplacian The vector Laplace operator, also denoted by ∇², is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field, returning a vector quantity. When computed in orthonormal Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component. The vector Laplacian of a vector field A is defined as ∇²A = ∇(∇ · A) − ∇ × (∇ × A). This definition can be seen as the Helmholtz decomposition of the vector Laplacian. In Cartesian coordinates, this reduces to the much simpler form ∇²A = (∇²A_x, ∇²A_y, ∇²A_z), where A_x, A_y, and A_z are the components of the vector field A, and ∇² just to the left of each vector-field component is the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product. For expressions of the vector Laplacian in other coordinate systems, see Del in cylindrical and spherical coordinates. Generalization The Laplacian of any tensor field T ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor: ∇²T = ∇ · (∇T). For the special case where T is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form. If T is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector.
The formula for the vector Laplacian above may be used to avoid tensor math, and may be shown to be equivalent to the divergence of the Jacobian matrix of the vector field, that is, of the gradient of a vector. And, in the same manner, a dot product of a vector by the gradient of another vector (a tensor of 2nd degree), which evaluates to a vector, can be seen as a product of matrices. This identity is a coordinate-dependent result, and is not general. Use in physics An example of the usage of the vector Laplacian is the Navier–Stokes equations for a Newtonian incompressible flow: ρ(∂v/∂t + (v · ∇)v) = ρf − ∇p + μ∇²v, where the term with the vector Laplacian of the velocity field, μ∇²v, represents the viscous stresses in the fluid. Another example is the wave equation for the electric field, which can be derived from Maxwell's equations in the absence of charges and currents: ∇²E − μ₀ε₀ ∂²E/∂t² = 0. This equation can also be written as ◻E = 0, where ◻ is the D'Alembertian, used in the Klein–Gordon equation. Some properties First of all, we say that a smooth function u is superharmonic whenever Δu ≤ 0. Let u be a smooth function, and let K be a connected compact set. If u is superharmonic, then, for every point of K, the value of u there satisfies a bound with a constant depending on u and K. Generalizations A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms. For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows. Laplace–Beltrami operator The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The Laplace–Beltrami operator, when applied to a function, is the trace (tr) of the function's Hessian: Δf = tr(H(f)), where the trace is taken with respect to the inverse of the metric tensor. The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which operates on tensor fields, by a similar formula. Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as Δf = δdf. Here δ is the codifferential, which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms α by Δα = δdα + dδα. This is known as the Laplace–de Rham operator, which is related to the Laplace–Beltrami operator by the Weitzenböck identity. D'Alembertian The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic, hyperbolic, or ultrahyperbolic. In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator or D'Alembertian: ◻ = (1/c²) ∂²/∂t² − ∂²/∂x² − ∂²/∂y² − ∂²/∂z². It is the generalization of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space, and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics. The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations, and it is also part of the Klein–Gordon equation, which reduces to the wave equation in the massless case.
The additional factor of c in the metric is needed in physics if space and time are measured in different units; a similar factor would be required if, for example, the x direction were measured in meters while the y direction were measured in centimeters. Indeed, theoretical physicists usually work in units such that c = 1 in order to simplify the equation. The d'Alembert operator generalizes to a hyperbolic operator on pseudo-Riemannian manifolds.
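The polar-coordinate expression quoted in the coordinate list above can be verified symbolically. The sketch below assumes the third-party sympy library is available; the test function is an arbitrary choice.

# Verify that the Cartesian Laplacian of a test function agrees with the
# polar formula (1/r) d/dr (r df/dr) + (1/r^2) d^2f/dtheta^2.
import sympy as sp

x, y, r, t = sp.symbols('x y r t', positive=True)

g = (x**2 + y**2) * sp.exp(x)                 # arbitrary smooth test function
cart = sp.diff(g, x, 2) + sp.diff(g, y, 2)    # Cartesian Laplacian

gp = g.subs([(x, r*sp.cos(t)), (y, r*sp.sin(t))])
polar = sp.diff(r*sp.diff(gp, r), r)/r + sp.diff(gp, t, 2)/r**2

# The difference should simplify to zero.
print(sp.simplify(cart.subs([(x, r*sp.cos(t)), (y, r*sp.sin(t))]) - polar))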
Mathematics
Multivariable and vector calculus
null
174762
https://en.wikipedia.org/wiki/Reflection%20nebula
Reflection nebula
In astronomy, reflection nebulae are clouds of interstellar dust which might reflect the light of a nearby star or stars. The energy from the nearby stars is insufficient to ionize the gas of the nebula to create an emission nebula, but is enough to give sufficient scattering to make the dust visible. Thus, the frequency spectrum shown by reflection nebulae is similar to that of the illuminating stars. Among the microscopic particles responsible for the scattering are carbon compounds (e.g. diamond dust) and compounds of other elements such as iron and nickel. The latter two are often aligned with the galactic magnetic field and cause the scattered light to be slightly polarized.
Discovery
Analyzing the spectrum of the nebula associated with the star Merope in the Pleiades, Vesto Slipher concluded in 1912 that the source of its light is most likely the star itself, and that the nebula reflects light from the star (and that of the star Alcyone). Calculations by Ejnar Hertzsprung in 1913 lent credence to that hypothesis. Edwin Hubble further distinguished between the emission and reflection nebulae in 1922. Reflection nebulae are usually blue because the scattering is more efficient for blue light than red (this is the same scattering process that gives us blue skies and red sunsets). Reflection nebulae and emission nebulae are often seen together and are sometimes both referred to as diffuse nebulae. Some 500 reflection nebulae are known. A blue reflection nebula can also be seen in the same area of the sky as the Trifid Nebula. The supergiant star Antares, which is very red (spectral class M1), is surrounded by a large, yellow reflection nebula. Reflection nebulae may also be the site of star formation.
Luminosity law
In 1922, Edwin Hubble published the result of his investigations on bright nebulae. One part of this work is the Hubble luminosity law for reflection nebulae, which relates the angular size ($R$) of the nebula to the apparent magnitude ($m$) of the associated star:
$$5\log R = -m + k,$$
where $k$ is a constant that depends on the sensitivity of the measurement.
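To make the law concrete, here is a small illustrative sketch (not from the article); the constant k and the sample magnitudes are made-up values, since k depends on the instrument:

# Hypothetical use of the Hubble luminosity law 5*log10(R) = -m + k.
# k and the magnitudes below are illustrative assumptions, not measurements.
import math

def nebula_angular_size(m, k):
    """Angular size R (arcminutes) a reflection nebula should reach
    around a star of apparent magnitude m, per 5*log10(R) = -m + k."""
    return 10 ** ((k - m) / 5.0)

k = 12.0  # instrument-dependent constant (assumed)
for m in (4.0, 6.0, 8.0):
    # Brighter stars (smaller m) should light up larger nebulae.
    print(f"m = {m}: R = {nebula_angular_size(m, k):.1f} arcmin")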
Physical sciences
Basics_2
Astronomy
174781
https://en.wikipedia.org/wiki/Emission%20nebula
Emission nebula
An emission nebula is a nebula formed of ionized gases that emit light of various wavelengths. The most common source of ionization is high-energy ultraviolet photons emitted from a nearby hot star. Among the several different types of emission nebulae are H II regions, in which star formation is taking place and young, massive stars are the source of the ionizing photons; and planetary nebulae, in which a dying star has thrown off its outer layers, with the exposed hot core then ionizing them.
General information
Usually, a young star will ionize part of the same cloud from which it was born, although only massive, hot stars can release sufficient energy to ionize a significant part of a cloud. In many emission nebulae, an entire cluster of young stars is contributing energy. Stars hotter than 25,000 K generally emit enough ionizing ultraviolet radiation (wavelength shorter than 91.2 nm) to cause the emission nebulae around them to be brighter than the reflection nebulae. The radiation emitted by cooler stars is generally not energetic enough to ionize hydrogen, which results in the reflection nebulae around these stars giving off less light than the emission nebulae. The nebula's color depends on its chemical composition and degree of ionization. Due to the prevalence of hydrogen in interstellar gas, and its relatively low energy of ionization, many emission nebulae appear red due to strong emissions of the Balmer series. If more energy is available, other elements will be ionized, and green and blue nebulae become possible. By examining the spectra of nebulae, astronomers infer their chemical content. Most emission nebulae are about 90% hydrogen, with the remainder being helium, oxygen, nitrogen, and other elements. Some of the most prominent emission nebulae visible from the northern celestial hemisphere are the North America Nebula (NGC 7000) and the Veil Nebula (NGC 6960/6992) in Cygnus, while prominent in the southern celestial hemisphere are the Lagoon Nebula (M8 / NGC 6523) in Sagittarius and the Orion Nebula (M42). Further in the southern hemisphere is the bright Carina Nebula (NGC 3372). Emission nebulae often have dark areas in them which result from clouds of dust which block the light. Many nebulae are made up of both reflection and emission components, such as the Trifid Nebula.
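The quoted 91.2 nm threshold is simply the photon wavelength matching hydrogen's 13.6 eV ionization energy; a few lines of Python (an illustration, not from the article) confirm the conversion via λ = hc/E:

# Check that 13.6 eV corresponds to a ~91.2 nm photon (lambda = h*c/E).
h  = 6.62607015e-34    # Planck constant, J*s (exact)
c  = 2.99792458e8      # speed of light, m/s (exact)
eV = 1.602176634e-19   # joules per electronvolt (exact)

E_ion = 13.6 * eV                  # hydrogen ionization energy
wavelength = h * c / E_ion         # threshold photon wavelength
print(f"{wavelength*1e9:.1f} nm")  # ~91.2 nm, matching the text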
Physical sciences
Basics_2
Astronomy
174782
https://en.wikipedia.org/wiki/Gravitational%20field
Gravitational field
In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field exerted on another massive body. It has dimensions of acceleration (L/T2) and is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s2). In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. The field results from the spatial gradient of the gravitational potential field. In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force. Gravity is distinguished from other forces by its obedience to the equivalence principle.
Classical mechanics
In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass $M$ is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, $\Phi$, at each point in space associated with the force fields; this is called the gravitational potential. The gravitational field equation is
$$\mathbf{g} = \frac{\mathbf{F}}{m} = \frac{\mathrm{d}^2\mathbf{R}}{\mathrm{d}t^2} = -\frac{GM}{|\mathbf{R}|^2}\hat{\mathbf{R}} = -\nabla\Phi,$$
where $\mathbf{F}$ is the gravitational force, $m$ is the mass of the test particle, $\mathbf{R}$ is the radial vector of the test particle relative to the mass (or, for Newton's second law of motion, which is a time-dependent function, a set of positions of test particles each occupying a particular point in space for the start of testing), $t$ is time, $G$ is the gravitational constant, and $\nabla$ is the del operator. This includes Newton's law of universal gravitation, and the relation between the gravitational potential and the field acceleration. $\mathbf{F}/m$ and $-\nabla\Phi$ are both equal to the gravitational acceleration $\mathbf{g}$ (equivalent to the inertial acceleration, so same mathematical form, but also defined as gravitational force per unit mass). The negative signs are inserted since the force acts antiparallel to the displacement. The equivalent field equation in terms of the mass density $\rho$ of the attracting mass is:
$$\nabla\cdot\mathbf{g} = -\nabla^2\Phi = -4\pi G\rho,$$
which contains Gauss's law for gravity, and Poisson's equation for gravity. Newton's law implies Gauss's law, but not vice versa; see Relation between Gauss's and Newton's laws. These classical equations are differential equations of motion for a test particle in the presence of a gravitational field, i.e. setting up and solving these equations allows the motion of a test mass to be determined and described. The field around multiple particles is simply the vector sum of the fields around each individual particle.
A test particle in such a field will experience a force that equals the vector sum of the forces that it would experience in these individual fields. This is
$$\mathbf{g}_j = \sum_{i\ne j}\mathbf{g}_i = -G\sum_{i\ne j} m_i\,\frac{\mathbf{R}_j-\mathbf{R}_i}{|\mathbf{R}_j-\mathbf{R}_i|^3},$$
i.e. the gravitational field on mass $m_j$ is the sum of all gravitational fields due to all other masses $m_i$, except the mass $m_j$ itself. $\mathbf{R}_i$ is the position vector of the gravitating particle $i$, and $\mathbf{R}_j$ is that of the test particle.
General relativity
In general relativity, the Christoffel symbols play the role of the gravitational force field and the metric tensor plays the role of the gravitational potential. In general relativity, the gravitational field is determined by solving the Einstein field equations
$$G_{\mu\nu} = \kappa T_{\mu\nu},$$
where $T_{\mu\nu}$ is the stress–energy tensor, $G_{\mu\nu}$ is the Einstein tensor, and $\kappa$ is the Einstein gravitational constant. The latter is defined as $\kappa = 8\pi G/c^4$, where $G$ is the Newtonian constant of gravitation and $c$ is the speed of light. These equations are dependent on the distribution of matter, stress and momentum in a region of space, unlike Newtonian gravity, which depends only on the distribution of matter. The fields themselves in general relativity represent the curvature of spacetime. General relativity states that being in a region of curved space is equivalent to accelerating up the gradient of the field. By Newton's second law, this will cause an object to experience a fictitious force if it is held still with respect to the field. This is why a person will feel himself pulled down by the force of gravity while standing still on the Earth's surface. In general the gravitational fields predicted by general relativity differ in their effects only slightly from those predicted by classical mechanics, but there are a number of easily verifiable differences, one of the most well known being the deflection of light in such fields.
Embedding diagram
Embedding diagrams are three-dimensional graphs commonly used to illustrate gravitational potential educationally, by drawing gravitational potential fields as a gravitational topography and depicting the potentials as so-called gravitational wells within a body's sphere of influence.
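As an illustration of the superposition formula above, the following sketch (not from the article; the masses and positions are toy values) sums the point-mass contributions numerically:

# Net gravitational field at a point, by superposition of point masses:
# g(R) = sum over i of -G * m_i * (R - R_i) / |R - R_i|^3
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_field(R, masses, positions):
    """Net field (m/s^2) at point R due to point masses at given positions."""
    g = np.zeros(3)
    for m_i, R_i in zip(masses, positions):
        d = R - R_i
        g += -G * m_i * d / np.linalg.norm(d) ** 3
    return g

# Toy Earth-Moon configuration (SI units, illustrative).
masses = [5.972e24, 7.348e22]
positions = [np.array([0.0, 0.0, 0.0]), np.array([3.844e8, 0.0, 0.0])]
point = np.array([0.0, 6.371e6, 0.0])  # roughly Earth's surface
print(gravitational_field(point, masses, positions))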
Physical sciences
Classical mechanics
Physics
174823
https://en.wikipedia.org/wiki/Dark%20nebula
Dark nebula
A dark nebula or absorption nebula is a type of interstellar cloud, particularly a molecular cloud, that is so dense that it obscures the visible wavelengths of light from objects behind it, such as background stars and emission or reflection nebulae. The extinction of the light is caused by interstellar dust grains in the coldest, densest parts of molecular clouds. Clusters and large complexes of dark nebulae are associated with giant molecular clouds. Isolated small dark nebulae are called Bok globules. As with other interstellar dust and material, the objects a dark nebula obscures can be observed only at radio wavelengths in radio astronomy or in the infrared in infrared astronomy. Dark clouds appear dark because of sub-micrometre-sized dust particles, coated with frozen carbon monoxide and nitrogen, which effectively block the passage of light at visible wavelengths. Also present are molecular hydrogen, atomic helium, C18O (CO with oxygen as the 18O isotope), CS, NH3 (ammonia), H2CO (formaldehyde), c-C3H2 (cyclopropenylidene) and the molecular ion N2H+ (diazenylium), all of which are relatively transparent. These clouds are the spawning grounds of stars and planets, and understanding their development is essential to understanding star formation. The form of such dark clouds is very irregular: they have no clearly defined outer boundaries and sometimes take on convoluted serpentine shapes. The closest and largest dark nebulae are visible to the naked eye, since they are the least obscured by intervening stars between Earth and the nebula, and because they have the largest angular size, appearing as dark patches against the brighter background of the Milky Way, like the Coalsack Nebula and the Great Rift. These naked-eye objects are sometimes known as dark cloud constellations and take on a variety of names. In the inner molecular regions of dark nebulae, important events take place, such as the formation of stars and masers.
Complexes and constellations
Along with molecular clouds, dark nebulae make up molecular cloud complexes. In the night sky, dark nebulae form apparent dark cloud constellations.
Physical sciences
Basics_3
null
174850
https://en.wikipedia.org/wiki/Andalusite
Andalusite
Andalusite is an aluminium nesosilicate mineral with the chemical formula Al2SiO5. The mineral was called andalousite by Delamétherie, who thought it came from Andalusia, Spain. It soon became clear that this was a locality error, and that the specimens studied were actually from El Cardoso de la Sierra, in the Spanish province of Guadalajara, not Andalusia. Andalusite is trimorphic with kyanite and sillimanite, being the lower-pressure, mid-temperature polymorph. At higher temperatures and pressures, andalusite may convert to sillimanite. Thus, as with its other polymorphs, andalusite is an aluminosilicate index mineral, providing clues to the depth and pressures involved in producing the host rock.
Varieties
The variety chiastolite commonly contains dark inclusions of carbon or clay which form a cruciform pattern when seen in cross-section. This stone was known at least from the sixteenth century, being taken to many European countries as a souvenir by pilgrims returning from Santiago de Compostela. Viridine is a green variety of andalusite in which manganese(III) substitutes for aluminium; the same substitution is also responsible for the colour. Kanonaite is a greenish-black mineral related to andalusite, having the approximate composition Mn3+AlSiO5. A clear variety found in Brazil and Sri Lanka can be cut into a gemstone. Faceted andalusite stones give a play of red, green, and yellow colors that resembles a muted form of iridescence, although the colors are actually the result of unusually strong pleochroism.
Occurrence
Andalusite is a common metamorphic mineral which forms under low pressure and low to high temperatures. The minerals kyanite and sillimanite are polymorphs of andalusite, each occurring under a different temperature–pressure regime; they are therefore rarely found together in the same rock. Because of this, the three minerals are a useful tool in identifying the pressure–temperature paths of the host rock in which they are found. It is particularly associated with pelitic metamorphic rocks such as mica schist. The world's highest concentration of andalusite is found in the Glomel mine in Côtes-d'Armor (France), which accounts for 25% of the global production of this mineral. South Africa possesses the largest portion of the world's known andalusite deposits.
Uses
Andalusite is used as a refractory in furnaces, kilns and other industrial processes.
Physical sciences
Silicate minerals
Earth science
174901
https://en.wikipedia.org/wiki/Hartree
Hartree
The hartree (symbol: Eh), also known as the Hartree energy, is the unit of energy in the atomic units system, named after the British physicist Douglas Hartree. Its CODATA recommended value is
$$E_\mathrm{h} \approx 4.359\,744\,7\times10^{-18}\,\mathrm{J} \approx 27.211\,386\,\mathrm{eV}.$$
The hartree is approximately the negative electric potential energy of the electron in a hydrogen atom in its ground state and, by the virial theorem, approximately twice its ionization energy; the relationships are not exact because of the finite mass of the nucleus of the hydrogen atom and relativistic corrections. The hartree is usually used as a unit of energy in atomic physics and computational chemistry; for experimental measurements at the atomic scale, the electronvolt (eV) or the reciprocal centimetre (cm−1) are much more widely used.
Other relationships
$$E_\mathrm{h} = 2\,\mathrm{Ry} = 2R_\infty hc = \frac{\hbar^2}{m_\mathrm{e}a_0^2} = \frac{e^2}{4\pi\varepsilon_0 a_0} = m_\mathrm{e}c^2\alpha^2$$
≘ 27.211 eV ≘ 4.3597×10−18 J ≘ 2625.5 kJ/mol ≘ 627.5 kcal/mol ≘ 219 474.6 cm−1, where: ħ is the reduced Planck constant, me is the electron mass, e is the elementary charge, a0 is the Bohr radius, ε0 is the electric constant, c is the speed of light in vacuum, and α is the fine-structure constant. Effective hartree units are used in semiconductor physics, where $\varepsilon_0$ is replaced by $\varepsilon_0\varepsilon_\mathrm{r}$, with $\varepsilon_\mathrm{r}$ the static dielectric constant, and the electron mass is replaced by the effective band mass $m^*$. The effective hartree in semiconductors becomes small enough to be measured in millielectronvolts (meV).
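The chain of relations above can be spot-checked numerically; this short sketch (an illustration, not part of the article) computes the hartree as α²mec² and converts it to electronvolts:

# Cross-check: E_h = alpha^2 * m_e * c^2, then convert to eV.
m_e   = 9.1093837015e-31   # electron mass, kg (CODATA 2018)
c     = 2.99792458e8       # speed of light, m/s (exact)
alpha = 7.2973525693e-3    # fine-structure constant (CODATA 2018)
eV    = 1.602176634e-19    # joules per electronvolt (exact)

E_h = alpha**2 * m_e * c**2
print(f"E_h = {E_h:.6e} J = {E_h/eV:.4f} eV")  # ~4.3597e-18 J, ~27.211 eV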
Physical sciences
Energy
Basics and measurement
174914
https://en.wikipedia.org/wiki/Atomic%20units
Atomic units
The atomic units are a system of natural units of measurement that is especially convenient for calculations in atomic physics and related scientific fields, such as computational chemistry and atomic spectroscopy. They were originally suggested and named by the physicist Douglas Hartree. Atomic units are often abbreviated "a.u." or "au", not to be confused with similar abbreviations used for astronomical units, arbitrary units, and absorbance units in other contexts.
Motivation
In the context of atomic physics, using the atomic units system can be a convenient shortcut, eliminating symbols and numbers and reducing the order of magnitude of most numbers involved. For example, the Hamiltonian operator in the Schrödinger equation for the helium atom with standard quantities, such as when using SI units, is
$$\hat{H} = -\frac{\hbar^2}{2m_\mathrm{e}}\left(\nabla_1^2+\nabla_2^2\right) - \frac{e^2}{4\pi\varepsilon_0}\left(\frac{2}{r_1}+\frac{2}{r_2}-\frac{1}{r_{12}}\right),$$
but adopting the convention associated with atomic units that transforms quantities into dimensionless equivalents, it becomes
$$\hat{H} = -\tfrac{1}{2}\left(\nabla_1^2+\nabla_2^2\right) - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}.$$
In this convention, the constants $\hbar$, $m_\mathrm{e}$, $e$, and $4\pi\varepsilon_0$ all correspond to the value 1 (see below). The distances relevant to the physics expressed in SI units are naturally on the order of $10^{-10}\,\mathrm{m}$, while expressed in atomic units distances are on the order of 1 (one Bohr radius, the atomic unit of length). An additional benefit of expressing quantities using atomic units is that their values calculated and reported in atomic units do not change when values of fundamental constants are revised, since the fundamental constants are built into the conversion factors between atomic units and SI.
History
Hartree defined units based on three physical constants; the modern equivalents of the constants appearing in his definitions are the Rydberg constant $R_\infty$, the electron mass $m_\mathrm{e}$, the Bohr radius $a_0$, and the reduced Planck constant $\hbar$. Hartree's expressions that contain $h$ differ from the modern form due to a change in the definition of $h$, as explained below. In 1957, Bethe and Salpeter's book Quantum mechanics of one- and two-electron atoms built on Hartree's units, which they called atomic units, abbreviated "a.u.". They chose to use $\hbar$, their unit of action and angular momentum, in place of Hartree's length as one of the base units. They noted that the unit of length in this system is the radius of the first Bohr orbit, and their velocity is the electron velocity in Bohr's model of the first orbit. In 1959, Shull and Hall advocated atomic units based on Hartree's model but again chose to use $\hbar$ as the defining unit. They explicitly named the distance unit a "Bohr radius"; in addition, they wrote the unit of energy as $e^2/a_0$ and called it a Hartree. These terms came to be used widely in quantum chemistry. In 1973 McWeeny extended the system of Shull and Hall by adding permittivity in the form of $4\pi\varepsilon_0$ as a defining or base unit. Simultaneously he adopted the SI definition of $e$, so that his expression for energy in atomic units is $e^2/(4\pi\varepsilon_0 a_0)$, matching the expression in the 8th SI brochure.
Definition
A set of base units in the atomic system, as in one proposal, are the electron rest mass, the magnitude of the electronic charge, the reduced Planck constant, and the permittivity. In the atomic units system, each of these takes the value 1; the corresponding values in the International System of Units are given in the table.
Units
Three of the defining constants (reduced Planck constant, elementary charge, and electron rest mass) are atomic units themselves – of action, electric charge, and mass, respectively. Two named units are those of length (Bohr radius $a_0$) and energy (hartree $E_\mathrm{h}$).
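A short sketch (not from the article) shows how the two named units follow from the four base constants just listed, using a0 = 4πε0ħ²/(mee²) and Eh = ħ²/(mea0²):

# Derive the Bohr radius and hartree from the four atomic base constants.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
e    = 1.602176634e-19   # elementary charge, C (exact)
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

a0  = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
E_h = hbar**2 / (m_e * a0**2)
print(f"a0  = {a0:.6e} m")   # ~5.2918e-11 m
print(f"E_h = {E_h:.6e} J")  # ~4.3597e-18 J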
Conventions
Different conventions are adopted in the use of atomic units, which vary in presentation, formality and convenience.
Explicit units
Many texts (e.g. Jerrard & McNiell, Shull & Hall) define the atomic units as quantities, without a transformation of the equations in use. As such, they do not suggest treating either quantities as dimensionless or changing the form of any equations. This is consistent with expressing quantities in terms of dimensional quantities, where the atomic unit is included explicitly as a symbol (for example, a length written as a multiple of $a_0$ and an energy as a multiple of $E_\mathrm{h}$, or, more ambiguously, as so many "a.u."), and keeping equations unaltered with explicit constants. Provision for choosing more convenient, closely related quantities that are better suited to the problem as units than universal fixed units is also suggested, for example based on the reduced mass of an electron, albeit with careful definition thereof where used (for example, a unit $\mu$, where $\mu = m_\mathrm{e}m/(m_\mathrm{e}+m)$ for a specified mass $m$).
A convention that eliminates units
In atomic physics, it is common to simplify mathematical expressions by a transformation of all quantities: Hartree suggested that expression in terms of atomic units allows us "to eliminate various universal constants from the equations", which amounts to informally suggesting a transformation of quantities and equations such that all quantities are replaced by corresponding dimensionless quantities. He does not elaborate beyond examples. McWeeny suggests that "... their adoption permits all the fundamental equations to be written in a dimensionless form in which constants such as $e$, $m_\mathrm{e}$ and $\hbar$ are absent and need not be considered at all during mathematical derivations or the processes of numerical solution; the units in which any calculated quantity must appear are implicit in its physical dimensions and may be supplied at the end." He also states that "An alternative convention is to interpret the symbols as the numerical measures of the quantities they represent, referred to some specified system of units: in this case the equations contain only pure numbers or dimensionless variables; ... the appropriate units are supplied at the end of a calculation, by reference to the physical dimensions of the quantity calculated. [This] convention has much to recommend it and is tacitly accepted in atomic and molecular physics whenever atomic units are introduced, for example for convenience in computation." An informal approach is often taken, in which "equations are expressed in terms of atomic units simply by setting $\hbar = m_\mathrm{e} = e = 4\pi\varepsilon_0 = 1$".
Physical constants
Dimensionless physical constants retain their values in any system of units. Of note is the fine-structure constant $\alpha = e^2/(4\pi\varepsilon_0\hbar c)$, which appears in expressions as a consequence of the choice of units. For example, the numeric value of the speed of light, expressed in atomic units, is $c = 1/\alpha \approx 137$.
Bohr model in atomic units
Atomic units are chosen to reflect the properties of electrons in atoms, which is particularly clear in the classical Bohr model of the hydrogen atom for the bound electron in its ground state (a numerical translation back to SI follows the list):
Mass = 1 a.u. of mass
Charge = −1 a.u. of charge
Orbital radius = 1 a.u. of length
Orbital velocity = 1 a.u. of velocity
Orbital period = 2π a.u. of time
Orbital angular velocity = 1 radian per a.u. of time
Orbital momentum = 1 a.u. of momentum
Ionization energy = 1/2 a.u. of energy
Electric field (due to nucleus) = 1 a.u. of electric field
Lorentz force (due to nucleus) = 1 a.u. of force
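To connect the Bohr-model table above back to SI, the following sketch (illustrative, not from the article) evaluates the atomic units of velocity and time and the half-hartree ionization energy:

# Translate Bohr-model ground-state values from atomic units to SI.
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
E_h  = 4.3597447222e-18   # hartree energy, J
a0   = 5.29177210903e-11  # Bohr radius, m
eV   = 1.602176634e-19    # joules per electronvolt (exact)

v_au = a0 * E_h / hbar    # atomic unit of velocity (equals alpha*c)
t_au = hbar / E_h         # atomic unit of time

print(f"orbital velocity  = {v_au:.4e} m/s")          # ~2.1877e6 m/s
print(f"orbital period    = {2*math.pi*t_au:.4e} s")  # 2*pi a.u. of time
print(f"ionization energy = {0.5*E_h/eV:.2f} eV")     # 1/2 hartree ~ 13.61 eV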
Physical sciences
Measurement systems
Basics and measurement
174929
https://en.wikipedia.org/wiki/Acer%20saccharinum
Acer saccharinum
Acer saccharinum, commonly known as silver maple, creek maple, silverleaf maple, soft maple, large maple, water maple, swamp maple, or white maple, is a species of maple native to the eastern and central United States and southeastern Canada. It is one of the most common trees in the United States. Although the silver maple's Latin name is similar, it should not be confused with Acer saccharum, the sugar maple. Some of the common names are also applied to other maples, especially Acer rubrum. Description The silver maple tree is a relatively fast-growing deciduous tree, commonly reaching a height of , exceptionally . Its spread will generally be wide. A 10-year-old sapling will stand about tall. It is often found along waterways and in wetlands, leading to the colloquial name "water maple". It is a highly adaptable tree, although it has higher sunlight requirements than other maple trees. The leaves are simple and palmately veined, long and broad, with deep angular notches between the five lobes. The long, slender stalks of the leaves mean that even a light breeze can produce a striking effect as the downy silver undersides of the leaves are exposed. The autumn color is less pronounced than in many maples, generally ending up a pale yellow, although some specimens can produce a more brilliant yellow and even orange and red colorations. The tree has a tendency to color and drop its leaves slightly earlier in autumn than other maples. The flowers are in dense clusters, produced before the leaves in early spring, with the seeds maturing in early summer. The fruit is a schizocarp of two single-seeded, winged samaras. The wing of each samara is about long. The fruit of this species is the largest among the maples native to its range. Although the wings provide for some transport by air, the fruit are heavy and are also transported by water. Silver maple and its close cousin red maple are the only Acer species which produce their fruit crop in spring instead of fall. The seeds of both trees have no epigeal dormancy and will germinate immediately. Seed production begins at 11 years of age and large crops are produced most years. Like most maples, silver maple can be variably dioecious (separate male or female trees) or monoecious (male and female flowers on the same tree) but dioecious trees are far more common. They can also change sex from year to year. On mature trunks, the bark is gray and shaggy. On branches and young trunks, the bark is smooth and silvery gray. Cultivation and uses Wildlife uses the silver maple in various ways. In many parts of the eastern U.S., the large rounded buds are one of the primary food sources for squirrels during the spring, after many acorns and nuts have sprouted and the squirrels' food is scarce. The seeds are also a food source for chipmunks and birds. The bark can be eaten by beaver and deer. The trunks tend to produce cavities, which can shelter squirrels, raccoons, opossums, owls and woodpeckers, and are frequented by carpenter ants. Additionally, the leaves serve as a source of food for species of Lepidoptera, such as the rosy maple moth (Dryocampa rubicunda). The wood can be used as pulp for making paper. Lumber from the tree is used in furniture, cabinets, flooring, musical instruments, crates, and tool handles, because it is light and easily worked. Because of the silver maple's fast growth, it is being researched as a potential source of biofuels. 
Silver maple produces a sweet sap, but it is generally not used by commercial sugarmakers because its sugar content is lower than in other maple species. Silver maple is often planted as an ornamental tree because of its rapid growth and ease of propagation and transplanting. It is highly tolerant of urban situations and is frequently planted next to streets. However, its quick growth produces brittle wood which is commonly damaged in storms. The silver maple's root system is shallow and fibrous and easily invades septic fields and old drain pipes; it can also crack sidewalks and foundations. It is a vigorous resprouter, and if not pruned, will often grow with multiple trunks. Although it naturally is found near water, it can grow on drier ground if planted there. In ideal natural conditions, A. saccharinum may live up to 130 years, but in urban environments it often lives 80 years or fewer. Following World War II, silver maples were commonly used as a landscaping and street tree in suburban housing developments and cities due to their rapid growth, especially as a replacement for the blighted American elm. However, they fell out of favor for this purpose because of brittle wood, unattractive form when not pruned or trained, and a tendency to produce large numbers of volunteer seedlings. Today the tree has fallen so far out of favor that some towns and cities have banned its use as a street tree. Silver maple's natural range encompasses most of the eastern US, the Midwestern US, and southern Canada, namely southern Ontario and southwestern Quebec. It is generally absent from the humid US coastal plain south of Maryland, so it is confined to the Appalachians in those states. It does not occur along the Gulf Coast or in Florida outside a few scattered locations in the panhandle. It is commonly cultivated outside its native range, showing tolerance of a wide range of climates, and growing successfully as far north as central Norway; it is also grown in Anchorage, Alaska. It can thrive in a Mediterranean climate, as at Jerusalem and Los Angeles, if summer water is provided. It is also grown in temperate parts of the Southern Hemisphere: Argentina, Uruguay, Venezuela, and the southern states of Brazil (and in a few low-temperature locations within the states of São Paulo and Minas Gerais). The silver maple is closely related to the red maple (Acer rubrum) and can hybridise with it. The hybrid is known as the Freeman maple (Acer × freemanii). The Freeman maple is a popular ornamental tree in parks and large gardens, combining the fast growth of silver maple with the less brittle wood, less invasive roots, and the beautiful bright red fall foliage of the red maple. The cultivar Acer × freemanii 'Jeffersred' has gained the Royal Horticultural Society's Award of Garden Merit. The silver maple is the favored host of the maple bladder gall mite Vasates quadripedes.
Native American ethnobotany
Native Americans used the sap of wild trees to make sugar, as medicine, and in bread. They used the wood to make baskets and furniture. An infusion of bark removed from the south side of the tree is used by the Mohegan as a cough medicine. The Cherokee take an infusion of the bark to treat cramps, menstrual pains, dysentery, and hives. They boil the inner bark and use it with water as a wash for sore eyes. They take a hot infusion of the bark to treat measles. They use the tree to make baskets, for lumber, building material, and for carving.
Biology and health sciences
Sapindales
Plants
174932
https://en.wikipedia.org/wiki/Acer%20saccharum
Acer saccharum
Acer saccharum, the sugar maple, is a species of flowering plant in the soapberry and lychee family Sapindaceae. It is native to the hardwood forests of eastern Canada and the eastern United States. Sugar maple is best known for being the primary source of maple syrup and for its brightly colored fall foliage. It may also be called "rock maple," "sugar tree," "sweet maple," or, particularly in reference to the wood, "hard maple," "birds-eye maple," or "curly maple," the last two being specially figured lumber.
Description
Acer saccharum is a deciduous tree normally reaching heights of , and exceptionally up to . A 10-year-old tree is typically about tall. As with most trees, forest-grown sugar maples form a much taller trunk and narrower canopy than open-growth ones. The leaves are deciduous, up to long and wide, palmate, with five lobes and borne in opposite pairs. The basal lobes are relatively small, while the upper lobes are larger and deeply notched. In contrast with the angular notching of the silver maple, however, the notches tend to be rounded at their interior. The fall color is often spectacular, ranging from bright yellow on some trees through orange to fluorescent red-orange on others. Sugar maples also have a tendency to color unevenly in fall. In some trees, all colors above can be seen at the same time. They also share a tendency with red maples for certain parts of a mature tree to change color weeks ahead of or behind the remainder of the tree. The leaf buds are pointy and brown-colored. The recent year's growth twigs are green, and turn dark brown. The flowers are in panicles of five to ten together, yellow-green and without petals; flowering occurs in early spring after 30–55 growing degree days. The sugar maple will generally begin flowering when it is between 10 and 200 years old. The fruit is a pair of samaras (winged seeds). The seeds are globose, in diameter, the wing long. The seeds fall from the tree in autumn, where they must be exposed to 45 days of temperatures below to break their coating down. Germination of A. saccharum is slow, not taking place until the following spring when the soil has warmed and all frost danger is past. It is closely related to the black maple, which is sometimes included in this species, but sometimes separated as Acer nigrum. The western sugar maple (Acer grandidentatum) and southern sugar maple (Acer floridanum) are also treated as varieties or subspecies of the northern sugar maple by some botanists. The sugar maple can be confused with the Norway maple, which is not native to America but is commonly planted in cities and suburbs; the two are not closely related within the genus. The sugar maple is most easily identified by clear sap in the leaf petiole (the Norway maple has white sap), brown, sharp-tipped buds (the Norway maple has blunt, green or reddish-purple buds), and shaggy bark on older trees (the Norway maple bark has small grooves). Also, the leaf lobes of the sugar maple have a more triangular shape, in contrast to the squarish lobes of the Norway maple.
Ecology
The sugar maple is an extremely important species to the ecology of many forests in the northern United States and Canada. Pure stands are common, and it is a major component of the northern and Midwestern U.S. hardwood forests. Due to its need for cold winters, sugar maple is mostly found north of the 42nd parallel in USDA growing zones 3–5.
It is less common in the southern part of its range (USDA Zone 6) where summers are hot and humid; there sugar maple is confined to ravines and moist flatlands. In the east, south of Maryland, it is limited to the Appalachians. In the west, Tennessee represents the southern limit of its range and Missouri its southwestern limit. Collection of sap for sugar is also not possible in the southern part of sugar maple's range as winter temperatures do not become cold enough. The minimum seed-bearing age of sugar maple is about 30 years. The tree is long-lived, typically 200 years and occasionally as much as 300. Sugar maple is native to areas with cooler climates and requires a hard freeze each winter for proper dormancy. In northern parts of its range, January temperatures average about and July temperatures about ; in southern parts, January temperatures average about and July temperatures average almost . Seed germination also requires extremely low temperatures, the optimal being just slightly above freezing, and no other known tree species has this property. Germination of sugar maple seed in temperatures above is rare to nonexistent. Acer saccharum is among the most shade tolerant of large deciduous trees. Its shade tolerance is exceeded only by the striped maple, a smaller tree. Like other maples, its shade tolerance is manifested in its ability to germinate and persist under a closed canopy as an understory plant, and respond with rapid growth to the increased light formed by a gap in the canopy. Sugar maple can tolerate virtually any soil type short of pure sand, but does not tolerate xeric or swampy conditions. Sugar maples are deeper-rooted than most maples and engage in hydraulic lift, drawing water from lower soil layers and exuding that water into upper, drier soil layers. This not only benefits the tree itself, but also many other plants growing around it. The mushroom Pholiota squarrosoides is known to decay the logs of the tree. Human influences have contributed to the decline of the sugar maple in many regions. Its role as a species of mature forests has led it to be replaced by more opportunistic species in areas where forests are cut over. The sugar maple also exhibits a greater susceptibility to pollution than other species of maple. Acid rain and soil acidification are some of the primary contributing factors to maple decline. Also, the increased use of salt over the last several decades on streets and roads for deicing purposes has decimated the sugar maple's role as a street tree. In some parts of New England, particularly near urbanized areas, the sugar maple is being displaced by the Norway maple. The Norway maple is also highly shade tolerant, but is considerably more tolerant of urban conditions, resulting in the sugar maple's replacement in those areas. In addition, Norway maple produces much larger crops of seeds, allowing it to out-compete native species. Cultivation and uses Maple syrup and other food use The sugar maple is one of the most important Canadian trees, being, with the black maple, the major source of sap for making maple syrup. Other maple species can be used as a sap source for maple syrup, but some have lower sugar content and/or produce more cloudy syrup than these two. In maple syrup production from Acer saccharum, the sap is extracted from the trees using a tap placed into a hole drilled through the phloem, just inside the bark. The collected sap is then boiled. As the sap boils, the water evaporates and the syrup is left behind. 
Forty gallons of maple sap produces 1 gallon of syrup. In the southern part of their range, sugar maples produce little sap; syrup production is dependent on the tree growing in cooler climates. Additionally, the samaras (seeds) can be soaked, and—with their wings removed—boiled, seasoned, and roasted to make them edible. The young leaves and inner bark can be eaten either raw or cooked.
Timber
The sapwood can be white, and smaller logs may have a higher proportion of this desirable wood. Bowling alleys and bowling pins are both commonly manufactured from sugar maple. Trees with wavy wood grain, which can occur in curly, quilted, and "birdseye maple" forms, are especially valued. Maple is also the wood used for basketball courts, including the floors used by the NBA, and it is a popular wood for baseball bats, along with white ash. In recent years, because white ash has become threatened by the emerald ash borer, sugar maple wood has increasingly displaced ash for baseball bat production. It is also widely used in the manufacture of musical instruments, such as the members of the violin family (sides and back), guitars (neck), grand pianos (rim), and drum shells. It is also often used in the manufacture of sporting goods. Canadian maple, often referred to as "Canadian hardrock maple", is prized for pool cues, especially the shafts. Some production-line cues will use lower-quality maple wood with cosmetic issues, such as "sugar marks", which are most often light brown discolorations caused by sap in the wood. The best shaft wood has a very consistent grain, with no marks or discoloration. Sugar marks usually do not affect how the cue plays, but shafts with them are not considered as high quality as those without. The wood is also used in skateboards, gunstocks, and flooring for its strength. Canadian hardrock maple is also used in the manufacture of electric guitar necks due to its high torsional stability and the bright, crisp resonant tone it produces. If the grain is curly, with flame or quilt patterns, it is usually reserved for more expensive instruments. In high-end guitars this wood is sometimes torrefied to cook out the lignin resins, giving it greater stability under climatic and environmental changes and enhancing its tonal characteristics, as the instrument's resonance is more evenly distributed across the cellulose structure of the wood without the lignin.
Urban planting
Sugar maple was a favorite street and park tree during the 19th century because it was easy to propagate and transplant, is fairly fast-growing, and has beautiful fall color. As noted above, however, it proved too delicate to continue in that role after the rise of automobile-induced pollution and was replaced by Norway maple and other hardier species. It is intolerant of road salt. Sugar maples are commonly planted as street trees in cities within the Mountain West region of the United States, usually as a particular cultivar such as the 'Legacy' sugar maple. The shade and the shallow, fibrous roots may interfere with grass growing under the trees. Deep, well-drained loam is the best rooting medium, although sugar maples can grow well on sandy soil which has a good buildup of humus. Light (or loose) clay soils are also well known to support sugar maple growth. Poorly drained areas are unsuitable, and the species is especially short-lived on flood-prone clay flats. Its salt tolerance is low and it is very sensitive to boron.
The species is also subject to defoliation when there are dense populations of larvae of Lepidoptera species like the rosy maple moth (Dryocampa rubicunda).
Cultivars
'Apollo' – columnar
'Arrowhead' – pyramidal crown
'Astis' ('Steeple') – heat-tolerant, good in southeastern USA, oval crown
'Bonfire' – fast growing
'Caddo' – naturally occurring southern ecotype or subspecies, from southwestern Oklahoma, great drought and heat tolerance, good choice for the Great Plains region
'Columnare' ('Newton Sentry') – very narrow
'Fall Fiesta' – tough-leaved, colorful in season, above-average hardiness
'Goldspire' – columnar with yellow-orange fall color
'Green Mountain' (PNI 0285) – durable foliage resists heat and drought, oval crown, above-average hardiness
'Inferno' – possibly the hardiest cultivar, with more red fall color than 'Lord Selkirk' or 'Unity'
'Legacy' – tough, vigorous and popular
'Lord Selkirk' – very hardy, more upright than other northern cultivars
'Monumentale' – columnar
'September Flare' – very hardy, early orange-red fall color
'Sweet Shadow' – lacy foliage
'Temple's Upright' – almost as narrow as 'Columnare'
'Unity' – very hardy, from Manitoba, slow steady growth
Use by Native Americans
The Mohegan use the inner bark as a cough remedy, and the sap as a sweetening agent and to make maple syrup following the introduction of metal cookware by Europeans.
Big trees
The United States national champion for Acer saccharum is located in Charlemont, Massachusetts. In 2007, the year it was submitted, it had a circumference of at above the ground's surface, and thus a diameter at breast height of about . At that time the tree was tall with an average crown spread of . Using the scoring system of circumference in inches, plus height in feet, plus 25% of crown spread in feet, resulted in a total of 368 points at the National Register of Big Trees. A tree in Lyme, Connecticut, measured in 2012, had a circumference of , or an average diameter at breast height of about . This tree had been tall with a crown spread of , counting for a total of 364 points.
In popular culture
The sugar maple is the state tree of the US states of New York, Vermont, West Virginia, and Wisconsin. It is depicted on the state quarter of Vermont, issued in 2001.
Biology and health sciences
Sapindales
Plants
174945
https://en.wikipedia.org/wiki/Elementary%20charge
Elementary charge
The elementary charge, usually denoted by $e$, is a fundamental physical constant, defined as the electric charge carried by a single proton (+1 e) or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e. In the SI system of units, the value of the elementary charge is exactly defined as $e = 1.602\,176\,634\times10^{-19}$ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 revision of the SI, the seven SI base units are defined in terms of seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately 4.803×10−10 statcoulombs. Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of the elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as $\sqrt{\varepsilon_0\hbar c}$, with the result that
$$e = \sqrt{4\pi\alpha}\,\sqrt{\varepsilon_0\hbar c} \approx 0.30282212\,\sqrt{\varepsilon_0\hbar c},$$
where $\alpha$ is the fine-structure constant, $c$ is the speed of light, $\varepsilon_0$ is the electric constant, and $\hbar$ is the reduced Planck constant.
Quantization
Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, an object's charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not, say, 1/2 e, or −3.8 e, etc. (There may be exceptions to this statement, depending on how "object" is defined; see below.) This is the reason for the terminology "elementary charge": it is meant to imply that it is an indivisible unit of charge.
Fractional elementary charge
There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles. Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of 1/3 e. However, quarks cannot be isolated; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or 1/3 e can be justifiably considered to be "the quantum of charge", depending on the context. This charge commensurability, "charge quantization", has partially motivated grand unified theories. Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally charged quasiparticles.
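The numerical factor in the natural-units expression above is just √(4πα); a one-liner (illustrative, not from the article) reproduces it:

# Reproduce the quoted factor relating e to the unit sqrt(eps0*hbar*c).
import math

alpha = 7.2973525693e-3  # fine-structure constant (CODATA 2018)
print(math.sqrt(4 * math.pi * alpha))  # ~0.3028221, as quoted above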
This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles.
Quantum of charge
All known elementary particles, including quarks, have charges that are integer multiples of 1/3 e. Therefore, the "quantum of charge" is 1/3 e. In this case, one says that the "elementary charge" is three times as large as the "quantum of charge". On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated: they exist only in collective states like protons that have total charges that are integer multiples of e.) Therefore, the "quantum of charge" is e, with the proviso that quarks are not to be included. In this case, "elementary charge" would be synonymous with the "quantum of charge". In fact, both terminologies are used. For this reason, phrases like "the quantum of charge" or "the indivisible unit of charge" can be ambiguous unless further specification is given. On the other hand, the term "elementary charge" is unambiguous: it refers to a quantity of charge equal to that of a proton.
Lack of fractional charges
Paul Dirac argued in 1931 that if magnetic monopoles exist, then electric charge must be quantized; however, it is unknown whether magnetic monopoles actually exist. It is currently unknown why isolatable particles are restricted to integer charges; much of the string theory landscape appears to admit fractional charges.
Experimental measurements of the elementary charge
The elementary charge has been exactly defined since 20 May 2019 by the International System of Units. Prior to this change, the elementary charge was a measured quantity whose magnitude was determined experimentally. This section summarizes these historical experimental measurements.
In terms of the Avogadro constant and Faraday constant
If the Avogadro constant NA and the Faraday constant F are independently known, the value of the elementary charge can be deduced using the formula
$$e = \frac{F}{N_\mathrm{A}}.$$
(In other words, the charge of one mole of electrons, divided by the number of electrons in a mole, equals the charge of a single electron.) This method is not how the most accurate values are measured today. Nevertheless, it is a legitimate and still quite accurate method, and experimental methodologies are described below. The value of the Avogadro constant NA was first approximated by Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. Today the value of NA can be measured at very high accuracy by taking an extremely pure crystal (often silicon), measuring how far apart the atoms are spaced using X-ray diffraction or another method, and accurately measuring the density of the crystal. From this information, one can deduce the mass (m) of a single atom; and since the molar mass (M) is known, the number of atoms in a mole can be calculated: $N_\mathrm{A} = M/m$. The value of F can be measured directly using Faraday's laws of electrolysis. Faraday's laws of electrolysis are quantitative relationships based on the electrochemical researches published by Michael Faraday in 1834. In an electrolysis experiment, there is a one-to-one correspondence between the electrons passing through the anode-to-cathode wire and the ions that plate onto or off of the anode or cathode.
Measuring the mass change of the anode or cathode, and the total charge passing through the wire (which can be measured as the time-integral of electric current), and also taking into account the molar mass of the ions, one can deduce F. The limit to the precision of the method is the measurement of F: the best experimental value has a relative uncertainty of 1.6 ppm, about thirty times higher than other modern methods of measuring or calculating the elementary charge.
Oil-drop experiment
A famous method for measuring e is Millikan's oil-drop experiment. A small drop of oil in an electric field would move at a rate that balanced the forces of gravity, viscosity (of traveling through the air), and electric force. The forces due to gravity and viscosity could be calculated based on the size and velocity of the oil drop, so the electric force could be deduced. Since the electric force, in turn, is the product of the electric charge and the known electric field, the electric charge of the oil drop could be accurately computed. By measuring the charges of many different oil drops, it can be seen that the charges are all integer multiples of a single small charge, namely e. The necessity of measuring the size of the oil droplets can be eliminated by using tiny plastic spheres of a uniform size. The force due to viscosity can be eliminated by adjusting the strength of the electric field so that the sphere hovers motionless.
Shot noise
Any electric current will be associated with noise from a variety of sources, one of which is shot noise. Shot noise exists because a current is not a smooth continual flow; instead, a current is made up of discrete electrons that pass by one at a time. By carefully analyzing the noise of a current, the charge of an electron can be calculated. This method, first proposed by Walter H. Schottky, can determine a value of e whose accuracy is limited to a few percent. However, it was used in the first direct observation of Laughlin quasiparticles, implicated in the fractional quantum Hall effect.
From the Josephson and von Klitzing constants
Another accurate method for measuring the elementary charge is by inferring it from measurements of two effects in quantum mechanics: the Josephson effect, voltage oscillations that arise in certain superconducting structures, and the quantum Hall effect, a quantum effect of electrons at low temperatures, strong magnetic fields, and confinement into two dimensions. The Josephson constant is
$$K_\mathrm{J} = \frac{2e}{h},$$
where h is the Planck constant. It can be measured directly using the Josephson effect. The von Klitzing constant is
$$R_\mathrm{K} = \frac{h}{e^2}.$$
It can be measured directly using the quantum Hall effect. From these two constants, the elementary charge can be deduced:
$$e = \frac{2}{K_\mathrm{J}R_\mathrm{K}}.$$
CODATA method
The relation used by CODATA to determine the elementary charge was:
$$e = \sqrt{\frac{2h\alpha}{\mu_0 c}},$$
where h is the Planck constant, α is the fine-structure constant, μ0 is the magnetic constant, ε0 is the electric constant, and c is the speed of light. Presently this equation reflects a relation between ε0 and α, while all others are fixed values. Thus the relative standard uncertainties of both will be the same.
Tests of the universality of elementary charge
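The deduction from the two quantum-electrical constants is easy to check numerically; in this sketch (illustrative, not from the article) the constants are written as truncated versions of their exact post-2019 values:

# Recover e from the Josephson and von Klitzing constants:
# K_J = 2e/h and R_K = h/e^2 combine to give e = 2/(K_J * R_K).
K_J = 483597.8484e9  # Josephson constant, Hz/V (truncated exact value)
R_K = 25812.80745    # von Klitzing constant, ohm (truncated exact value)

e = 2 / (K_J * R_K)
print(f"e = {e:.6e} C")  # ~1.602177e-19 C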
Physical sciences
Physical constants
Physics
174955
https://en.wikipedia.org/wiki/Bohr%20magneton
Bohr magneton
In atomic physics, the Bohr magneton (symbol $\mu_\mathrm{B}$) is a physical constant and the natural unit for expressing the magnetic moment of an electron caused by its orbital or spin angular momentum. In SI units, the Bohr magneton is defined as
$$\mu_\mathrm{B} = \frac{e\hbar}{2m_\mathrm{e}}$$
and in Gaussian CGS units as
$$\mu_\mathrm{B} = \frac{e\hbar}{2m_\mathrm{e}c},$$
where $e$ is the elementary charge, $\hbar$ is the reduced Planck constant, $m_\mathrm{e}$ is the electron mass, and $c$ is the speed of light.
History
The idea of elementary magnets is due to Walther Ritz (1907) and Pierre Weiss. Already before the Rutherford model of atomic structure, several theorists commented that the magneton should involve the Planck constant h. By postulating that the ratio of electron kinetic energy to orbital frequency should be equal to h, Richard Gans computed a value that was twice as large as the Bohr magneton in September 1911. At the First Solvay Conference in November that year, Paul Langevin derived a magneton value of his own, assuming that the attractive force was inversely proportional to a power of the distance. The Romanian physicist Ștefan Procopiu had obtained the expression for the magnetic moment of the electron in 1913. The value is sometimes referred to as the "Bohr–Procopiu magneton" in Romanian scientific literature. The Weiss magneton was experimentally derived in 1911 as a unit of magnetic moment equal to about 1.85×10−24 joules per tesla, which is about 20% of the Bohr magneton. In the summer of 1913, the values for the natural units of atomic angular momentum and magnetic moment were obtained by the Danish physicist Niels Bohr as a consequence of his atom model. In 1920, Wolfgang Pauli gave the Bohr magneton its name in an article where he contrasted it with the magneton of the experimentalists, which he called the Weiss magneton.
Theory
A magnetic moment of an electron in an atom is composed of two components. First, the orbital motion of an electron around a nucleus generates a magnetic moment by Ampère's circuital law. Second, the inherent rotation, or spin, of the electron has a spin magnetic moment. In the Bohr model of the atom, for an electron that is in the orbit of lowest energy, its orbital angular momentum has magnitude equal to the reduced Planck constant, denoted ħ. The Bohr magneton is the magnitude of the magnetic dipole moment of an electron orbiting an atom with this angular momentum. The spin angular momentum of an electron is ħ/2, but the intrinsic electron magnetic moment caused by its spin is also approximately one Bohr magneton, which results in the electron spin g-factor, a factor relating spin angular momentum to the corresponding magnetic moment of a particle, having a value of approximately 2.
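A quick numerical check of the SI definition above (illustrative, not part of the article):

# Evaluate mu_B = e*hbar/(2*m_e) in SI units.
e    = 1.602176634e-19   # elementary charge, C (exact)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg

mu_B = e * hbar / (2 * m_e)
print(f"mu_B = {mu_B:.6e} J/T")  # ~9.274010e-24 J/T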
Physical sciences
Physical constants
Physics
175217
https://en.wikipedia.org/wiki/Rhizome
Rhizome
In botany and dendrology, a rhizome is a modified subterranean plant stem that sends out roots and shoots from its nodes. Rhizomes are also called creeping rootstalks or just rootstalks. Rhizomes develop from axillary buds and grow horizontally. The rhizome also retains the ability to allow new shoots to grow upwards. A rhizome is the main stem of the plant that typically runs underground, horizontal to the soil surface. Rhizomes have nodes and internodes and axillary buds. Roots do not have nodes and internodes, and have a root cap terminating their ends. In general, rhizomes have short internodes, send out roots from the bottom of the nodes, and generate new upward-growing shoots from the top of the nodes. A stolon is similar to a rhizome, but a stolon sprouts from an existing stem, has long internodes, and generates new shoots at its ends; stolons are often also called runners, as in the strawberry plant. A stem tuber is a thickened part of a rhizome or stolon that has been enlarged for use as a storage organ. In general, a tuber is high in starch, e.g. the potato, which is a modified stolon. The term "tuber" is often used imprecisely and is sometimes applied to plants with rhizomes. The plant uses the rhizome to store starches, proteins, and other nutrients. These nutrients become useful for the plant when new shoots must be formed or when the plant dies back for the winter. If a rhizome is separated, each piece may be able to give rise to a new plant. This is a process known as vegetative reproduction and is used by farmers and gardeners to propagate certain plants. This also allows for lateral spread of grasses like bamboo and bunch grasses. Examples of plants that are propagated this way include hops, asparagus, ginger, irises, lily of the valley, cannas, and sympodial orchids. Stored rhizomes are subject to bacterial and fungal infections, making them unsuitable for replanting and greatly diminishing stocks. However, rhizomes can also be produced artificially from tissue cultures. The ability to easily grow rhizomes from tissue cultures leads to better stocks for replanting and greater yields. The plant hormones ethylene and jasmonic acid have been found to help induce and regulate the growth of rhizomes, specifically in rhubarb. Ethylene that was applied externally was found to affect internal ethylene levels, allowing easy manipulation of ethylene concentrations. Knowledge of how to use these hormones to induce rhizome growth could help farmers and biologists produce plants grown from rhizomes, and more easily cultivate and grow better plants. Some plants have rhizomes that grow above ground or that lie at the soil surface, including some Iris species as well as ferns, whose spreading stems are rhizomes. Plants with underground rhizomes include gingers, bamboo, snake plant, the Venus flytrap, Chinese lantern, western poison-oak, hops, and Alstroemeria, and some grasses, such as Johnson grass, Bermuda grass, and purple nut sedge. Rhizomes generally form a single layer, but in giant horsetails, can be multi-tiered. Many rhizomes have culinary value, and some, such as zhe'ergen, are commonly consumed raw. Some rhizomes that are used directly in cooking include ginger, turmeric, galangal, fingerroot, and lotus.
Biology and health sciences
Plant anatomy and morphology: General
Biology
175357
https://en.wikipedia.org/wiki/Postpartum%20depression
Postpartum depression
Postpartum depression (PPD), also called perinatal depression, is a mood disorder which may be experienced by pregnant or postpartum individuals. Symptoms include extreme sadness, low energy, anxiety, crying episodes, irritability, and changes in sleeping or eating patterns. PPD can also negatively affect the newborn child. The exact cause of PPD is unclear; however, it is believed to result from a combination of physical, emotional, genetic, and social factors, such as hormone imbalances and sleep deprivation. Risk factors include prior episodes of postpartum depression, bipolar disorder, a family history of depression, psychological stress, complications of childbirth, lack of support, and drug use disorder. Diagnosis is based on a person's symptoms. While most women experience a brief period of worry or unhappiness after delivery, postpartum depression should be suspected when symptoms are severe and last over two weeks.

Among those at risk, providing psychosocial support may be protective in preventing PPD. This may include community support such as food, household chores, mother care, and companionship. Treatment for PPD may include counseling or medications. Types of counseling that are effective include interpersonal psychotherapy (IPT), cognitive behavioral therapy (CBT), and psychodynamic therapy. Tentative evidence supports the use of selective serotonin reuptake inhibitors (SSRIs).

Depression occurs in roughly 10 to 20% of postpartum women. Postpartum depression commonly affects mothers who have experienced stillbirth, mothers who live in urban areas, and adolescent mothers. Moreover, this mood disorder is estimated to affect 1% to 26% of new fathers. A different postpartum mood disorder is postpartum psychosis, which is more severe and occurs in about 1 to 2 per 1,000 women following childbirth. Postpartum psychosis is one of the leading causes of the murder of children less than one year of age, which occurs in about 8 per 100,000 births in the United States.

Signs and symptoms
Symptoms of PPD can occur at any time in the first year postpartum. Typically, a diagnosis of postpartum depression is considered after signs and symptoms persist for at least two weeks.

Emotional
Persistent sadness, anxiousness, or "empty" mood
Severe mood swings
Frustration, irritability, restlessness, anger
Feelings of hopelessness or helplessness
Guilt, shame, worthlessness
Low self-esteem
Numbness, emptiness
Exhaustion
Inability to be comforted
Trouble bonding with the baby
Feeling inadequate in taking care of the baby
Thoughts of self-harm or suicide

Behavioral
Lack of interest or pleasure in usual activities
Low libido
Changes in appetite
Fatigue, decreased energy and motivation
Poor self-care
Social withdrawal
Insomnia or excessive sleep
Worry about harming self, baby, or partner

Neurobiology
fMRI studies indicate differences in brain activity between mothers with postpartum depression and those without. Mothers diagnosed with PPD tend to have less activity in the left frontal lobe and increased activity in the right frontal lobe when compared with healthy controls. They also exhibit decreased connectivity between vital brain structures, including the anterior cingulate cortex, dorsolateral prefrontal cortex, amygdala, and hippocampus. Brain activation differences between depressed and nondepressed mothers are more pronounced when stimulated by non-infant emotional cues.
Depressed mothers show greater neural activity in the right amygdala toward non-infant emotional cues, as well as reduced connectivity between the amygdala and the right insular cortex. Recent findings have also identified blunted activity in the anterior cingulate cortex, striatum, orbitofrontal cortex, and insula in mothers with PPD when viewing images of their infants. More robust studies on neural activation in PPD have been conducted with rodents than with humans; these studies have allowed for greater isolation of specific brain regions, neurotransmitters, hormones, and steroids.

Onset and duration
Postpartum depression usually begins between two weeks and a month after delivery. A study done at an inner-city mental health clinic has shown that 50% of postpartum depressive episodes there began before delivery. In the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), PPD is not recognized as a distinct condition but rather as a specific type of major depressive episode. In the DSM-5, the specifier "with peripartum onset" can be applied to a major depressive episode if the onset occurred either during pregnancy or within the four weeks following delivery. The prevalence of postpartum depression differs across the months after childbirth: studies of postpartum depression among women in the Middle East show a prevalence of 31% in the first three months postpartum and 19% from the fourth to twelfth months postpartum. PPD may last several months or even a year.

Consequences on maternal and child health
Postpartum depression can interfere with normal maternal-infant bonding and adversely affect acute and long-term child development. Infants of mothers with PPD have higher incidences of excess crying, temperamental issues, and sleeping difficulties. Sleeping problems in infants may exacerbate, or be exacerbated by, concurrent PPD in mothers. Maternal outcomes of PPD include withdrawal, disengagement, and hostility; additional patterns observed in mothers with PPD include lower rates of initiation and maintenance of breastfeeding. Children and infants of PPD-affected mothers experience negative long-term impacts on their cognitive functioning, inhibitory control, and emotional regulation. In cases of untreated PPD, violent behaviors and psychiatric and medical conditions in adolescence have been observed. Suicide rates of women with PPD are lower than those of women outside of the perinatal period. Fetal or infant death in the first year postpartum has been associated with a higher risk of suicide attempts and higher rates of inpatient psychiatric admission.

Postpartum depression in fathers
Paternal postpartum depression is a poorly understood concept with a limited evidence base; nevertheless, postpartum depression affects 8 to 10% of fathers. There are no established criteria for diagnosing postpartum depression in men, and the cause may be distinct in males. Proposed causes of paternal postpartum depression include hormonal changes during pregnancy, which can be indicative of father-child relationships. For instance, male depressive symptoms have been associated with low testosterone levels in men, and low prolactin, estrogen, and vasopressin levels have been associated with struggles with father-infant attachment, which can lead to depression in first-time fathers. Symptoms of postpartum depression in men include extreme sadness, fatigue, anxiety, irritability, and suicidal thoughts.
Postpartum depression in men is most likely to occur 3–6 months after delivery and is correlated with maternal depression: if the mother is experiencing postpartum depression, the father is at a higher risk of developing the illness as well. Postpartum depression in men leads to an increased risk of suicide, while also limiting healthy infant-father attachment. Men who experience PPD can exhibit poor parenting behaviors, distress, and reduced interaction with their infants. Reduced paternal interaction can later lead to cognitive and behavioral problems in children. Children as young as 3.5 years old may experience problems with internalizing and externalizing behaviors, indicating that paternal postpartum depression can have long-term consequences. Furthermore, if children as young as two are not frequently read to, this negative parent-child interaction can harm their expressive vocabulary. A study focusing on low-income fathers found that increased involvement in their child's first year was linked to lower rates of postpartum depression.

Adoptive parents
Postpartum depression may also be experienced by non-biological parents. While not much research has been done on post-adoption depression, the difficulties associated with postpartum parenting are similar between biological and adoptive parents. Women who adopt children undergo significant stress and life changes during the postpartum period, similar to biological mothers, which may raise their chance of developing depressive symptoms and anxious tendencies. Postpartum depression presents in adoptive mothers via sleep deprivation, as it does in birth mothers, but adoptive parents may have added risk factors such as a history of infertility.

Issues for LGBTQ people
Preliminary research has shown that childbearing individuals who are part of the LGBTQ community may be more susceptible to prenatal depression and anxiety than cisgender and heterosexual people. According to two other studies, LGBTQ people were discouraged from accessing postpartum mental health services due to societal stigma, adding a social barrier that heteronormative mothers do not face. Lesbian participants expressed apprehension about receiving a mental health diagnosis because of worries about social stigma and employment opportunities. Concerns were also raised about possible child removal following a parent's diagnosis of mental illness. From the studies conducted thus far, although limited, it is evident that a much larger population experiences depression associated with childbirth than just biological mothers.

Causes
The cause of PPD is unknown. Hormonal and physical changes, personal and family history of depression, and the stress of caring for a new baby may all contribute to the development of postpartum depression. Evidence suggests that hormonal changes may play a role. Understanding the neuroendocrinology of PPD has proven particularly challenging given the erratic changes to the brain and biological systems during pregnancy and postpartum. A review of exploratory studies of PPD has observed that women with PPD have more dramatic changes in HPA axis activity; however, the directionality of specific hormone increases or decreases remains mixed. Hormones that have been studied include estrogen, progesterone, thyroid hormone, testosterone, corticotropin-releasing hormone, endorphins, and cortisol.
Estrogen and progesterone levels drop back to pre-pregnancy levels within 24 hours of giving birth, and that sudden change may contribute to PPD. Aberrant steroid hormone-dependent regulation of neuronal calcium influx, via extracellular matrix proteins and membrane receptors involved in responding to the cell's microenvironment, might be important in conferring biological risk. The use of synthetic oxytocin, a birth-inducing drug, has been linked to increased rates of postpartum depression and anxiety.

Estradiol, which helps the uterus thicken and grow, is thought to contribute to the development of PPD through its relationship with serotonin. Estradiol levels increase during pregnancy, then drastically decrease following childbirth; when estradiol levels drop postpartum, serotonin levels decline as well. Serotonin is a neurotransmitter that helps regulate mood, and low serotonin levels are associated with feelings of depression and anxiety. Thus, when estradiol levels are low, serotonin can be low, suggesting that estradiol plays a role in the development of PPD.

Profound lifestyle changes brought about by caring for the infant are also frequently hypothesized to cause PPD, although little evidence supports this hypothesis. Mothers who have had several previous children without experiencing PPD can nonetheless experience it with their latest child. Despite the biological and psychosocial changes that may accompany pregnancy and the postpartum period, most women are not diagnosed with PPD. Many mothers are unable to get the rest they need to fully recover from giving birth, and sleep deprivation can lead to physical discomfort and exhaustion, which can contribute to the symptoms of postpartum depression.

Risk factors
While the causes of PPD are not understood, several factors have been suggested to increase the risk. These risks can be broken down into two categories, biological and psychosocial:

Biological
Administration of the labor-inducing medication synthetic oxytocin
Chronic illnesses caused by neuroendocrine irregularities
Genetic history of PPD
Hormone irregularities
Inflammatory illnesses (irritable bowel syndrome, fibromyalgia)
Cigarette smoking
Gut microbiome

Certain biological risk factors include the administration of oxytocin to induce labor; chronic illnesses such as diabetes or Addison's disease; hypothalamic-pituitary-adrenal dysregulation (which controls hormonal responses); inflammatory processes like asthma or celiac disease; and genetic vulnerabilities such as a family history of depression or PPD. Chronic illnesses caused by neuroendocrine irregularities, including irritable bowel syndrome and fibromyalgia, typically put individuals at risk for further health complications. However, these diseases have not been found to increase the risk of postpartum depression themselves; they are known to correlate with PPD, and this correlation does not mean they are causal. Cigarette smoking is known to have additive effects. Some studies have found a link between PPD and low levels of DHA (an omega-3 fatty acid) in the mother. A correlation between postpartum thyroiditis and postpartum depression has been proposed but remains controversial. There may also be a link between postpartum depression and anti-thyroid antibodies.
Psychosocial
Prenatal depression or anxiety
A personal or family history of depression
Moderate to severe premenstrual symptoms
Stressful life events experienced during pregnancy
Postpartum blues
Birth-related psychological trauma
Birth-related physical trauma
History of sexual abuse
Childhood trauma
Previous stillbirth or miscarriage
Formula-feeding rather than breast-feeding
Low self-esteem
Childcare or life stress
Low social support
Poor marital relationship or single marital status
Low socioeconomic status
A lack of strong emotional support from spouse, partner, family, or friends
Infant temperament problems/colic
Unplanned/unwanted pregnancy
Breastfeeding difficulties
Maternal age, family food insecurity, and violence against women

The psychosocial risk factors for postpartum depression include severe life events, some forms of chronic strain, relationship quality, and support from partner and mother. More research is needed on the link between psychosocial risk factors and postpartum depression. Some psychosocial risk factors can be linked to the social determinants of health. Women with fewer resources, such as financial resources, report higher levels of postpartum depression and stress than women with more resources, and rates of PPD have been shown to decrease as income increases. Women with fewer resources may be more likely to have an unintended or unwanted pregnancy, increasing the risk of PPD. They may also include single mothers of low income, who may have more limited access to resources while transitioning into motherhood; these women already have fewer spending options, and having a child may stretch those resources even thinner. Low-income women are frequently trapped in a cycle of poverty, unable to advance, which affects their ability to access and receive quality healthcare to diagnose and treat postpartum depression.

Studies in the US have also shown a correlation between a mother's race and postpartum depression. After controlling for social factors such as age, income, education, marital status, and the baby's health, African American mothers have been shown to have the highest risk of PPD at 25%, while Asian mothers had the lowest at 11.5%; the PPD rates for First Nations, Caucasian, and Hispanic women fell in between. Migration away from a cultural community of support can be a factor in PPD: traditional cultures around the world prioritize organized support during postpartum care to ensure the mother's mental and physical health, well-being, and recovery.

One of the strongest predictors of paternal PPD is having a partner who has PPD; fathers develop PPD 50% of the time when their female partner has PPD.

Sexual orientation has also been studied as a risk factor for PPD. In a 2007 study conducted by Ross and colleagues, lesbian and bisexual mothers were tested for PPD and compared with a heterosexual sample group. Lesbian and bisexual biological mothers had significantly higher Edinburgh Postnatal Depression Scale scores than the heterosexual women in the sample. Postpartum depression is more common among lesbian women than heterosexual women, which can be attributed to lesbian women's higher depression prevalence: lesbian women have a higher risk of depression because they are more likely to have been treated for depression and to have attempted or contemplated suicide than heterosexual women.
These higher rates of PPD in lesbian and bisexual mothers may reflect less social support, particularly from their families of origin, and additional stress due to homophobic discrimination in society.

Different risk variables linked to postpartum depression among Arabic women emphasize regional influences. Identified risk factors include the gender of the infant and polygamy. According to three studies conducted in Egypt and one in Jordan, mothers of female babies had a two-to-four-fold increased risk of PPD compared to mothers of male babies. Four studies found that conflicts with the mother-in-law are associated with PPD, with risk ratios between 1.8 and 2.7. Studies have also shown higher rates of postpartum depression among mothers living in areas of conflict, crisis, and war in the Middle East. Studies in Qatar have found a correlation between lower education levels and higher PPD prevalence. According to research done in Egypt and Lebanon, rural residential living is linked to an increased risk: rural Lebanese women who had Caesarean births had greater PPD rates, while Lebanese women in urban areas showed the opposite pattern. Research conducted in the Middle East has also demonstrated a link between PPD risk and mothers who were not informed of, and not given due consideration in, decisions made during childbirth. There is a call to integrate consideration of both biological and psychosocial risk factors for PPD when treating and researching the illness.

Violence
A meta-analysis reviewing research on the association of violence and postpartum depression showed that violence against women increases the incidence of postpartum depression. About one-third of women throughout the world will experience physical or sexual violence at some point in their lives. Violence against women occurs in conflict, post-conflict, and non-conflict areas; the research reviewed only examined violence experienced by women from male perpetrators. Studies from the Middle East suggest that individuals who have experienced family violence are 2.5 times more likely to develop PPD. In this research, violence against women was defined as "any act of gender-based violence that results in, or is likely to result in, physical, sexual, or psychological harm or suffering to women". Psychological and cultural factors associated with increased incidence of postpartum depression include family history of depression, stressful life events during early puberty or pregnancy, anxiety or depression during pregnancy, and low social support. Violence against women is a chronic stressor, so depression may occur when someone is no longer able to respond to the violence.

Diagnosis

Criteria
Postpartum depression in the DSM-5 is known as "depressive disorder with peripartum onset". Peripartum onset is defined as starting anytime during pregnancy or within the four weeks following delivery. There is no longer a distinction made between depressive episodes that occur during pregnancy and those that occur after delivery. Nevertheless, the majority of experts continue to diagnose postpartum depression as depression with onset anytime within the first year after delivery. The criteria required for the diagnosis of postpartum depression are the same as those required to make a diagnosis of non-childbirth-related major depression or minor depression.
The criteria include at least five of the following nine symptoms within a two-week period:

Feelings of sadness, emptiness, or hopelessness, nearly every day, for most of the day, or the observation of a depressed mood made by others
Loss of interest or pleasure in activities
Weight loss or decreased appetite
Changes in sleep patterns
Feelings of restlessness
Loss of energy
Feelings of worthlessness or guilt
Loss of concentration or increased indecisiveness
Recurrent thoughts of death, with or without plans of suicide

Differential diagnosis

Postpartum blues
Postpartum blues, commonly known as the "baby blues," is a transient postpartum mood disorder characterized by milder depressive symptoms than postpartum depression. It can occur in up to 80% of all mothers following delivery, and symptoms typically resolve within two weeks. Symptoms lasting longer than two weeks are a sign of a more serious type of depression, and women who experience the "baby blues" may have a higher risk of experiencing a more serious episode of depression later on.

Psychosis
Postpartum psychosis is not a formal diagnosis, but the term is widely used to describe a psychiatric emergency that appears to occur in about 1 in 1,000 pregnancies, in which symptoms of high mood and racing thoughts (mania), depression, severe confusion, loss of inhibition, paranoia, hallucinations, and delusions begin suddenly in the first two weeks after delivery; the symptoms vary and can change quickly. It is different from postpartum depression and the maternity blues, and it may be a form of bipolar disorder. It is important not to confuse psychosis with other symptoms that may occur after delivery, such as delirium; delirium typically includes a loss of awareness or an inability to pay attention. About half of women who experience postpartum psychosis have no risk factors, but a prior history of mental illness, especially bipolar disorder, a history of prior episodes of postpartum psychosis, or a family history put some at higher risk. Postpartum psychosis often requires hospitalization, where treatment includes antipsychotic medications, mood stabilizers, and, in cases of strong risk of suicide, electroconvulsive therapy. The most severe symptoms last from 2 to 12 weeks, and recovery takes 6 months to a year. Women who have been hospitalized for a psychiatric condition immediately after delivery are at a much higher risk of suicide during the first year after delivery.

Childbirth-related/postpartum posttraumatic stress disorder
Parents may suffer from post-traumatic stress disorder (PTSD), or post-traumatic stress symptoms, following childbirth. While there has been debate in the medical community as to whether childbirth should be considered a traumatic event, the current consensus is that it can be. The DSM-IV and DSM-5 (standard classifications of mental disorders used by medical professionals) do not explicitly recognize childbirth-related PTSD, but both allow childbirth to be considered a potential cause of PTSD. Childbirth-related PTSD is closely related to postpartum depression: research indicates that mothers who have childbirth-related PTSD also commonly have postpartum depression, and the two conditions share some common symptoms.
Although both diagnoses overlap in their signs and symptoms, some symptoms specific to postpartum PTSD include being easily startled, recurring nightmares and flashbacks, avoiding the baby or anything that serves as a reminder of the birth, aggression, irritability, and panic attacks. Real or perceived trauma before, during, or after childbirth is a crucial element in diagnosing childbirth-related PTSD. Currently, there are no widely recognized assessments that measure postpartum post-traumatic stress disorder in medical settings; existing PTSD criteria (such as those of the DSM-IV) have been used instead. Some surveys exist to measure childbirth-related PTSD specifically; however, these are not widely used outside of research settings.

Approximately 3–6% of mothers in the postpartum period have childbirth-related PTSD. The figure rises to approximately 15–18% in high-risk samples (women who experience severe birth complications, have a history of sexual or physical violence, or have other risk factors). Research has identified several factors that increase the chance of developing childbirth-related PTSD: a negative subjective experience of childbirth; maternal mental health issues (prenatal depression, perinatal anxiety, acute postpartum depression, and a history of psychological problems); a history of trauma; complications with delivery and the baby (for example, emergency cesarean section or NICU admission); and a low level of social support.

Childbirth-related PTSD has several negative health effects. Research suggests it may negatively affect the emotional attachment between mother and child, although maternal depression or other factors may also explain this effect. Childbirth-related PTSD in the postpartum period may also lead to issues with the child's social-emotional development. Current research suggests it results in lower breastfeeding rates and may prevent parents from breastfeeding for the desired amount of time.

Screening
Screening for postpartum depression is critical, as up to 50% of cases in the US go undiagnosed, emphasizing the significance of comprehensive screening measures. In the US, the American College of Obstetricians and Gynecologists suggests healthcare providers consider depression screening for perinatal women, and the American Academy of Pediatrics recommends pediatricians screen mothers for PPD at 1-month, 2-month, and 4-month visits. However, many providers do not consistently provide screening and appropriate follow-up. In Canada, for example, Alberta is the only province with universal PPD screening, carried out by public health nurses alongside the baby's immunization schedule. In Sweden, Child Health Services offers a free program for new parents that includes screening mothers for PPD at 2 months postpartum, although there are concerns about adherence to screening guidelines regarding maternal mental health.

The Edinburgh Postnatal Depression Scale (EPDS), a standardized self-reported questionnaire, may be used to identify women who have postpartum depression. If the new mother scores 13 or more, she likely has PPD, and further assessment should follow. Healthcare providers may take a blood sample during screening to test whether another disorder is contributing to depression. In neonatal intensive care settings, the EPDS is administered within the first week of the newborn's admission.
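The EPDS cutoff described above, together with the NICU triage rules detailed in the next paragraph, amounts to simple threshold logic. The following Python sketch is purely illustrative (the function and its structure are hypothetical, not a clinical tool); the thresholds are taken directly from the protocol as described in this section:

```python
def epds_triage(total_score: int, item_10_score: int) -> list[str]:
    """Illustrative triage of an EPDS result, following the thresholds
    described in this section. Not a clinical decision tool."""
    actions = []
    # A score of 3 on item 10 (the self-harm item) signals possible
    # suicide risk: immediate referral to the social work team.
    if item_10_score == 3:
        actions.append("immediate referral to social work team")
    # A total score of 13 or more suggests likely PPD: further assessment.
    if total_score >= 13:
        actions.append("further clinical assessment")
    # NICU protocol: scores of 12-19 are offered nurse-provided listening
    # visits and referral to mental health professional services.
    if 12 <= total_score <= 19:
        actions.append("offer listening visits and mental health referral")
    # Scores below 12 are reassessed later under the screening protocol.
    if total_score < 12:
        actions.append("reassess at a later screening interval")
    return actions

print(epds_triage(total_score=14, item_10_score=0))
```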
Under this protocol, mothers who score below 12 are advised to be reassessed at a later point, and it is recommended that mothers in the NICU be screened every four to six weeks for as long as their infant remains in the neonatal intensive care unit. Mothers who score between twelve and nineteen on the EPDS are offered two types of support: listening visits (LV) provided by a nurse in the NICU, and referral to mental health professional services. If a mother scores a three on item number ten of the EPDS, she is immediately referred to the social work team, as she may be suicidal.

It is critical to acknowledge the diversity of patient populations diagnosed with postpartum depression and how this may impact the reliability of the screening tools used. There are cultural differences in how patients express symptoms of postpartum depression: those in non-Western countries exhibit more physical symptoms, whereas those in Western countries report more feelings of sadness. Depending on one's cultural background, symptoms of postpartum depression may manifest differently, and non-Westerners being screened in Western countries may be misdiagnosed because the screening tools do not account for cultural diversity. Aside from culture, it is also important to consider social context, as women of low socioeconomic status may have additional stressors that affect their screening scores.

Prevention
A 2013 Cochrane review found evidence that psychosocial or psychological intervention after childbirth helped reduce the risk of postnatal depression. These interventions included home visits, telephone-based peer support, and interpersonal psychotherapy. Support is an important aspect of prevention, as depressed mothers commonly state that their feelings of depression were brought on by a "lack of support" and "feeling isolated." Across different cultures, traditional rituals for postpartum care may be preventative for PPD, but they are more effective when the support is welcomed by the mother.

In couples, emotional closeness and global support by the partner protect against both perinatal depression and anxiety. In 2014, Alasoom and Koura found that only 14.7 percent of women who received spousal support had PPD, compared to 42.9 percent of women who did not. Further factors, such as communication between the couple and relationship satisfaction, have a protective effect against anxiety alone.

Counseling is recommended for those at risk. The US Preventive Services Task Force (USPSTF) conducted a review of evidence supporting the use of counseling interventions, such as therapy, for the prevention of PPD in high-risk groups. Women considered high-risk include those with a past or present history of depression, or with certain socioeconomic factors such as low income or young age. Preventative treatment with antidepressants may be considered for those who have had PPD previously; however, as of 2017, the evidence supporting such use is weak.

Community perinatal mental health teams were launched in England in 2016 to improve access to mental healthcare for pregnant women. They aim to prevent and treat episodes of mental illness during pregnancy and after birth.
Researchers found that in areas of the country where teams were available, women who had previous contact with psychiatric services (many of whom had a previous diagnosis of anxiety or depression) were more likely to access mental health support and had a lower risk of relapse requiring hospital admission in the year after giving birth.

Treatments
Treatment for mild to moderate PPD includes psychological interventions or antidepressants. Women with moderate to severe PPD would likely experience greater benefit from a combination of psychological and medical interventions. Light aerobic exercise is useful for mild and moderate cases.

Therapy
Individual social and psychological interventions appear equally effective in the treatment of PPD. Social interventions include individual counseling and peer support, while psychological interventions include cognitive behavioral therapy (CBT) and interpersonal therapy (IPT). Support groups and group therapy focused on psychoeducation around postpartum depression have been shown to enhance understanding of postpartum symptoms and often assist in finding further treatment options. Other forms of therapy, such as group therapy, home visits, counseling, and ensuring greater sleep for the mother, may also be beneficial. While specialists trained in providing counseling interventions often serve this population, results from a recent systematic review and meta-analysis found that nonspecialist providers, including lay counselors, nurses, midwives, and teachers without formal training in counseling interventions, often provide effective services related to perinatal depression and anxiety.

Psychotherapy
Psychotherapy is the use of psychological methods, particularly regular personal interaction, to help a person change behavior, increase happiness, and overcome problems. Psychotherapy can be highly beneficial for mothers and fathers dealing with PPD: it allows individuals to talk with someone, ideally someone who specializes in working with people with PPD, and to share their emotions and feelings in order to become more emotionally stable. Research shows the efficacy of psychodynamic interventions for postpartum depression, both in home and clinical settings and in both group and individual formats.

Cognitive behavioral therapy
Internet-based cognitive behavioral therapy (CBT) has shown promising results, with lower negative parenting-behavior scores and lower rates of anxiety, stress, and depression. Internet-based CBT may be beneficial for mothers who have limited access to in-person CBT, although the long-term benefits have not been determined. CBT is one of the most successful and well-known forms of therapy for PPD. In simple terms, it is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. While CBT is a wide branch of therapy, it is effective in tackling the specific emotional distress that lies at the foundation of PPD, and it can thus reduce or limit the frequency and intensity of emotional outbreaks in mothers or fathers.

Interpersonal therapy
Interpersonal therapy (IPT) has been shown to be effective in focusing specifically on the mother-infant bond. Psychosocial interventions are effective for the treatment of postpartum depression.
IPT is an intuitive fit for many women with PPD, as they typically experience a multitude of biopsychosocial stressors associated with their depression, including several disrupted interpersonal relationships.

Medication
A 2010 review found few studies of medications for treating PPD, noting small sample sizes and generally weak evidence. Some evidence suggests that mothers with PPD respond similarly to people with major depressive disorder. There is low-certainty evidence that selective serotonin reuptake inhibitors (SSRIs) are an effective treatment for PPD. The first-line antidepressant of choice is sertraline, an SSRI, as very little of it passes into the breast milk and, as a result, to the child. However, a recent study found that adding sertraline to psychotherapy does not appear to confer any additional benefit. Therefore, it is not completely clear which antidepressants, if any, are most effective for the treatment of PPD, or for whom antidepressants would be a better option than non-pharmacotherapy.

Some studies show that hormone therapy may be effective in women with PPD, supported by the idea that the drop in estrogen and progesterone levels post-delivery contributes to depressive symptoms. However, this form of treatment is controversial because estrogen should not be given to people at higher risk of blood clots, which includes women up to 12 weeks after delivery. Additionally, none of the existing studies included women who were breastfeeding. There is, however, some evidence that estradiol patches might help with PPD symptoms.

Oxytocin is an effective anxiolytic and, in some cases, antidepressant treatment in men and women. Exogenous oxytocin has only been explored as a PPD treatment in rodents, but the results are encouraging for potential application in humans.

In 2019, the FDA approved brexanolone, a synthetic analog of the neurosteroid allopregnanolone, for intravenous use in postpartum depression. Allopregnanolone levels drop after giving birth, which may lead to women becoming depressed and anxious. Some trials have demonstrated an effect on PPD within 48 hours of the start of infusion. Other new allopregnanolone analogs under evaluation for use in the treatment of PPD include zuranolone and ganaxolone. Brexanolone has risks that can occur during administration, including excessive sedation and sudden loss of consciousness, and has therefore been approved under the Risk Evaluation and Mitigation Strategy (REMS) program. The mother must be enrolled in the program before receiving the medication, which is only available at certified healthcare facilities with a healthcare provider who can continually monitor the patient. The infusion itself is a 60-hour (2.5-day) process, during which the patient's oxygen levels are monitored with a pulse oximeter. Side effects of the medication include dry mouth, sleepiness, somnolence, flushing, and loss of consciousness, and it is also important to monitor for early signs of suicidal thoughts or behaviors. In 2023, the FDA approved zuranolone, sold under the brand name Zurzuvae, for the treatment of postpartum depression. Zuranolone is administered as a pill, which is more convenient than brexanolone, which requires intravenous infusion.

Breastfeeding
The use of SSRIs for the treatment of PPD is not a contraindication for breastfeeding.
While antidepressants are excreted in breastmilk, the concentrations recorded are very low, and extensive research has shown that the use of SSRIs by lactating women is safe for the breastfeeding infant or child. Regarding allopregnanolone, very limited data did not indicate a risk to the infant.

Other
Electroconvulsive therapy (ECT) has shown efficacy in women with severe PPD who have either failed multiple trials of medication-based treatment or cannot tolerate the available antidepressants. Tentative evidence supports the use of repetitive transcranial magnetic stimulation (rTMS). As of 2013, it is unclear whether acupuncture, massage, bright light therapy, or taking omega-3 fatty acids are useful.

Resources

International
Postpartum Support International (PSI) is the most recognized international resource for those with PPD as well as for healthcare providers. It brings together those experiencing PPD, volunteers, and professionals to share information, referrals, and support networks. Services offered by PSI include its website (with support, education, and local resource information), coordinators for support and local resources, online weekly video support groups in English and Spanish, free weekly phone conferences and chats with experts, educational videos, closed Facebook groups for support, and professional training of healthcare workers.

United States

Educational interventions
Educational interventions can help women struggling with postpartum depression (PPD) to cultivate coping strategies and develop resiliency. The phenomenon of "scientific motherhood" represents the origin of women's education on perinatal care, with publications like Ms. circulating some of the first press articles on PPD, which helped to normalize the symptoms women experienced. Feminist writings on PPD from the early seventies shed light on the darker realities of motherhood and amplified the lived experiences of mothers with PPD. Instructional videos have been popular among women who turn to the internet for PPD treatment, especially when the videos are interactive and get patients involved in their treatment plans. Since the early 2000s, video tutorials on PPD have been integrated into many web-based training programs for individuals with PPD and are often considered a type of evidence-based management strategy. These can take the form of objective-based learning, detailed exploration of case studies, and resource guides for additional support and information.

Government-funded programs
The National Child and Maternal Health Education Program functions as a larger education and outreach program supported by the National Institute of Child Health and Human Development (NICHD) and the National Institutes of Health. The NICHD has worked alongside organizations like the World Health Organization to conduct research on the psychosocial development of children, with part of these efforts going towards the support of mothers' health and safety. Training and education services are offered through the NICHD to equip women and their healthcare providers with evidence-based knowledge of PPD. Other initiatives include the Substance Abuse and Mental Health Services Administration (SAMHSA), whose disaster relief program provides medical assistance at both the national and local levels.
The disaster relief fund not only helps to raise awareness of the benefits of having healthcare professionals screen for PPD but also helps childhood professionals (home visitors and early care providers) develop the skills to diagnose and prevent PPD. The Infant and Early Childhood Mental Health Consultation (IECMH) center is a related technical assistance program that utilizes evidence-based treatment services to address issues of PPD. The IECMH facilitates parenting and home visit programs, early care site interventions with parents and children, and a variety of other consultation-based services. Its initiatives seek to educate home visitors on screening protocols for PPD as well as on ways to refer depressed mothers to professional help.

Links to government-funded programs
www.nichd.nih.gov/ncmhep
www.nichd.nih.gov
www.samhsa.gov
www.samhsa.gov/iecmhc

Psychotherapy
Therapeutic methods of intervention can begin as early as a few days post-birth, when most mothers are discharged from hospitals. Research surveys have revealed a paucity of professional and emotional support for women struggling in the weeks following delivery, despite there being a heightened risk of PPD for new mothers during this transitional period.

Community-based support
A lack of social support has been identified as a barrier to seeking help for postpartum depression, and peer support programs have been identified as an effective intervention for women experiencing symptoms of it. In-person, online, and telephone support groups are available to both women and men throughout the United States. Peer support models are appealing to many women because they are offered in a group and outside of the mental health setting. The website Postpartum Progress provides a comprehensive list of support groups separated by state and includes the contact information for each group. The National Alliance on Mental Illness (NAMI) lists a virtual support group titled "The Shades of Blue Project," which is available to all women via the submission of a name and email address. Additionally, NAMI recommends the website of the National Association of Professional and Peer Lactation Supports of Color for mothers in need of a lactation supporter; lactation assistance is available either online or in person where support is nearby.

Personal narratives & memoirs
Postpartum Progress is a blog focused on being a community of mothers talking openly about postpartum depression and other associated mental health conditions. Story-telling and online communities reduce the stigma around PPD and promote peer-based care. Postpartum Progress is specifically relevant to people of color and queer people due to its emphasis on cultural competency.

Hotlines & telephone interviews
Hotlines, chat lines, and telephone interviews offer immediate, emergency support for those experiencing PPD, and telephone-based peer support can be effective in the prevention and treatment of postpartum depression among women at high risk. Established telephone hotlines include:
National Alliance on Mental Illness: 800-950-NAMI (6264)
National Suicide Prevention Lifeline: 800-273-TALK (8255)
Postpartum Support International: 800-944-4PPD (4773)
SAMHSA's National Hotline: 1-800-662-HELP (4357)
The Postpartum Health Alliance also has an immediate, 24/7 support line in San Diego (the San Diego Access and Crisis Line, (888) 724-7240), through which callers can talk with trained providers and with mothers who have recovered from PPD.
However, hotlines can lack cultural competency, which is crucial to quality healthcare, specifically for people of color. Calling the police or 911, specifically for mental health crises, is dangerous for many people of color, so culturally and structurally competent emergency hotlines are greatly needed in PPD care.

Self-care & well-being activities
Women demonstrated an interest in self-care and well-being in an online PPD prevention program. Self-care activities, specifically music therapy, are accessible to most communities and valued among women as a way to connect with their children and manage symptoms of depression. Well-being activities associated with being outdoors, including walking and running, were noted by women as a way to help manage mood.

Accessibility to care
Those with PPD face many help-seeking barriers, including lack of knowledge, stigma about symptoms, and health service barriers. There are also attitudinal barriers to seeking treatment, including stigma, while interpersonal relationships with friends and family, as well as institutional and financial obstacles, serve as further barriers. A history of mistrust of the United States healthcare system, or negative health experiences, can influence one's willingness and adherence to seek postpartum depression treatment. Cultural responses must be adequate in PPD healthcare and resources: representation and cultural competency are crucial to equitable healthcare for PPD. Different ethnic groups may believe that healthcare providers will not respect their cultural values or religious practices, which influences their willingness to use mental health services or accept antidepressant medications. Additionally, resources for PPD are limited and often do not reflect mothers' preferences. The use of technology can be a beneficial way to provide mothers with resources because it is accessible and convenient.

Epidemiology

North America

United States
Within the United States, the prevalence of postpartum depression, at 11.5%, is lower than the global approximation, but it varies between states from as low as 8% to as high as 20.1%. The highest prevalence in the US is found among women who are American Indian/Alaska Native or Asian/Pacific Islander, possess less than 12 years of education, are unmarried, smoke during pregnancy, experience more than two stressful life events, or have a full-term low-birthweight infant or an infant admitted to a NICU. While US prevalence decreased from 2004 to 2012, it did not decrease among American Indian/Alaska Native women or those with full-term, low-birthweight infants. Even with the variety of studies, it is difficult to find the exact rate, as approximately 60% of US women with PPD are not diagnosed and, of those diagnosed, approximately 50% are not treated. Cesarean section rates did not affect the rates of PPD. While there is discussion of postpartum depression in fathers, there is no formal diagnosis for it.

Canada
Canada has one of the largest refugee resettlement programs in the world, with an equal percentage of women and men. This means that Canada has a disproportionate percentage of women who develop postpartum depression, since there is an increased risk among the refugee population.
In a blind study in which women had to reach out and participate, around 27% of the sample population had symptoms consistent with postpartum depression without knowing it. The study found the prevalence of minor and major postpartum depressive symptoms to be 8.46% and 8.69%, respectively. The main contributing factors identified were stress during pregnancy, the availability of support afterwards, and a prior diagnosis of depression.

Canada has specific population demographics that include a large number of immigrant and Indigenous women, which creates a cultural demographic specific to Canada. Researchers found that these two populations were at significantly higher risk compared to "Canadian-born non-indigenous mothers", with risk factors such as low education, a low income cut-off, taking antidepressants, and low social support all contributing to the higher rate of PPD symptoms in these populations. Indigenous mothers had more risk factors than immigrant mothers, while non-Indigenous Canadian women were closer to the overall population.

South America
A main issue surrounding PPD is the lack of study, and the fact that reported prevalence is based on studies developed in economically developed Western countries. In countries such as Brazil, Guyana, Costa Rica, Italy, Chile, and South Africa, reported prevalence is around 60%. An itemized research analysis put the mean prevalence at 10-15%, but explicitly stated that cultural factors, such as the perception of mental health and stigma, could be preventing accurate reporting. The analysis for South America shows that PPD occurs at a high rate, looking comparatively at Brazil (42%), Chile (4.6-48%), Guyana and Colombia (57%), and Venezuela (22%). In most of these countries, PPD is not considered a serious condition for women, and there is therefore an absence of support programs for its prevention and treatment in health systems. Specifically, in Brazil PPD is identified through the family environment, whereas in Chile it manifests through suicidal ideation and emotional instability. In both cases, most women feel regret and refuse to take care of the child, showing that this illness is serious for both the mother and the child.

Asia
From a selected group of studies found through a literature search, researchers discovered many demographic factors of Asian populations that show a significant association with PPD. These include the age of the mother at the time of childbirth as well as older age at marriage. Being a migrant and giving birth to a child overseas has also been identified as a risk factor for PPD. Specifically, among Japanese women who were born and raised in Japan but gave birth to their child in Hawaii, USA, about 50% experienced emotional dysfunction during their pregnancy, and all first-time mothers included in the study experienced PPD. In immigrant Asian Indian women, researchers found a minor depressive symptomatology rate of 28% and an additional major depressive symptomatology rate of 24%, likely due to different healthcare attitudes in different cultures and distance from family leading to homesickness.

In the context of Asian countries, premarital pregnancy is an important risk factor for PPD.
This is because premarital pregnancy is considered highly unacceptable in most Asian cultures, as attitudes toward sex are more conservative among Asian people than among people in the West. In addition, conflicts between mother and daughter-in-law are notoriously common in Asian societies, as traditionally marriage means the daughter-in-law joining and adjusting to the groom's family completely; these conflicts may be responsible for the emergence of PPD. Regarding the gender of the child, many studies have suggested that dissatisfaction with an infant's gender (the birth of a baby girl) is a risk factor for PPD. This is because, in some Asian cultures, married couples are expected by the family to have at least one son to maintain the continuity of the bloodline, which might lead a woman to experience PPD if she cannot give birth to a baby boy.

The Middle East
With a prevalence of 27%, postpartum depression among mothers in the Middle East is higher than in the Western world and other regions. Despite the high number of cases in the region compared to other areas, there is a large gap in the literature on the Arab region, and no studies have been conducted in the Middle East on interventions and prevention to tackle postpartum depression in Arab mothers. According to studies carried out in these countries, PPD prevalence within the Arab region ranges from 10% to 40%: 18.6% in Qatar, between 18% and 24% in the UAE, between 21.2% and 22.1% in Jordan, 21% in Lebanon, between 10.1% and 10.3% in Saudi Arabia, and between 13.2% and 19.2% in Tunisia. Some nations show noticeably higher rates, such as Iran at 40.2%, Bahrain at 37.1%, and Turkey at 27%.

The high prevalence of postpartum depression in the region may be attributed to socio-economic and cultural factors involving social and partner support, poverty, and prevailing societal views on pregnancy and motherhood. Another factor is women's lack of access to care services, because many societies within the region do not prioritize mental health and do not perceive it as a serious issue. The prevailing crises and wars within some countries of the region, lack of education, polygamy, and early childbearing are additional factors. Fertility rates in Palestine are noticeably high; higher fertility rates have been connected to a possible pattern in which birth rates increase after violent episodes. Research conducted on Arab women indicates that more cases of postpartum depression are associated with increased parity. A study found that the most common pregnancy- and birth-related variable associated with PPD in the Middle East was an unplanned or unwanted pregnancy, while having a female baby instead of a male baby is also discussed as a factor carrying a two-to-four-fold higher risk.

Europe
There is a general assumption that Western cultures are homogenous and that there are no significant differences in psychiatric disorders across Europe and the USA. In reality, however, factors associated with maternal depression, including work and environmental demands, access to universal maternity leave, healthcare, and financial security, are regulated and influenced by local policies that differ across countries. For example, European social policies differ from country to country, but contrary to the US, all provide some form of paid universal maternity leave and free healthcare.
Studies have also found differences in the symptomatic manifestations of PPD between European and American women. Women from Europe reported higher scores for anhedonia, self-blame, and anxiety, while women from the US disclosed more severe insomnia, depressive feelings, and thoughts of self-harm. Additionally, there are differences in prescribing patterns and attitudes towards certain medications between the US and Europe, which are indicative of how different countries approach treatment and of their different stigmas.

Africa
Africa, like all other parts of the world, struggles with the burden of postpartum depression. Current studies estimate the prevalence to be 15-25%, but the true figure is likely higher due to a lack of data and recorded cases. According to studies carried out among postpartum mothers between the ages of 17 and 49, the magnitude of postpartum depression is between 31.7% and 39.6% in South Africa, between 6.9% and 14% in Morocco, between 10.7% and 22.9% in Nigeria, 43% in Uganda, 12% in Tanzania, 33% in Zimbabwe, 9.2% in Sudan, between 13% and 18.7% in Kenya, and 19.9% in Ethiopia. This demonstrates the gravity of the problem in Africa and the need for postpartum depression to be taken seriously as a public health concern on the continent. Additionally, each of these studies was conducted using Western-developed assessment tools, and cultural factors can affect diagnosis and be a barrier to assessing the burden of disease. Recommendations to combat postpartum depression in Africa include treating it as a neglected public health problem among postpartum mothers, investing in research to assess its actual prevalence, and encouraging early screening, diagnosis, and treatment as an essential aspect of maternal care throughout Africa.

Issues in reporting prevalence
Most studies of PPD use self-report screenings, which are less reliable than clinical interviews; this reliance on self-reporting may lead to underreported symptoms and thus understated postpartum depression rates. Furthermore, the prevalence of postpartum depression in Arab countries exhibits significant variability, often due to diverse assessment methodologies. In a review of twenty-five studies examining PPD, differences in assessment methods, recruitment locations, and timing of evaluations complicated prevalence measurement. For instance, the studies varied in their approach, with some using a longitudinal panel method tracking PPD at multiple points during pregnancy and the postpartum period, while others employed cross-sectional approaches to estimate point or period prevalence. The Edinburgh Postnatal Depression Scale (EPDS) was commonly used across these studies, yet variations in cutoff scores further shaped the reported prevalence. For example, a study in Kom Ombo, Egypt, reported a PPD rate of 73.7%, but the small sample size of 57 mothers and the broad measurement timeframe, spanning from two weeks to one year postpartum, make definitive prevalence conclusions difficult. This wide array of assessment methods and timings significantly impacts the reported rates of postpartum depression.

History

Prior to the 19th century
Western medical science's understanding and construction of postpartum depression have evolved over the centuries. Ideas surrounding women's moods and states have been around for a long time, typically recorded by men.
In 460 B.C., Hippocrates wrote about the puerperal fever, agitation, delirium, and mania experienced by women after childbirth. Hippocrates' ideas still linger in how postpartum depression is seen today. Margery Kempe, a Christian mystic who lived in the 14th century, was a pilgrim who became known as "Madwoman" after a difficult labor and delivery. During a long physical recovery she started descending into "madness" and became suicidal. Based on her descriptions of visions of demons and the conversations she recorded with religious figures such as God and the Virgin Mary, historians have identified what Margery Kempe was experiencing as "postnatal psychosis" rather than postpartum depression. This distinction became important for emphasizing the difference between postpartum depression and postpartum psychosis. A 16th-century physician, Castello Branco, documented a case of postpartum depression without the formal title: a relatively healthy woman who developed melancholy after childbirth, remained insane for a month, and recovered with treatment. Although this treatment was not described, experimental treatments for postpartum depression began to be implemented over the centuries that followed. Connections between female reproductive function and mental illness would continue to center on reproductive organs from this time through to the modern age, with a slowly evolving discussion around "female madness". 19th century and after The 19th century brought a new attitude about the relationship between female mental illness and pregnancy, childbirth, or menstruation. The famous short story "The Yellow Wallpaper" was published by Charlotte Perkins Gilman in this period. In the story, an unnamed woman journals her life as she is treated by her physician husband, John, for hysterical and depressive tendencies after the birth of their baby. Gilman wrote the story to protest the societal oppression of women, as a result of her own experience as a patient. Also during the 19th century, gynecologists embraced the idea that female reproductive organs, and the natural processes they were involved in, were at fault for "female insanity". Approximately 10% of asylum admissions during this period were connected to "puerperal insanity", the named intersection between pregnancy or childbirth and female mental illness. It was not until the onset of the twentieth century that the attitude of the scientific community shifted once again: the consensus amongst gynecologists and other medical experts was to turn away from the idea of diseased reproductive organs and instead toward more "scientific theories" that encompassed a broadening medical perspective on mental illness. Society and culture Legal recognition Recently, postpartum depression has become more widely recognized in society. In the US, the Patient Protection and Affordable Care Act included a section focusing on research into postpartum conditions, including postpartum depression. Some argue that more resources in the form of policies, programs, and health objectives need to be directed to the care of those with PPD. Role of stigma When stigma occurs, a person is labeled by their illness and viewed as part of a stereotyped group. There are three main elements of stigma: 1) problems of knowledge (ignorance or misinformation), 2) problems of attitudes (prejudice), and 3) problems of behavior (discrimination).
Specifically regarding PPD, it often goes untreated because women frequently report feeling ashamed about seeking help and are concerned about being labeled a "bad mother" if they acknowledge that they are experiencing depression. Although there has been previous research interest in depression-related stigma, few studies have addressed PPD stigma. One study examined PPD stigma by testing how an educational intervention would impact it. The authors hypothesized that an educational intervention would significantly influence PPD stigma scores. Although they found some consistency with previous mental health stigma studies, for example, that males had higher levels of personal PPD stigma than females, most of the PPD results were inconsistent with other mental health studies. For example, they hypothesized that the educational intervention would lower PPD stigma scores, but in reality there was no significant impact; familiarity with PPD was also not associated with stigma towards people with PPD. This study was a strong starting point for further PPD research but indicates that more needs to be done to learn which anti-stigma strategies are most effective specifically for PPD. Postpartum depression is still linked to significant stigma. This can also make it difficult to determine the true prevalence of postpartum depression. Participants in studies about PPD carry their beliefs, perceptions, cultural context, and the mental health stigma of their cultures with them, which can affect data. Mental health stigma, with or without support from family members and health professionals, often deters women from seeking help for their PPD. When medical help is obtained, some women find the diagnosis helpful and encourage a higher profile for PPD amongst the health professional community. Cultural beliefs Postpartum depression can be influenced by sociocultural factors. There are many examples of particular cultures and societies that hold specific beliefs about PPD. Malay culture holds a belief in Hantu Meroyan, a spirit that resides in the placenta and amniotic fluid. When this spirit is unsatisfied and vents its resentment, it causes the mother to experience frequent crying, loss of appetite, and trouble sleeping, known collectively as "sakit meroyan". The mother can be cured with the help of a shaman, who performs a séance to force the spirits to leave. Some cultures believe that the symptoms of postpartum depression or similar illnesses can be avoided through protective rituals in the period after birth. These may include offering structures of organized support, hygiene care, diet, rest, infant care, and breastfeeding instruction. The rituals appear to be most effective when the support is welcomed by the mother. Some Chinese women participate in a ritual known as "doing the month" (confinement), in which they spend the first 30 days after giving birth resting in bed while the mother or mother-in-law takes care of domestic duties and childcare. In addition, the new mother is not allowed to bathe or shower, wash her hair, clean her teeth, leave the house, or be blown by the wind. The relationship with the mother-in-law has been identified as a significant risk factor for postpartum depression in many Arab regions.
Based on cultural beliefs that place importance on mothers, mothers-in-law have significant influence over the lives of daughters-in-law and grandchildren in such societies, as husbands frequently maintain close relationships with their family of origin, including living together. Furthermore, cultural factors influence how Middle Eastern women are screened for PPD. The traditional Edinburgh Postnatal Depression Scale (EPDS) has come under criticism for emphasizing depression symptoms that may not be consistent with Muslim cultural standards: thoughts of self-harm are strictly prohibited in Islam, yet they constitute a major symptom item within the EPDS. Terms like "depression screen" or "mental health" are considered disrespectful in some Arab cultures. Furthermore, women may underreport symptoms to put the needs of the family before their own, because these countries have collectivist cultures. Additionally, research has shown that mothers of female babies face a considerably higher risk of PPD, ranging from 2 to 4 times higher than mothers of male babies, owing to the lower value certain cultures in the Middle East place on female babies compared to male babies. Media Certain cases of postpartum mental health concerns received attention in the media and brought about dialogue on ways to address and understand more about postpartum mental health. Andrea Yates, a former nurse, became pregnant for the first time in 1993. After giving birth to five children over the following years, she had severe depression and many depressive episodes. This led to her believing that her children needed to be saved and that by killing them, she could rescue their eternal souls. She drowned her children one by one over the course of an hour, holding their heads underwater in the family bathtub. At trial, she maintained that she had saved her children rather than harmed them and that this action would contribute to defeating Satan. This was one of the first public and notable cases of postpartum psychosis, which helped create a dialogue on women's mental health after childbirth. The court found that Yates was experiencing mental illness, and the trial started the conversation about mental illness in cases of murder and whether it should lessen the sentence. It also started a dialogue about women going against "maternal instinct" after childbirth and how maternal instinct should truly be defined. Yates' case brought wide media attention to the problem of filicide, or the murder of children by their parents. Throughout history, both men and women have perpetrated this act, but the study of maternal filicide is more extensive.
Biology and health sciences
Mental disorders
Health
175440
https://en.wikipedia.org/wiki/Medical%20cannabis
Medical cannabis
Medical cannabis, medicinal cannabis or medical marijuana (MMJ) refers to cannabis products and cannabinoid molecules that are prescribed by physicians for their patients. The use of cannabis as medicine has a long history, but it has not been as rigorously tested as other medicinal plants, due to legal and governmental restrictions, resulting in limited clinical research to define the safety and efficacy of using cannabis to treat diseases. Preliminary evidence has indicated that cannabis might reduce nausea and vomiting during chemotherapy and reduce chronic pain and muscle spasms. Regarding non-inhaled cannabis or cannabinoids, a 2021 review found that they provided little relief against chronic pain and sleep disturbance, and caused several transient adverse effects, such as cognitive impairment, nausea, and drowsiness. Short-term use increases the risk of minor and major adverse effects. Common side effects include dizziness, feeling tired, vomiting, and hallucinations. Long-term effects of cannabis are not clear. Concerns include memory and cognition problems, risk of addiction, schizophrenia in young people, and the risk of children taking it by accident. Many cultures have used cannabis for therapeutic purposes for thousands of years. Some American medical organizations have requested removal of cannabis from the list of Schedule I controlled substances, emphasizing that rescheduling would enable more extensive research and regulatory oversight to ensure safe access. Others, such as the American Academy of Pediatrics, oppose its legalization. Medical cannabis can be administered through various methods, including capsules, lozenges, tinctures, dermal patches, oral or dermal sprays, cannabis edibles, and vaporizing or smoking dried buds. Synthetic cannabinoids are available for prescription use in some countries, such as synthetic delta-9-THC and nabilone. Countries that allow the medical use of whole-plant cannabis include Argentina, Australia, Canada, Chile, Colombia, Germany, Greece, Israel, Italy, the Netherlands, Peru, Poland, Portugal, Spain, and Uruguay. In the United States, 38 states and the District of Columbia have legalized cannabis for medical purposes, beginning with the passage of California's Proposition 215 in 1996. Although cannabis remains prohibited for any use at the federal level, the Rohrabacher–Farr amendment was enacted in December 2014, limiting the ability of federal law to be enforced in states where medical cannabis has been legalized. This amendment reflects an increasing bipartisan acknowledgment of the potential therapeutic uses of cannabis and the significance of state-level policymaking in this area. Classification In the U.S., the National Institute on Drug Abuse defines medical cannabis as "using the whole, unprocessed marijuana plant or its basic extracts to treat symptoms of illness and other conditions". A cannabis plant includes more than 400 different chemicals, of which about 70 are cannabinoids. In comparison, typical government-approved medications contain only one or two chemicals. The number of active chemicals in cannabis is one reason why treatment with cannabis is difficult to classify and study. A 2014 review stated that the variations in the ratio of CBD to THC in botanical and pharmaceutical preparations determine the therapeutic vs. psychoactive effects (CBD attenuates THC's psychoactive effects) of cannabis products.
Medical uses Overall, research into the health effects of medical cannabis has been of low quality, and it is not clear whether it is a useful treatment for any condition or whether harms outweigh any benefit. There is no consistent evidence that it helps with chronic pain and muscle spasms. Low-quality evidence suggests its use for reducing nausea during chemotherapy, improving appetite in HIV/AIDS, improving sleep, and improving tics in Tourette syndrome. When usual treatments are ineffective, cannabinoids have also been recommended for anorexia, arthritis, glaucoma, and migraine. It is unclear whether American states might be able to mitigate the adverse effects of the opioid epidemic by prescribing medical cannabis as an alternative pain management drug. Cannabis should not be used in pregnancy. Insomnia Research analyzing data from the National Health and Nutrition Examination Survey (NHANES) did not find significant differences in sleep duration between cannabis users and non-users. This suggests that while some individuals may perceive benefits from cannabis use in terms of sleep, it may not significantly change overall sleep patterns across the general population. A review of literature up to 2018 indicates that cannabidiol (CBD) may have therapeutic potential for the treatment of insomnia. CBD, a non-psychoactive component of cannabis, is of particular interest due to its potential to influence sleep without the psychoactive effects associated with tetrahydrocannabinol (THC). Nausea and vomiting Medical cannabis is somewhat effective in chemotherapy-induced nausea and vomiting (CINV) and may be a reasonable option in those who do not improve following preferred first-line treatments. Comparative studies have found cannabinoids to be more effective than some conventional antiemetics such as prochlorperazine, promethazine, and metoclopramide in controlling CINV, but they are used less frequently because of side effects including dizziness, dysphoria, and hallucinations. Long-term cannabis use may itself cause nausea and vomiting, a condition known as cannabinoid hyperemesis syndrome (CHS). A 2016 Cochrane review said that cannabinoids were "probably effective" in treating chemotherapy-induced nausea in children, but with a high side-effect profile (mainly drowsiness, dizziness, altered moods, and increased appetite). Less common side effects were "ocular problems, orthostatic hypotension, muscle twitching, pruritus, vagueness, hallucinations, lightheadedness and dry mouth". HIV/AIDS Evidence is lacking for both the efficacy and the safety of cannabis and cannabinoids in treating patients with HIV/AIDS or anorexia associated with AIDS. As of 2013, available studies suffered from the effects of bias, small sample sizes, and a lack of long-term data. Pain A 2021 review found little effect of using non-inhaled cannabis to relieve chronic pain. According to a 2019 systematic review, there have been inconsistent results on using cannabis for neuropathic pain, spasms associated with multiple sclerosis, and pain from rheumatic disorders, and it was not effective in treating chronic cancer pain. The authors state that additional randomized controlled trials of different cannabis products are necessary to make conclusive recommendations. When cannabis is inhaled to relieve pain, blood levels of cannabinoids rise faster than when oral products are used, peaking within three minutes and attaining an analgesic effect in seven minutes.
A 2011 review considered cannabis to be generally safe, and it appears safer than opioids in palliative care. A 2022 review concluded that the pain relief experienced after using medical cannabis is due to the placebo effect, especially given widespread media attention that sets the expectation for pain relief. Neurological conditions The efficacy of cannabis in treating neurological problems, including multiple sclerosis (MS) and movement disorders, is not clear. Evidence also suggests that oral cannabis extract is effective for reducing patient-centered measures of spasticity. A trial of cannabis is deemed to be a reasonable option if other treatments have not been effective. Its use for MS is approved in ten countries. A 2012 review found no problems with tolerance, abuse, or addiction. In the United States, cannabidiol, one of the cannabinoids found in the marijuana plant, has been approved for treating two severe forms of epilepsy, Lennox-Gastaut syndrome and Dravet syndrome. Mental health A 2019 systematic review found that there is a lack of evidence that cannabinoids are effective in treating depressive or anxiety disorders, attention-deficit hyperactivity disorder (ADHD), Tourette syndrome, post-traumatic stress disorder, or psychosis. Research indicates that cannabis, particularly CBD, may have anxiolytic (anxiety-reducing) effects. A study found that CBD significantly reduced anxiety during a simulated public speaking test for individuals with social anxiety disorder. However, the relationship between cannabis use and anxiety symptoms is complex, and while some users report relief, the overall evidence from observational studies and clinical trials remains inconclusive. Cannabis is often used by people to cope with anxiety, yet the efficacy and safety of cannabis for treating anxiety disorders have yet to be adequately researched. Cannabis use, especially at high doses, is associated with a higher risk of psychosis, particularly in individuals with a genetic predisposition to psychotic disorders like schizophrenia. Some studies have shown that cannabis can trigger a temporary psychotic episode, which may increase the risk of developing a psychotic disorder later. The impact of cannabis on depression is less clear. Some studies suggest a potential increase in depression risk among adolescents who use cannabis, though findings are inconsistent across studies. Adverse effects Medical use There is insufficient data to draw strong conclusions about the safety of medical cannabis. Typically, adverse effects of medical cannabis use are not serious; they include tiredness, dizziness, increased appetite, and cardiovascular and psychoactive effects. Other effects can include impaired short-term memory, impaired motor coordination, altered judgment, and paranoia or psychosis at high doses. Tolerance to these effects develops over a period of days or weeks. The amount of cannabis normally used for medicinal purposes is not believed to cause any permanent cognitive impairment in adults, though long-term treatment in adolescents should be weighed carefully, as adolescents are more susceptible to these impairments. Withdrawal symptoms are rarely a problem with controlled medical administration of cannabinoids. The ability to drive vehicles or to operate machinery may be impaired until tolerance is developed. Although supporters of medical cannabis say that it is safe, further research is required to assess the long-term safety of its use.
Cognitive effects Recreational use of cannabis is associated with cognitive deficits, especially for those who begin to use cannabis in adolescence. There is a lack of research into the long-term cognitive effects of medical cannabis use, but one 12-month observational study reported that "MC patients demonstrated significant improvements on measures of executive function and clinical state over the course of 12 months". Impact on psychosis Exposure to THC can cause acute transient psychotic symptoms in healthy individuals and people with schizophrenia. A 2007 meta-analysis concluded that cannabis use reduced the average age of onset of psychosis by 2.7 years relative to non-use. A 2005 meta-analysis concluded that adolescent use of cannabis increases the risk of psychosis, and that the risk is dose-related. A 2004 literature review on the subject concluded that cannabis use is associated with a two-fold increase in the risk of psychosis, but that cannabis use is "neither necessary nor sufficient" to cause psychosis. A French review from 2009 concluded that cannabis use, particularly before age 15, was a factor in the development of schizophrenic disorders. Pharmacology The genus Cannabis contains two species which produce useful amounts of psychoactive cannabinoids: Cannabis indica and Cannabis sativa, which are listed as Schedule I medicinal plants in the US; a third species, Cannabis ruderalis, has few psychogenic properties. Cannabis contains more than 460 compounds; at least 80 of these are cannabinoids – chemical compounds that interact with cannabinoid receptors in the brain. As of 2012, more than 20 cannabinoids were being studied by the U.S. FDA. The most psychoactive cannabinoid found in the cannabis plant is tetrahydrocannabinol (or delta-9-tetrahydrocannabinol, commonly known as THC). Other cannabinoids include delta-8-tetrahydrocannabinol, cannabidiol (CBD), cannabinol (CBN), cannabicyclol (CBL), cannabichromene (CBC) and cannabigerol (CBG); they have weaker psychotropic effects than THC but may play a role in the overall effect of cannabis. The most studied are THC, CBD and CBN. CB1 and CB2 are the primary cannabinoid receptors responsible for several of the effects of cannabinoids, although other receptors may play a role as well. Both belong to a group of receptors called G protein-coupled receptors (GPCRs). CB1 receptors are found at very high levels in the brain and are thought to be responsible for psychoactive effects. CB2 receptors are found peripherally throughout the body and are thought to modulate pain and inflammation. Absorption Cannabinoid absorption depends on the route of administration. Inhaled and vaporized THC have similar absorption profiles to smoked THC, with a bioavailability ranging from 10 to 35%. Oral administration has the lowest bioavailability, approximately 6%, variable absorption depending on the vehicle used, and the longest time to peak plasma levels (2 to 6 hours) compared to smoked or vaporized THC. Similar to THC, CBD has poor oral bioavailability, approximately 6%. The low bioavailability is largely attributed to significant first-pass metabolism in the liver and erratic absorption from the gastrointestinal tract. However, oral administration of CBD has a faster time to peak concentration (2 hours) than THC. Due to the poor bioavailability of oral preparations, alternative routes of administration have been studied, including sublingual and rectal.
These alternative formulations maximize bioavailability and reduce first-pass metabolism. Sublingual administration in rabbits yielded a bioavailability of 16% and a time to peak concentration of 4 hours. Rectal administration in monkeys doubled bioavailability to 13.5% and achieved peak blood concentrations within 1 to 8 hours after administration. Distribution Like cannabinoid absorption, distribution is also dependent on the route of administration. Smoking and inhalation of vaporized cannabis have better absorption than other routes of administration, and therefore also have more predictable distribution. THC is highly protein-bound once absorbed, with only 3% found unbound in the plasma. It distributes rapidly to highly vascularized organs such as the heart, lungs, liver, spleen, and kidneys, as well as to various glands. Low levels can be detected in the brain, testes, and unborn fetuses, all of which are protected from systemic circulation via barriers. THC further distributes into fatty tissues a few days after administration due to its high lipophilicity, and is found deposited in the spleen and fat after redistribution. Metabolism Delta-9-THC is the primary molecule responsible for the effects of cannabis. It is metabolized in the liver into 11-OH-THC, the first metabolic product in this pathway. Both delta-9-THC and 11-OH-THC are psychoactive; the metabolism of THC into 11-OH-THC plays a part in the heightened psychoactive effects of edible cannabis. Next, 11-OH-THC is metabolized in the liver into 11-COOH-THC, the second metabolic product of THC, which is not psychoactive. Ingestion of edible cannabis products leads to a slower onset of effect than inhalation, because the THC first travels through the blood to the liver before reaching the rest of the body. Inhaled cannabis can result in THC going directly to the brain, from which it then travels back to the liver in recirculation for metabolism. Eventually, both routes of metabolism result in the conversion of psychoactive THC to inactive 11-COOH-THC. Excretion Due to the substantial metabolism of THC and CBD, their metabolites are excreted mostly via feces rather than urine. After delta-9-THC is hydroxylated into 11-OH-THC via CYP2C9, CYP2C19, and CYP3A4, it undergoes phase II metabolism into more than 30 metabolites, a majority of which are products of glucuronidation. Approximately 65% of THC is excreted in feces and 25% in urine, while the remaining 10% is excreted by other means. The terminal half-life of THC is 25 to 36 hours, whereas for CBD it is 18 to 32 hours. CBD is hydroxylated by P450 liver enzymes into 7-OH-CBD. Its metabolites are products primarily of CYP2C19 and CYP3A4 activity, with potential activity of CYP1A1, CYP1A2, CYP2C9, and CYP2D6. Similar to delta-9-THC, a majority of CBD is excreted in feces and some in urine, with a terminal half-life of approximately 18–32 hours. Administration Smoking has been the means of administration of cannabis for many users, but it is not suitable for the use of cannabis as a medicine; it was the most common method of medical cannabis consumption in the US. It is difficult to predict the pharmacological response to cannabis because the concentration of cannabinoids varies widely, as there are different ways of preparing it for consumption (smoked, applied as oils, eaten, infused into other foods, or drunk) and a lack of production controls.
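The absorption and excretion figures above can be combined into a simple back-of-the-envelope model. The following sketch, a minimal illustration rather than a clinical tool, applies the quoted ~6% oral bioavailability and first-order elimination at the quoted terminal half-lives; the 10 mg dose and the one-compartment, first-order assumptions are illustrative and not taken from the text.

```python
# Minimal pharmacokinetic sketch: oral bioavailability plus first-order
# elimination at the terminal half-lives quoted above. Illustrative only.

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """First-order decay: fraction of the absorbed amount left after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

ORAL_BIOAVAILABILITY = 0.06                                    # ~6% quoted for oral THC and CBD
HALF_LIFE_RANGES = {"THC": (25.0, 36.0), "CBD": (18.0, 32.0)}  # terminal half-lives, hours

dose_mg = 10.0                                  # hypothetical ingested dose (assumption)
absorbed_mg = dose_mg * ORAL_BIOAVAILABILITY    # ~0.6 mg reaches systemic circulation

for name, (lo, hi) in HALF_LIFE_RANGES.items():
    for t in (24, 72, 168):                     # 1 day, 3 days, 1 week
        lo_mg = absorbed_mg * fraction_remaining(t, lo)
        hi_mg = absorbed_mg * fraction_remaining(t, hi)
        print(f"{name} after {t:>3} h: {lo_mg:.3f}-{hi_mg:.3f} mg remain")
```

Under this idealization a few percent of the absorbed THC is still present a week after a single oral dose, which is at least qualitatively consistent with the slow redistribution from fatty tissue described above.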
The potential for adverse effects from smoke inhalation makes smoking a less viable option than oral preparations. Cannabis vaporizers have gained popularity because of a perception among users that fewer harmful chemicals are ingested when components are inhaled via aerosol rather than smoke. Cannabinoid medicines are available in pill form (dronabinol and nabilone) and as liquid extracts formulated into an oromucosal spray (nabiximols). Oral preparations are "problematic due to the uptake of cannabinoids into fatty tissue, from which they are released slowly, and the significant first-pass liver metabolism, which breaks down Δ9THC and contributes further to the variability of plasma concentrations". The US Food and Drug Administration (FDA) has not approved smoked cannabis for any condition or disease, as it deems that evidence is lacking concerning safety and efficacy. The FDA issued a 2006 advisory against smoked medical cannabis stating: "marijuana has a high potential for abuse, has no currently accepted medical use in treatment in the United States, and has a lack of accepted safety for use under medical supervision." History Ancient Cannabis, called má 麻 (meaning "hemp; cannabis; numbness") or dàmá 大麻 (with "big; great") in Chinese, was used in Taiwan for fiber starting about 10,000 years ago. The botanist Hui-lin Li wrote that in China, "The use of Cannabis in medicine was probably a very early development. Since ancient humans used hemp seed as food, it was quite natural for them to also discover the medicinal properties of the plant." Emperor Shen-Nung, who was also a pharmacologist, wrote a book on treatment methods in 2737 BCE that included the medical benefits of cannabis. He recommended the substance for many ailments, including constipation, gout, rheumatism, and absent-mindedness. Cannabis is one of the 50 "fundamental" herbs in traditional Chinese medicine. The Ebers Papyrus from Ancient Egypt describes medical cannabis; the ancient Egyptians used hemp (cannabis) in suppositories for relieving the pain of hemorrhoids. Surviving texts from ancient India confirm that cannabis' psychoactive properties were recognized, and doctors used it for treating a variety of illnesses and ailments, including insomnia, headaches, gastrointestinal disorders, and pain, including during childbirth. The Ancient Greeks used cannabis to dress wounds and sores on their horses; in humans, dried leaves of cannabis were used to treat nosebleeds, and cannabis seeds were used to expel tapeworms. In the medieval Islamic world, Arabic physicians made use of the diuretic, antiemetic, antiepileptic, anti-inflammatory, analgesic and antipyretic properties of Cannabis sativa, and used it extensively as medication from the 8th to 18th centuries. Landrace strains Cannabis seeds may have been used for food, rituals or religious practices in ancient Europe and China. Harvesting the plant led to the spread of cannabis throughout Eurasia about 10,000 to 5,000 years ago, with further distribution to the Middle East and Africa about 2,000 to 500 years ago. Landrace strains of cannabis developed over centuries; they are cultivars of the plant that originated in one specific region. Widely cultivated strains, such as "Afghani" or "Hindu Kush", are indigenous to the Pakistan and Afghanistan regions, while "Durban Poison" is native to Africa. There are approximately 16 landrace strains of cannabis identified from Pakistan, Jamaica, Africa, Mexico, Central America and Asia.
Modern An Irish physician, William Brooke O'Shaughnessy, is credited with introducing cannabis to Western medicine. O'Shaughnessy discovered cannabis in the 1830s while living abroad in India, where he conducted numerous experiments investigating the drug's medical utility (noting in particular its analgesic and anticonvulsant effects). He returned to England with a supply of cannabis in 1842, after which its use spread through Europe and the United States. In 1845, the French physician Jacques-Joseph Moreau published a book about the use of cannabis in psychiatry. In 1850, cannabis was entered into the United States Pharmacopeia. An anecdotal report of Cannabis indica as a treatment for tetanus appeared in Scientific American in 1880. The use of cannabis in medicine began to decline by the end of the 19th century, due to difficulty in controlling dosages and the rise in popularity of synthetic and opium-derived drugs. Also, the advent of the hypodermic syringe allowed these drugs to be injected for immediate effect, in contrast to cannabis, which is not water-soluble and therefore cannot be injected. In the United States, the medical use of cannabis further declined with the passage of the Marihuana Tax Act of 1937, which imposed new regulations and fees on physicians prescribing cannabis. Cannabis was removed from the U.S. Pharmacopeia in 1941 and officially banned for any use with the passage of the Controlled Substances Act of 1970. Cannabis began to attract renewed interest as medicine in the 1970s and 1980s, in particular due to its use by cancer and AIDS patients, who reported relief from the effects of chemotherapy and wasting syndrome. In 1996, California became the first U.S. state to legalize medical cannabis in defiance of federal law. In 2001, Canada became the first country to adopt a system regulating the medical use of cannabis. Society and culture Legal status Countries that have legalized the medical use of cannabis include Argentina, Australia, Brazil, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Finland, Germany, Greece, Israel, Italy, Jamaica, Lebanon, Luxembourg, Malta, Morocco, the Netherlands, New Zealand, North Macedonia, Panama, Peru, Poland, Portugal, Rwanda, Sri Lanka, Switzerland, Thailand, the United Kingdom, and Uruguay. Other countries have more restrictive laws that allow only the use of isolated cannabinoid drugs such as Sativex or Epidiolex. Countries with the most relaxed policies include Canada, the Netherlands, Thailand, and Uruguay, where cannabis can be purchased without need for a prescription. In Mexico, the THC content of medical cannabis is limited to one percent. In the United States, the legality of medical cannabis varies by state. However, in many of these countries, access may not always be possible under the same conditions. International law Cannabis and its derivatives are subject to regulation under three United Nations drug control treaties: the 1961 Single Convention on Narcotic Drugs, the 1971 Convention on Psychotropic Substances, and the 1988 Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Cannabis and cannabis resin are classified as Schedule I drugs under the Single Convention treaty, meaning that medical use is considered "indispensable for the relief of pain and suffering" but that they are considered addictive medications with risks of abuse.
Countries have an obligation to provide access to and sufficient availability of drugs listed in Schedule I for medical use. Prior to December 2020, cannabis and cannabis resin were also included in Schedule IV, a more restrictive level of control reserved for only the most dangerous drugs, such as heroin and fentanyl. They were removed after an independent scientific assessment by the World Health Organization in 2018–2019: member nations of the UN Commission on Narcotic Drugs voted 27–25 to remove them from Schedule IV on 2 December 2020, following a World Health Organization recommendation for removal issued in January 2019. United States In the United States, the use of cannabis for medical purposes is legal in 38 states, four out of five permanently inhabited U.S. territories, and the District of Columbia. An additional 10 states have more restrictive laws allowing the use of low-THC products. Cannabis remains illegal at the federal level under the Controlled Substances Act, which classifies it as a Schedule I drug with a high potential for abuse and no accepted medical use. In December 2014, however, the Rohrabacher–Farr amendment was signed into law, prohibiting the Justice Department from prosecuting individuals acting in accordance with state medical cannabis laws. In 1985, the FDA approved two oral cannabinoids for use as medicine in the US: dronabinol (pure delta-9-THC; brand name Marinol) and nabilone (a synthetic neocannabinoid; brand name Cesamet). In the US, they are both listed as Schedule II, indicating high potential for side effects and addiction. Economics Distribution The method of obtaining medical cannabis varies by region and by legislation. In the US, most consumers grow their own or buy it from cannabis dispensaries in states where it is legal. Marijuana vending machines for selling or dispensing cannabis are in use in the United States and are planned to be used in Canada. In 2014, the startup Meadow began offering on-demand delivery of medical marijuana in the San Francisco Bay Area through their mobile app. According to a 2017 United Nations report, almost 70% of the world's medical cannabis is exported from the United Kingdom, with much of the remaining amount coming from Canada and the Netherlands. Insurance In the United States, health insurance companies may not pay for a medical marijuana prescription, as the Food and Drug Administration must approve any substance for medicinal purposes. Before this can happen, the FDA must first permit the study of the medical benefits and drawbacks of the substance, which it has not done since cannabis was placed on Schedule I of the Controlled Substances Act in 1970. Therefore, all expenses incurred in fulfilling a medical marijuana prescription will likely be paid out of pocket. However, the New Mexico Court of Appeals has ruled that workers' compensation insurance must pay for prescribed marijuana as part of the state's Medical Cannabis Program. Positions of medical organizations Medical organizations that have issued statements in support of allowing access to medical cannabis include the American Nurses Association, American Public Health Association, American Medical Student Association, National Multiple Sclerosis Society, Epilepsy Foundation, and Leukemia & Lymphoma Society. Organizations that oppose the legalization of medical cannabis include the American Academy of Pediatrics (AAP) and the American Psychiatric Association. However, the AAP also supports rescheduling for the purpose of facilitating research.
The American Medical Association and American College of Physicians do not take a position on the legalization of medical cannabis but have called for the Schedule I classification to be reviewed. The American Academy of Family Physicians and American Society of Addiction Medicine likewise take no position, but do support rescheduling to better facilitate research. The American Heart Association says that "many of the concerning health implications of cannabis include cardiovascular diseases" but that it supports rescheduling to allow "more nuanced ... marijuana legislation and regulation" and to "reflect the existing science behind cannabis". The American Cancer Society and American Psychological Association have noted the obstacles that exist for conducting research on cannabis, and have called on the federal government to better enable scientific study of the drug. Cancer Research UK say that while cannabis is being studied for therapeutic potential, "claims that there is solid "proof" that cannabis or cannabinoids can cure cancer is highly misleading to patients and their families, and builds a false picture of the state of progress in this area". Nonproprietary names There are three International Nonproprietary Names (INNs) granted for cannabinoids: two plant-derived phytocannabinoids and one neocannabinoid. Dronabinol is the INN for delta-9-THC (contrary to a common misconception, the word "dronabinol" does not refer only to synthetic delta-9-THC). Cannabidiol is also the official INN for that molecule, granted in 2017. Nabilone is the INN for a synthetic cannabinoid analog (not present in Cannabis plants). Nabiximols is the generic name (but not recognized as an INN) of a mixture of cannabidiol and dronabinol. Its most common form is an oromucosal spray derived from two strains of Cannabis sativa and containing THC and CBD, traded under the brand name Sativex. It is not approved in the United States, but is approved in several European countries, Canada, and New Zealand as of 2013. As antiemetics, these medications are usually used when conventional treatments for nausea and vomiting associated with cancer chemotherapy fail to work. Nabiximols is used for the treatment of spasticity associated with MS when other therapies have not worked, and when an initial trial demonstrates "meaningful improvement". Trials for FDA approval in the US are underway. It is also approved in several European countries for overactive bladder and vomiting. When sold under the trade name Sativex as a mouth spray, the prescribed daily dose in Sweden delivers a maximum of 32.4 mg of THC and 30 mg of CBD; mild to moderate dizziness is common during the first few weeks. Relative to inhaled consumption, the peak concentration of oral THC is delayed, and it may be difficult to determine optimal dosage because of variability in patient absorption. In 1964, Albert Lockhart and Manley West began studying the health effects of traditional cannabis use in Jamaican communities. They developed, and in 1987 gained permission to market, the pharmaceutical "Canasol", one of the first cannabis extracts. Research A 2022 review concluded that "oral, synthetic cannabis products with high THC-to-CBD ratios and sublingual, extracted cannabis products with comparable THC-to-CBD ratios may be associated with short-term improvements in chronic pain and increased risk for dizziness and sedation."
Biology and health sciences
Pain treatments
Health
175442
https://en.wikipedia.org/wiki/Chameleon
Chameleon
Chameleons or chamaeleons (family Chamaeleonidae) are a distinctive and highly specialized clade of Old World lizards, with 200 species described as of June 2015. The members of this family are best known for their distinct range of colours and their capacity for colour-shifting camouflage. The large number of species in the family exhibit considerable variability in their capacity to change colour. For some, it is more of a shift of brightness (shades of brown); for others, a plethora of colour combinations (reds, yellows, greens, blues) can be seen. Chameleons are also distinguished by their zygodactylous feet, their prehensile tail, their laterally compressed bodies, their head casques, their projectile tongues used for catching prey, their swaying gait, and, in some species, crests or horns on their brow and snout. Chameleons' eyes are independently mobile, and because of this the chameleon's brain is constantly analyzing two separate, individual images of its environment. When hunting prey, the eyes focus forward in coordination, affording the animal stereoscopic vision. Chameleons are diurnal and adapted for visual hunting of invertebrates, mostly insects, although the larger species can also catch small vertebrates. Chameleons typically are arboreal, but there are also many species that live on the ground. The arboreal species use their prehensile tail as an extra anchor point when they are moving or resting in trees or bushes; because of this, their tail is often referred to as a "fifth limb". Depending on species, they range from rainforest to desert conditions and from lowlands to highlands, with the vast majority occurring in Africa (about half of the species are restricted to Madagascar), but with a single species in southern Europe and a few across southern Asia as far east as India and Sri Lanka. They have been introduced to Hawaii and Florida. Etymology The English word chameleon is a simplified spelling of Latin chamaeleon, a borrowing of the Greek khamailéōn, a compound of khamaí "on the ground" and léōn "lion". Classification In 1986, the family Chamaeleonidae was divided into two subfamilies, Brookesiinae and Chamaeleoninae. Under this classification, Brookesiinae included the genera Brookesia and Rhampholeon, as well as the genera later split off from them (Palleon and Rieppeleon), while Chamaeleoninae included the genera Bradypodion, Calumma, Chamaeleo, Furcifer and Trioceros, as well as the genera later split off from them (Archaius, Nadzikambia and Kinyongia). Since that time, however, the validity of this subfamily designation has been the subject of much debate, and most phylogenetic studies support the notion that the pygmy chameleons of the subfamily Brookesiinae are not a monophyletic group. While some authorities previously preferred this subfamilial classification on the basis of the absence-of-evidence principle, they later abandoned the subfamilial division, no longer recognizing any subfamilies within the family Chamaeleonidae. In 2015, however, Glaw reworked the subfamilial division by placing only the genera Brookesia and Palleon within the Brookesiinae subfamily, with all other genera being placed in Chamaeleoninae. Change of color Some chameleon species are able to change their skin coloration. Different chameleon species are able to vary their colouration and pattern through combinations of pink, blue, red, orange, green, black, brown, light blue, yellow, turquoise, and purple.
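As the following paragraphs explain, the colour shift is produced by tuning the spacing of a lattice of guanine nanocrystals in the skin. A rough feel for why spacing controls colour can be had by treating the lattice as an idealized Bragg reflector. The sketch below is only an illustration of that relationship; the spacing values, the effective refractive index, and the Bragg treatment itself are assumptions chosen for demonstration, not figures from the text.

```python
# Idealized Bragg reflection at normal incidence: the wavelength most
# strongly reflected grows linearly with the spacing of the crystal lattice.

def bragg_peak_nm(spacing_nm: float, n_eff: float = 1.4, order: int = 1) -> float:
    """First-order Bragg condition: lambda = 2 * n_eff * d / m."""
    return 2.0 * n_eff * spacing_nm / order

# Hypothetical spacings for a "relaxed" (tight) vs "excited" (stretched) lattice.
for state, d_nm in [("relaxed", 160.0), ("excited", 230.0)]:
    print(f"{state:7s}: spacing {d_nm:.0f} nm -> peak reflection ~{bragg_peak_nm(d_nm):.0f} nm")
# relaxed ~448 nm (blue); excited ~644 nm (red): wider spacing, longer wavelength.
```

This reproduces the qualitative behaviour described below: increasing the distance between nanocrystals shifts the reflected peak from blue-green toward yellow, orange, and red.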
Chameleon skin has a superficial layer which contains pigments, and under that layer are cells with very small (nanoscale) guanine crystals. Chameleons change colour by "actively tuning the photonic response of a lattice of small guanine nanocrystals in the s-iridophores". This tuning, by an unknown molecular mechanism, changes the wavelength of light reflected off the crystals, which changes the colour of the skin. The colour change was duplicated ex vivo by modifying the osmolarity of pieces of white skin. Colour change in chameleons has functions in camouflage, but most commonly in social signaling and in reactions to temperature and other conditions. The relative importance of these functions varies with the circumstances, as well as the species. Colour change signals a chameleon's physiological condition and intentions to other chameleons. Because chameleons are ectothermic, another reason they change colour is to regulate their body temperature, either to a darker colour to absorb light and heat and so raise their temperature, or to a lighter colour to reflect light and heat, thereby stabilizing or lowering their body temperature. Chameleons tend to show brighter colours when displaying aggression to other chameleons, and darker colours when they submit or "give up". Most chameleon genera (the exceptions are Chamaeleo, Rhampholeon and Rieppeleon) have blue fluorescence in a species-specific pattern in their skull tubercles, and in Brookesia there is also some fluorescence in tubercles on the body. The fluorescence is derived from bones that are covered only in very thin skin, and it possibly serves a signaling role, especially in shaded habitats. Some species, such as Smith's dwarf chameleon and several others in the genus Bradypodion, adjust their colours for camouflage depending on the vision of the specific predator species (for example, bird or snake) by which they are being threatened. In the introduced Hawaiian population of Jackson's chameleon, conspicuous colour changes used for communication between chameleons have increased, whereas anti-predator camouflage colour changes have decreased relative to the native source population in Kenya, where there are more predators. Chameleons have two superimposed layers within their skin that control their colour and thermoregulation. The top layer contains a lattice of guanine nanocrystals, and by exciting this lattice the spacing between the nanocrystals can be manipulated, which in turn affects which wavelengths of light are reflected and which are absorbed. Exciting the lattice increases the distance between the nanocrystals, and the skin reflects longer wavelengths of light. Thus, in a relaxed state the crystals reflect blue and green, but in an excited state the longer wavelengths, such as yellow, orange, green, and red, are reflected. The skin of a chameleon also contains some yellow pigments, which, combined with the blue reflected by a relaxed crystal lattice, result in the characteristic green colour common to many chameleons in their relaxed state. Chameleon colour palettes have evolved in response to their environments. Chameleons living in the forest have a more defined and colourful palette compared to those living in the desert or savanna, which have a more basic brown and charred palette. Evolution The oldest described chameleon is Anqingosaurus brevicephalus from the Middle Paleocene (about 58.7–61.7 mya) of China.
Other chameleon fossils include Chamaeleo caroliquarti from the Lower Miocene (about 13–23 mya) of the Czech Republic and Germany, and Chamaeleo intermedius from the Upper Miocene (about 5–13 mya) of Kenya. The chameleons are probably far older than that, perhaps sharing a common ancestor with iguanids and agamids more than 100 mya (agamids being more closely related). Since fossils have been found in Africa, Europe, and Asia, chameleons were certainly once more widespread than they are today. Although nearly half of all chameleon species today live in Madagascar, this offers no basis for speculation that chameleons might originate from there. In fact, it has recently been shown that chameleons most likely originated in mainland Africa. It appears there were two distinct oceanic migrations from the mainland to Madagascar. The diverse speciation of chameleons has been theorized to directly reflect the increase in open habitats (savannah, grassland, and heathland) that accompanied the Oligocene period. Monophyly of the family is supported by several studies. Daza et al. (2016) described a small (10.6 mm in snout-vent length), probably neonatal lizard preserved in Cretaceous (Albian-Cenomanian boundary) amber from Myanmar. The authors noted that the lizard has a "short and wide skull, large orbits, elongated and robust lingual process, frontal with parallel margins, incipient prefrontal boss, reduced vomers, absent retroarticular process, low presacral vertebral count (between 15 and 17) and extremely short, curled tail"; the authors considered these traits to be indicative of the lizard's affiliation with Chamaeleonidae. The phylogenetic analysis conducted by the authors indicated that the lizard was a stem-chamaeleonid. However, Matsumoto & Evans (2018) reinterpreted this specimen as an albanerpetontid amphibian. The specimen was given the name Yaksha perettii in 2020 and was noted to have several convergently chameleon-like features, including adaptations for ballistic feeding. While the exact evolutionary history of colour change in chameleons is still unknown, one aspect of it has already been conclusively studied: the effects of signal efficacy. Signal efficacy, or how well a signal can be seen against its background, has been shown to correlate directly with the spectral qualities of chameleon displays. Dwarf chameleons, the group studied, occupy a wide variety of habitats from forests to grasslands to shrubbery. It was demonstrated that chameleons in brighter areas tended to present brighter signals, while chameleons in darker areas tended to present signals contrasting relatively more with their backgrounds. This finding suggests that signal efficacy (and thus habitat) has affected the evolution of chameleon signaling. Stuart-Fox et al. note that it makes sense that selection for crypsis is not as important as selection for signal efficacy, because the signals are shown only briefly; at other times, chameleons display muted, cryptic colours. Description Chameleons vary greatly in size and body structure, with maximum total lengths ranging from that of the male Brookesia nana (one of the world's smallest reptiles) to that of the male Furcifer oustaleti. Many have head or facial ornamentation, such as nasal protrusions or horn-like projections in the case of Trioceros jacksonii, or large crests on top of their heads, like Chamaeleo calyptratus.
Many species are sexually dimorphic, and males are typically much more ornamented than the females. The feet of chameleons are highly adapted to arboreal locomotion, and species such as Chamaeleo namaquensis that have secondarily adopted a terrestrial habit have retained the same foot morphology with little modification. On each foot, the five clearly distinguished toes are grouped into two fascicles. The toes in each fascicle are bound into a flattened group of either two or three, giving each foot a tongs-like appearance. On the front feet, the outer, lateral, group contains two toes, whereas the inner, medial, group contains three. On the rear feet, this arrangement is reversed, the medial group containing two toes and the lateral group three. These specialized feet allow chameleons to grip tightly onto narrow or rough branches. Furthermore, each toe is equipped with a sharp claw to afford a grip on surfaces such as bark when climbing. It is common to refer to the feet of chameleons as didactyl or zygodactyl, though neither term is fully satisfactory, both being used in describing quite different feet, such as the zygodactyl feet of parrots or the didactyl feet of sloths or ostriches, none of which is significantly like chameleon feet. Although "zygodactyl" is reasonably descriptive of chameleon foot anatomy, their foot structure does not resemble that of parrots, to which the term was first applied. As for didactyly, chameleons visibly have five toes on each foot, not two. Some chameleons have a crest of small spikes extending along the spine from the proximal part of the tail to the neck; both the extent and size of the spikes vary between species and individuals. These spikes help break up the definitive outline of the chameleon, which aids it when trying to blend into a background. Senses Chameleons have the most distinctive eyes of any reptile. The upper and lower eyelids are joined, with only a pinhole large enough for the pupil to see through. Each eye can pivot and focus independently, allowing the chameleon to observe two different objects simultaneously. This gives them a full 360-degree arc of vision around their bodies. Prey is located using monocular depth perception, not stereopsis. Chameleons have the highest magnification (per size) of any vertebrate, with the highest density of cones in the retina. Like snakes, chameleons do not have an outer or a middle ear, so there is neither an ear opening nor an eardrum. However, chameleons are not deaf: they can detect sound frequencies in the range of 200–600 Hz. Chameleons can see in both visible and ultraviolet light. Chameleons exposed to ultraviolet light show increased social behavior and activity levels and are more inclined to bask, feed, and reproduce, as ultraviolet light has a positive effect on the pineal gland. Feeding All chameleons are primarily insectivores that feed by ballistically projecting their long tongues from their mouths to capture prey located some distance away. While the chameleons' tongues are typically thought to be one and a half to two times the length of their bodies (their length excluding the tail), smaller chameleons (both smaller species and smaller individuals of the same species) have recently been found to have proportionately larger tongue apparatuses than their larger counterparts.
Thus, smaller chameleons are able to project their tongues greater distances than the larger chameleons that are the subject of most studies and tongue-length estimates, and can project their tongues more than twice their body length. The tongue apparatus consists of highly modified hyoid bones, tongue muscles, and collagenous elements. The hyoid bone has an elongated, parallel-sided projection, called the entoglossal process, over which a tubular muscle, the accelerator muscle, sits. The accelerator muscle contracts around the entoglossal process and is responsible for creating the work to power tongue projection, both directly and through the loading of collagenous elements located between the entoglossal process and the accelerator muscle. The tongue retractor muscle, the hyoglossus, connects the hyoid and accelerator muscle, and is responsible for drawing the tongue back into the mouth following tongue projection. Tongue projection occurs at extremely high performance, reaching the prey in as little as 0.07 seconds, having been launched at accelerations exceeding 41 g (a rough kinematic check of these figures is sketched at the end of this section). The power with which the tongue is launched, known to exceed 3000 W/kg, exceeds that which muscle is able to produce, indicating the presence of an elastic power amplifier. The recoil of elastic elements in the tongue apparatus is thus responsible for large percentages of the overall tongue projection performance. One consequence of incorporating an elastic recoil mechanism into tongue projection is that projection is relatively insensitive to temperature, whereas tongue retraction, powered by muscle contraction alone, is heavily temperature-sensitive. While other ectothermic animals become sluggish as their body temperatures decline, due to a reduction in the contractile velocity of their muscles, chameleons are able to project their tongues at high performance even at low body temperatures. The thermal sensitivity of tongue retraction, however, is not a problem, as chameleons have a very effective mechanism of holding onto their prey once the tongue has made contact, involving surface phenomena such as wet adhesion and interlocking, as well as suction. The thermal insensitivity of tongue projection thus enables chameleons to feed effectively on cold mornings, before they are able to behaviorally elevate their body temperatures through thermoregulation, when other sympatric lizard species are still inactive, likely temporarily expanding their thermal niche as a result. Bones Certain species of chameleons have bones that glow under ultraviolet light, a phenomenon known as biogenic fluorescence. Some 31 different species of Calumma chameleons, all native to Madagascar, displayed this fluorescence in CT scans. The bones emitted a bright blue glow that could even shine through the chameleon's four layers of skin. The face was found to have a different glow, appearing as dots, corresponding to the tubercles on the facial bones. The glow results from proteins, pigments, chitin, and other materials that make up a chameleon's skeleton, possibly giving chameleons a secondary signaling system that does not interfere with their colour-changing ability, and may have evolved through sexual selection.
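The tongue-projection figures quoted in the feeding discussion above (prey reached in as little as 0.07 seconds, launch accelerations exceeding 41 g) can be checked against each other with elementary kinematics, as in the sketch below. The 20 cm body length, the strike distance of two body lengths, and the constant-acceleration model are all illustrative assumptions; real strikes involve a brief elastic-powered acceleration phase followed by coasting, so the end speed printed here is an overestimate.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def strike_time_s(distance_m: float, accel_g: float) -> float:
    """Time to cover a distance from rest at constant acceleration: d = a*t^2/2."""
    a = accel_g * G
    return math.sqrt(2.0 * distance_m / a)

# Hypothetical mid-sized chameleon: 0.20 m body, prey two body lengths away.
distance_m = 2 * 0.20
t = strike_time_s(distance_m, 41.0)
v_end = 41.0 * G * t  # speed if the tongue accelerated the whole way (upper bound)
print(f"time to prey: {t*1000:.0f} ms, end speed: {v_end:.1f} m/s")
# About 45 ms, comfortably inside the quoted 0.07 s window.
```

Under these assumptions the quoted acceleration and strike time are mutually consistent, which is all the sketch is meant to show.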
Distribution and habitat
Chameleons primarily live in the mainland of sub-Saharan Africa and on the island of Madagascar, although a few species live in northern Africa, southern Europe (Portugal, Spain, Italy, Greece, Cyprus and Malta), the Middle East, southeast Pakistan, India, Sri Lanka, and several smaller islands in the western Indian Ocean. Introduced, non-native populations are found in Hawaii and Florida. Chameleons are found only in tropical and subtropical regions and inhabit all kinds of lowland and mountain forests, woodlands, shrublands, savannas, and sometimes deserts, but each species tends to be restricted to one or a few habitat types. The typical chameleons from the subfamily Chamaeleoninae are arboreal, usually living in trees or bushes, although a few (notably the Namaqua chameleon) are partially or largely terrestrial. Species of the genus Brookesia, which comprises the majority of the species in the subfamily Brookesiinae, live low in vegetation or on the ground among leaf litter. Many chameleon species have small distributions and are considered threatened. Declining chameleon numbers are mostly due to habitat loss.
Reproduction
Most chameleons are oviparous, but all Bradypodion species and many Trioceros species are ovoviviparous (although some biologists prefer to avoid the term ovoviviparous because of inconsistencies with its use in some animal groups, instead just using viviparous). The oviparous species lay eggs three to six weeks after copulation. The female digs a hole—its depth depending on the species—and deposits her eggs. Clutch sizes vary greatly with species. Small Brookesia species may lay only two to four eggs, while larger species lay far bigger clutches: veiled chameleons (Chamaeleo calyptratus) have been known to lay 20–200 eggs and panther chameleons 10–40. Clutch sizes can also vary greatly within the same species. Eggs generally hatch after four to 12 months, again depending on the species. The eggs of Parson's chameleon (Calumma parsoni) typically take 400 to 660 days to hatch. Chameleons lay flexible-shelled eggs which are affected by environmental characteristics during incubation. Egg mass is the most important factor differentiating which chameleon eggs survive incubation; the increase in egg mass depends on temperature and water potential. Understanding the dynamics of water potential in chameleon eggs requires considering the pressure exerted on the eggshell, which plays an important role in the eggs' water relations throughout the incubation period. The ovoviviparous species, such as Jackson's chameleon (Trioceros jacksonii), have a five- to seven-month gestation period. Each young chameleon is born within the sticky transparent membrane of its yolk sac. The mother presses each egg onto a branch, where it sticks. The membrane bursts and the newly hatched chameleon frees itself and climbs away to hunt for itself and hide from predators. The female can have up to 30 live young from one gestation.
Diet
Chameleons generally eat insects, but larger species, such as the common chameleon, may also take other lizards and young birds. The range of diets can be seen from the following examples: The veiled chameleon, Chamaeleo calyptratus from Arabia, is insectivorous, but eats leaves when other sources of water are not available. It can be maintained on a diet of crickets, eating as many as 15–50 large crickets a day.
Jackson's chameleon (Trioceros jacksonii) from Kenya and northern Tanzania eats a wide variety of small animals including ants, butterflies, caterpillars, snails, worms, lizards, geckos, amphibians, and other chameleons, as well as plant material, such as leaves, tender shoots, and berries. It can be maintained on a mixed diet including kale, dandelion leaves, lettuce, bananas, tomatoes, apples, crickets, and waxworms. The common chameleon of Europe, North Africa, and the Near East, Chamaeleo chamaeleon, mainly eats wasps and mantises; such arthropods form over three-quarters of its diet. Some experts advise that the common chameleon should not be fed exclusively on crickets; these should make up no more than half the diet, with the rest a mixture of waxworms, earthworms, grasshoppers, flies, and plant materials such as green leaves, oats, and fruit. Some chameleons, like the panther chameleon of Madagascar, regulate their vitamin D3 levels, of which their insect diet is a poor source, by exposing themselves to sunlight, since its UV component increases internal production.
Anti-predator adaptations
Chameleons are preyed upon by a variety of other animals. Birds and snakes are the most important predators of adult chameleons. Invertebrates, especially ants, exert high predation pressure on chameleon eggs and juveniles. Chameleons are unlikely to be able to flee from predators and rely on crypsis as their primary defense. Chameleons can change both their colours and their patterns (to varying extents) to resemble their surroundings or disrupt the body outline and remain hidden from a potential enemy's sight. Only when detected do chameleons actively defend themselves. They adopt a defensive body posture, present an attacker with a laterally flattened body to appear larger, warn with an open mouth, and, if needed, use feet and jaws to fight back. Vocalization is sometimes incorporated into threat displays.
Parasites
Chameleons are parasitized by nematode worms, including threadworms (Filarioidea). Threadworms can be transmitted by biting arthropods such as ticks and mosquitoes. Other roundworms are transmitted through food contaminated with roundworm eggs; the larvae burrow through the wall of the intestine into the bloodstream. Chameleons are subject to several protozoan parasites, such as Plasmodium, which causes malaria, Trypanosoma, which causes sleeping sickness, and Leishmania, which causes leishmaniasis. Chameleons are also subject to parasitism by coccidia, including species of the genera Choleoeimeria, Eimeria, and Isospora.
As pets
Chameleons are popular reptile pets, mostly imported from African countries like Madagascar, Tanzania, and Togo. The most common in the trade are the Senegal chameleon (Chamaeleo senegalensis), the Yemen or veiled chameleon (Chamaeleo calyptratus), the panther chameleon (Furcifer pardalis), and Jackson's chameleon (Trioceros jacksonii). Other chameleons seen in captivity (albeit on an irregular basis) include such species as the carpet chameleon (Furcifer lateralis), Meller's chameleon (Trioceros melleri), Parson's chameleon (Calumma parsonii), and several species of pygmy and leaf-tailed chameleons, mostly of the genera Brookesia, Rhampholeon, or Rieppeleon. These are among the most sensitive reptiles one can own, requiring specialized attention and care. The U.S. has been the main importer of chameleons since the early 1980s, accounting for 69% of African reptile exports.
However, imports have declined sharply, owing to tougher regulations meant to protect species from being taken from the wild, and because several species have become invasive in places like Florida. Chameleons have nonetheless remained popular, which may be due to captive breeding in the U.S.: breeding has increased to the point that domestic demand can be met, and the country has even become a major exporter. They are so popular in the U.S. that, even though Florida hosts six invasive chameleon species introduced through the pet trade, reptile hobbyists there search for the animals to keep as pets or to breed and sell, with some specimens selling for up to a thousand dollars.
Historical understandings
Aristotle (4th century BC) describes chameleons in his History of Animals. Pliny the Elder (1st century AD) also discusses chameleons in his Natural History, noting their ability to change colour for camouflage. The chameleon was featured in Conrad Gessner's Historia animalium (1563), copied from De aquatilibus (1553) by Pierre Belon. In Shakespeare's Hamlet, the eponymous Prince says "Excellent, i' faith, of the chameleon's dish. I eat the air, promise-crammed." This refers to the Elizabethan belief that chameleons lived on nothing but the air.
Biology and health sciences
Reptiles
null
175470
https://en.wikipedia.org/wiki/Magnetic%20monopole
Magnetic monopole
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole would have a net north or south "magnetic charge". Modern interest in the concept stems from particle theories, notably the grand unified and superstring theories, which predict their existence. The known elementary particles that have electric charge are electric monopoles. Magnetism in bar magnets and electromagnets is not caused by magnetic monopoles, and indeed, there is no known experimental or observational evidence that magnetic monopoles exist. Some condensed matter systems contain effective (non-isolated) magnetic monopole quasi-particles, or contain phenomena that are mathematically analogous to magnetic monopoles.
Historical background
Early science and classical physics
Many early scientists attributed the magnetism of lodestones to two different "magnetic fluids" ("effluvia"), a north-pole fluid at one end and a south-pole fluid at the other, which attracted and repelled each other in analogy to positive and negative electric charge. However, an improved understanding of electromagnetism in the nineteenth century showed that the magnetism of lodestones was properly explained not by magnetic monopole fluids, but rather by a combination of electric currents, the electron magnetic moment, and the magnetic moments of other particles. Gauss's law for magnetism, one of Maxwell's equations, is the mathematical statement that magnetic monopoles do not exist. Nevertheless, Pierre Curie pointed out in 1894 that magnetic monopoles could conceivably exist, despite not having been seen so far.
Quantum mechanics
The quantum theory of magnetic charge started with a paper by the physicist Paul Dirac in 1931. In this paper, Dirac showed that if any magnetic monopoles exist in the universe, then all electric charge in the universe must be quantized (the Dirac quantization condition). The electric charge is, in fact, quantized, which is consistent with (but does not prove) the existence of monopoles. Since Dirac's paper, several systematic monopole searches have been performed. Experiments in 1975 and 1982 produced candidate events that were initially interpreted as monopoles, but are now regarded as inconclusive. Therefore, whether monopoles exist remains an open question. Further advances in theoretical particle physics, particularly developments in grand unified theories and quantum gravity, have led to more compelling arguments (detailed below) that monopoles do exist. Joseph Polchinski, a string theorist, described the existence of monopoles as "one of the safest bets that one can make about physics not yet seen". These theories are not necessarily inconsistent with the experimental evidence. In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to create in particle accelerators (see below), and also too rare in the Universe to enter a particle detector with much probability. Some condensed matter systems contain a structure superficially similar to a magnetic monopole, known as a flux tube. The ends of a flux tube form a magnetic dipole, but since they move independently, they can be treated for many purposes as independent magnetic monopole quasiparticles.
Since 2009, numerous news reports from the popular media have incorrectly described these systems as the long-awaited discovery of the magnetic monopoles, but the two phenomena are only superficially related to one another. These condensed-matter systems remain an area of active research. (See below.)
Poles and magnetism in ordinary matter
All matter isolated to date, including every atom on the periodic table and every particle in the Standard Model, has zero magnetic monopole charge. Therefore, the ordinary phenomena of magnetism and magnets do not derive from magnetic monopoles. Instead, magnetism in ordinary matter is due to two sources. First, electric currents create magnetic fields according to Ampère's law. Second, many elementary particles have an intrinsic magnetic moment, the most important of which is the electron magnetic dipole moment, which is related to its quantum-mechanical spin. Mathematically, the magnetic field of an object is often described in terms of a multipole expansion. This is an expression of the field as the sum of component fields with specific mathematical forms. The first term in the expansion is called the monopole term, the second is called dipole, then quadrupole, then octupole, and so on. Any of these terms can be present in the multipole expansion of an electric field, for example. However, in the multipole expansion of a magnetic field, the "monopole" term is always exactly zero (for ordinary matter). A magnetic monopole, if it exists, would have the defining property of producing a magnetic field whose monopole term is non-zero. A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The term dipole means two poles, corresponding to the fact that a dipole magnet typically contains a north pole on one side and a south pole on the other side. This is analogous to an electric dipole, which has positive charge on one side and negative charge on the other. However, an electric dipole and magnetic dipole are fundamentally quite different. In an electric dipole made of ordinary matter, the positive charge is made of protons and the negative charge is made of electrons, but a magnetic dipole does not have different types of matter creating the north pole and south pole. Instead, the two magnetic poles arise simultaneously from the aggregate effect of all the currents and intrinsic moments throughout the magnet. Because of this, the two poles of a magnetic dipole must always have equal and opposite strength, and the two poles cannot be separated from each other.
Maxwell's equations
Maxwell's equations of electromagnetism relate the electric and magnetic fields to each other and to the distribution of electric charge and current. The standard equations provide for electric charge, but they posit zero magnetic charge and current. Except for this constraint, the equations are symmetric under the interchange of the electric and magnetic fields. Maxwell's equations are symmetric when the charge and electric current density are zero everywhere, as in vacuum. Maxwell's equations can also be written in a fully symmetric form if one allows for "magnetic charge" analogous to electric charge. With the inclusion of a variable for the density of magnetic charge, say ρ_m, there is also a "magnetic current density" variable in the equations, j_m.
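Before writing out the extended equations, a small numerical aside on the multipole claim above (an illustrative sketch added in this revision, not from the source): integrating the outward flux of a point dipole's field over a closed sphere gives zero, whereas a hypothetical monopole field gives 4π – exactly the distinction between a vanishing and a non-vanishing monopole term.

# Numerically integrate the outward magnetic flux through the unit sphere for
# (a) a point dipole and (b) a hypothetical monopole, in units where all
# prefactors are 1. Only the monopole has a nonzero "monopole term".
import numpy as np

def flux_through_unit_sphere(B_field, n_theta=400, n_phi=400):
    """Midpoint-rule surface integral of B . n over the unit sphere."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    n = np.stack([np.sin(th) * np.cos(ph),      # outward unit normal
                  np.sin(th) * np.sin(ph),      # (= position vector on the
                  np.cos(th)])                  #  unit sphere)
    dA = np.sin(th) * (np.pi / n_theta) * (2 * np.pi / n_phi)
    return float(np.sum(np.sum(B_field(n) * n, axis=0) * dA))

def dipole_B(r):
    """Field of a point dipole m = z-hat, evaluated on the unit sphere."""
    m = np.array([0.0, 0.0, 1.0]).reshape(3, 1, 1)
    return 3 * np.sum(m * r, axis=0) * r - m

def monopole_B(r):
    """Hypothetical unit monopole: B = r-hat / r^2 (= r-hat on unit sphere)."""
    return r

print("dipole flux:  ", flux_through_unit_sphere(dipole_B))    # ~ 0
print("monopole flux:", flux_through_unit_sphere(monopole_B))  # ~ 4*pi = 12.566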
If magnetic charge does not exist – or if it exists but is absent in a region of space – then the new terms in Maxwell's equations are all zero, and the extended equations reduce to the conventional equations of electromagnetism such as ∇ ⋅ B = 0 (where ∇ ⋅ is the divergence operator and B is the magnetic flux density).
In Gaussian cgs units
The extended Maxwell's equations are as follows, in CGS-Gaussian units:
∇ ⋅ E = 4πρ_e
∇ ⋅ B = 4πρ_m
−∇ × E = (1/c) ∂B/∂t + (4π/c) j_m
∇ × B = (1/c) ∂E/∂t + (4π/c) j_e
with the Lorentz force on a particle carrying both electric and magnetic charge
F = q_e (E + (v/c) × B) + q_m (B − (v/c) × E).
In these equations ρ_m is the magnetic charge density, j_m is the magnetic current density, and q_m is the magnetic charge of a test particle, all defined analogously to the related quantities of electric charge and current; v is the particle's velocity and c is the speed of light. For all other definitions and details, see Maxwell's equations. For the equations in nondimensionalized form, remove the factors of c.
In SI units
In the International System of Quantities used with the SI, there are two conventions for defining magnetic charge q_m, each with different units: weber (Wb) and ampere-meter (A⋅m). The conversion between them is q_m[Wb] = μ₀ q_m[A⋅m], since the units are Wb = H⋅A = (H/m)(A⋅m), where H is the henry – the SI unit of inductance. In the weber convention, Maxwell's equations then take the following forms (using the same notation above; in the ampere-meter convention, replace ρ_m with μ₀ρ_m and j_m with μ₀j_m):
∇ ⋅ E = ρ_e/ε₀
∇ ⋅ B = ρ_m
−∇ × E = ∂B/∂t + j_m
∇ × B = μ₀ε₀ ∂E/∂t + μ₀ j_e
Potential formulation
Maxwell's equations can also be expressed in terms of potentials. With magnetic charge present, however, the magnetic field is no longer divergence-free, so it cannot be written globally as the curl of a single vector potential; either a second potential must be introduced or the usual potential must be defined only patch-wise.
Tensor formulation
Maxwell's equations in the language of tensors make Lorentz covariance clear. The electromagnetic tensor F^{μν} and its Hodge dual (⋆F)^{μν} = (1/2) ε^{μνλσ} F_{λσ} are antisymmetric tensors, and the electric and magnetic four-currents are J_e^ν = (cρ_e, j_e) and J_m^ν = (cρ_m, j_m); the signature of the Minkowski metric is (+ − − −). The generalized equations are (in Gaussian units):
∂_μ F^{μν} = (4π/c) J_e^ν
∂_μ (⋆F)^{μν} = (4π/c) J_m^ν
where ε^{μνλσ} is the Levi-Civita symbol.
Duality transformation
The generalized Maxwell's equations possess a certain symmetry, called a duality transformation. One can choose any real angle ξ, and simultaneously change the fields and charges everywhere in the universe as follows (in Gaussian units):
E = E′ cos ξ + B′ sin ξ,  ρ_e = ρ′_e cos ξ + ρ′_m sin ξ
B = −E′ sin ξ + B′ cos ξ,  ρ_m = −ρ′_e sin ξ + ρ′_m cos ξ
with the current densities j_e and j_m transforming in the same way as the corresponding charge densities, where the primed quantities are the charges and fields before the transformation, and the unprimed quantities are after the transformation. The fields and charges after this transformation still obey the same Maxwell's equations. Because of the duality transformation, one cannot uniquely decide whether a particle has an electric charge, a magnetic charge, or both, just by observing its behavior and comparing that to Maxwell's equations. For example, it is merely a convention, not a requirement of Maxwell's equations, that electrons have electric charge but not magnetic charge; after a transformation, it would be the other way around. The key empirical fact is that all particles ever observed have the same ratio of magnetic charge to electric charge. Duality transformations can change the ratio to any arbitrary numerical value, but cannot change that all particles have the same ratio. Since this is the case, a duality transformation can be made that sets this ratio at zero, so that all particles have no magnetic charge. This choice underlies the "conventional" definitions of electricity and magnetism.
Dirac's quantization
One of the defining advances in quantum theory was Paul Dirac's work on developing a relativistic quantum electromagnetism. Before his formulation, the presence of electric charge was simply inserted into the equations of quantum mechanics (QM), but in 1931 Dirac showed that a discrete charge is implied by QM. That is to say, we can maintain the form of Maxwell's equations and still have magnetic charges.
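Before turning to Dirac's argument, the duality rotation written above can be sketched numerically (an illustration for this revision; the particle charges and field values are invented for the example). If every particle has the same ratio of magnetic to electric charge, a single rotation angle ξ sends all magnetic charges to zero:

# Duality rotation from the equations above: fields and (electric, magnetic)
# charges rotate by the same angle xi. Two particles with the same qm/qe
# ratio both end up purely electric after one suitably chosen rotation.
import numpy as np

def duality_rotate(E, B, qe, qm, xi):
    """Return fields and charges after a duality rotation by angle xi."""
    E_new = np.cos(xi) * E + np.sin(xi) * B
    B_new = -np.sin(xi) * E + np.cos(xi) * B
    qe_new = np.cos(xi) * qe + np.sin(xi) * qm
    qm_new = -np.sin(xi) * qe + np.cos(xi) * qm
    return E_new, B_new, qe_new, qm_new

E = np.array([1.0, 0.0, 0.0])            # example field values (arbitrary)
B = np.array([0.0, 0.0, 2.0])
particles = [(1.0, 0.5), (-2.0, -1.0)]   # (qe, qm) pairs with common ratio 0.5
xi = np.arctan2(0.5, 1.0)                # angle that zeroes the magnetic charge

for qe, qm in particles:
    _, _, qe2, qm2 = duality_rotate(E, B, qe, qm, xi)
    print(f"(qe={qe:+.1f}, qm={qm:+.1f}) -> (qe'={qe2:+.3f}, qm'={qm2:+.3f})")
# Both particles come out with qm' = 0: the conventional "all electric" choice.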
Consider a system consisting of a single stationary electric monopole (an electron, say) and a single stationary magnetic monopole, which would not exert any forces on each other. Classically, the electromagnetic field surrounding them has a momentum density given by the Poynting vector, and it also has a total angular momentum, which is proportional to the product q_e q_m, and is independent of the distance between them. Quantum mechanics dictates, however, that angular momentum is quantized as a multiple of ℏ, so therefore the product q_e q_m must also be quantized. This means that if even a single magnetic monopole existed in the universe, and the form of Maxwell's equations is valid, all electric charges would then be quantized. Although it would be possible simply to integrate over all space to find the total angular momentum in the above example, Dirac took a different approach. This led him to new ideas. He considered a point-like magnetic charge whose magnetic field behaves as q_m / r² and is directed in the radial direction, located at the origin. Because the divergence of B is equal to zero everywhere except for the locus of the magnetic monopole at r = 0, one can locally define the vector potential A such that the curl of the vector potential equals the magnetic field B. However, the vector potential cannot be defined globally precisely because the divergence of the magnetic field is proportional to the Dirac delta function at the origin. We must define one set of functions for the vector potential on the "northern hemisphere" (the half-space above the particle), and another set of functions for the "southern hemisphere". These two vector potentials are matched at the "equator" (the plane through the particle), and they differ by a gauge transformation. The wave function of an electrically charged particle (a "probe charge") that orbits the "equator" generally changes by a phase, much like in the Aharonov–Bohm effect. This phase is proportional to the electric charge of the probe, as well as to the magnetic charge of the source. Dirac was originally considering an electron whose wave function is described by the Dirac equation. Because the electron returns to the same point after the full trip around the equator, the phase of its wave function must be unchanged, which implies that the phase added to the wave function must be a multiple of 2π. This is known as the Dirac quantization condition. In various units, this condition can be expressed as:
{| class="wikitable"
|-
! Units
! Condition
|-
| SI units (weber convention)
| q_e q_m / (2πℏ) ∈ ℤ
|-
| SI units (ampere-meter convention)
| q_e q_m / (2πε₀ℏc²) ∈ ℤ
|-
| Gaussian-cgs units
| 2 q_e q_m / (ℏc) ∈ ℤ
|-
|}
where ε₀ is the vacuum permittivity, ℏ is the reduced Planck constant, c is the speed of light, and ℤ is the set of integers. The hypothetical existence of a magnetic monopole would imply that the electric charge must be quantized in certain units; also, the existence of the electric charges implies that the magnetic charges of the hypothetical magnetic monopoles, if they exist, must be quantized in units inversely proportional to the elementary electric charge. At the time it was not clear if such a thing existed, or even had to. After all, another theory could come along that would explain charge quantization without need for the monopole. The concept remained something of a curiosity. However, in the time since the publication of this seminal work, no other widely accepted explanation of charge quantization has appeared.
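Putting numbers to the weber-convention condition in the table above (an illustrative sketch using standard constants, not part of the source text): the smallest allowed magnetic charge is q_m = 2πℏ/e = h/e, which is also the flux change that a single Dirac monopole would produce in a loop it passes through – exactly twice the superconducting flux quantum, a fact exploited by the SQUID-based searches described later.

# Minimal Dirac magnetic charge from q_e * q_m / (2*pi*hbar) = 1 with
# q_e = e (SI, weber convention), and comparison with the superconducting
# flux quantum Phi_0 = h / (2e).
import math

hbar = 1.054571817e-34        # reduced Planck constant, J*s
h = 2 * math.pi * hbar        # Planck constant, J*s
e = 1.602176634e-19           # elementary charge, C
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, N/A^2 (to ~1e-10 accuracy)

qm_weber = 2 * math.pi * hbar / e     # minimal magnetic charge, Wb
qm_ampere_meter = qm_weber / mu0      # the same charge in the A*m convention
phi0 = h / (2 * e)                    # superconducting flux quantum, Wb

print(f"q_m (weber convention): {qm_weber:.4e} Wb")        # ~4.136e-15 Wb
print(f"q_m (A*m convention):   {qm_ampere_meter:.4e} A*m")
print(f"q_m / Phi_0 = {qm_weber / phi0:.1f}")              # -> 2.0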
(The concept of local gauge invariance—see Gauge theory—provides a natural explanation of charge quantization, without invoking the need for magnetic monopoles; but only if the U(1) gauge group is compact, in which case we have magnetic monopoles anyway.) If we maximally extend the definition of the vector potential for the southern hemisphere, it is defined everywhere except for a semi-infinite line stretched from the origin in the direction towards the northern pole. This semi-infinite line is called the Dirac string and its effect on the wave function is analogous to the effect of the solenoid in the Aharonov–Bohm effect. The quantization condition comes from the requirement that the phases around the Dirac string are trivial, which means that the Dirac string must be unphysical. The Dirac string is merely an artifact of the coordinate chart used and should not be taken seriously. The Dirac monopole is a singular solution of Maxwell's equation (because it requires removing the worldline from spacetime); in more sophisticated theories, it is superseded by a smooth solution such as the 't Hooft–Polyakov monopole.
Topological interpretation
Dirac string
A gauge theory like electromagnetism is defined by a gauge field, which associates a group element to each path in spacetime. For infinitesimal paths, the group element is close to the identity, while for longer paths the group element is the successive product of the infinitesimal group elements along the way. In electrodynamics, the group is U(1), unit complex numbers under multiplication. For infinitesimal paths, the group element is 1 + iq A_μ dx^μ, which implies that for finite paths parametrized by s, the group element is the ordered product:
U = ∏_s (1 + iq A_μ (dx^μ/ds) ds) = exp(iq ∫ A ⋅ dx).
The map from paths to group elements is called the Wilson loop or the holonomy, and for a U(1) gauge group it is the phase factor which the wavefunction of a charged particle acquires as it traverses the path. For a loop:
U = exp(iq ∮ A ⋅ dx) = exp(iqΦ),
so that the phase a charged particle gets when going in a loop is its charge times the magnetic flux Φ through the loop. When a small solenoid has a magnetic flux, there are interference fringes for charged particles which go around the solenoid, or around different sides of the solenoid, which reveal its presence. But if all particle charges are integer multiples of e, solenoids with a flux of 2π/e (in units where ℏ = c = 1) have no interference fringes, because the phase factor for any charged particle is exp(2πin) = 1. Such a solenoid, if thin enough, is quantum-mechanically invisible. If such a solenoid were to carry a flux of 2π/e, when the flux leaked out from one of its ends it would be indistinguishable from a monopole. Dirac's monopole solution in fact describes an infinitesimal line solenoid ending at a point, and the location of the solenoid is the singular part of the solution, the Dirac string. Dirac strings link monopoles and antimonopoles of opposite magnetic charge, although in Dirac's version, the string just goes off to infinity. The string is unobservable, so you can put it anywhere, and by using two coordinate patches, the field in each patch can be made nonsingular by sliding the string to where it cannot be seen.
Grand unified theories
In a U(1) gauge group with quantized charge, the group is a circle of radius 2π/e. Such a U(1) gauge group is called compact. Any U(1) that comes from a grand unified theory (GUT) is compact – because only compact higher gauge groups make sense. The size of the gauge group is a measure of the inverse coupling constant, so that in the limit of a large-volume gauge group, the interaction of any fixed representation goes to zero.
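Returning for a moment to the Dirac-string discussion: a minimal numeric check of the string-invisibility argument above (a sketch added for illustration, in natural units ℏ = c = 1; the charge unit is arbitrary) shows that only a flux of 2π/e gives a trivial phase factor for every integer multiple of e:

# Aharonov-Bohm phase factor exp(i*q*flux) for probe charges q = n*e
# encircling a thin solenoid (or Dirac string). Natural units: hbar = c = 1.
import cmath

e = 1.0   # charge unit; its numeric value is irrelevant to the demonstration
flux_cases = [
    ("generic flux 1.3/e", 1.3 / e),
    ("Dirac string, flux 2*pi/e", 2 * cmath.pi / e),
]

for label, flux in flux_cases:
    print(label)
    for n in (1, 2, 3):                        # probe charges q = n*e
        phase = cmath.exp(1j * n * e * flux)   # holonomy of the loop
        print(f"  q = {n}e: phase = {phase.real:+.3f}{phase.imag:+.3f}j")
# Only the 2*pi/e string yields phase = +1.000 for every integer charge, so it
# produces no interference fringes and is quantum-mechanically undetectable.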
The case of the U(1) gauge group is a special case because all its irreducible representations are of the same size – the charge is bigger by an integer amount, but the field is still just a complex number – so that in U(1) gauge field theory it is possible to take the decompactified limit with no contradiction. The quantum of charge becomes small, but each charged particle has a huge number of charge quanta so its charge stays finite. In a non-compact U(1) gauge group theory, the charges of particles are generically not integer multiples of a single unit. Since charge quantization is an experimental certainty, it is clear that the U(1) gauge group of electromagnetism is compact. GUTs lead to compact U(1) gauge groups, so they explain charge quantization in a way that seems logically independent from magnetic monopoles. However, the explanation is essentially the same, because in any GUT that breaks down into a U(1) gauge group at long distances, there are magnetic monopoles. The argument is topological: The holonomy of a gauge field maps loops to elements of the gauge group. Infinitesimal loops are mapped to group elements infinitesimally close to the identity. If you imagine a big sphere in space, you can deform an infinitesimal loop that starts and ends at the north pole as follows: stretch out the loop over the western hemisphere until it becomes a great circle (which still starts and ends at the north pole) then let it shrink back to a little loop while going over the eastern hemisphere. This is called lassoing the sphere. Lassoing is a sequence of loops, so the holonomy maps it to a sequence of group elements, a continuous path in the gauge group. Since the loop at the beginning of the lassoing is the same as the loop at the end, the path in the group is closed. If the group path associated to the lassoing procedure winds around the U(1), the sphere contains magnetic charge. During the lassoing, the holonomy changes by the amount of magnetic flux through the sphere. Since the holonomy at the beginning and at the end is the identity, the total magnetic flux is quantized. The magnetic charge is proportional to the number of windings N; the magnetic flux through the sphere is equal to 2πN/e. This is the Dirac quantization condition, and it is a topological condition that demands that the long distance U(1) gauge field configurations be consistent. When the U(1) gauge group comes from breaking a compact Lie group, the path that winds around the U(1) group enough times is topologically trivial in the big group. In a non-U(1) compact Lie group, the covering space is a Lie group with the same Lie algebra, but where all closed loops are contractible. Lie groups are homogeneous, so that any cycle in the group can be moved around so that it starts at the identity; then its lift to the covering group ends at some point P, which is a lift of the identity. Going around the loop twice gets you to P², three times to P³, all lifts of the identity. But there are only finitely many lifts of the identity, because the lifts can't accumulate. This number of times one has to traverse the loop to make it contractible is small; for example, if the GUT group is SO(3), the covering group is SU(2), and going around any loop twice is enough. This means that there is a continuous gauge-field configuration in the GUT group that allows the U(1) monopole configuration to unwind itself at short distances, at the cost of not staying in the U(1).
To do this with as little energy as possible, you should leave only the U(1) gauge group in the neighborhood of one point, which is called the core of the monopole. Outside the core, the monopole has only magnetic field energy. Hence, the Dirac monopole is a topological defect in a compact U(1) gauge theory. When there is no GUT, the defect is a singularity – the core shrinks to a point. But when there is some sort of short-distance regulator on spacetime, the monopoles have a finite mass. Monopoles occur in lattice U(1), and there the core size is the lattice size. In general, they are expected to occur whenever there is a short-distance regulator.
String theory
In the universe, quantum gravity provides the regulator. When gravity is included, the monopole singularity can be a black hole, and for large magnetic charge and mass, the black hole mass is equal to the black hole charge, so that the mass of the magnetic black hole is not infinite. If the black hole can decay completely by Hawking radiation, the lightest charged particles cannot be too heavy. The lightest monopole should have a mass less than or comparable to its charge in natural units. So in a consistent holographic theory, of which string theory is the only known example, there are always finite-mass monopoles. For ordinary electromagnetism, the upper mass bound is not very useful because it is about the same size as the Planck mass.
Mathematical formulation
In mathematics, a (classical) gauge field is defined as a connection over a principal G-bundle over spacetime. G is the gauge group, and it acts on each fiber of the bundle separately. A connection on a G-bundle tells you how to glue fibers together at nearby points of the base manifold M. It starts with a continuous symmetry group G that acts on the fiber F, and then it associates a group element with each infinitesimal path. Group multiplication along any path tells you how to move from one point on the bundle to another, by having the G element associated to a path act on the fiber F. In mathematics, the definition of bundle is designed to emphasize topology, so the notion of connection is added on as an afterthought. In physics, the connection is the fundamental physical object. One of the fundamental observations in the theory of characteristic classes in algebraic topology is that many homotopical structures of nontrivial principal bundles may be expressed as an integral of some polynomial over any connection over it. Note that a connection over a trivial bundle can never give us a nontrivial principal bundle. If spacetime is ℝ⁴, the space of all possible connections of the G-bundle is connected. But consider what happens when we remove a timelike worldline from spacetime. The resulting spacetime is homotopically equivalent to the topological sphere S². A principal G-bundle over S² is defined by covering S² by two charts, each homeomorphic to the open 2-ball, such that their intersection is homeomorphic to the strip S¹ × I. 2-balls are homotopically trivial and the strip is homotopically equivalent to the circle S¹. So a topological classification of the possible connections is reduced to classifying the transition functions. The transition function maps the strip to G, and the different ways of mapping a strip into G are given by the first homotopy group of G. So in the G-bundle formulation, a gauge theory admits Dirac monopoles provided G is not simply connected, whenever there are paths that go around the group that cannot be deformed to a constant path (a path whose image consists of a single point).
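As a concrete toy version of this classification (an illustrative sketch, not from the source): for G = U(1), the transition function on the overlap strip is a map S¹ → U(1), and its homotopy class – the integer that labels the bundle and hence the monopole charge – is a winding number that can be computed numerically:

# Winding number of a transition function g: S^1 -> U(1). Principal
# U(1)-bundles over S^2 - and therefore monopole charges - are classified
# by exactly this integer.
import numpy as np

def winding_number(samples):
    """Net number of times a closed loop of unit complex numbers encircles 0."""
    closed = np.append(samples, samples[0])      # close the loop explicitly
    steps = np.angle(closed[1:] / closed[:-1])   # phase advance per step
    return int(round(np.sum(steps) / (2 * np.pi)))

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
for n in (-2, 0, 1, 3):
    g = np.exp(1j * n * t)        # transition function that winds n times
    print(f"n = {n:+d}: computed winding number = {winding_number(g)}")
# Deforming g continuously cannot change this integer, which is why the
# monopole charge of the bundle is quantized and topologically protected.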
U(1), which has quantized charges, is not simply connected and can have Dirac monopoles, while ℝ, its universal covering group, is simply connected, doesn't have quantized charges and does not admit Dirac monopoles. The mathematical definition is equivalent to the physics definition provided that—following Dirac—gauge fields are allowed that are defined only patch-wise, and the gauge field on different patches are glued after a gauge transformation. The total magnetic flux is none other than the first Chern number of the principal bundle, and depends only upon the choice of the principal bundle, and not the specific connection over it. In other words, it is a topological invariant. This argument for monopoles is a restatement of the lasso argument for a pure U(1) theory. It generalizes to d + 1 dimensions with d ≥ 3 in several ways. One way is to extend everything into the extra dimensions, so that U(1) monopoles become sheets of dimension d − 3. Another way is to examine the type of topological singularity at a point with the homotopy group π_{d−2}(G).
Grand unified theories
In more recent years, a new class of theories has also suggested the existence of magnetic monopoles. During the early 1970s, the successes of quantum field theory and gauge theory in the development of electroweak theory and the mathematics of the strong nuclear force led many theorists to move on to attempt to combine them in a single theory known as a Grand Unified Theory (GUT). Several GUTs were proposed, most of which implied the presence of a real magnetic monopole particle. More accurately, GUTs predicted a range of particles known as dyons, of which the most basic state was a monopole. The charge on magnetic monopoles predicted by GUTs is either 1 or 2 g_D, depending on the theory. The majority of particles appearing in any quantum field theory are unstable, and they decay into other particles in a variety of reactions that must satisfy various conservation laws. Stable particles are stable because there are no lighter particles into which they can decay and still satisfy the conservation laws. For instance, the electron has a lepton number of one and an electric charge of one, and there are no lighter particles that conserve these values. On the other hand, the muon, essentially a heavy electron, can decay into an electron plus two neutrinos, and hence it is not stable. The dyons in these GUTs are also stable, but for an entirely different reason. The dyons are expected to exist as a side effect of the "freezing out" of the conditions of the early universe, or a symmetry breaking. In this scenario, the dyons arise due to the configuration of the vacuum in a particular area of the universe, according to the original Dirac theory. They remain stable not because of a conservation condition, but because there is no simpler topological state into which they can decay. The length scale over which this special vacuum configuration exists is called the correlation length of the system. A correlation length cannot be larger than causality would allow, therefore the correlation length for making magnetic monopoles can be no larger than the horizon size determined by the metric of the expanding universe. According to that logic, there should be at least one magnetic monopole per horizon volume as it was when the symmetry breaking took place. Cosmological models of the events following the Big Bang make predictions about what the horizon volume was, which lead to predictions about present-day monopole density.
Early models predicted an enormous density of monopoles, in clear contradiction to the experimental evidence. This was called the "monopole problem". Its widely accepted resolution was not a change in the particle-physics prediction of monopoles, but rather in the cosmological models used to infer their present-day density. Specifically, more recent theories of cosmic inflation drastically reduce the predicted number of magnetic monopoles, to a density small enough to make it unsurprising that humans have never seen one. This resolution of the "monopole problem" was regarded as a success of cosmic inflation theory. (However, of course, it is only a noteworthy success if the particle-physics monopole prediction is correct.) For these reasons, monopoles became a major interest in the 1970s and 80s, along with the other "approachable" predictions of GUTs such as proton decay. Many of the other particles predicted by these GUTs were beyond the abilities of current experiments to detect. For instance, a wide class of particles known as the X and Y bosons are predicted to mediate the coupling of the electroweak and strong forces, but these particles are extremely heavy and well beyond the capabilities of any reasonable particle accelerator to create.
Searches for magnetic monopoles
Experimental searches for magnetic monopoles can be placed in one of two categories: those that try to detect preexisting magnetic monopoles and those that try to create and detect new magnetic monopoles. Passing a magnetic monopole through a coil of wire induces a net current in the coil. This is not the case for a magnetic dipole or higher-order magnetic pole, for which the net induced current is zero, and hence the effect can be used as an unambiguous test for the presence of magnetic monopoles. In a wire with finite resistance, the induced current quickly dissipates its energy as heat, but in a superconducting loop the induced current is long-lived. By using a highly sensitive "superconducting quantum interference device" (SQUID) one can, in principle, detect even a single magnetic monopole. According to standard inflationary cosmology, magnetic monopoles produced before inflation would have been diluted to an extremely low density today. Magnetic monopoles may also have been produced thermally after inflation, during the period of reheating. However, the current bounds on the reheating temperature span 18 orders of magnitude, and as a consequence the density of magnetic monopoles today is not well constrained by theory. There have been many searches for preexisting magnetic monopoles. Although there has been one tantalizing event recorded, by Blas Cabrera Navarro on the night of February 14, 1982 (thus, sometimes referred to as the "Valentine's Day Monopole"), there has never been reproducible evidence for the existence of magnetic monopoles. The lack of such events places an upper limit on the number of monopoles of about one monopole per 10²⁹ nucleons. Another experiment, in 1975, resulted in the announcement of the detection of a moving magnetic monopole in cosmic rays by the team led by P. Buford Price. Price later retracted his claim, and a possible alternative explanation was offered by Luis Walter Alvarez, whose paper demonstrated that the path of the cosmic-ray event claimed to be due to a magnetic monopole could be reproduced by the path followed by a platinum nucleus decaying first to osmium, and then to tantalum. High-energy particle colliders have been used to try to create magnetic monopoles.
Due to the conservation of magnetic charge, magnetic monopoles must be created in pairs, one north and one south. Due to conservation of energy, only magnetic monopoles with masses less than half of the center-of-mass energy of the colliding particles can be produced. Beyond this, very little is known theoretically about the creation of magnetic monopoles in high-energy particle collisions. This is due to their large magnetic charge, which invalidates all the usual calculational techniques. As a consequence, collider-based searches for magnetic monopoles cannot, as yet, provide lower bounds on the mass of magnetic monopoles. They can however provide upper bounds on the probability (or cross section) of pair production, as a function of energy. The ATLAS experiment at the Large Hadron Collider currently has the most stringent cross-section limits for magnetic monopoles of 1 and 2 Dirac charges, produced through Drell–Yan pair production. A team led by Wendy Taylor searches for these particles based on theories that define them as long-lived (they do not quickly decay), as well as highly ionizing (their interaction with matter is predominantly ionizing). In 2019 the search for magnetic monopoles in the ATLAS detector reported its first results from data collected from the LHC Run 2 collisions at a center-of-mass energy of 13 TeV, which at 34.4 fb⁻¹ is the largest dataset analyzed to date. The MoEDAL experiment, installed at the Large Hadron Collider, is currently searching for magnetic monopoles and large supersymmetric particles using nuclear track detectors and aluminum bars around LHCb's VELO detector. The particles it is looking for damage the plastic sheets that make up the nuclear track detectors along their path, with various identifying features. Further, the aluminum bars can trap sufficiently slowly moving magnetic monopoles. The bars can then be analyzed by passing them through a SQUID.
"Monopoles" in condensed-matter systems
Since around 2003, various condensed-matter physics groups have used the term "magnetic monopole" to describe a different and largely unrelated phenomenon. A true magnetic monopole would be a new elementary particle, and would violate Gauss's law for magnetism, ∇ ⋅ B = 0. A monopole of this kind, which would help to explain the law of charge quantization as formulated by Paul Dirac in 1931, has never been observed in experiments. The monopoles studied by condensed-matter groups have none of these properties. They are not a new elementary particle, but rather are an emergent phenomenon in systems of everyday particles (protons, neutrons, electrons, photons); in other words, they are quasi-particles. They are not sources for the B-field (i.e., they do not violate ∇ ⋅ B = 0); instead, they are sources for other fields, for example the H-field, the "B*-field" (related to superfluid vorticity), or various other quantum fields. They are not directly relevant to grand unified theories or other aspects of particle physics, and do not help explain charge quantization—except insofar as studies of analogous situations can help confirm that the mathematical analyses involved are sound. There are a number of examples in condensed-matter physics where collective behavior leads to emergent phenomena that resemble magnetic monopoles in certain respects, including most prominently the spin ice materials. While these should not be confused with hypothetical elementary monopoles existing in the vacuum, they nonetheless have similar properties and can be probed using similar techniques.
Some researchers use the term magnetricity to describe the manipulation of magnetic monopole quasiparticles in spin ice, in analogy to the word "electricity". One example of the work on magnetic monopole quasiparticles is a paper published in the journal Science in September 2009, in which researchers described the observation of quasiparticles resembling magnetic monopoles. A single crystal of the spin ice material dysprosium titanate was cooled to a temperature between 0.6 kelvin and 2.0 kelvin. Using observations of neutron scattering, the magnetic moments were shown to align into interwoven tubelike bundles resembling Dirac strings. At the defect formed by the end of each tube, the magnetic field looks like that of a monopole. Using an applied magnetic field to break the symmetry of the system, the researchers were able to control the density and orientation of these strings. A contribution to the heat capacity of the system from an effective gas of these quasiparticles was also described. This research went on to win the 2012 Europhysics Prize for condensed matter physics. In another example, a paper in the February 11, 2011 issue of Nature Physics describes creation and measurement of long-lived magnetic monopole quasiparticle currents in spin ice. By applying a magnetic-field pulse to a crystal of dysprosium titanate at 0.36 K, the authors created a relaxing magnetic current that lasted for several minutes. They measured the current by means of the electromotive force it induced in a solenoid coupled to a sensitive amplifier, and quantitatively described it using a chemical kinetic model of point-like charges obeying the Onsager–Wien mechanism of carrier dissociation and recombination. They thus derived the microscopic parameters of monopole motion in spin ice and identified the distinct roles of free and bound magnetic charges. In superfluids, there is a field B*, related to superfluid vorticity, which is mathematically analogous to the magnetic B-field. Because of the similarity, the B* field is called a "synthetic magnetic field". In January 2014, it was reported that monopole quasiparticles for the B* field were created and studied in a spinor Bose–Einstein condensate. This constitutes the first example of a quasi-magnetic monopole observed within a system governed by quantum field theory. Updates to the theoretical and experimental searches for magnetic monopoles in matter can be found in the reports by G. Giacomelli (2000) and by S. Balestra (2011) in the Bibliography section.
Physical sciences
Magnetostatics
Physics
175537
https://en.wikipedia.org/wiki/Netflix
Netflix
Netflix is an American subscription video on-demand over-the-top streaming service. The service primarily distributes original and acquired films and television shows from various genres, and it is available internationally in multiple languages. Launched in 2007, nearly a decade after Netflix, Inc. began its pioneering DVD-by-mail movie rental service, Netflix is the most-subscribed video on demand streaming media service, with 301.6 million paid memberships in more than 190 countries as of 2025. By 2022, "Netflix Original" productions accounted for half of its library in the United States, and the namesake company had ventured into other categories, such as video game publishing of mobile games through its flagship service. As of 2023, Netflix is the 23rd most-visited website in the world, with 23.66% of its traffic coming from the United States, followed by the United Kingdom at 5.84% and Brazil at 5.64%.
History
Launch as a mail-based rental business (1997–2006)
Netflix was founded by Marc Randolph and Reed Hastings on August 29, 1997, in Scotts Valley, California. Hastings, a computer scientist and mathematician, was a co-founder of Pure Software, which was acquired by Rational Software that year for $750 million, the then biggest acquisition in Silicon Valley history. Randolph had worked as a marketing director for Pure Software after Pure Atria acquired the company where he worked. He was previously a co-founder of MicroWarehouse, a computer mail-order company, as well as vice president of marketing for Borland. Hastings and Randolph came up with the idea for Netflix while carpooling between their homes in Santa Cruz, California, and Pure Atria's headquarters in Sunnyvale. Patty McCord, later head of human resources at Netflix, was also in the carpool group. Randolph admired Amazon and wanted to find a large category of portable items to sell over the Internet using a similar model. Hastings and Randolph considered and rejected selling and renting VHS tapes as too expensive to stock and too delicate to ship. When they heard about DVDs, first introduced in the United States in early 1997, they tested the concept of selling or renting DVDs by mail, by mailing a compact disc to Hastings's house in Santa Cruz. When the CD arrived intact, they decided to enter the $16 billion home-video sales and rental industry. Hastings is often quoted as saying that he decided to start Netflix after being fined $40 at a Blockbuster store for being late to return a copy of Apollo 13. Hastings invested $2.5 million into Netflix from the sale of Pure Atria. Netflix launched as the first DVD rental and sales website with 30 employees and 925 titles available—nearly all DVDs published at the time. Randolph and Hastings met with Jeff Bezos, and Amazon offered to acquire Netflix for between $14 and $16 million. Fearing competition from Amazon, Randolph at first thought the offer was fair, but Hastings, who owned 70% of the company, turned it down on the plane ride home. Initially, Netflix offered a per-rental model for each DVD but introduced a monthly subscription concept in September 1999. The per-rental model was dropped by early 2000, allowing the company to focus on the business model of flat-fee unlimited rentals without due dates, late fees, shipping and handling fees, or per-title rental fees. In September 2000, during the dot-com bubble, while Netflix was suffering losses, Hastings and Randolph offered to sell the company to Blockbuster for $50 million.
John Antioco, CEO of Blockbuster, thought the offer was a joke and declined, saying, "The dot-com hysteria is completely overblown." While Netflix experienced fast growth in early 2001, the continued effects of the dot-com bubble collapse and the September 11 attacks caused the company to hold off plans for its initial public offering (IPO) and to lay off one-third of its 120 employees. DVD players were a popular gift in the late 2001 holiday season, and demand for DVD subscription services was "growing like crazy", according to chief talent officer Patty McCord. The company went public on May 23, 2002, selling 5.5 million shares of common stock at US$15.00 per share. In 2003, Netflix was issued a patent by the U.S. Patent & Trademark Office to cover its subscription rental service and several extensions. Netflix posted its first profit in 2003, earning $6.5 million on revenues of $272 million; by 2004, profit had increased to $49 million on over $500 million in revenues. In 2005, 35,000 different films were available, and Netflix shipped 1 million DVDs out every day. In 2004, Blockbuster introduced a DVD rental service, which allowed users not only to check out titles online but also to return them at brick-and-mortar stores. By 2006, Blockbuster's service reached two million users, and while trailing Netflix's subscriber count, was drawing business away from Netflix. Netflix lowered fees in 2007. While it was an urban legend that Netflix ultimately "killed" Blockbuster in the DVD rental market, Blockbuster's debt load and internal disagreements hurt the company. On April 4, 2006, Netflix filed a patent infringement lawsuit in which it demanded a jury trial in the United States District Court for the Northern District of California, alleging that Blockbuster's online DVD rental subscription program violated two patents held by Netflix. The first cause of action alleged that Blockbuster infringed by copying the "dynamic queue" of DVDs available for each customer, Netflix's method of using the ranked preferences in the queue to send DVDs to subscribers, and Netflix's method of permitting the queue to be updated and reordered. The second cause of action alleged infringement of the subscription rental service as well as Netflix's methods of communication and delivery. The companies settled their dispute on June 25, 2007; terms were not disclosed. On October 1, 2006, Netflix announced the Netflix Prize: $1,000,000 to the first developer of a video-recommendation algorithm that could beat its existing algorithm, Cinematch, at predicting customer ratings by more than 10%. On September 21, 2009, it awarded the $1,000,000 prize to team "BellKor's Pragmatic Chaos". Cinematch, launched in 2000, was a system that recommended movies to its users, many of which might have been entirely new to the user. Through its division Red Envelope Entertainment, Netflix licensed and distributed independent films such as Born into Brothels and Sherrybaby. In late 2006, Red Envelope Entertainment also expanded into producing original content with filmmakers such as John Waters. Netflix closed Red Envelope Entertainment in 2008.
Transition to streaming services (2007–2012)
In January 2007, the company launched a streaming media service, introducing video on demand via the Internet. However, at that time it only had 1,000 films available for streaming, compared to 70,000 available on DVD.
The company had for some time considered offering movies online, but it was only in the mid-2000s that data speeds and bandwidth costs had improved sufficiently to allow customers to download movies from the internet. The original idea was a "Netflix box" that could download movies overnight and be ready to watch the next day. By 2005, Netflix had acquired movie rights and designed the box and service. But after witnessing how popular streaming services such as YouTube were despite the lack of high-definition content, the concept of using a hardware device was scrapped and replaced with a streaming concept. In February 2007, Netflix delivered its billionth DVD, a copy of Babel, to a customer in Texas. In April 2007, Netflix recruited ReplayTV founder Anthony Wood to build a "Netflix Player" that would allow streaming content to be played directly on a television rather than a desktop or laptop. Hastings eventually shut down the project to help encourage other hardware manufacturers to include built-in Netflix support, and the device would be spun off as the digital media player product Roku. In January 2008, all rental-disc subscribers became entitled to unlimited streaming at no additional cost. This change came in response to the introduction of Hulu and to Apple's new video-rental services. In August 2008, the Netflix database was corrupted and the company was unable to ship DVDs to customers for three days, leading the company to move all its data to the Amazon Web Services cloud. In November 2008, Netflix began offering subscribers rentals on Blu-ray and discontinued its sale of used DVDs. In 2009, Netflix streaming overtook its DVD shipments. On January 6, 2010, Netflix agreed with Warner Bros. to delay new release rentals to 28 days after the DVDs became available for sale, in an attempt to help studios sell physical copies, and similar deals involving Universal Pictures and 20th Century Fox were reached on April 9. In July 2010, Netflix signed a deal to stream movies of Relativity Media. In August 2010, Netflix reached a five-year deal worth nearly $1 billion to stream films from Paramount, Lionsgate and Metro-Goldwyn-Mayer. The deal increased Netflix's annual spending fees, adding roughly $200 million per year. It spent $117 million in the first six months of 2010 on streaming, up from $31 million in 2009. On September 22, 2010, Netflix launched in Canada, its first international market. In November 2010, Netflix began offering a standalone streaming service separate from DVD rentals. In 2010, Netflix acquired the rights to Breaking Bad, produced by Sony Pictures Television, after the show's third season, at a point where original broadcaster AMC had expressed the possibility of cancelling the show. Sony pushed Netflix to release Breaking Bad in time for the fourth season; as a result, the show's audience on AMC expanded greatly, with new viewers bingeing on past episodes on Netflix, and viewership had doubled by the time of the fifth season. Breaking Bad is considered the first such show to have this "Netflix effect". In January 2011, Netflix announced agreements with several manufacturers to include branded Netflix buttons on the remote controls of devices compatible with the service, such as Blu-ray players. By May 2011, Netflix had become the largest source of Internet streaming traffic in North America, accounting for 30% of traffic during peak hours.
On July 12, 2011, Netflix announced that it would separate its existing subscription plans into two separate plans: one covering streaming and the other covering DVD rentals. The cost for streaming would be $7.99 per month, while DVD rental would start at the same price. On September 11, 2011, Netflix expanded to countries in Latin America. On September 18, 2011, Netflix announced its intentions to rebrand and restructure its DVD home media rental service as an independent subsidiary called Qwikster, separating DVD rental and streaming services. On September 26, 2011, Netflix announced a content deal with DreamWorks Animation. On October 10, 2011, Netflix announced that it would retain its DVD service under the name Netflix and that its streaming and DVD-rental plans would remain branded together, citing customer dissatisfaction with the split. In October 2011, Netflix and The CW signed a multi-year output deal for its television shows. On January 9, 2012, Netflix started its expansion to Europe, launching in the United Kingdom and Ireland. In February 2012, Netflix reached a multi-year agreement with The Weinstein Company. In March 2012, Netflix acquired the domain name DVD.com. By 2016, Netflix had rebranded its DVD-by-mail service under the name DVD.com, A Netflix Company. In April 2012, Netflix filed with the Federal Election Commission (FEC) to form a political action committee (PAC) called FLIXPAC. Netflix spokesperson Joris Evers tweeted that the intent was to "engage on issues like net neutrality, bandwidth caps, UBB and VPPA". In June 2012, Netflix signed a deal with Open Road Films. On August 23, 2012, Netflix and The Weinstein Company signed a multi-year output deal for RADiUS-TWC films. In September 2012, Epix signed a five-year streaming deal with Netflix. For the initial two years of this agreement, first-run and back-catalog content from Epix was exclusive to Netflix. Epix films came to Netflix 90 days after premiering on Epix. These included films from Paramount, Metro-Goldwyn-Mayer and Lionsgate. On October 18, 2012, Netflix launched in Denmark, Finland, Norway and Sweden. On December 4, 2012, Netflix and Disney announced an exclusive multi-year agreement for first-run United States subscription television rights to Walt Disney Studios' animated and live-action films, with classics such as Dumbo, Alice in Wonderland and Pocahontas available immediately and others available on Netflix beginning in 2016. Direct-to-video releases were made available in 2013. On January 14, 2013, Netflix signed an agreement with Time Warner's Turner Broadcasting System and Warner Bros. Television to distribute Cartoon Network, Warner Bros. Animation, and Adult Swim content, as well as TNT's Dallas, beginning in March 2013. The rights to these programs were given to Netflix shortly after deals with Viacom to stream Nickelodeon and Nick Jr. Channel programs expired. For cost reasons, Netflix stated that it would limit its expansion in 2013, adding only one new market—the Netherlands—in September of that year. This expanded its availability to 40 territories.
Development of original programming and distribution expansion (2013–2017)
In 2011, Netflix began its efforts in original content development. In March, it made a straight-to-series order from MRC for the political drama House of Cards, led by Kevin Spacey, outbidding U.S. cable networks. This marked the first instance of a first-run television series being specifically commissioned by the service.
In November of the same year, Netflix added two more significant productions to its roster: the comedy-drama Orange Is the New Black, adapted from Piper Kerman's memoir, and a new season of the previously cancelled Fox sitcom Arrested Development. Netflix acquired the U.S. rights to the Norwegian drama Lilyhammer after its television premiere on Norway's NRK1 on January 25, 2012. Notably departing from the traditional broadcast television model of weekly episode premieres, Netflix released the entire first season on February 8 of the same year. House of Cards was released by Netflix on February 1, 2013, marketed as the first "Netflix Original" production. Later that month, Netflix announced an agreement with DreamWorks Animation to commission children's television series based on its properties, beginning with Turbo: F.A.S.T., a spin-off of its film Turbo. Orange Is the New Black premiered in July 2013; Netflix stated that it had been its most-watched original series so far, and that all of its originals had "an audience comparable with successful shows on cable and broadcast TV." On March 13, 2013, Netflix added a Facebook sharing feature, letting United States subscribers who opted in access "Watched by your friends" and "Friends' Favorites" lists. Such sharing had not been legal until the Video Privacy Protection Act was modified in early 2013. On August 1, 2013, Netflix reintroduced the "Profiles" feature, which permits accounts to accommodate up to five user profiles. In November 2013, Marvel Television and ABC Studios announced that Netflix had ordered a slate of four television series based on the Marvel Comics characters Daredevil, Jessica Jones, Iron Fist and Luke Cage. Each of the four series received an initial order of 13 episodes, and Netflix also ordered a Defenders miniseries that would tie them together. Daredevil and Jessica Jones premiered in 2015. The Luke Cage series premiered on September 30, 2016, followed by Iron Fist on March 17, 2017, and The Defenders on August 18, 2017. Marvel owner Disney later entered into other content agreements with Netflix, under which Netflix acquired the animated Star Wars series Star Wars: The Clone Wars, along with a new sixth season. In February 2014, Netflix began to enter into agreements with U.S. internet service providers, beginning with Comcast (whose customers had repeatedly complained of frequent buffering when streaming Netflix), in order to provide the service a direct connection to their networks. In April 2014, Netflix signed Arrested Development creator Mitchell Hurwitz and his production firm The Hurwitz Company to a multi-year deal to create original projects for the service. In May 2014, Netflix and Sony Pictures Animation signed a major multi-film deal under which Netflix acquired streaming rights to the studio's films. Netflix also began to introduce an updated logo, with a flatter appearance and updated typography. In September 2014, Netflix expanded into six new European markets: Austria, Belgium, France, Germany, Luxembourg, and Switzerland. On September 10, 2014, Netflix participated in Internet Slowdown Day by deliberately slowing down its speed in support of net neutrality regulations in the United States. In October 2014, Netflix announced a four-film deal with Adam Sandler and his Happy Madison Productions.
In April 2015, following the launch of Daredevil, Netflix director of content operations Tracy Wright announced that Netflix had added support for audio description, and had begun to work with its partners to add descriptions to its other original series over time. The following year, as part of a settlement with the American Council of the Blind, Netflix agreed to provide descriptions for its original series within 30 days of their premiere, and to add screen reader support and the ability to browse content by availability of descriptions. In March 2015, Netflix expanded to Australia and New Zealand. In September 2015, Netflix launched in Japan, its first country in Asia. In October 2015, Netflix launched in Italy, Portugal, and Spain. In January 2016, at the Consumer Electronics Show, Netflix announced a major international expansion of its service into 130 additional countries. It then became available worldwide except in China, Syria, North Korea, Kosovo and Crimea. In May 2016, Netflix created a tool called Fast.com to determine the speed of an Internet connection. It received praise for being "simple" and "easy to use", and, unlike competing tools, does not include online advertising. On November 30, 2016, Netflix launched an offline playback feature, allowing users of the Netflix mobile apps on Android or iOS to cache content on their devices in standard or high quality for viewing offline, without an Internet connection. In 2016, Netflix released an estimated 126 original series or films, more than any other network or cable channel. In April 2016, Hastings stated that the company planned to expand its in-house, Los Angeles-based Netflix Studios to grow its output; Hastings ruled out any potential acquisitions of existing studios. In February 2017, Netflix signed a music publishing deal with BMG Rights Management, whereby BMG would oversee rights outside the United States for music associated with Netflix original content; Netflix continues to handle these tasks in-house in the United States. On April 25, 2017, Netflix signed a licensing deal with IQIYI, a Chinese video streaming platform owned by Baidu, to allow selected Netflix original content to be distributed in China on the platform. On August 7, 2017, Netflix acquired Millarworld, the creator-owned publishing company of comic book writer Mark Millar. The purchase marked Netflix's first corporate acquisition. On August 14, 2017, Netflix entered into an exclusive development deal with Shonda Rhimes and her production company Shondaland. In September 2017, Netflix announced it would offer its low-bandwidth mobile technology to airlines to provide better in-flight Wi-Fi, so that passengers could watch movies on Netflix while on planes. In September 2017, Minister of Heritage Mélanie Joly announced that Netflix had agreed to make a US$400 million investment over the next five years in producing content in Canada. The company denied that the deal was intended to result in a tax break. Netflix realized this goal by December 2018. In October 2017, Netflix reiterated a goal of having half of its library consist of original content by 2019, announcing a plan to invest $8 billion in original content in 2018. In October 2017, Netflix introduced the "Skip Intro" feature, which allows customers to skip the intros to shows on its platform; intro boundaries are identified through a variety of techniques, including manual review, audio tagging, and machine learning.
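The article notes that intro boundaries are found partly through audio tagging and machine learning, without detailing the implementation. As a rough, hypothetical sketch of the audio-tagging idea only, not Netflix's actual method: episodes of a season usually share a near-identical opening sequence, so the longest run of matching audio fingerprints between two episodes is a natural intro candidate. The per-second fingerprint values below are invented placeholders.

```python
# Hypothetical "Skip Intro" candidate detection: find the longest run of
# per-second audio fingerprints shared between two episodes of a season.
from collections import defaultdict

def longest_shared_run(fp_a: list[int], fp_b: list[int]) -> tuple[int, int]:
    """Return (start_in_a, length) of the longest contiguous fingerprint
    run that appears in both episodes."""
    positions = defaultdict(list)          # fingerprint -> positions in fp_b
    for j, fp in enumerate(fp_b):
        positions[fp].append(j)

    best_start, best_len = 0, 0
    for i, fp in enumerate(fp_a):
        for j in positions[fp]:            # try extending every match
            length = 0
            while (i + length < len(fp_a) and j + length < len(fp_b)
                   and fp_a[i + length] == fp_b[j + length]):
                length += 1
            if length > best_len:
                best_start, best_len = i, length
    return best_start, best_len

# Invented fingerprints: a short recap, a 10-second shared intro
# (values 100-109), then episode-specific content.
ep1 = [1, 2, 3, 4, 5] + list(range(100, 110)) + [7, 8, 9]
ep2 = [6, 6, 6] + list(range(100, 110)) + [11, 12]

start, length = longest_shared_run(ep1, ep2)
print(f"Skip Intro candidate: seconds {start}-{start + length} of episode 1")
```

In practice such a candidate would only seed the pipeline; the manual review the article mentions would still confirm or adjust the boundaries.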
In November 2017, Netflix signed an exclusive multi-year deal with Orange Is the New Black creator Jenji Kohan. In November 2017, Netflix withdrew from co-hosting a party at the 75th Golden Globe Awards with The Weinstein Company due to the Harvey Weinstein sexual abuse cases.

Expansion into international productions and new productions (2017–2020)

In November 2017, Netflix announced that it would be making its first original Colombian series, to be executive produced by Ciro Guerra. In December 2017, Netflix signed Stranger Things director-producer Shawn Levy and his production company 21 Laps Entertainment to what sources said was a four-year deal. In 2017, Netflix invested in distributing exclusive stand-up comedy specials from Dave Chappelle, Louis C.K., Chris Rock, Jim Gaffigan, Bill Burr and Jerry Seinfeld. In February 2018, Netflix acquired the rights to The Cloverfield Paradox from Paramount Pictures for $50 million and launched it on its service on February 4, 2018, shortly after airing its first trailer during Super Bowl LII. Analysts believed that Netflix's purchase helped make the film instantly profitable for Paramount compared to a more traditional theatrical release, while Netflix benefited from the surprise reveal. Other films acquired by Netflix include international distribution for Paramount's Annihilation and Universal's News of the World, and worldwide distribution of Universal's Extinction, Warner Bros.' Mowgli: Legend of the Jungle, Paramount's The Lovebirds and 20th Century Studios' The Woman in the Window. In March 2018, the service ordered Formula 1: Drive to Survive, a racing docuseries following teams in the Formula One world championship. In March 2018, Sky UK announced an agreement with Netflix to integrate Netflix's subscription VOD offering into its pay-TV service; customers with its high-end Sky Q set-top box and service would be able to see Netflix titles alongside their regular Sky channels. In October 2022, Netflix revealed that its annual revenue from UK subscribers in 2021 was £1.4bn. In April 2018, Netflix pulled out of the Cannes Film Festival, in response to new rules requiring competition films to have been released in French theaters. The Cannes premiere of Okja in 2017 had been controversial, and led to discussions over the appropriateness of films with simultaneous digital releases being screened at an event showcasing theatrical film; audience members also booed the Netflix production logo at the screening. Netflix's attempts to negotiate a limited release in France were curtailed by organizers, as well as by French cultural exception law, under which theatrically screened films are legally forbidden from being made available via video-on-demand services until at least 36 months after their release. Besides traditional Hollywood markets and partners like the BBC, Sarandos said the company was also looking to expand investments in non-traditional foreign markets due to the growth of viewers outside North America. At the time, this included programs such as Dark from Germany, Ingobernable from Mexico and 3% from Brazil. On May 22, 2018, former president Barack Obama and his wife, Michelle Obama, signed a deal to produce docuseries, documentaries and features for Netflix under the Obamas' newly formed production company, Higher Ground Productions.
In June 2018, Netflix announced a partnership with Telltale Games to port its adventure games to the service in a streaming video format, allowing simple controls through a television remote. The first game, Minecraft: Story Mode, was released in November 2018. In July 2018, Netflix earned the most Emmy nominations of any network for the first time, with 112 nods. On August 27, 2018, the company signed a five-year exclusive overall deal with international best-selling author Harlan Coben. On the same day, the company signed an overall deal with Gravity Falls creator Alex Hirsch. In October 2018, Netflix paid under $30 million to acquire Albuquerque Studios (ABQ Studios), a $91 million film and TV production facility with eight sound stages in Albuquerque, New Mexico, for its first U.S. production hub, pledging to spend over $1 billion over the next decade to create one of the largest film studios in North America. In November 2018, Paramount Pictures signed a multi-picture film deal with Netflix, making Paramount the first major film studio to sign such a deal with Netflix. A sequel to AwesomenessTV's To All the Boys I've Loved Before was released on Netflix under the title To All the Boys: P.S. I Still Love You as part of the agreement. In December 2018, the company announced a partnership with ESPN Films on a television documentary chronicling Michael Jordan and the 1997–98 Chicago Bulls season, titled The Last Dance. It was released internationally on Netflix and became available for streaming in the United States three months after its broadcast airing on ESPN. In January 2019, Sex Education made its debut as a Netflix original series, receiving much critical acclaim. On January 22, 2019, Netflix sought and was approved for membership in the Motion Picture Association of America (MPAA), making it the first streaming service to join the association. In February 2019, The Haunting creator Mike Flanagan joined frequent collaborator Trevor Macy as a partner in Intrepid Pictures, and the duo signed an exclusive overall deal with Netflix to produce television content. On May 9, 2019, Netflix contracted with Dark Horse Entertainment to make television series and films based on comics from Dark Horse Comics. In July 2019, Netflix announced that it would be opening a hub at Shepperton Studios as part of a deal with Pinewood Group. In early August 2019, Netflix negotiated an exclusive multi-year film and television deal with Game of Thrones creators and showrunners David Benioff and D.B. Weiss. The first Netflix production created by Benioff and Weiss was planned as an adaptation of Liu Cixin's science fiction novel The Three-Body Problem, part of the Remembrance of Earth's Past trilogy. On September 30, 2019, in addition to renewing Stranger Things for a fourth season, Netflix signed the Duffer Brothers to an overall deal covering future film and television projects for the service. On November 13, 2019, Netflix and Nickelodeon entered into a multi-year agreement to produce several original animated feature films and television series based on Nickelodeon's library of characters. This agreement expanded on their existing relationship, in which new specials based on the past Nickelodeon series Invader Zim and Rocko's Modern Life (Invader Zim: Enter the Florpus and Rocko's Modern Life: Static Cling, respectively) were released by Netflix.
Other new projects planned under the team-up include a music project featuring Squidward Tentacles from the animated television series SpongeBob SquarePants, and films based on The Loud House and Rise of the Teenage Mutant Ninja Turtles. The agreement with Disney ended in 2019 due to the launch of Disney+, with its Marvel productions moving exclusively to that service in 2022. In November 2019, Netflix announced that it had signed a long-term lease to save the Paris Theatre, the last single-screen movie theater in Manhattan. The company oversaw several renovations at the theater, including new seats and a concession stand. In January 2020, Netflix announced a new four-film deal with Adam Sandler worth up to $275 million. On February 25, 2020, Netflix formed partnerships with six Japanese creators to produce an original Japanese anime project. This partnership includes manga creator group CLAMP, mangaka Shin Kibayashi, mangaka Yasuo Ohtagaki, novelist and film director Otsuichi, novelist Tow Ubutaka, and manga creator Mari Yamazaki. On March 4, 2020, ViacomCBS announced that it would be producing two spin-off films based on SpongeBob SquarePants for Netflix. On April 7, 2020, Peter Chernin's Chernin Entertainment made a multi-year first-look deal with Netflix to make films. On May 29, 2020, Netflix announced the acquisition of Grauman's Egyptian Theatre from the American Cinematheque to use as a special events venue. In July 2020, Netflix appointed Sarandos as co-CEO. In July 2020, Netflix invested in Black Mirror creators Charlie Brooker and Annabel Jones' new production outfit Broke And Bones. In September 2020, Netflix signed a multi-million dollar deal with the Duke and Duchess of Sussex. Harry and Meghan agreed to a multi-year deal promising to create TV shows, films, and children's content as part of their commitment to stepping away from the duties of the royal family. In September 2020, Hastings released a book about Netflix culture titled No Rules Rules: Netflix and the Culture of Reinvention, coauthored by Erin Meyer. In December 2020, Netflix signed a first-look deal with Millie Bobby Brown to develop and star in several projects, including a potential action franchise.

Expansion into gaming, Squid Game, new programming and new initiatives (2021–2022)

In March 2021, Netflix earned the most Academy Award nominations of any studio, with 36. Netflix won seven Academy Awards, the most of any studio. Later that year, Netflix also won more Emmys than any other network or studio, with 44 wins, tying the record for most Emmys won in a single year set by CBS in 1974. On April 8, 2021, Sony Pictures Entertainment announced an agreement for Netflix to hold the U.S. pay television window rights to its releases beginning in 2022, replacing Starz and expanding upon an existing agreement with Sony Pictures Animation. The agreement also includes a first-look deal for any future direct-to-streaming films produced by Sony Pictures, with Netflix required to commit to a minimum number of them. On April 27, Netflix announced that it was opening its first Canadian headquarters in Toronto. The company also announced that it would open offices in Sweden, Rome and Istanbul to increase its original content in those regions. In early June, Netflix hosted a first-ever week-long virtual event called "Geeked Week", where it shared exclusive news, new trailers, cast appearances and more about upcoming genre titles like The Witcher, The Cuphead Show!, and The Sandman.
On June 7, 2021, Jennifer Lopez's Nuyorican Productions signed a multi-year first-look deal with Netflix spanning feature films, TV series, and unscripted content, with an emphasis on projects that support diverse female actors, writers, and filmmakers. On June 10, 2021, Netflix announced it was launching an online store for curated products tied to the Netflix brand and shows such as Stranger Things and The Witcher. On June 21, 2021, Steven Spielberg's Amblin Partners signed a deal with Netflix to release multiple new feature films for the streaming service. On June 30, 2021, Powerhouse Animation Studios (the studio behind Netflix's Castlevania) announced a first-look deal with the streamer to produce more animated series. In July 2021, Netflix hired Mike Verdu, a former executive from Electronic Arts and Facebook, as vice president of game development, along with plans to add video games by 2022. Netflix announced plans to release mobile games that would be included in subscribers' service plans. Trial offerings were first launched for Netflix users in Poland in August 2021, offering premium mobile games based on Stranger Things, including Stranger Things 3: The Game, free to subscribers through the Netflix mobile app. On July 14, 2021, Netflix signed a first-look deal with Joey King, star of The Kissing Booth franchise, under which King would produce and develop films for Netflix via her All The King's Horses production company. On July 21, 2021, Zack Snyder, director of Netflix's Army of the Dead, announced he had signed his production company The Stone Quarry to a first-look deal with Netflix; his upcoming projects included a sequel to Army of the Dead and a sci-fi adventure film titled Rebel Moon. In 2019, he had agreed to produce an anime-style web series inspired by Norse mythology. As of August 2021, Netflix Originals made up 40% of Netflix's overall library in the United States. The company announced that "TUDUM: A Netflix Global Fan Event", a three-hour virtual behind-the-scenes event featuring first-look reveals for 100 of the streamer's series, films and specials, would have its inaugural show in late September 2021. According to Netflix, the show garnered 25.7 million views across 29 Netflix YouTube channels, Twitter, Twitch, Facebook, TikTok and Tudum.com. Also in September, the company announced The Queen's Ball: A Bridgerton Experience, launching in 2022 in Los Angeles, Chicago, Montreal, and Washington, D.C. Squid Game, a South Korean survival drama created and produced by Hwang Dong-hyuk, rapidly became the service's most-watched show within a week of its launch in many markets on September 17, 2021, including Korea, the U.S. and the United Kingdom. Within its first 28 days on the service, Squid Game drew more than 111 million viewers, surpassing Bridgerton to become Netflix's most-watched show. On September 20, 2021, Netflix signed a long-term lease with Aviva Investors to operate and expand the Longcross Studios in Surrey, UK. On September 21, 2021, Netflix announced that it would acquire the Roald Dahl Story Company, which manages the rights to Roald Dahl's stories and characters, for an undisclosed price, and would operate it as an independent company. The company acquired Night School Studio, an independent video game developer, on September 28, 2021. On October 13, 2021, Netflix announced the launch of the Netflix Book Club, partnering with Starbucks for a social series called But Have You Read the Book?.
Uzo Aduba became the inaugural host of the series and announced monthly book selections set to be adapted by the streamer. Aduba speaks with the cast, creators, and authors about the book adaptation process over a cup of coffee at Starbucks. Through October 2021, Netflix commonly reported viewership for its programming based on the number of viewers or households that watched a show for at least two minutes within a given period (such as the first 28 days from its premiere). On the announcement of its quarterly earnings in October 2021, the company stated that it would switch its viewership metrics to the number of hours that a show was watched, including rewatches, which the company said was closer to the measurements used in linear broadcast television, and thus "our members and the industry can better measure success in the streaming world." Netflix officially launched mobile games on November 2, 2021, for Android users around the world. Through the app, subscribers had free access to five games, including two previously released Stranger Things titles. Netflix said it intended to add more games to the service over time. On November 9, the collection launched for iOS. Some games in the collection require an active internet connection to play, while others are available offline. Netflix Kids accounts do not have games available. On November 16, Netflix announced the launch of "Top10 on Netflix.com", a new website with weekly global and country lists of the most popular titles on the service, based on its new viewership metrics. On November 22, Netflix announced that it would acquire Scanline VFX, the visual effects and animation company behind Cowboy Bebop and Stranger Things. On the same day, Roberto Patino signed a deal with Netflix and established his production banner, Analog Inc., in partnership with the company. Patino's first project under the deal is a series adaptation of Image Comics' Nocterra. On December 6, 2021, Netflix and Stage 32 announced that they had teamed up for the workshops of the Creating Content for the Global Marketplace program. On December 7, 2021, Netflix partnered with IllumiNative, a woman-led non-profit organization, on the Indigenous Producers Training Program. On December 9, Netflix announced the launch of "Tudum", an official companion website that offers news, exclusive interviews and behind-the-scenes videos for its original television shows and films. On December 13, Netflix signed a multi-year overall deal with Kalinda Vazquez. On December 16, 2021, Netflix signed a multi-year creative partnership with Spike Lee and his production company 40 Acres and a Mule Filmworks to develop film and television projects. In compliance with the EU Audiovisual Media Services Directive and its implementation in France, Netflix reached commitments with French broadcasting authorities and film guilds, as required by law, to invest a specific share of its annual revenue in original French films and series. These films must be theatrically released and may not be carried on Netflix until 15 months after their release. In January 2022, Netflix ordered additional sports docuseries from Drive to Survive producers Box to Box Films, including a series that would follow PGA Tour golfers, and another that would follow professional tennis players on the ATP and WTA Tour circuits. The company announced plans to acquire Next Games in March 2022 for €65 million as part of Netflix's expansion into gaming.
Next Games had developed the mobile title Stranger Things: Puzzle Tales as well as two The Walking Dead mobile games. Later in the month, Netflix also acquired the Texas-based mobile game developer Boss Fight Entertainment for an undisclosed sum. On March 15, 2022, Netflix announced a partnership with Dr. Seuss Enterprises to produce five new series and specials based on Seuss properties, following the success of Green Eggs and Ham. On March 29, 2022, Netflix announced that it would open an office in Poland to serve as a hub for its original productions across Central and Eastern Europe. On March 30, 2022, Netflix extended its lease agreement with Martini Film Studios, just outside Vancouver, Canada, for another five years. On March 31, 2022, Netflix ordered a docuseries that would follow teams in the 2022 Tour de France, also co-produced by Box to Box Films. Following the 2022 Russian invasion of Ukraine, Netflix suspended its operations and future projects in Russia. It also announced that it would not comply with a proposed directive by Roskomnadzor requiring all internet streaming services with more than 100,000 subscribers to integrate the major free-to-air channels (which are primarily state-owned). A month later, ex-Russian subscribers filed a class action lawsuit against Netflix. Netflix stated that 100 million households globally were sharing passwords to their accounts with others, and that Canada and the United States accounted for 30 million of them. Following these announcements, Netflix's stock price fell by 35 percent. By June 2022, Netflix had laid off 450 full-time and contract employees as part of the company's plan to trim costs amid lower-than-expected subscriber growth. On April 13, 2022, Netflix released the series Our Great National Parks, hosted and narrated by former US President Barack Obama. It also partnered with Group Effort Initiative, a company founded by Ryan Reynolds and Blake Lively, to provide opportunities behind the camera for those in underrepresented communities. On the same day, Netflix partnered with the Lebanon-based Arab Fund For Arts And Culture to support female Arab filmmakers; it would provide a one-time grant of $250,000 to female producers and directors in the Arab world through the company's Fund for Creative Equity. Also on the same day, Netflix announced an Exploding Kittens mobile card game tied to a new animated TV series, which would launch in May. Netflix formed a creative partnership with J. Miles Dale. The company also formed a partnership with Japan's Studio Colorido, signing a multi-film deal to boost its anime content in Asia; Netflix was said to be co-producing three feature films with the studio, the first of which would premiere in September 2022. On April 28, the company launched its inaugural Netflix Is a Joke comedy festival, featuring more than 250 shows over 12 nights at 30-plus locations across Los Angeles, including the first-ever stand-up show at Dodger Stadium. The first volume of Stranger Things 4 logged Netflix's biggest premiere weekend ever for an original series, with 286.79 million hours viewed. This was preceded by a new Stranger Things interactive experience in New York City developed by the show's creators. After the release of the second volume of Stranger Things 4 on July 1, 2022, it became Netflix's second title to receive more than one billion hours viewed. On July 19, 2022, Netflix announced plans to acquire Australian animation studio Animal Logic.
That month, in collaboration with Sennheiser, Netflix began to add Ambeo two-channel audio mixes (referred to as "spatial audio") to selected original productions, which allow simulated surround sound on stereo speakers and headphones. On September 5, 2022, Netflix opened an office in Warsaw, Poland, responsible for the service's operations in 28 markets in Central and Eastern Europe. On October 4, 2022, Netflix signed a creative partnership with Andrea Berloff and John Gatins. On October 11, Netflix signed up with the Broadcasters' Audience Research Board for external measurement of viewership in the UK. On October 12, Netflix agreed to build a production complex at Fort Monmouth in Eatontown, New Jersey. On October 18, Netflix began exploring a cloud gaming offering and opened a new gaming studio in Southern California. On November 7, 2022, Netflix announced a strategic partnership with The Seven, a Japanese production company owned by TBS Holdings, to produce multiple original live-action titles for its subscribers over the next five years. On December 12, 2022, Netflix announced that sixty percent of its subscribers had watched a Korean drama. CEO Ted Sarandos attributed the increase in viewership of Korean content among Americans to Korean films and dramas being "often unpredictable" and catching "the American audience by surprise". On January 10, 2023, Netflix announced plans to open an engineering hub in its Warsaw office. The hub is to provide Netflix's creative partners with software solutions for the production of films and series. In February 2023, Netflix launched a wider rollout of spatial audio, and began allowing Premium subscribers to download content for offline playback on up to six devices (expanded from four). On March 4, 2023, Netflix broadcast its first-ever global live-streaming event, the stand-up comedy special Chris Rock: Selective Outrage. Netflix reworked its viewership metrics again in June 2023: viewership of shows was measured during the first 91 days of availability instead of the first 28 days, and "views" were now calculated as total viewership hours divided by the total running time of the title. This provided more equal consideration for shorter shows and movies compared to longer ones. In August 2023, the company announced Netflix Stories, a collection of interactive narrative games based on Netflix series and movies such as Love Is Blind, Money Heist and Virgin River.

Co-CEOs, discontinuation of DVD rentals, expansion of live events, WWE agreement (2023–present)

In January 2023, Greg Peters and Ted Sarandos were named co-CEOs of Netflix, with Hastings assuming the role of executive chairman. Peters previously served as COO and Chief Product Officer, while Sarandos served as Chief Content Officer. On April 18, 2023, Netflix announced that it would discontinue its DVD-by-mail service on September 29. Users of the service were able to keep the DVDs they had received. Over its lifetime, the service had sent out over five billion shipments. In October 2023, Eunice Kim was promoted to Chief Product Officer and Elizabeth Stone was promoted to Chief Technology Officer. That same month, amid a restructuring of its animation division, Netflix announced a multi-film agreement with Skydance Animation, beginning with Spellbound (2024). The agreement partially replaced one Skydance had with Apple TV+.
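The three generations of viewership metrics described above can be contrasted with a small worked example. Everything below (the session logs, field layout, and numbers) is invented for illustration; this is a sketch of the published metric definitions, not Netflix's internal tooling.

```python
# (account_id, hours_watched) playback sessions for one hypothetical title
sessions_28d = [("a1", 0.03), ("a2", 1.5), ("a3", 8.0), ("a2", 2.0)]
sessions_91d = sessions_28d + [("a4", 4.0), ("a1", 6.0)]

RUNTIME_HOURS = 8.0  # total running time of the (hypothetical) season

# Pre-October 2021: accounts that watched at least two minutes in 28 days.
two_minute_views = len({acct for acct, h in sessions_28d if h >= 2 / 60})

# October 2021 onward: total hours viewed, rewatches included.
hours_viewed = sum(h for _, h in sessions_28d)

# June 2023 onward: "views" over 91 days = total hours / runtime,
# which stops penalizing shorter titles.
views = sum(h for _, h in sessions_91d) / RUNTIME_HOURS

print(two_minute_views, round(hours_viewed, 2), round(views, 2))  # -> 2 11.53 2.69
```

The runtime division is the key change: a two-hour film and an eight-hour season that accumulate the same total hours now report very different view counts.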
In December 2023, Netflix released its first "What We Watched: A Netflix Engagement Report", a look at viewership for every original and licensed title watched for more than 50,000 hours from January to June 2023. The company also announced plans to publish the report twice a year. In this first report, covering the first six months of 2023, it reported that The Night Agent was the most-watched show globally in that period. On January 23, 2024, Netflix announced a major agreement with the professional wrestling promotion WWE, under which it will acquire the rights to the live weekly program Raw beginning January 6, 2025; the rights will initially cover the United States, Canada, the United Kingdom, and Latin America, and expand to other territories over time. Outside the United States, Netflix will also hold international rights to all three of WWE's main weekly programs (Raw, SmackDown, and NXT), premium live events, and documentaries, among other content. The agreement was reported to be valued at $500 million per year over ten years. In February 2024, Netflix joined with Peter Morgan, creator of the Netflix series The Crown, to produce the play Patriots on Broadway. The venture is the first Broadway credit for the company, but not its first stage project: it was actively involved as a producer of Stranger Things: The First Shadow in London, and both productions share a lead producer, Sonia Friedman. In May 2024, the company hosted its second Netflix Is a Joke festival in Los Angeles. It streamed several specials from the festival live, including Katt Williams's Woke Folk and The Roast of Tom Brady, both of which ranked in Netflix's global top 10 the following two weeks. That same month, Netflix announced that it would stream both National Football League Christmas games in 2024; for 2025 and 2026, the streamer will have exclusive rights to at least one NFL Christmas game each year. In June 2024, Netflix announced that it would develop new entertainment venues known as "Netflix House" at King of Prussia Mall in Pennsylvania and Galleria Dallas in Texas. The spaces will feature retail shops, restaurants, and other interactive experiences related to Netflix original content, building upon other "pop-up" initiatives to promote individual programs. In November 2024, Netflix announced that it would discontinue further work on interactive specials and remove all but four of them from the platform, citing a desire to focus its "technological efforts in other areas". On November 15, 2024, Netflix streamed a boxing event from AT&T Stadium in Arlington, Texas, featuring as co-main events an exhibition match between Jake Paul and Mike Tyson, and Katie Taylor vs. Amanda Serrano for the WBA, WBC, IBF, WBO, and The Ring lightweight titles. While afflicted by technical issues, the stream had a peak concurrent viewership of 65 million viewers according to Paul's promoter, surpassing the 2023 ICC Men's Cricket World Cup final (which had a reported 57 million concurrent streams on Disney+ Hotstar) as the most live-streamed sporting event. Netflix stated that the event had an "average minute audience" (AMA) of 108 million worldwide, and that the AMA of 47 million in the United States made the Taylor vs. Serrano bout the most-watched women's professional sporting event in U.S. history. On December 20, 2024, FIFA announced that Netflix would be the exclusive U.S. broadcaster of the 2027 and 2031 FIFA Women's World Cup, in what was deemed the platform's most significant push into sports content.
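The "average minute audience" figure cited for the boxing event is, in the usual industry sense, the mean number of concurrent viewers across every minute of a broadcast, which is why it can differ from peak concurrency. A toy computation with invented numbers:

```python
# Concurrent viewers sampled once per minute, in millions (invented data).
concurrent_by_minute = [40, 55, 70, 65, 50]

ama = sum(concurrent_by_minute) / len(concurrent_by_minute)  # average minute audience
peak = max(concurrent_by_minute)                             # peak concurrency

print(f"AMA: {ama:.0f}M, peak: {peak}M")  # -> AMA: 56M, peak: 70M
```

The two figures answer different questions and, as in the event above, are often computed over different universes (concurrent streams by one measurer versus estimated viewers per stream by another), so they are not directly comparable.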
On Christmas Day 2024, Netflix aired its first-ever NFL games: the Kansas City Chiefs versus the Pittsburgh Steelers, and the Baltimore Ravens versus the Houston Texans. The games each averaged over 30 million global viewers and became the two most-streamed NFL games in US history, while giving Netflix its most-watched Christmas Day ever in the US. In January 2025, Netflix announced that it had exceeded 300 million subscribers worldwide after adding a record 18.9 million in the fourth quarter of 2024, for a total of 41 million added over the full year.

Availability and access

Global availability

Netflix is available in every country and territory except China, North Korea, Syria, and Russia. In January 2016, Netflix announced it would begin blocking VPNs, since they can be used to watch videos from a country where they are unavailable. The result of the VPN block is that affected users can only watch videos available worldwide, with other videos hidden from search results. Netflix also localizes its service by market: the Israeli user interface, for example, supports Hebrew and a right-to-left orientation, a common localization strategy in many markets, and in some regions Netflix offers a more affordable mobile-only subscription.

Subscriptions

Customers can subscribe to one of three plans; the plans differ in video resolution, the number of simultaneous streams, and the number of devices to which content can be downloaded. At the end of Q1 2022, Netflix estimated that 100 million households globally were sharing passwords to their accounts with others. In March 2022, Netflix began to charge a fee for additional users in Chile, Peru, and Costa Rica in an attempt to control account sharing. On July 18, 2022, Netflix announced that it would test its account-sharing measures in more countries, including Argentina, the Dominican Republic, El Salvador, Guatemala and Honduras. On October 17, Netflix launched Profile Transfer to help end account sharing. On July 13, 2022, Netflix announced plans to launch an advertising-supported subscription option; the planned advertising tier would not allow subscribers to download content, unlike the existing ad-free plans. On July 20, 2022, it was announced that the advertising-supported tier would come to Netflix in 2023 but would not feature the full library of content. In October, the launch date was announced as November 3, 2022, and the tier launched in 12 countries: the United States, Canada, Mexico, Brazil, the United Kingdom, France, Germany, Italy, Spain, Australia, Japan and South Korea. The ad-supported plan was called "Basic with Ads" and cost $6.99 per month in the United States at launch. On February 24, 2023, Netflix cut subscription prices in more than 30 countries around the world to attract more subscribers; Malaysia, Indonesia, Thailand, the Philippines, Croatia, Venezuela, Kenya, and Iran were among the countries where the cost of a subscription was reduced. In the same month, stronger anti-password-sharing rules were expanded to Canada, New Zealand, Portugal, and Spain. In May 2023, these measures were further expanded to United States and Brazil subscribers. In July 2023, Netflix added 5.9 million subscribers in the second quarter of the year, for a total of 238.39 million subscribers overall. The United States and Canada accounted for 1.2 million of those subscribers, the largest regional quarterly gain since 2021.
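To make the catalog effect of the VPN block concrete, here is a hedged, purely illustrative sketch: when a request appears to come from a known VPN range, only titles licensed in every region stay visible. The titles, regions, and address ranges are invented (the IPs come from reserved documentation blocks), and real VPN detection is far more involved than a prefix check.

```python
ALL_REGIONS = {"US", "UK", "JP", "BR"}

CATALOG = {
    "Global Hit": ALL_REGIONS,        # licensed everywhere
    "US-Only Drama": {"US"},
    "UK Panel Show": {"UK", "JP"},
}

KNOWN_VPN_PREFIXES = ("203.0.113.",)  # illustrative placeholder range

def visible_titles(client_ip: str, client_region: str) -> list[str]:
    if client_ip.startswith(KNOWN_VPN_PREFIXES):
        # VPN suspected: fall back to the worldwide-licensed subset.
        return [t for t, r in CATALOG.items() if r == ALL_REGIONS]
    return [t for t, r in CATALOG.items() if client_region in r]

print(visible_titles("198.51.100.7", "UK"))  # -> ['Global Hit', 'UK Panel Show']
print(visible_titles("203.0.113.9", "UK"))   # -> ['Global Hit']
```

This mirrors the behavior described above: suspected VPN users are not blocked outright but see only the globally available subset, with other titles hidden from search results.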
In February 2023, Netflix announced that it would enforce stricter rules on password sharing. In May 2023, Netflix began cracking down on password sharing in the US, UK, and Australia. Under the new rules, multiple people can use and share one account, but they must belong to the same household; Netflix defines a household as people who live in the same location as the owner of the account. Users are asked to set a primary location based on the device's IP address. Netflix reported 8.05 million new subscribers in Q2 2024, up from 5.9 million subscribers added in Q2 2023. In July 2024, Netflix started phasing out its cheapest ad-free subscription plan for users in France and the US, a year after the plan was removed in Canada and the UK; members in these countries have the option to sign up for either the standard ad-free plan or the ad-supported plan.

Device support

Netflix can be accessed via a web browser, while Netflix apps are available on various platforms, including Blu-ray players, tablet computers, mobile phones, smart TVs, digital media players, and video game consoles. Currently supported game consoles include:

Microsoft Xbox 360, Xbox One and Xbox Series X/S
Sony PlayStation 3, PlayStation 4 and PlayStation 5

Several older devices no longer support Netflix. For home gaming consoles, this includes the PlayStation 2, PlayStation TV, Wii and Wii U; for handheld gaming consoles, it includes the Nintendo 3DS family of systems and the PlayStation Vita. The second- and third-generation Apple TV previously supported Netflix with an ad-free plan, but the app was automatically removed from these devices on July 31, 2024. In addition, a growing number of multichannel television providers, including cable television and IPTV services, have added Netflix apps accessible within their own set-top boxes, sometimes with the ability for its content (along with that of other online video services) to be presented within a unified search interface alongside linear television programming as an "all-in-one" solution. The maximum video resolution supported on computers depends on the DRM systems available on a particular operating system and web browser.

Content

Original programming

"Netflix Originals" are content that is produced, co-produced, or distributed exclusively by Netflix. Netflix funds its original shows differently from other TV networks: when it signs a project, it provides the money upfront and immediately orders two seasons of most series. It keeps the licensing rights that would normally give production companies future revenue opportunities from syndication, merchandising, and the like. Over the years, Netflix's output ballooned to a level unmatched by any television network or streaming service. According to Variety Insight, Netflix produced a total of 240 new original shows and movies in 2018, climbing to 371 in 2019, a figure "greater than the number of original series that the entire U.S. TV industry released in 2005." The Netflix budget allocated to production increased annually, reaching $13.6 billion in 2021 and projected to hit $18.9 billion by 2025, a figure that once again overshadowed any of its competitors. As of August 2022, original productions made up 50% of Netflix's overall library in the United States.

Film and television deals

Netflix has exclusive pay TV deals with several studios. The deals give Netflix exclusive streaming rights while adhering to the structures of traditional pay TV terms.
Distributors that have licensed content to Netflix include Warner Bros., Universal Pictures, Sony Pictures Entertainment and previously The Walt Disney Studios. Netflix also holds current and back-catalog rights to television programs distributed by Walt Disney Television, DreamWorks Classics, Kino International, PBS, Warner Bros. Television and Paramount Global Content Distribution, along with titles from other companies such as ABS-CBN Studios, GMA Pictures, Cignal Entertainment, MQ Studios, Regal Entertainment, Viva Films, MNC Media, Screenplay Films, Soraya Intercine Films, Rapi Films, CJ ENM, JTBC, Kakao Entertainment, TBS, TV Asahi, Fuji TV, Mediacorp, Primeworks Studios, GMM Grammy, Public Television Service, Gala Television, ITV Studios, Hasbro Entertainment and StudioCanal. Formerly, the streaming service also held rights to select television programs distributed by NBCUniversal Television Distribution, Sony Pictures Television and 20th Century Fox Television. Netflix negotiated to distribute animated films from Universal that HBO declined to acquire, such as The Lorax, ParaNorman, and Minions. Netflix holds exclusive streaming rights to the film library of Studio Ghibli (except Grave of the Fireflies) worldwide, except in the U.S. and Japan, as part of an agreement signed with Ghibli's international sales holder Wild Bunch in 2020.

Netflix Games

In July 2021, Netflix hired Mike Verdu, a former executive from Electronic Arts and Facebook, as vice president of game development, along with plans to add video games by 2022. Netflix announced plans to release mobile games that would be included in subscribers' service plans. Trial offerings were first launched for Netflix users in Poland in August 2021, offering premium mobile games based on Stranger Things, including Stranger Things 3: The Game, free to subscribers through the Netflix mobile app. Netflix officially launched mobile games on November 2, 2021, for Android users around the world. Through the app, subscribers had free access to five games, including two previously released Stranger Things titles. On November 9, the collection launched for iOS. Verdu said in October 2022 that besides continuing to expand the portfolio of games, Netflix was also interested in cloud gaming options. To support the games effort, Netflix began acquiring and forming a number of studios. The company acquired Night School Studio, an independent video game developer, in September 2021. Netflix announced plans to acquire Next Games in March 2022 for €65 million as part of its expansion into gaming; Next Games had developed the mobile title Stranger Things: Puzzle Tales as well as two The Walking Dead mobile games. Later in the month, Netflix also acquired the Texas-based mobile game developer Boss Fight Entertainment for an undisclosed sum. Netflix opened a mobile game studio in Helsinki, Finland, in September 2022, and a new studio, its fifth overall, in Southern California in October 2022, alongside the acquisition of Spry Fox in Seattle. In June 2024, Verdu moved into a new role focusing on "innovation in game development." The next month, Netflix hired Alain Tascan, vice president of game development at Epic Games, to head Netflix Games. As of July 2024, Netflix had over 80 games in development, releasing at least one game each month to attract fans.
The company shut down its Southern California "Team Blue" AAA gaming studio in October 2024, leading to the departure of developers such as Overwatch producer Chacko Sonny, Halo veteran Joseph Staten and God of War art director Rafael Grassetti. Netflix indicated that it remained committed to growing its gaming business despite the changes. In late October, Netflix announced several games based on hit series, including Netflix Stories: Outer Banks, Netflix Stories: A Perfect Couple, Netflix Stories: A Virgin River Christmas, and The Ultimatum: Choices, as well as a new daily word game in partnership with TED Talks, TED Tumblewords.

Technology

Content delivery

Netflix peers freely with Internet service providers (ISPs) directly and at common Internet exchange points. In June 2012, a custom content delivery network, Open Connect, was announced. For larger ISPs with over 100,000 subscribers, Netflix offers free Open Connect computer appliances that cache its content within the ISPs' data centers or networks to further reduce Internet transit costs. By August 2016, Netflix had closed its last physical data center, but continued to develop its Open Connect technology. A 2016 study at the University of London detected 233 individual Open Connect locations across six continents, with the largest amount of traffic in the US, followed by Mexico. As of July 2017, Netflix series and movies accounted for more than a third of all prime-time download Internet traffic in North America.

API

On October 1, 2008, Netflix offered access to its service via a public application programming interface (API). It allowed access to data for all Netflix titles and allowed users to manage their movie queues. The API was free and allowed commercial use. In June 2012, Netflix began to restrict the availability of the public API, focusing instead on a small number of known partners using private interfaces, since most traffic came from those private interfaces. In November 2014, Netflix retired the public API and partnered with the developers of eight services deemed the most valuable, including Instant Watcher, Fanhattan, Yidio and Nextguide.

Recommendations and thumbnails

Netflix presents viewers with recommendations based on their interactions with the service, such as previous viewing history and ratings of viewed content. These are often grouped into genres and formats, or feature the platform's highest-rated content. Each title is presented with a thumbnail. Before around 2015, these were the same key art for everyone, but they have since been customized: Netflix may select a specific key art for a thumbnail based on viewing history, such as an actor or scene type matching the viewer's genre preferences. Some thumbnails are generated from video stills. The Netflix recommendation system is a vital part of the streaming platform's success, enabling personalized content suggestions for hundreds of millions of subscribers worldwide. Using machine learning algorithms, Netflix analyzes user interactions, including viewing history, searches, and ratings, to deliver personalized recommendations for movies and TV shows. The recommendation system considers individual user preferences, similarities with other users with comparable tastes, specific title attributes (genre, release year, etc.), device usage patterns, and viewing time. As users interact with the platform and provide feedback through their viewing habits, the recommendation system adapts and refines its suggestions over time.
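The paragraph above describes the recommender in terms of viewing history and similarity to users with comparable tastes. As a minimal sketch of that single ingredient (user-based collaborative filtering), the tiny ratings matrix below is invented, and Netflix's production system layers many more signals (title attributes, device usage, time of day) on top:

```python
import math

# rows: users, columns: titles; 0 = unwatched (invented data)
titles = ["Drama A", "Drama B", "Thriller C", "Thriller D"]
ratings = {
    "u1": [5, 4, 0, 1],
    "u2": [4, 5, 3, 0],
    "u3": [1, 0, 5, 4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user):
    """Score the user's unseen titles by similarity-weighted ratings."""
    scores = [0.0] * len(titles)
    weights = [0.0] * len(titles)
    for other, hist in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], hist)
        for i, r in enumerate(hist):
            if r and not ratings[user][i]:      # unseen by this user
                scores[i] += sim * r
                weights[i] += sim
    ranked = [(titles[i], scores[i] / weights[i])
              for i in range(len(titles)) if weights[i]]
    return sorted(ranked, key=lambda pair: -pair[1])

print(recommend("u1"))  # -> [('Thriller C', 3.39...)], u1's lone unseen title
```

Here u1's taste overlaps mostly with u2, so u2's opinion of the unseen title dominates the prediction; the row-and-ranking presentation described next decides where such a title surfaces on the homepage.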
Netflix uses a two-tiered ranking system: titles are organized into rows on the homepage for easy navigation, and the titles within each row are ranked by how interested the user is predicted to be in them, with the rows themselves also ordered to maximize engagement. Netflix also uses A/B testing to determine which presentation choices, such as movie suggestions and how titles are organized, generate the most interest and engagement. Tags like "bittersweet", "sitcom", or "intimate" are assigned to each title by Netflix employees, and Netflix uses the tags to create recommendation micro-genres like "Goofy TV Shows" or "Girls Night In".

Awards

On July 18, 2013, Netflix earned the first Primetime Emmy Award nominations for original streaming programs at the 65th Primetime Emmy Awards. Three of its series, Arrested Development, Hemlock Grove and House of Cards, earned a combined 14 nominations (nine for House of Cards, three for Arrested Development and two for Hemlock Grove). The House of Cards episode "Chapter 1" received four nominations across the 65th Primetime Emmy Awards and the 65th Primetime Creative Arts Emmy Awards, becoming the first episode of a streaming television series to receive a major Primetime Emmy Award nomination. With its win for Outstanding Cinematography for a Single-Camera Series, "Chapter 1" became the first episode from a streaming service to be awarded an Emmy, and David Fincher's win for Outstanding Directing for a Drama Series for House of Cards made the episode the first from a streaming service to win a Primetime Emmy. On November 6, 2013, Netflix earned its first Grammy nomination when "You've Got Time" by Regina Spektor—the main title theme song for Orange Is the New Black—was nominated for Best Song Written for Visual Media. On December 12, 2013, the network earned six Golden Globe Award nominations, including four for House of Cards. Among those nominations was Robin Wright for Golden Globe Award for Best Actress – Television Series Drama for her portrayal of Claire Underwood, which she won. With the accolade, Wright became the first actress to win a Golden Globe for a streaming television series; it also marked Netflix's first major acting award. House of Cards and Orange Is the New Black also won Peabody Awards in 2013. On January 16, 2014, Netflix became the first streaming service to earn an Academy Award nomination, when The Square was nominated for Best Documentary Feature. On July 10, 2014, Netflix received 31 Emmy nominations. Among other nominations, House of Cards received nominations for Outstanding Drama Series, Outstanding Directing for a Drama Series and Outstanding Writing for a Drama Series, while Kevin Spacey and Robin Wright were nominated for Outstanding Lead Actor and Outstanding Lead Actress in a Drama Series. Orange Is the New Black was nominated in the comedy categories, earning nominations for Outstanding Comedy Series, Outstanding Writing for a Comedy Series and Outstanding Directing for a Comedy Series. Taylor Schilling, Kate Mulgrew, and Uzo Aduba were respectively nominated for Outstanding Lead Actress in a Comedy Series, Outstanding Supporting Actress in a Comedy Series and Outstanding Guest Actress in a Comedy Series (the latter for Aduba's recurring role in season one, as she was promoted to series regular for the show's second season). Netflix received the largest share of 2016 Emmy Award nominations, with 16 major nominations; however, streaming shows received only 24 nominations out of a total of 139, falling significantly behind cable.
The 16 Netflix nominations were spread across House of Cards with Kevin Spacey, A Very Murray Christmas with Bill Murray, Unbreakable Kimmy Schmidt, Master of None, and Bloodline. Stranger Things received 19 nominations at the 2017 Primetime Emmy Awards, while The Crown received 13 nominations. In December 2017, Netflix was awarded PETA's Company of the Year for promoting animal rights movies and documentaries like Forks Over Knives and What the Health. At the 90th Academy Awards, held on March 4, 2018, the Netflix-distributed film Icarus won the company's first Academy Award, for Best Documentary Feature Film. During his remarks backstage, director and writer Bryan Fogel said that Netflix had "single-handedly changed the documentary world." Icarus had its premiere at the 2017 Sundance Film Festival and was bought by Netflix for $5 million, one of the biggest deals ever for a non-fiction film. With 112 nominations at the 2018 Primetime and Creative Arts Emmy Awards, Netflix became the network whose programs received the most nominations, ending HBO's 17-year run at the top; HBO received 108 nominations. On January 22, 2019, films distributed by Netflix scored 15 nominations for the 91st Academy Awards, including a Best Picture nomination for Alfonso Cuarón's Roma, which was nominated for 10 awards. The 15 nominations equaled the total number of nominations that films distributed by Netflix had received in all previous years. At the 78th Golden Globe Awards, Netflix received 20 television nominations and 22 film nominations. It secured three of the five nominations for best drama TV series, for The Crown, Ozark and Ratched, and four of the five nominations for best actress in a TV series: Olivia Colman, Emma Corrin, Laura Linney and Sarah Paulson. In 2020, Netflix earned 24 Academy Award nominations, marking the first time a streaming service led all studios. Films and programs distributed by Netflix received 30 nominations at the 2021 Screen Actors Guild Awards, more than any other distribution company, and won seven awards, including best motion picture for The Trial of the Chicago 7 and best TV drama for The Crown. Netflix also received the most nominations of any studio at the 93rd Academy Awards, with 35 total nominations and 7 wins. In February 2022, The Power of the Dog, a gritty western distributed by Netflix and directed by Jane Campion, received 12 nominations, including Best Picture, for the 94th Academy Awards; films distributed by the streamer received a total of 27 nominations. Campion became the third woman to win the Best Director award, taking her second Oscar for The Power of the Dog. At the 50th International Emmy Awards, the Netflix original Sex Education won Best Comedy Series. Later that year, Netflix received 26 Emmy Awards, including six for Squid Game; the Squid Game wins for Outstanding Lead Actor in a Drama Series and Outstanding Directing for a Drama Series were the first ever for a non-English-language series in those categories. In March 2023, Netflix won six Academy Awards, including four for All Quiet on the Western Front, making it the most-awarded Netflix film in the company's history. Guillermo del Toro's Pinocchio was the first streaming film to be named Best Animated Feature, and The Elephant Whisperers was the first Indian-produced film to receive Best Documentary Short Film.
In 2023, Netflix received 103 Emmy nominations, including 13 each for the limited series Beef and Dahmer – Monster: The Jeffrey Dahmer Story. In July 2024, Netflix received 107 Emmy nominations, the most of any network.

Criticism

Netflix has been subject to criticism from various groups and individuals as its popularity and market reach increased in the 2010s. Customers have complained about price increases in Netflix offerings, dating back to the company's decision to separate its DVD rental and streaming services, which was quickly reversed. As Netflix increased its streaming output, it has faced calls to limit accessibility to graphic violence and to include viewer advisories for issues such as sensationalism and the promotion of pseudoscience. Netflix's content has also been criticized by disability rights advocates for the poor quality of its closed captioning. Some media organizations and competitors have criticized Netflix for selectively releasing ratings and viewer numbers of its original programming; the company has made claims boasting about viewership records without providing data to substantiate them, or has used problematic estimation methods. In March 2020, some government agencies called for Netflix and other streamers to limit their services due to increased broadband and energy consumption as use of the platforms increased. In response, the company announced it would reduce the bit rate across all of its streams in Europe, decreasing Netflix traffic on European networks by around 25 percent; the same steps were later taken in India. In May 2022, the Netflix shareholder Imperium Irrevocable Trust filed a lawsuit against the company for violating U.S. securities laws. In January 2024, a federal judge dismissed the suit, stating that shareholders had failed to provide instances of Netflix lying about subscriber growth. In May 2023, Netflix officially banned password sharing between individuals of different households, meaning an account could only be shared by people living in the same household.
Animal testing
Animal testing, also known as animal experimentation, animal research, and in vivo testing, is the use of non-human animals, such as model organisms, in experiments that seek to control the variables that affect the behavior or biological system under study. This approach can be contrasted with field studies, in which animals are observed in their natural environments or habitats. Experimental research with animals is usually conducted in universities, medical schools, pharmaceutical companies, defense establishments, and commercial facilities that provide animal-testing services to the industry. The focus of animal testing varies on a continuum from pure research, aimed at developing fundamental knowledge of an organism, to applied research, which may focus on answering questions of great practical importance, such as finding a cure for a disease. Examples of applied research include testing disease treatments, breeding, defense research, and toxicology, including cosmetics testing. In education, animal testing is sometimes a component of biology or psychology courses.

Research using animal models has been central to most of the achievements of modern medicine. It has contributed to most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. The results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". Research in model organisms led to further medical advances, such as the production of the diphtheria antitoxin and the 1922 discovery of insulin and its use in treating diabetes, which had previously meant death. Modern general anaesthetics such as halothane were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine.

Animal testing is widely used to aid in research of human disease when human experimentation would be unfeasible or unethical. This strategy is made possible by the common descent of all living organisms, and the conservation of metabolic and developmental pathways and genetic material over the course of evolution. Performing experiments in model organisms allows for a better understanding of the disease process without the added risk of harming an actual human. The species of the model organism is usually chosen so that it reacts to disease or its treatment in a way that resembles human physiology as needed. Biological activity in a model organism does not ensure an effect in humans, and care must be taken when generalizing from one organism to another. However, many drugs, treatments, and cures for human diseases are developed in part with the guidance of animal models. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections.
Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available. The annual use of vertebrate animals, from zebrafish to non-human primates, was estimated at 192 million as of 2015. In the European Union, vertebrate species represent 93% of animals used in research, and 11.5 million animals were used there in 2011. The mouse (Mus musculus) is associated with many important biological discoveries of the 20th and 21st centuries, and by one estimate, the number of mice and rats used in the United States alone in 2001 was 80 million. In 2013, it was reported that mammals (mice and rats), fish, amphibians, and reptiles together accounted for over 85% of research animals. In 2022, a law was passed in the United States that eliminated the FDA requirement that all drugs be tested on animals.

Animal testing is regulated to varying degrees in different countries. In some countries it is strictly controlled, while others have more relaxed regulations. There are ongoing debates about the ethics and necessity of animal testing. Proponents argue that it has led to significant advancements in medicine and other fields, while opponents raise concerns about cruelty towards animals and question its effectiveness and reliability. Efforts are underway to find alternatives to animal testing, such as computer simulation models; organs-on-chips technology that mimics human organs for laboratory tests; microdosing, in which small doses of test compounds are administered to human volunteers instead of non-human animals for safety tests or drug screenings; positron emission tomography (PET) scans, which allow the human brain to be studied without harm; comparative epidemiological studies among human populations; and simulators and computer programs for teaching purposes.

Definitions

The terms animal testing, animal experimentation, animal research, in vivo testing, and vivisection have similar denotations but different connotations. Literally, "vivisection" means "live sectioning" of an animal, and historically referred only to experiments that involved the dissection of live animals. The term is occasionally used to refer pejoratively to any experiment using living animals; for example, the Encyclopædia Britannica defines "vivisection" as "Operation on a living animal for experimental rather than healing purposes; more broadly, all experimentation on live animals", although dictionaries point out that the broader definition is "used only by people who are opposed to such work". The word has a negative connotation, implying torture, suffering, and death. The word "vivisection" is preferred by those opposed to this research, whereas scientists typically use the term "animal experimentation". The following text excludes, as far as possible, practices related to in vivo veterinary surgery, which is left to the discussion of vivisection.

History

The earliest references to animal testing are found in the writings of the Greeks in the 2nd and 4th centuries BCE. Aristotle and Erasistratus were among the first to perform experiments on living animals. Galen, a 2nd-century Roman physician, performed post-mortem dissections of pigs and goats.
Avenzoar, a 12th-century Arabic physician in Moorish Spain, introduced an experimental method of testing surgical procedures before applying them to human patients. Discoveries in the 18th and 19th centuries included Antoine Lavoisier's use of a guinea pig in a calorimeter to prove that respiration was a form of combustion, and Louis Pasteur's demonstration of the germ theory of disease in the 1880s using anthrax in sheep. Robert Koch used animal testing of mice and guinea pigs to discover the bacteria that cause anthrax and tuberculosis. In the 1890s, Ivan Pavlov famously used dogs to describe classical conditioning.

Research using animal models has been central to most of the achievements of modern medicine. It has contributed to most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. For example, the results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals.

From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes. Drosophila became one of the first, and for some time the most widely used, model organisms, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". D. melanogaster remains one of the most widely used eukaryotic model organisms. During the same period, studies on mouse genetics in the laboratory of William Ernest Castle, in collaboration with Abbie Lathrop, led to the generation of the DBA ("dilute, brown and non-agouti") inbred mouse strain and the systematic generation of other inbred strains. The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries.

In the late 19th century, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to develop an antitoxin against diphtheria in animals and then in humans, which resulted in the modern methods of immunization and largely ended diphtheria as a threatening disease. The diphtheria antitoxin is famously commemorated in the Iditarod race, which is modeled after the delivery of antitoxin in the 1925 serum run to Nome. The success of animal studies in producing the diphtheria antitoxin has also been attributed as a cause for the decline of the early 20th-century opposition to animal research in the United States.

Subsequent research in model organisms led to further medical advances, such as Frederick Banting's research in dogs, which determined that isolates of pancreatic secretion could be used to treat dogs with diabetes. This led to the 1922 discovery of insulin (with John Macleod) and its use in treating diabetes, which had previously meant death. John Cade's research in guinea pigs discovered the anticonvulsant properties of lithium salts, which revolutionized the treatment of bipolar disorder, replacing the previous treatments of lobotomy and electroconvulsive therapy. Modern general anaesthetics, such as halothane and related compounds, were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. In the 1940s, Jonas Salk used rhesus monkey studies to isolate the most virulent forms of the polio virus, which led to his creation of a polio vaccine.
The vaccine, which was made publicly available in 1955, reduced the incidence of polio 15-fold in the United States over the following five years. Albert Sabin improved the vaccine by passing the polio virus through animal hosts, including monkeys; the Sabin vaccine was produced for mass consumption in 1963 and had virtually eradicated polio in the United States by 1965. It has been estimated that developing and producing the vaccines required the use of 100,000 rhesus monkeys, with 65 doses of vaccine produced from each monkey. Sabin wrote in 1992, "Without the use of animals and human beings, it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans, but also among animals."

On 3 November 1957, a Soviet dog, Laika, became the first of many animals to orbit the Earth. In the 1970s, antibiotic treatments and vaccines for leprosy were developed using armadillos, then given to humans. The ability of humans to change the genetics of animals took an enormous step forward in 1974 when Rudolf Jaenisch produced the first transgenic mammal by integrating DNA from the SV40 virus into the genome of mice. This genetic research progressed rapidly and, in 1996, Dolly the sheep was born, the first mammal to be cloned from an adult cell.

Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions for which no useful in vitro model system is available.

Toxicology testing became important in the 20th century. In the 19th century, laws regulating drugs were more relaxed. For example, in the US, the government could only ban a drug after a company had been prosecuted for selling products that harmed customers. However, in response to the Elixir Sulfanilamide disaster of 1937, in which the eponymous drug killed over 100 users, the US Congress passed laws requiring safety testing of drugs on animals before they could be marketed. Other countries enacted similar legislation. In the 1960s, in reaction to the Thalidomide tragedy, further laws were passed requiring safety testing on pregnant animals before a drug could be sold.

Model organisms

Invertebrates

Although many more invertebrates than vertebrates are used in animal testing, these studies are largely unregulated by law. The most frequently used invertebrate species are Drosophila melanogaster, a fruit fly, and Caenorhabditis elegans, a nematode worm. In the case of C. elegans, the worm's body is completely transparent and the precise lineage of all the organism's cells is known, while studies in the fly D. melanogaster can use a wide array of genetic tools. These invertebrates offer some advantages over vertebrates in animal testing, including their short life cycle and the ease with which large numbers may be housed and studied.
However, the lack of an adaptive immune system and their simple organs prevent worms from being used in several areas of medical research, such as vaccine development. Similarly, the fruit fly immune system differs greatly from that of humans, and diseases in insects can differ from diseases in vertebrates; however, fruit flies and waxworms can be useful in studies to identify novel virulence factors or pharmacologically active compounds.

Several invertebrate systems are considered acceptable alternatives to vertebrates in early-stage discovery screens. Because of similarities between the innate immune systems of insects and mammals, insects can replace mammals in some types of studies. Drosophila melanogaster and the Galleria mellonella waxworm have been particularly important for analysis of the virulence traits of mammalian pathogens. Waxworms and other insects have also proven valuable for identifying pharmaceutical compounds with favorable bioavailability. The decision to adopt such models generally involves accepting a lower degree of biological similarity with mammals in exchange for significant gains in experimental throughput.

Rodents

In the U.S., estimates of the number of rats and mice used each year range from 11 million to between 20 and 100 million. Other rodents commonly used are guinea pigs, hamsters, and gerbils. Mice are the most commonly used vertebrate species because of their size, low cost, ease of handling, and fast reproduction rate. Mice are widely considered the best model of inherited human disease and share 95% of their genes with humans. With the advent of genetic engineering technology, genetically modified mice can be generated to order and can provide models for a range of human diseases. Rats are also widely used for physiology, toxicology, and cancer research, but genetic manipulation is much harder in rats than in mice, which limits their use in basic science.

Dogs

Dogs are widely used in biomedical research, testing, and education, particularly beagles, because they are gentle and easy to handle, and because doing so allows comparisons with historical data from beagles (a Reduction technique). They are used as models for human and veterinary diseases in cardiology, endocrinology, and bone and joint studies, research that tends to be highly invasive, according to the Humane Society of the United States. The most common use of dogs is in the safety assessment of new medicines for human or veterinary use, as a second species following testing in rodents, in accordance with the regulations set out in the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use.

One of the most significant advances in medical science involved the use of dogs to work out the role of the pancreas in insulin production and diabetes. Researchers found that the pancreas was responsible for producing insulin and that its removal resulted in the development of diabetes in the dog. After pancreatic extract (insulin) was re-injected, blood glucose levels were significantly lowered. The advances made in this research involving dogs resulted in a definite improvement in the quality of life for both humans and animals. The U.S. Department of Agriculture's Animal Welfare Report shows that 60,979 dogs were used in USDA-registered facilities in 2016.
In the UK, according to the UK Home Office, there were 3,847 procedures on dogs in 2017. Of the other large EU users of dogs, Germany conducted 3,976 procedures on dogs in 2016 and France conducted 4,204 procedures in 2016. In both cases this represents under 0.2% of the total number of procedures conducted on animals in the respective countries.

Zebrafish

Zebrafish are commonly used in basic research and in the study and development of models of various cancers, including work on the immune system and on genetic strains. They are low in cost, small in size, and fast to reproduce, and cancer cells can be observed in them in real time. Humans and zebrafish share similarities in their neoplasms, which is why zebrafish are used in cancer research; the National Library of Medicine lists many examples of the types of cancer studied in zebrafish. Zebrafish models of acute lymphocytic leukemia have been used to find differences between MYC-driven pre-B and T-ALL and have been exploited to discover novel pre-B ALL therapies. The National Library of Medicine also notes that a neoplasm is difficult to diagnose at an early stage; current research aims to understand the molecular mechanisms of digestive tract tumorigenesis and to search for new treatments. Zebrafish and humans have similar gastric cancer cells in the gastric cancer xenotransplantation model, which allowed researchers to find that Triphala could inhibit the growth and metastasis of gastric cancer cells. Because zebrafish liver cancer genes are related to those of humans, zebrafish have also become widely used in liver cancer research, as well as in research on many other cancers.

Non-human primates

Non-human primates (NHPs) are used in toxicology tests; studies of AIDS and hepatitis; studies of neurology, behavior, and cognition; reproduction; genetics; and xenotransplantation. They are caught in the wild or purpose-bred. In the United States and China, most primates are domestically purpose-bred, whereas in Europe the majority are imported purpose-bred. The European Commission reported that in 2011, 6,012 monkeys were experimented on in European laboratories. According to the U.S. Department of Agriculture, there were 71,188 monkeys in U.S. laboratories in 2016. A total of 23,465 monkeys were imported into the U.S. in 2014, including 929 that were caught in the wild. Most of the NHPs used in experiments are macaques, but marmosets, spider monkeys, and squirrel monkeys are also used, and baboons and chimpanzees are used in the US. There are approximately 730 chimpanzees in U.S. laboratories. A survey in 2003 found that 89% of singly-housed primates exhibited self-injurious or abnormal stereotypical behaviors, including pacing, rocking, hair pulling, and biting.

The first transgenic primate was produced in 2001, with the development of a method that could introduce new genes into a rhesus macaque. This transgenic technology is now being applied in the search for a treatment for the genetic disorder Huntington's disease. Notable studies on non-human primates have included polio vaccine development and the development of deep brain stimulation; their current heaviest non-toxicological use occurs in the monkey AIDS model, SIV. In 2008, a proposal to ban all primate experiments in the EU sparked a vigorous debate.

Other species

Over 500,000 fish and 9,000 amphibians were used in the UK in 2016. The main species used are the zebrafish, Danio rerio, which is translucent during its embryonic stage, and the African clawed frog, Xenopus laevis.
Over 20,000 rabbits were used for animal testing in the UK in 2004. Albino rabbits are used in eye irritancy tests (the Draize test) because rabbits have less tear flow than other animals, and the lack of eye pigment in albinos makes the effects easier to visualize. The number of rabbits used for this purpose has fallen substantially over the past two decades: in 1996, there were 3,693 procedures on rabbits for eye irritation in the UK, and in 2017 this number was just 63. Rabbits are also frequently used for the production of polyclonal antibodies.

Cats are most commonly used in neurological research. In 2016, 18,898 cats were used in the United States alone, around a third of them in experiments with the potential to cause "pain and/or distress", though only 0.1% of cat experiments involved potential pain that was not relieved by anesthetics or analgesics. In the UK, just 198 procedures were carried out on cats in 2017; the number has been around 200 for most of the last decade.

Care and use of animals

Regulations and laws

The regulations that apply to animals in laboratories vary across species. In the U.S., under the Animal Welfare Act and the Guide for the Care and Use of Laboratory Animals (the Guide), published by the National Academy of Sciences, any procedure can be performed on an animal if it can be successfully argued that it is scientifically justified. Researchers are required to consult with the institution's veterinarian and its Institutional Animal Care and Use Committee (IACUC), which every research facility is obliged to maintain. The IACUC must ensure that alternatives, including non-animal alternatives, have been considered, that the experiments are not unnecessarily duplicative, and that pain relief is given unless it would interfere with the study. IACUCs regulate all vertebrate testing at institutions receiving federal funds in the USA. Although the Animal Welfare Act does not cover purpose-bred rodents and birds, these species are equally regulated under the Public Health Service policies that govern IACUCs. The Public Health Service policy covers the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC). The CDC conducts infectious disease research on nonhuman primates, rabbits, mice, and other animals, while FDA requirements cover the use of animals in pharmaceutical research. Animal Welfare Act (AWA) regulations are enforced by the USDA, whereas Public Health Service regulations are enforced by OLAW and, in many cases, by AAALAC.

According to the 2014 U.S. Department of Agriculture Office of the Inspector General (OIG) report, which examined the oversight of animal use during a three-year period, "some Institutional Animal Care and Use Committees ... did not adequately approve, monitor, or report on experimental procedures on animals". The OIG found that "as a result, animals are not always receiving basic humane care and treatment and, in some cases, pain and distress are not minimized during and after experimental procedures". According to the report, within a three-year period, nearly half of all American laboratories with regulated species were cited for AWA violations relating to improper IACUC oversight. The USDA OIG made similar findings in a 2005 report. With only about 120 inspectors, the United States Department of Agriculture (USDA) oversees more than 12,000 facilities involved in research, exhibition, breeding, or dealing of animals.
Others have criticized the composition of IACUCs, asserting that the committees are predominantly made up of animal researchers and university representatives who may be biased against animal welfare concerns. Larry Carbone, a laboratory animal veterinarian, writes that, in his experience, IACUCs take their work very seriously regardless of the species involved, though the use of non-human primates always raises what he calls a "red flag of special concern". A study published in Science magazine in July 2001 pointed to the low reliability of IACUC reviews of animal experiments. Funded by the National Science Foundation, the three-year study found that animal-use committees that do not know the specifics of the university and personnel involved do not make the same approval decisions as committees that do know them; specifically, blinded committees more often ask for more information rather than approving studies. Scientists in India have protested a recent guideline issued by the University Grants Commission banning the use of live animals in universities and laboratories.

Numbers

Accurate global figures for animal testing are difficult to obtain; it has been estimated that 100 million vertebrates are experimented on around the world every year, 10–11 million of them in the EU. The Nuffield Council on Bioethics reports that global annual estimates range from 50 to 100 million animals. None of the figures include invertebrates such as shrimp and fruit flies.

The USDA/APHIS has published the 2016 animal research statistics. Overall, the number of animals (covered by the Animal Welfare Act) used in research in the US rose 6.9%, from 767,622 (2015) to 820,812 (2016). This includes both public and private institutions. By comparison with EU data, where all vertebrate species are counted, Speaking of Research estimated that around 12 million vertebrates were used in research in the US in 2016. A 2015 article published in the Journal of Medical Ethics argued that the use of animals in the US has dramatically increased in recent years; the researchers found this increase is largely the result of an increased reliance on genetically modified mice in animal studies.

In 1995, researchers at the Tufts University Center for Animals and Public Policy estimated that 14–21 million animals were used in American laboratories in 1992, a reduction from a high of 50 million used in 1970. In 1986, the U.S. Congress Office of Technology Assessment reported that estimates of the animals used in the U.S. range from 10 million to upwards of 100 million each year, and that its own best estimate was at least 17 million to 22 million. In 2016, the Department of Agriculture listed 60,979 dogs, 18,898 cats, 71,188 non-human primates, 183,237 guinea pigs, 102,633 hamsters, 139,391 rabbits, 83,059 farm animals, and 161,467 other mammals, a total of 820,812, a figure that includes all mammals except purpose-bred mice and rats. The use of dogs and cats in research in the U.S. decreased between 1973 and 2016, from 195,157 to 60,979 dogs and from 66,165 to 18,898 cats, respectively.
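The percentage figures quoted in this section are straightforward relative-change calculations over the cited head-counts. As a minimal sketch, using only the USDA figures given above, the arithmetic can be reproduced as follows:

```python
def percent_change(old: float, new: float) -> float:
    """Relative change from old to new, expressed as a percentage."""
    return (new - old) / old * 100

# AWA-covered animals used in US research (USDA figures cited above)
print(f"2015 -> 2016: {percent_change(767_622, 820_812):+.1f}%")  # about +6.9%

# Long-term decline in dog and cat use, 1973 -> 2016
print(f"dogs: {percent_change(195_157, 60_979):+.1f}%")  # about -68.8%
print(f"cats: {percent_change(66_165, 18_898):+.1f}%")   # about -71.4%
```

The same calculation applied to the dog and cat counts shows declines of roughly 69% and 71% over that period, which is the quantitative basis for the "decreased" claim above.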
In the UK, Home Office figures show that 3.79 million procedures were carried out in 2017, of which 2,960 used non-human primates, down over 50% since 1988. A "procedure" refers here to an experiment that might last minutes, several months, or years. Most animals are used in only one procedure: animals are frequently euthanized after the experiment, though death is the endpoint of some procedures.

The procedures conducted on animals in the UK in 2017 were categorised as: 43% (1.61 million) sub-threshold; 4% (0.14 million) non-recovery; 36% (1.35 million) mild; 15% (0.55 million) moderate; and 4% (0.14 million) severe. A "severe" procedure would be, for instance, any test where death is the end-point or fatalities are expected, whereas a "mild" procedure would be something like a blood test or an MRI scan.

The Three Rs

The Three Rs (3Rs) are guiding principles for the more ethical use of animals in testing, first described by W.M.S. Russell and R.L. Burch in 1959. The 3Rs state:

Replacement: the preferred use of non-animal methods over animal methods whenever it is possible to achieve the same scientific aims. These methods include computer modeling.

Reduction: methods that enable researchers to obtain comparable levels of information from fewer animals, or to obtain more information from the same number of animals.

Refinement: methods that alleviate or minimize potential pain, suffering or distress, and enhance animal welfare for the animals used. These methods include non-invasive techniques.

The 3Rs have a broader scope than simply encouraging alternatives to animal testing: they aim to improve animal welfare and scientific quality wherever the use of animals cannot be avoided. The 3Rs are now implemented in many testing establishments worldwide and have been adopted by various pieces of legislation and regulation. Despite their widespread acceptance, many countries (including Canada, Australia, Israel, South Korea, and Germany) have reported rising experimental use of animals in recent years, with increased use of mice and, in some cases, fish, alongside declines in the use of cats, dogs, primates, rabbits, guinea pigs, and hamsters. Along with other countries, China has also escalated its use of GM animals, resulting in an increase in overall animal use.

Sources

Animals used by laboratories are largely supplied by specialist dealers. Sources differ for vertebrate and invertebrate animals. Most laboratories breed and raise flies and worms themselves, using strains and mutants supplied from a few main stock centers. For vertebrates, sources include breeders and dealers such as Fortrea and Charles River Laboratories, which supply purpose-bred and wild-caught animals; businesses that trade in wild animals, such as Nafovanny; and dealers who supply animals sourced from pounds, auctions, and newspaper ads. Animal shelters also supply the laboratories directly. Large centers also exist to distribute strains of genetically modified animals; the International Knockout Mouse Consortium, for example, aims to provide knockout mice for every gene in the mouse genome.

In the U.S., Class A breeders are licensed by the U.S. Department of Agriculture (USDA) to sell animals for research purposes, while Class B dealers are licensed to buy animals from "random sources" such as auctions, pound seizure, and newspaper ads. Some Class B dealers have been accused of kidnapping pets and illegally trapping strays, a practice known as bunching. It was in part out of public concern over the sale of pets to research facilities that the 1966 Laboratory Animal Welfare Act was ushered in: the Senate Committee on Commerce reported in 1966 that stolen pets had been retrieved from Veterans Administration facilities, the Mayo Institute, the University of Pennsylvania, Stanford University, and Harvard and Yale Medical Schools.
The USDA recovered at least a dozen stolen pets during a raid on a Class B dealer in Arkansas in 2003. Four states in the U.S. (Minnesota, Utah, Oklahoma, and Iowa) require their shelters to provide animals to research facilities. Fourteen states explicitly prohibit the practice, while the remainder either allow it or have no relevant legislation.

In the European Union, animal sources are governed by Council Directive 86/609/EEC, which requires lab animals to be specially bred unless the animal has been lawfully imported and is not a wild animal or a stray; the latter requirement may be exempted by special arrangement. In 2010 the Directive was revised by EU Directive 2010/63/EU. In the UK, most animals used in experiments are bred for the purpose under the 1988 Animal Protection Act, but wild-caught primates may be used if exceptional and specific justification can be established. The United States also allows the use of wild-caught primates; between 1995 and 1999, 1,580 wild baboons were imported into the U.S. Most of the primates imported are handled by Charles River Laboratories or by Fortrea, which are very active in the international primate trade.

Pain and suffering

The extent to which animal testing causes pain and suffering, and the capacity of animals to experience and comprehend them, is the subject of much debate. According to the USDA, in 2016, 501,560 animals (61%) (not including rats, mice, birds, or invertebrates) were used in procedures that did not involve more than momentary pain or distress; 247,882 (31%) were used in procedures in which pain or distress was relieved by anesthesia, while 71,370 (9%) were used in studies that would cause pain or distress that would not be relieved.

The idea that animals might not feel pain as human beings feel it traces back to the 17th-century French philosopher René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. Some canine experts have stated that, while intelligence does differ from animal to animal, dogs have the intelligence of a two- to two-and-a-half-year-old child; this supports the idea that dogs, at the very least, have some form of consciousness.

The ability of invertebrates to experience pain and suffering is less clear; however, legislation in several countries (e.g. the U.K., New Zealand, and Norway) protects some invertebrate species when they are used in animal testing. In the U.S., the defining text on animal welfare regulation in animal testing is the Guide for the Care and Use of Laboratory Animals, which defines the parameters that govern animal testing in the U.S.
It states "The ability to experience and respond to pain is widespread in the animal kingdom ... Pain is a stressor and, if not relieved, can lead to unacceptable levels of stress and distress in animals." The Guide states that the ability to recognize the symptoms of pain in different species is vital in efficiently applying pain relief and that it is essential for the people caring for and using animals to be entirely familiar with these symptoms. On the subject of analgesics used to relieve pain, the Guide states "The selection of the most appropriate analgesic or anesthetic should reflect professional judgment as to which best meets clinical and humane requirements without compromising the scientific aspects of the research protocol". Accordingly, all issues of animal pain and distress, and their potential treatment with analgesia and anesthesia, are required regulatory issues in receiving animal protocol approval.

Currently, traumatic methods of marking laboratory animals are being replaced with non-invasive alternatives. In 2019, Katrien Devolder and Matthias Eggel proposed gene editing research animals to remove the ability to feel pain. This would be an intermediate step towards eventually stopping all experimentation on animals and adopting alternatives. Additionally, this would not stop research animals from experiencing psychological harm.

Euthanasia

Regulations require that scientists use as few animals as possible, especially for terminal experiments. However, while policy makers consider suffering to be the central issue and see animal euthanasia as a way to reduce suffering, others, such as the RSPCA, argue that the lives of laboratory animals have intrinsic value. Regulations focus on whether particular methods cause pain and suffering, not whether their death is undesirable in itself. The animals are euthanized at the end of studies for sample collection or post-mortem examination; during studies if their pain or suffering falls into certain categories regarded as unacceptable, such as depression, infection that is unresponsive to treatment, or the failure of large animals to eat for five days; or when they are unsuitable for breeding or unwanted for some other reason.

Methods of euthanizing laboratory animals are chosen to induce rapid unconsciousness and death without pain or distress. The methods that are preferred are those published by councils of veterinarians. The animal can be made to inhale a gas, such as carbon monoxide or carbon dioxide, by being placed in a chamber, or by use of a face mask, with or without prior sedation or anesthesia. Sedatives or anesthetics such as barbiturates can be given intravenously, or inhalant anesthetics may be used. Amphibians and fish may be immersed in water containing an anesthetic such as tricaine. Physical methods are also used, with or without sedation or anesthesia depending on the method. Recommended methods include decapitation (beheading) for small rodents or rabbits. Cervical dislocation (breaking the neck or spine) may be used for birds, mice, rats, and rabbits depending on the size and weight of the animal. High-intensity microwave irradiation of the brain can preserve brain tissue and induce death in less than 1 second, but this is currently only used on rodents. Captive bolts may be used, typically on dogs, ruminants, horses, pigs, and rabbits; they cause death by a concussion to the brain. Gunshot may be used, but only in cases where a penetrating captive bolt may not be used.
Some physical methods are only acceptable after the animal is unconscious. Electrocution may be used for cattle, sheep, swine, foxes, and mink after the animals are unconscious, often by a prior electrical stun. Pithing (inserting a tool into the base of the brain) is usable on animals already unconscious. Slow or rapid freezing, or inducing air embolism, are acceptable only with prior anesthesia to induce unconsciousness.

Research classification

Pure research

Basic or pure research investigates how organisms behave, develop, and function. Those opposed to animal testing object that pure research may have little or no practical purpose, but researchers argue that it forms the necessary basis for the development of applied research, rendering the distinction between pure and applied research (research that has a specific practical aim) unclear. Pure research uses larger numbers and a greater variety of animals than applied research. Fruit flies, nematode worms, mice, and rats together account for the vast majority, though small numbers of other species are used, ranging from sea slugs to armadillos. Examples of the types of animals and experiments used in basic research include:

Studies on embryogenesis and developmental biology. Mutants are created by adding transposons into their genomes, or specific genes are deleted by gene targeting. By studying the changes these alterations produce in development, scientists aim to understand both how organisms normally develop and what can go wrong in this process. These studies are particularly powerful since the basic controls of development, such as the homeobox genes, have similar functions in organisms as diverse as fruit flies and humans.

Experiments into behavior, to understand how organisms detect and interact with each other and their environment, in which fruit flies, worms, mice, and rats are all widely used. Studies of brain function, such as memory and social behavior, often use rats and birds. For some species, behavioral research is combined with enrichment strategies for animals in captivity because it allows them to engage in a wider range of activities.

Breeding experiments to study evolution and genetics. Laboratory mice, flies, fish, and worms are inbred through many generations to create strains with defined characteristics. These provide animals of a known genetic background, an important tool for genetic analyses. Larger mammals are rarely bred specifically for such studies due to their slow rate of reproduction, though some scientists take advantage of inbred domesticated animals, such as dog or cattle breeds, for comparative purposes.

Scientists studying how animals evolve use many animal species to see how variations in where and how an organism lives (its niche) produce adaptations in its physiology and morphology. As an example, sticklebacks are now being used to study how many, and which types of, mutations are selected to produce adaptations in animals' morphology during the evolution of new species.

Applied research

Applied research aims to solve specific and practical problems. These may involve the use of animal models of diseases or conditions, which are often discovered or generated by pure research programmes. In turn, such applied studies may be an early stage in the drug discovery process. Examples include:

Genetic modification of animals to study disease. Transgenic animals have specific genes inserted, modified, or removed, to mimic specific conditions such as single-gene disorders, for example Huntington's disease.
Other models mimic complex, multifactorial diseases with genetic components, such as diabetes, or even transgenic mice that carry the same mutations that occur during the development of cancer. These models allow investigation of how and why the disease develops, and provide ways to develop and test new treatments. The vast majority of these transgenic models of human disease are lines of mice, the mammalian species in which genetic modification is most efficient. Smaller numbers of other animals are also used, including rats, pigs, sheep, fish, birds, and amphibians.

Studies on models of naturally occurring disease and conditions. Certain domestic and wild animals have a natural propensity or predisposition for conditions that are also found in humans. Cats are used as a model to develop immunodeficiency virus vaccines and to study leukemia because of their natural predisposition to FIV and feline leukemia virus. Certain breeds of dog experience narcolepsy, making them the major model used to study the human condition. Armadillos and humans are among only a few animal species that naturally contract leprosy; as the bacteria responsible for this disease cannot yet be grown in culture, armadillos are the primary source of bacilli used in leprosy vaccines.

Studies on induced animal models of human diseases. Here, an animal is treated so that it develops pathology and symptoms that resemble a human disease. Examples include restricting blood flow to the brain to induce stroke, or giving neurotoxins that cause damage similar to that seen in Parkinson's disease. Much animal research into potential treatments for humans is wasted because it is poorly conducted and not evaluated through systematic reviews. For example, although such models are now widely used to study Parkinson's disease, the British anti-vivisection interest group BUAV argues that these models only superficially resemble the disease symptoms, without the same time course or cellular pathology. In contrast, scientists assessing the usefulness of animal models of Parkinson's disease, as well as the medical research charity The Parkinson's Appeal, state that these models were invaluable and that they led to improved surgical treatments such as pallidotomy, new drug treatments such as levodopa, and later deep brain stimulation.

Animal testing has also included the use of placebo testing. In these cases, animals are treated with a substance that produces no pharmacological effect, administered in order to determine any biological alterations due to the experience of a substance being administered; the results are then compared with those obtained with an active compound.

Xenotransplantation

Xenotransplantation research involves transplanting tissues or organs from one species to another, as a way to overcome the shortage of human organs for use in organ transplants. Current research involves using primates as the recipients of organs from pigs that have been genetically modified to reduce the primates' immune response against the pig tissue. Although transplant rejection remains a problem, recent clinical trials that involved implanting pig insulin-secreting cells into diabetics did reduce these people's need for insulin.
Documents released to the news media by the animal rights organization Uncaged Campaigns showed that, between 1994 and 2000, wild baboons imported to the UK from Africa by Imutran Ltd, a subsidiary of Novartis Pharma AG, working with Cambridge University and Huntingdon Life Sciences on experiments that involved grafting pig tissues, suffered serious and sometimes fatal injuries. A scandal occurred when it was revealed that the company had communicated with the British government in an attempt to avoid regulation.

Toxicology testing

Toxicology testing, also known as safety testing, is conducted by pharmaceutical companies testing drugs, or by contract animal-testing facilities, such as Huntingdon Life Sciences, on behalf of a wide variety of customers. According to 2005 EU figures, around one million animals are used every year in Europe in toxicology tests, which account for about 10% of all procedures. According to Nature, 5,000 animals are used for each chemical being tested, with 12,000 needed to test pesticides. The tests are conducted without anesthesia, because interactions between drugs can affect how animals detoxify chemicals and may interfere with the results.

Toxicology tests are used to examine finished products such as pesticides, medications, food additives, packing materials, and air fresheners, or their chemical ingredients. Most tests involve testing ingredients rather than finished products, but, according to BUAV, manufacturers believe these tests overestimate the toxic effects of substances; they therefore repeat the tests using their finished products to obtain a less toxic label. The substances are applied to the skin or dripped into the eyes; injected intravenously, intramuscularly, or subcutaneously; inhaled, either by placing a mask over the animals and restraining them or by placing them in an inhalation chamber; or administered orally, through a tube into the stomach or simply in the animal's food. Doses may be given once, repeated regularly for many months, or given for the lifespan of the animal.

There are several different types of acute toxicity test. The LD50 ("Lethal Dose 50%") test is used to evaluate the toxicity of a substance by determining the dose required to kill 50% of the test animal population. This test was removed from OECD international guidelines in 2002, replaced by methods such as the fixed dose procedure, which use fewer animals and cause less suffering. Abbott writes that, as of 2005, "the LD50 acute toxicity test ... still accounts for one-third of all animal [toxicity] tests worldwide".
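In statistical terms, an LD50 is the midpoint of a fitted dose-mortality curve: mortality is modeled as a sigmoidal function of log-dose, and the dose at which the fitted curve crosses 50% is read off. The sketch below illustrates the idea with a simple logistic fit; the dose groups and mortality fractions are invented for demonstration and are not taken from any real study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_dose, log_ld50, slope):
    """Expected mortality fraction as a sigmoidal function of log10(dose)."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

# Hypothetical dose groups (mg/kg) and observed mortality fractions
doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
mortality = np.array([0.0, 0.1, 0.4, 0.8, 1.0])

# Fit the curve, then read off the dose at which mortality crosses 50%
(log_ld50, slope), _ = curve_fit(logistic, np.log10(doses), mortality, p0=[1.0, 1.0])
print(f"Estimated LD50: {10 ** log_ld50:.1f} mg/kg")
```

Classical probit analysis works the same way with a different link function. The fixed dose procedure mentioned above avoids targeting a lethality midpoint altogether, which is one reason it requires fewer animals.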
Irritancy can be measured using the Draize test, in which a test substance is applied to an animal's eyes or skin, usually those of an albino rabbit. For Draize eye testing, the test involves observing the effects of the substance at intervals and grading any damage or irritation, but the test should be halted and the animal killed if it shows "continuing signs of severe pain or distress". The Humane Society of the United States writes that the procedure can cause redness, ulceration, hemorrhaging, cloudiness, or even blindness. The test has also been criticized by scientists as cruel and inaccurate, subjective, over-sensitive, and failing to reflect human exposures in the real world. Although no accepted in vitro alternatives exist, a modified form of the Draize test called the low volume eye test may reduce suffering and provide more realistic results, and this was adopted as the new standard in September 2009. However, the Draize test will still be used for substances that are not severe irritants.

The most stringent tests are reserved for drugs and foodstuffs. For these, a number of tests are performed, lasting less than a month (acute), one to three months (subchronic), and more than three months (chronic), to test general toxicity (damage to organs), eye and skin irritancy, mutagenicity, carcinogenicity, teratogenicity, and reproductive problems. The cost of the full complement of tests is several million dollars per substance, and it may take three or four years to complete.

These toxicity tests provide, in the words of a 2006 United States National Academy of Sciences report, "critical information for assessing hazard and risk potential". Animal tests may overestimate risk, with false positive results being a particular problem, but false positives appear not to be prohibitively common. Variability in results arises from using the effects of high doses of chemicals in small numbers of laboratory animals to try to predict the effects of low doses in large numbers of humans. Although relationships do exist, opinion is divided on how to use data on one species to predict the exact level of risk in another. Scientists face growing pressure to move away from traditional animal toxicity tests in determining whether manufactured chemicals are safe. Among the variety of approaches to toxicity evaluation, in vitro cell-based sensing methods using fluorescence have attracted increasing interest.

Cosmetics testing

Cosmetics testing on animals is particularly controversial. Such tests, which are still conducted in the U.S., involve general toxicity, eye and skin irritancy, phototoxicity (toxicity triggered by ultraviolet light), and mutagenicity. Cosmetics testing on animals is banned in India, the United Kingdom, the European Union, Israel, and Norway, while legislators in the U.S. and Brazil are currently considering similar bans. In 2002, after 13 years of discussion, the European Union agreed to phase in a near-total ban on the sale of animal-tested cosmetics by 2009, and to ban all cosmetics-related animal testing. France, home to the world's largest cosmetics company, L'Oreal, has protested the proposed ban by lodging a case at the European Court of Justice in Luxembourg, asking that the ban be quashed. The ban is also opposed by the European Federation for Cosmetics Ingredients, which represents 70 companies in Switzerland, Belgium, France, Germany, and Italy. In October 2014, India passed stricter laws that also ban the importation of any cosmetic products tested on animals.

Drug testing

Before the early 20th century, laws regulating drugs were lax. Currently, all new pharmaceuticals undergo rigorous animal testing before being licensed for human use. Tests on pharmaceutical products involve:

metabolic tests, investigating pharmacokinetics: how drugs are absorbed, metabolized, and excreted by the body when introduced orally, intravenously, intraperitoneally, intramuscularly, or transdermally.

toxicology tests, which gauge acute, sub-acute, and chronic toxicity. Acute toxicity is studied by using a rising dose until signs of toxicity become apparent. Current European legislation demands that "acute toxicity tests must be carried out in two or more mammalian species" covering "at least two different routes of administration".
Sub-acute toxicity testing is where the drug is given to the animals for four to six weeks in doses below the level at which it causes rapid poisoning, in order to discover whether any toxic drug metabolites build up over time. Testing for chronic toxicity can last up to two years and, in the European Union, is required to involve two species of mammals, one of which must be non-rodent.

efficacy studies, which test whether experimental drugs work by inducing the appropriate illness in animals. The drug is then administered in a double-blind controlled trial, which allows researchers to determine the effect of the drug and the dose-response curve.

Specific tests on reproductive function, embryonic toxicity, or carcinogenic potential can all be required by law, depending on the result of other studies and the type of drug being tested.

Education

It is estimated that 20 million animals are used annually for educational purposes in the United States, including classroom observational exercises, dissections, and live-animal surgeries. Frogs, fetal pigs, perch, cats, earthworms, grasshoppers, crayfish, and starfish are commonly used in classroom dissections. Alternatives to the use of animals in classroom dissections are widely used, with many U.S. states and school districts mandating that students be offered the choice not to dissect. Citing the wide availability of alternatives and the decimation of local frog species, India banned dissections in 2014.

The Sonoran Arthropod Institute hosts an annual Invertebrates in Education and Conservation Conference to discuss the use of invertebrates in education. There are also efforts in many countries to find alternatives to using animals in education. The NORINA database, maintained by Norecopa, lists products that may be used as alternatives or supplements to animal use in education and in the training of personnel who work with animals; these include alternatives to dissection in schools. InterNICHE has a similar database and a loans system.

In November 2013, the U.S.-based company Backyard Brains released for sale to the public what it calls the "Roboroach", an "electronic backpack" that can be attached to cockroaches. The operator is required to amputate a cockroach's antennae, use sandpaper to wear down the shell, insert a wire into the thorax, and then glue the electrodes and circuit board onto the insect's back. A mobile phone app can then be used to control the insect via Bluetooth. It has been suggested that such a device may serve as a teaching aid that promotes interest in science; the makers of the Roboroach have been funded by the National Institute of Mental Health and state that the device is intended to encourage children to become interested in neuroscience.

Defense

Animals are used by the military to develop weapons, vaccines, battlefield surgical techniques, and defensive clothing. For example, in 2008 the United States Defense Advanced Research Projects Agency used live pigs to study the effects of improvised explosive device explosions on internal organs, especially the brain. In the US military, goats are commonly used to train combat medics. (Goats became the main animal species used for this purpose after the Pentagon phased out using dogs for medical training in the 1980s.) While modern mannequins used in medical training are quite effective at simulating the behavior of a human body, some trainees feel that "the goat exercise provide[s] a sense of urgency that only real life trauma can provide".
Nevertheless, in 2014, the U.S. Coast Guard announced that it would reduce the number of animals it uses in its training exercises by half, after PETA released video showing Guard members cutting off the limbs of unconscious goats with tree trimmers and inflicting other injuries with a shotgun, pistol, ax, and scalpel. That same year, citing the availability of human simulators and other alternatives, the Department of Defense announced it would begin reducing the number of animals it uses in various training programs. In 2013, several Navy medical centers stopped using ferrets in intubation exercises after complaints from PETA. Besides the United States, six of 28 NATO countries, including Poland and Denmark, use live animals for combat medic training.

Ethics

Most animals are euthanized after being used in an experiment. Sources of laboratory animals vary between countries and species; most animals are purpose-bred, while a minority are caught in the wild or supplied by dealers who obtain them from auctions and pounds. Supporters of the use of animals in experiments, such as the British Royal Society, argue that virtually every medical achievement in the 20th century relied on the use of animals in some way. The Institute for Laboratory Animal Research of the United States National Academy of Sciences has argued that animal testing cannot be replaced by even sophisticated computer models, which are unable to deal with the extremely complex interactions between molecules, cells, tissues, organs, organisms, and the environment. Animal rights organizations, such as PETA and BUAV, question the need for and legitimacy of animal testing, arguing that it is cruel and poorly regulated; that medical progress is actually held back by misleading animal models that cannot reliably predict effects in humans; that some of the tests are outdated; that the costs outweigh the benefits; or that animals have an intrinsic right not to be used or harmed in experimentation.

Viewpoints

The moral and ethical questions raised by performing experiments on animals are subject to debate, and viewpoints shifted significantly over the 20th century. There remain disagreements about which procedures are useful for which purposes, as well as disagreements over which ethical principles apply to which species. A 2015 Gallup poll found that 67% of Americans were "very concerned" or "somewhat concerned" about animals used in research, and a Pew poll taken the same year found that 50% of American adults opposed the use of animals in research. Still, a wide range of viewpoints exists.

The view that animals have moral rights (animal rights) is a philosophical position proposed by Tom Regan, among others, who argues that animals are beings with beliefs and desires, and as such are the "subjects of a life" with moral value and therefore moral rights. Regan still sees ethical differences between killing human and non-human animals, and argues that to save the former it is permissible to kill the latter. Likewise, a "moral dilemma" view suggests that avoiding potential benefit to humans is unacceptable on similar grounds, and holds the issue to be a dilemma in balancing harm to humans against harm done to animals in research. In contrast, an abolitionist view in animal rights holds that there is no moral justification for any harmful research on animals that is not of benefit to the individual animal.
Bernard Rollin argues that benefits to human beings cannot outweigh animal suffering, and that human beings have no moral right to use an animal in ways that do not benefit that individual. Donald Watson has stated that vivisection and animal experimentation "is probably the cruelest of all Man's attack on the rest of Creation." Another prominent position is that of philosopher Peter Singer, who argues that there are no grounds to give weight to a being's species when deciding whether its suffering matters in utilitarian moral considerations. Malcolm Macleod and collaborators argue that most controlled animal studies do not employ randomization, allocation concealment, and blinded outcome assessment, and that failure to employ these features exaggerates the apparent benefit of drugs tested in animals, leading to a failure to translate much animal research for human benefit. Governments such as the Netherlands and New Zealand have responded to the public's concerns by outlawing invasive experiments on certain classes of non-human primates, particularly the great apes. In 2015, captive chimpanzees in the U.S. were listed under the Endangered Species Act, adding new roadblocks for those wishing to experiment on them. Similarly, citing ethical considerations and the availability of alternative research methods, the U.S. NIH announced in 2013 that it would dramatically reduce and eventually phase out experiments on chimpanzees. The British government has required that the cost to animals in an experiment be weighed against the gain in knowledge. Some medical schools and agencies in China, Japan, and South Korea have built cenotaphs for killed animals. In Japan there are also annual memorial services (ireisai) for animals sacrificed at medical schools. Various specific cases of animal testing have drawn attention, including both instances of beneficial scientific research and instances of alleged ethical violations by those performing the tests. The fundamental properties of muscle physiology were determined with work done using frog muscles (including the force-generating mechanism of all muscle, the length-tension relationship, and the force-velocity curve), and frogs are still the preferred model organism due to the long survival of muscles in vitro and the possibility of isolating intact single-fiber preparations (not possible in other organisms). Modern physical therapy and the understanding and treatment of muscular disorders are based on this work and subsequent work in mice (often engineered to express disease states such as muscular dystrophy). In February 1997, a team at the Roslin Institute in Scotland announced the birth of Dolly the sheep, the first mammal to be cloned from an adult somatic cell. Concerns have been raised over the mistreatment of primates undergoing testing. In 1985, the case of Britches, a macaque monkey at the University of California, Riverside, gained public attention. He had his eyelids sewn shut and a sonar sensor on his head as part of an experiment to test sensory substitution devices for blind people. The laboratory was raided by the Animal Liberation Front in 1985, which removed Britches and 466 other animals. The National Institutes of Health conducted an eight-month investigation and concluded, however, that no corrective action was necessary. During the 2000s, other cases have made headlines, including experiments at the University of Cambridge and Columbia University in 2002. 
In 2004 and 2005, undercover footage of staff in an animal testing facility in Virginia owned by Covance (now Fortrea) was shot by People for the Ethical Treatment of Animals (PETA). Following release of the footage, the U.S. Department of Agriculture fined the company $8,720 for 16 citations, three of which involved lab monkeys; the other citations involved administrative issues and equipment. Threats to researchers Threats of violence to animal researchers are not uncommon. In 2006, a primate researcher at the University of California, Los Angeles (UCLA) shut down the experiments in his lab after threats from animal rights activists. The researcher had received a grant to use 30 macaque monkeys for vision experiments; each monkey was anesthetized for a single physiological experiment lasting up to 120 hours, and then euthanized. The researcher's name, phone number, and address were posted on the website of the Primate Freedom Project. Demonstrations were held in front of his home. A Molotov cocktail was placed on the porch of what was believed to be the home of another UCLA primate researcher; instead, it was accidentally left on the porch of an elderly woman unrelated to the university. The Animal Liberation Front claimed responsibility for the attack. As a result of the campaign, the researcher sent an email to the Primate Freedom Project stating "you win" and "please don't bother my family anymore". In another incident at UCLA in June 2007, the Animal Liberation Brigade placed a bomb under the car of a UCLA children's ophthalmologist who experiments on cats and rhesus monkeys; the bomb had a faulty fuse and did not detonate. In 1997, PETA filmed staff from Huntingdon Life Sciences, showing dogs being mistreated. The employees responsible were dismissed, with two given community service orders and ordered to pay £250 costs, the first lab technicians to have been prosecuted for animal cruelty in the UK. The Stop Huntingdon Animal Cruelty campaign used tactics ranging from non-violent protest to the alleged firebombing of houses owned by executives associated with HLS's clients and investors. The Southern Poverty Law Center, which monitors US domestic extremism, has described SHAC's modus operandi as "frankly terroristic tactics similar to those of anti-abortion extremists", and in 2005 an official with the FBI's counter-terrorism division referred to SHAC's activities in the United States as domestic terrorist threats. Thirteen members of SHAC were jailed for between 15 months and eleven years on charges of conspiracy to blackmail or harm HLS and its suppliers. These attacks, as well as similar incidents that caused the Southern Poverty Law Center to declare in 2002 that the animal rights movement had "clearly taken a turn toward the more extreme", prompted the US government to pass the Animal Enterprise Terrorism Act and the UK government to add the offense of "Intimidation of persons connected with animal research organisation" to the Serious Organised Crime and Police Act 2005. Such legislation and the arrest and imprisonment of activists may have decreased the incidence of attacks. Scientific criticism Systematic reviews have pointed out that animal testing often fails to accurately mirror outcomes in humans. For instance, a 2013 review noted that some 100 vaccines have been shown to prevent HIV in animals, yet none of them has worked in humans. Effects seen in animals may not be replicated in humans, and vice versa. 
Many corticosteroids cause birth defects in animals, but not in humans. Conversely, thalidomide causes serious birth defects in humans, but not in some animals such as mice (however, it does cause birth defects in rabbits). A 2004 paper concluded that much animal research is wasted because systematic reviews are not used and because of poor methodology. A 2006 review found multiple studies where there were promising results for new drugs in animals, but human clinical studies did not show the same results. The researchers suggested that this might be due to researcher bias, or simply because animal models do not accurately reflect human biology. Lack of meta-reviews may be partially to blame. Poor methodology is an issue in many studies. A 2009 review noted that many animal experiments did not use blinded experiments, a key element of many scientific studies in which researchers are not told about the part of the study they are working on, in order to reduce bias. A 2021 paper found, in a sample of open-access Alzheimer's disease studies, that when authors omit from the title that an experiment was performed in mice, news headlines follow suit, and that such studies also receive greater attention on Twitter. Activism There are various examples of activists utilizing Freedom of Information Act (FOIA) requests to obtain information about taxpayer funding of animal testing. For example, the White Coat Waste Project, a group of activists who hold that taxpayers should not have to pay $20 billion every year for experiments on animals, highlighted that the National Institute of Allergy and Infectious Diseases provided $400,000 in taxpayer money to fund experiments in which 28 beagles were infected by disease-causing parasites. The White Coat Waste Project found reports that said dogs taking part in the experiments were "vocalizing in pain" after being injected with foreign substances. Following public outcry, People for the Ethical Treatment of Animals (PETA) made a call to action that all members of the National Institutes of Health resign effective immediately and that there is a "need to find a new NIH director to replace the outgoing Francis Collins who will shut down research that violates the dignity of nonhuman animals." Historical debate As the experimentation on animals increased, especially the practice of vivisection, so did criticism and controversy. In 1655, the advocate of Galenic physiology Edmund O'Meara said that "the miserable torture of vivisection places the body in an unnatural state". O'Meara and others argued that pain could affect animal physiology during vivisection, rendering results unreliable. There were also ethical objections, contending that the benefit to humans did not justify the harm to animals. Early objections to animal testing also came from another angle: many people believed animals were inferior to humans and so different that results from animals could not be applied to humans. On the other side of the debate, those in favor of animal testing held that experiments on animals were necessary to advance medical and biological knowledge. Claude Bernard, who is sometimes known as the "prince of vivisectors" and the father of physiology, and whose wife, Marie Françoise Martin, founded the first anti-vivisection society in France in 1883, famously wrote in 1865 that "the science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen". 
Arguing that "experiments on animals ... are entirely conclusive for the toxicology and hygiene of man ... the effects of these substances are the same on man as on animals, save for differences in degree", Bernard established animal experimentation as part of the standard scientific method. In 1896, the physiologist and physician Dr. Walter B. Cannon said, "The antivivisectionists are the second of the two types Theodore Roosevelt described when he said, 'Common sense without conscience may lead to crime, but conscience without common sense may lead to folly, which is the handmaiden of crime.'" These divisions between pro- and anti-animal testing groups first came to public attention during the Brown Dog affair in the early 1900s, when hundreds of medical students clashed with anti-vivisectionists and police over a memorial to a vivisected dog. In 1822, the first animal protection law was enacted in the British parliament, followed by the Cruelty to Animals Act (1876), the first law specifically aimed at regulating animal testing. The legislation was promoted by Charles Darwin, who wrote to Ray Lankester in March 1871: "You ask about my opinion on vivisection. I quite agree that it is justifiable for proper investigations on physiology; but not for mere damnable and detestable curiosity. It is a subject which makes me sick with horror, so I will not say another word about it, else I shall not sleep to-night." In response to lobbying by anti-vivisectionists, several organizations were set up in Britain to defend animal research: The Physiological Society was formed in 1876 to give physiologists "mutual benefit and protection", the Association for the Advancement of Medicine by Research was formed in 1882 and focused on policy-making, and the Research Defence Society (now Understanding Animal Research) was formed in 1908 "to make known the facts as to experiments on animals in this country; the immense importance to the welfare of mankind of such experiments and the great saving of human life and health directly attributable to them". Opposition to the use of animals in medical research first arose in the United States during the 1860s, when Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA), with America's first specifically anti-vivisection organization being the American Anti-Vivisection Society (AAVS), founded in 1883. Antivivisectionists of the era generally believed that the spread of mercy was the great cause of civilization and that vivisection was cruel. However, in the USA the antivivisectionists' efforts were defeated in every legislature, overwhelmed by the superior organization and influence of the medical community. Overall, this movement had little legislative success until the passing of the Laboratory Animal Welfare Act in 1966. Real progress in thinking about animal rights built on A Theory of Justice (1971) by the philosopher John Rawls and work on ethics by the philosopher Peter Singer. Alternatives Most scientists and governments state that animal testing should cause as little suffering to animals as possible, and that animal tests should only be performed where necessary. The "Three Rs" are guiding principles for the use of animals in research in most countries. Whilst replacement of animals, i.e. alternatives to animal testing, is one of the principles, their scope is much broader. 
Although such principles have been welcomed as a step forward by some animal welfare groups, they have also been criticized as outdated by current research and as having little practical effect in improving animal welfare. The scientists and engineers at Harvard's Wyss Institute have created "organs-on-a-chip", including the "lung-on-a-chip" and "gut-on-a-chip". Researchers at cellasys in Germany developed a "skin-on-a-chip". These tiny devices contain human cells in a 3-dimensional system that mimics human organs. The chips can be used instead of animals in in vitro disease research, drug testing, and toxicity testing. Researchers have also begun using 3-D bioprinters to create human tissues for in vitro testing. Another non-animal research method is in silico or computer simulation and mathematical modeling, which seeks to investigate and ultimately predict toxicity and drug effects in humans without using animals. This is done by investigating test compounds on a molecular level using recent advances in technological capabilities, with the ultimate goal of creating treatments unique to each patient. Microdosing is another alternative to the use of animals in experimentation. Microdosing is a process whereby volunteers are administered a small dose of a test compound, allowing researchers to investigate its pharmacological effects without harming the volunteers. Microdosing can replace the use of animals in pre-clinical drug screening and can reduce the number of animals used in safety and toxicity testing. Additional alternative methods include positron emission tomography (PET), which allows scanning of the human brain in vivo, and comparative epidemiological studies of disease risk factors among human populations. Simulators and computer programs have also replaced the use of animals in dissection, teaching and training exercises. Official bodies such as the European Centre for the Validation of Alternative Test Methods of the European Commission, the Interagency Coordinating Committee for the Validation of Alternative Methods in the US, ZEBET in Germany, and the Japanese Center for the Validation of Alternative Methods (among others) also promote and disseminate the 3Rs. These bodies are mainly driven by responding to regulatory requirements, such as supporting the cosmetics testing ban in the EU by validating alternative methods. The European Partnership for Alternative Approaches to Animal Testing serves as a liaison between the European Commission and industries. The European Consensus Platform for Alternatives coordinates efforts amongst EU member states. Academic centers also investigate alternatives, including the Center for Alternatives to Animal Testing at the Johns Hopkins University and the NC3Rs in the UK.
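The in silico approach described above ultimately reduces to estimating dose-response relationships without animal subjects. The following minimal sketch illustrates the core computation by fitting a Hill (sigmoidal dose-response) model to assay data; the doses, responses, and starting parameters are hypothetical values invented purely for illustration, not drawn from any real study.

```python
# Minimal sketch of in silico dose-response modeling (illustrative only).
# The doses and "observed" responses below are hypothetical values,
# not data from any real assay.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, emax, ec50, n):
    """Hill equation: sigmoidal response as a function of dose."""
    return emax * dose**n / (ec50**n + dose**n)

doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # e.g., micromolar
responses = np.array([2.0, 8.0, 25.0, 54.0, 81.0, 94.0])  # % of maximum effect

# Fit Emax, EC50 and the Hill coefficient to the observations.
(emax, ec50, n), _ = curve_fit(hill, doses, responses, p0=[100.0, 2.0, 1.0])
print(f"Emax ~ {emax:.1f}%, EC50 ~ {ec50:.2f} uM, Hill coefficient ~ {n:.2f}")
```

A production-grade in silico toxicology pipeline would combine many such models with molecular-level simulation, but a fitted dose-response curve of this kind is the basic object most of them estimate.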
Physical sciences
Research methods
Basics and measurement
175641
https://en.wikipedia.org/wiki/Chelation
Chelation
Chelation is a type of bonding of ions and molecules to metal ions. It involves the formation or presence of two or more separate coordinate bonds between a polydentate (multiply bonded) ligand and a single central metal atom. These ligands are called chelants, chelators, chelating agents, or sequestering agents. They are usually organic compounds, but this is not a necessity. The word chelation is derived from Greek χηλή, chēlē, meaning "claw"; the ligands lie around the central atom like the claws of a crab. The term chelate was first applied in 1920 by Sir Gilbert T. Morgan and H. D. K. Drew, who stated: "The adjective chelate, derived from the great claw or chele (Greek) of the crab or other crustaceans, is suggested for the caliperlike groups which function as two associating units and fasten to the central atom so as to produce heterocyclic rings." Chelation is useful in applications such as providing nutritional supplements, in chelation therapy to remove toxic metals from the body, as contrast agents in MRI scanning, in manufacturing using homogeneous catalysts, in chemical water treatment to assist in the removal of metals, and in fertilizers. Chelate effect The chelate effect is the greater affinity of chelating ligands for a metal ion than that of similar nonchelating (monodentate) ligands for the same metal. The thermodynamic principles underpinning the chelate effect are illustrated by the contrasting affinities of copper(II) for ethylenediamine (en) vs. methylamine: Cu + en ⇌ Cu(en) (1) and Cu + 2 MeNH2 ⇌ Cu(MeNH2)2 (2). In (1) the ethylenediamine forms a chelate complex with the copper ion. Chelation results in the formation of a five-membered CuC2N2 ring. In (2) the bidentate ligand is replaced by two monodentate methylamine ligands of approximately the same donor power, indicating that the Cu–N bonds are approximately the same in the two reactions. The thermodynamic approach to describing the chelate effect considers the equilibrium constant for the reaction: the larger the equilibrium constant, the higher the concentration of the complex, with β11 = [Cu(en)] / ([Cu][en]) and β12 = [Cu(MeNH2)2] / ([Cu][MeNH2]2). Electrical charges have been omitted for simplicity of notation. The square brackets indicate concentration, and the subscripts to the stability constants, β, indicate the stoichiometry of the complex. When the analytical concentration of methylamine is twice that of ethylenediamine and the concentration of copper is the same in both reactions, the concentration [Cu(en)] is much higher than the concentration [Cu(MeNH2)2] because β11 ≫ β12. An equilibrium constant, K, is related to the standard Gibbs free energy change, ΔG°, by ΔG° = −RT ln K = ΔH° − TΔS°, where R is the gas constant and T is the temperature in kelvins, ΔH° is the standard enthalpy change of the reaction and ΔS° is the standard entropy change. Since the enthalpy should be approximately the same for the two reactions, the difference between the two stability constants is due to the effects of entropy. In equation (1) there are two particles on the left and one on the right, whereas in equation (2) there are three particles on the left and one on the right. This difference means that less entropy is lost when the chelate complex is formed with the bidentate ligand than when the complex with monodentate ligands is formed. This is one of the factors contributing to the entropy difference. Other factors include solvation changes and ring formation. Some experimental data to illustrate the effect are shown in the following table. 
These data confirm that the enthalpy changes are approximately equal for the two reactions and that the main reason for the greater stability of the chelate complex is the entropy term, which is much less unfavorable. In general it is difficult to account precisely for thermodynamic values in terms of changes in solution at the molecular level, but it is clear that the chelate effect is predominantly an effect of entropy. Other explanations, including that of Schwarzenbach, are discussed in Greenwood and Earnshaw (loc. cit.). In nature Numerous biomolecules exhibit the ability to dissolve certain metal cations. Thus, proteins, polysaccharides, and polynucleic acids are excellent polydentate ligands for many metal ions. Organic compounds such as the amino acids glutamic acid and histidine, organic diacids such as malate, and polypeptides such as phytochelatin are also typical chelators. In addition to these adventitious chelators, several biomolecules are specifically produced to bind certain metals (see next section). Virtually all metalloenzymes feature metals that are chelated, usually to peptides or cofactors and prosthetic groups. Such chelating agents include the porphyrin rings in hemoglobin and chlorophyll. Many microbial species produce water-soluble pigments that serve as chelating agents, termed siderophores. For example, species of Pseudomonas are known to secrete pyochelin and pyoverdine, which bind iron. Enterobactin, produced by E. coli, is the strongest chelating agent known. Marine mussels use metal chelation, especially Fe3+ chelation with the Dopa residues in mussel foot protein-1, to improve the strength of the threads that they use to secure themselves to surfaces. In earth science, chemical weathering is attributed to organic chelating agents (e.g., peptides and sugars) that extract metal ions from minerals and rocks. Most metal complexes in the environment and in nature are bound in some form of chelate ring (e.g., with a humic acid or a protein). Thus, metal chelates are relevant to the mobilization of metals in the soil and to the uptake and accumulation of metals by plants and microorganisms. Selective chelation of heavy metals is relevant to bioremediation (e.g., removal of 137Cs from radioactive waste). Applications Animal feed additives Synthetic chelates such as ethylenediaminetetraacetic acid (EDTA) proved too stable and not nutritionally viable. If the mineral was taken from the EDTA ligand, the ligand could not be used by the body and would be expelled. During the expulsion process, the EDTA ligand randomly chelated and stripped other minerals from the body. According to the Association of American Feed Control Officials (AAFCO), a metal–amino acid chelate is defined as the product resulting from the reaction of metal ions from a soluble metal salt with amino acids, with a mole ratio in the range of 1–3 (preferably 2) moles of amino acids for one mole of metal. The average weight of the hydrolyzed amino acids must be approximately 150 Da, and the resulting molecular weight of the chelate must not exceed 800 Da. Since the early development of these compounds, much more research has been conducted, and it has been applied to human nutrition products in a similar manner to the animal nutrition experiments that pioneered the technology. Ferrous bis-glycinate is an example of one of these compounds that has been developed for human nutrition. 
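As a rough numerical illustration of the AAFCO definition just quoted, the sketch below estimates the weight of a 2:1 amino acid-iron chelate and checks it against the stated limits. The amino acid weight of 150 Da is taken from the definition above; the choice of iron as the metal and the neglect of protons lost on coordination are simplifying assumptions for illustration only.

```python
# Illustrative check of a metal-amino acid chelate against the AAFCO
# limits quoted above. Iron is chosen as an example metal; the small
# mass change from protons lost on coordination is ignored.
FE_ATOMIC_WEIGHT = 55.85   # iron, g/mol
AA_AVG_WEIGHT = 150.0      # average hydrolyzed amino acid weight per AAFCO, Da
MOLE_RATIO = 2             # moles of amino acid per mole of metal (preferred value)

chelate_weight = FE_ATOMIC_WEIGHT + MOLE_RATIO * AA_AVG_WEIGHT

print(f"Estimated chelate weight: {chelate_weight:.0f} Da")
print("Mole ratio within the 1-3 range:", 1 <= MOLE_RATIO <= 3)
print("Below the 800 Da ceiling:", chelate_weight <= 800)
```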
Dental use Dentin adhesives were first designed and produced in the 1950s, based on a co-monomer chelate with calcium on the surface of the tooth; these generated very weak, water-resistant chemical bonding (2–3 MPa). Chelation therapy Chelation therapy is an antidote for poisoning by mercury, arsenic, and lead. Chelating agents convert these metal ions into a chemically and biochemically inert form that can be excreted. Chelation using sodium calcium edetate has been approved by the U.S. Food and Drug Administration (FDA) for serious cases of lead poisoning. It is not approved for treating "heavy metal toxicity". Although beneficial in cases of serious lead poisoning, use of disodium EDTA (edetate disodium) instead of calcium disodium EDTA has resulted in fatalities due to hypocalcemia. Disodium EDTA is not approved by the FDA for any use, and all FDA-approved chelation therapy products require a prescription. Contrast agents Chelate complexes of gadolinium are often used as contrast agents in MRI scans, although iron particle and manganese chelate complexes have also been explored. Bifunctional chelate complexes of zirconium, gallium, fluorine, copper, yttrium, bromine, or iodine are often used for conjugation to monoclonal antibodies for use in antibody-based PET imaging. These chelate complexes often employ hexadentate ligands such as desferrioxamine B (DFO), according to Meijs et al., and the gadolinium complexes often employ octadentate ligands such as DTPA, according to Desreux et al. Auranofin, a chelate complex of gold, is used in the treatment of rheumatoid arthritis, and penicillamine, which forms chelate complexes of copper, is used in the treatment of Wilson's disease and cystinuria, as well as refractory rheumatoid arthritis. Nutritional advantages and issues Chelation in the intestinal tract is a cause of numerous interactions between drugs and metal ions (also known as "minerals" in nutrition). As examples, antibiotic drugs of the tetracycline and quinolone families are chelators of Fe2+, Ca2+, and Mg2+ ions. EDTA, which binds to calcium, is used to alleviate the hypercalcemia that often results from band keratopathy. The calcium may then be removed from the cornea, allowing for some increase in clarity of vision for the patient. Homogeneous catalysts are often chelated complexes. A representative example is the use of BINAP (a bidentate phosphine) in Noyori asymmetric hydrogenation and asymmetric isomerization. The latter is used in the practical manufacture of synthetic (−)-menthol. Cleaning and water softening A chelating agent is the main component of some rust removal formulations. Citric acid is used to soften water in soaps and laundry detergents. A common synthetic chelator is EDTA. Phosphonates are also well-known chelating agents. Chelators are used in water treatment programs and specifically in steam engineering. Although the treatment is often referred to as "softening", chelation has little effect on the water's mineral content, other than making it soluble and lowering the water's pH level. Fertilizers Metal chelate compounds are common components of fertilizers to provide micronutrients. These micronutrients (manganese, iron, zinc, copper) are required for the health of the plants. Most fertilizers contain phosphate salts that, in the absence of chelating agents, typically convert these metal ions into insoluble solids that are of no nutritional value to the plants. 
EDTA is the typical chelating agent that keeps these metal ions in a soluble form. Economic situation Reflecting their wide range of uses, the market for chelating agents grew by about 4% annually during 2009–2014, and the trend is likely to continue. Aminopolycarboxylic acid chelators are the most widely consumed chelating agents; however, the share of greener alternative chelators in this category continues to grow. The consumption of traditional aminopolycarboxylate chelators, in particular EDTA (ethylenediaminetetraacetic acid) and NTA (nitrilotriacetic acid), is declining (−6% annually) because of persisting concerns over their toxicity and negative environmental impact. In 2013, these greener alternative chelants represented approximately 15% of the total aminopolycarboxylic acid demand. This is expected to rise to around 21% by 2018, replacing traditional aminopolycarboxylic and aminophosphonic acids used in cleaning applications. Examples of greener alternative chelating agents include ethylenediamine disuccinic acid (EDDS), polyaspartic acid (PASA), methylglycinediacetic acid (MGDA), glutamic diacetic acid (L-GLDA), citrate, gluconic acid, amino acids, and plant extracts. Reversal Dechelation (or de-chelation) is the reverse of chelation, in which the chelating agent is recovered by acidifying the solution with a mineral acid to form a precipitate.
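To make the thermodynamic argument of the chelate effect section concrete, the sketch below converts stability constants into standard Gibbs free energies via ΔG° = −RT ln β. The two log β values are hypothetical round numbers chosen only to show the arithmetic; they are not measured constants for the copper-ethylenediamine system.

```python
# Converting stability constants (beta) into standard Gibbs free energies,
# as in the chelate effect section. The log10(beta) values are hypothetical,
# chosen only to illustrate the arithmetic.
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K

def delta_g_kj(log10_beta):
    """Standard Gibbs free energy change in kJ/mol from log10 of beta."""
    return -R * T * log10_beta * math.log(10) / 1000.0

examples = {
    "chelating (bidentate) ligand, hypothetical log beta11": 10.5,
    "two monodentate ligands, hypothetical log beta12": 7.5,
}
for label, log_beta in examples.items():
    print(f"{label}: delta G = {delta_g_kj(log_beta):.1f} kJ/mol")
# The larger beta (more negative delta G) corresponds to the chelate:
# at roughly equal enthalpy, the difference is carried by the entropy term.
```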
Physical sciences
Bond structure
Chemistry
175722
https://en.wikipedia.org/wiki/Boiler
Boiler
A boiler is a closed vessel in which fluid (generally water) is heated. The fluid does not necessarily boil. The heated or vaporized fluid exits the boiler for use in various processes or heating applications, including water heating, central heating, boiler-based power generation, cooking, and sanitation. Heat sources In a fossil fuel power plant using a steam cycle for power generation, the primary heat source will be combustion of coal, oil, or natural gas. In some cases, byproduct fuel such as the carbon monoxide-rich offgases of a coke battery can be burned to heat a boiler; biofuels such as bagasse, where economically available, can also be used. In a nuclear power plant, boilers called steam generators are heated by the heat produced by nuclear fission. Where a large volume of hot gas is available from some process, a heat recovery steam generator or recovery boiler can use the heat to produce steam, with little or no extra fuel consumed; such a configuration is common in a combined cycle power plant where a gas turbine and a steam boiler are used. In all cases the combustion product waste gases are separate from the working fluid of the steam cycle, making these systems examples of external combustion engines. Materials The pressure vessel of a boiler is usually made of steel (or alloy steel), or historically of wrought iron. Stainless steel, especially of the austenitic types, is not used in wetted parts of boilers due to corrosion and stress corrosion cracking. However, ferritic stainless steel is often used in superheater sections that will not be exposed to boiling water, and electrically heated stainless steel shell boilers are allowed under the European "Pressure Equipment Directive" for production of steam for sterilizers and disinfectors. In live steam models, copper or brass is often used because it is more easily fabricated in smaller-sized boilers. Historically, copper was often used for fireboxes (particularly for steam locomotives) because of its better formability and higher thermal conductivity; however, in more recent times the high price of copper often makes this an uneconomic choice, and cheaper substitutes (such as steel) are used instead. For much of the Victorian "age of steam", the only material used for boilermaking was the highest grade of wrought iron, with assembly by riveting. This iron was often obtained from specialist ironworks, such as those in the Cleator Moor (UK) area, noted for the high quality of their rolled plate, which was especially suitable for use in critical applications such as high-pressure boilers. In the 20th century, design practice moved towards the use of steel, with welded construction, which is stronger and cheaper and can be fabricated more quickly and with less labour. Wrought iron boilers corrode far more slowly than their modern-day steel counterparts, and are less susceptible to localized pitting and stress corrosion. That makes the longevity of older wrought-iron boilers far superior to that of welded steel boilers. Cast iron may be used for the heating vessel of domestic water heaters. Although such heaters are usually termed "boilers" in some countries, their purpose is usually to produce hot water, not steam, so they run at low pressure and try to avoid boiling. The brittleness of cast iron makes it impractical for high-pressure steam boilers. Energy The source of heat for a boiler is combustion of any of several fuels, such as wood, coal, oil, or natural gas. 
Electric steam boilers use resistance- or immersion-type heating elements. Nuclear fission is also used as a heat source for generating steam, either directly (BWR) or, in most cases, in specialised heat exchangers called "steam generators" (PWR). Heat recovery steam generators (HRSGs) use the heat rejected from other processes such as gas turbines. Boiler efficiency There are two methods of measuring boiler efficiency under the ASME performance test codes (ASME PTC 4 for boilers and ASME PTC 4.4 for HRSGs) and under EN 12952-15 for water-tube boilers: the input-output method (direct method) and the heat-loss method (indirect method). Input-output method (or direct method) The direct method of testing boiler efficiency is the simpler and more common of the two (a worked numerical example appears at the end of this article): Boiler efficiency = power out / power in = [Q × (Hg − Hf)] / (q × GCV) × 100%, where Q = rate of steam flow in kg/h, Hg = enthalpy of saturated steam in kcal/kg, Hf = enthalpy of feed water in kcal/kg, q = rate of fuel use in kg/h, and GCV = gross calorific value in kcal/kg (e.g., pet coke: 8,200 kcal/kg). Heat-loss method (or indirect method) To measure boiler efficiency by the indirect method, parameters such as these are needed: ultimate analysis of the fuel (hydrogen, sulfur, carbon, moisture content, ash content); percentage of O2 or CO2 in the flue gas; flue gas temperature at the outlet; ambient temperature in °C and humidity of air in kg/kg; GCV of the fuel in kcal/kg; ash percentage in combustible fuel; and GCV of the ash in kcal/kg. Configurations Boilers can be classified into the following configurations: Pot boiler or Haycock boiler/Haystack boiler A primitive "kettle" where a fire heats a partially filled water container from below. 18th-century Haycock boilers generally produced and stored large volumes of very low-pressure steam, often hardly above that of the atmosphere. These could burn wood or, most often, coal. Efficiency was very low. Flued boiler With one or two large flues; an early type or forerunner of the fire-tube boiler. Fire-tube boiler Here, water partially fills a boiler barrel with a small volume left above to accommodate the steam (steam space). This is the type of boiler used in nearly all steam locomotives. The heat source is inside a furnace or firebox that has to be kept permanently surrounded by the water in order to maintain the temperature of the heating surface below the boiling point. The furnace can be situated at one end of a fire-tube, which lengthens the path of the hot gases, thus augmenting the heating surface; this can be further increased by making the gases reverse direction through a second parallel tube or a bundle of multiple tubes (two-pass or return flue boiler); alternatively, the gases may be taken along the sides and then beneath the boiler through flues (three-pass boiler). In the case of a locomotive-type boiler, a boiler barrel extends from the firebox and the hot gases pass through a bundle of fire tubes inside the barrel, which greatly increases the heating surface compared to a single tube and further improves heat transfer. Fire-tube boilers usually have a comparatively low rate of steam production, but high steam storage capacity. Fire-tube boilers mostly burn solid fuels, but are readily adaptable to those of the liquid or gas variety. Fire-tube boilers may also be referred to as "scotch-marine" or "marine" type boilers. Water-tube boiler In this type, tubes filled with water are arranged inside a furnace in a number of possible configurations. 
Often the water tubes connect large drums, the lower ones containing water and the upper ones steam and water; in other cases, such as a mono-tube boiler, water is circulated by a pump through a succession of coils. This type generally gives high steam production rates, but less storage capacity than the above. Water-tube boilers can be designed to exploit any heat source and are generally preferred in high-pressure applications, since the high-pressure water/steam is contained within small-diameter pipes which can withstand the pressure with a thinner wall. These boilers are commonly constructed in place, roughly square in shape, and can be multiple stories tall. Flash boiler A flash boiler is a specialized type of water-tube boiler in which tubes are close together and water is pumped through them. A flash boiler differs from the type of mono-tube steam generator in which the tube is permanently filled with water. In a flash boiler, the tube is kept so hot that the water feed is quickly flashed into steam and superheated. Flash boilers had some use in automobiles in the 19th century, and this use continued into the early 20th century. Fire-tube boiler with water-tube firebox Sometimes the two above types have been combined in the following manner: the firebox contains an assembly of water tubes, called thermic siphons. The gases then pass through a conventional firetube boiler. Water-tube fireboxes were installed in many Hungarian locomotives, but have met with little success in other countries. Sectional boiler In a cast iron sectional boiler, sometimes called a "pork chop boiler", the water is contained inside cast iron sections. These sections are assembled on site to create the finished boiler. Safety To ensure that boilers are designed and operated safely, professional organizations such as the American Society of Mechanical Engineers (ASME) develop standards and regulatory codes. For instance, the ASME Boiler and Pressure Vessel Code is a standard providing a wide range of rules and directives to ensure compliance of boilers and other pressure vessels with safety, security and design standards. Historically, boilers were a source of many serious injuries and much property destruction due to poorly understood engineering principles. Thin and brittle metal shells can rupture, while poorly welded or riveted seams can open up, leading to a violent eruption of the pressurized steam. When water is converted to steam it expands to over 1,000 times its original volume and travels down steam pipes at high velocity. Because of this, steam is an efficient method of moving energy and heat around a site from a central boiler house to where it is needed, but without the right boiler feedwater treatment, a steam-raising plant will suffer from scale formation and corrosion. At best, this increases energy costs and can lead to poor-quality steam, reduced efficiency, shorter plant life and unreliable operation. At worst, it can lead to catastrophic failure and loss of life. Collapsed or dislodged boiler tubes can also spray scalding-hot steam and smoke out of the air intake and firing chute, injuring the firemen who load the coal into the fire chamber. Extremely large boilers providing hundreds of horsepower to operate factories can potentially demolish entire buildings. A boiler that has a loss of feed water and is permitted to boil dry can be extremely dangerous. 
If feed water is then sent into the empty boiler, the small cascade of incoming water instantly boils on contact with the superheated metal shell and leads to a violent explosion that cannot be controlled even by safety steam valves. Draining of the boiler can also happen if a leak occurs in the steam supply lines that is larger than the make-up water supply could replace. The Hartford Loop was invented in 1919 by the Hartford Steam Boiler Inspection and Insurance Company as a method to help prevent this condition from occurring, and thereby reduce their insurance claims. Superheated steam boiler When water is boiled, the result is saturated steam, also referred to as "wet steam". Saturated steam, while mostly consisting of water vapor, carries some unevaporated water in the form of droplets. Saturated steam is useful for many purposes, such as cooking, heating and sanitation, but is not desirable when steam is expected to convey energy to machinery, such as a ship's propulsion system or the "motion" of a steam locomotive. This is because the unavoidable temperature and/or pressure loss that occurs as steam travels from the boiler to the machinery will cause some condensation, resulting in liquid water being carried into the machinery. The water entrained in the steam may damage turbine blades or, in the case of a reciprocating steam engine, may cause serious mechanical damage due to hydrostatic lock. Superheated steam boilers evaporate the water and then further heat the steam in a superheater, causing the discharged steam temperature to be substantially above the boiling temperature at the boiler's operating pressure. As the resulting "dry steam" is much hotter than needed to stay in the vaporous state, it will not contain any significant unevaporated water. Also, higher steam pressure will be possible than with saturated steam, enabling the steam to carry more energy. Although superheating adds more energy to the steam in the form of heat, there is no effect on pressure, which is determined by the rate at which steam is drawn from the boiler and the pressure settings of the safety valves. The fuel consumption required to generate superheated steam is greater than that required to generate an equivalent volume of saturated steam. However, the overall energy efficiency of the steam plant (the combination of boiler, superheater, piping and machinery) generally will be improved enough to more than offset the increased fuel consumption. Superheater operation is similar to that of the coils on an air conditioning unit, although for a different purpose. The steam piping is directed through the flue gas path in the boiler furnace, an area in which the temperature is extremely high. Some superheaters are of the radiant type, which, as the name suggests, absorb heat by radiation. Others are of the convection type, absorbing heat from a fluid. Some are a combination of the two types. Through either method, the extreme heat in the flue gas path will also heat the superheater steam piping and the steam within. The design of any superheated steam plant presents several engineering challenges due to the high working temperatures and pressures. One consideration is the introduction of feedwater to the boiler. The pump used to charge the boiler must be able to overcome the boiler's operating pressure, or else water will not flow. As a superheated boiler is usually operated at high pressure, the corresponding feedwater pressure must be even higher, demanding a more robust pump design. Another consideration is safety. 
High-pressure, superheated steam can be extremely dangerous if it unintentionally escapes. To give the reader some perspective, the steam plants used in many U.S. Navy destroyers built during World War II operated at high pressure and with a high degree of superheat. In the event of a major rupture of the system, an ever-present hazard in a warship during combat, the enormous energy release of escaping superheated steam, expanding to more than 1600 times its confined volume, would be equivalent to a cataclysmic explosion, whose effects would be exacerbated by the steam release occurring in a confined space, such as a ship's engine room. Also, small leaks that are not visible at the point of leakage could be lethal if an individual were to step into the escaping steam's path. Hence designers endeavor to give the steam-handling components of the system as much strength as possible to maintain integrity. Special methods of coupling steam pipes together are used to prevent leaks, with very high pressure systems employing welded joints to avoid the leakage problems of threaded or gasketed connections. Supercritical steam generator Supercritical steam generators are frequently used for the production of electric power. They operate at supercritical pressure. In contrast to a "subcritical boiler", a supercritical steam generator operates at such a high pressure (above the critical pressure of water, about 22 MPa) that the physical turbulence that characterizes boiling ceases to occur; the fluid is neither liquid nor gas but a supercritical fluid. There is no generation of steam bubbles within the water, because the pressure is above the critical pressure point at which steam bubbles can form. As the fluid expands through the turbine stages, its thermodynamic state drops below the critical point as it does work turning the turbine, which turns the electrical generator from which power is ultimately extracted. The fluid at that point may be a mix of steam and liquid droplets as it passes into the condenser. This results in slightly less fuel use and therefore less greenhouse gas production. The term "boiler" should not be used for a supercritical pressure steam generator, as no "boiling" occurs in this device. Accessories Boiler fittings and accessories Pressuretrols to control the steam pressure in the boiler. Boilers generally have two or three pressuretrols: a manual-reset pressuretrol, which functions as a safety device by setting the upper limit of steam pressure; the operating pressuretrol, which controls when the boiler fires to maintain pressure; and, for boilers equipped with a modulating burner, a modulating pressuretrol, which controls the amount of fire. Safety valve: used to relieve pressure and prevent possible explosion of a boiler. Water level indicators: show the operator the level of fluid in the boiler; also known as a sight glass, water gauge or water column. Bottom blowdown valves: provide a means for removing solid particulates that condense and lie on the bottom of a boiler. As the name implies, this valve is usually located directly on the bottom of the boiler, and is occasionally opened to use the pressure in the boiler to push these particulates out. Continuous blowdown valve: allows a small quantity of water to escape continuously. Its purpose is to prevent the water in the boiler becoming saturated with dissolved salts. Saturation would lead to foaming and cause water droplets to be carried over with the steam, a condition known as priming. Blowdown is also often used to monitor the chemistry of the boiler water. 
Trycock: a type of valve that is often used to manually check a liquid level in a tank, most commonly found on a water boiler. Flash tank: high-pressure blowdown enters this vessel, where the steam can 'flash' safely and be used in a low-pressure system or be vented to atmosphere, while the ambient-pressure blowdown flows to drain. Automatic blowdown/continuous heat recovery system: this system allows the boiler to blow down only when makeup water is flowing to the boiler, thereby transferring the maximum amount of heat possible from the blowdown to the makeup water. No flash tank is generally needed, as the blowdown discharged is close to the temperature of the makeup water. Hand holes: steel plates installed in openings in the header to allow for inspection and installation of tubes and inspection of internal surfaces. Steam drum internals: a series of screens, scrubbers and cans (cyclone separators). Low-water cutoff: a mechanical means (usually a float switch) or an electrode with a safety switch that is used to turn off the burner or shut off fuel to the boiler to prevent it from running once the water goes below a certain point. If a boiler is "dry-fired" (burned without water in it) it can suffer rupture or catastrophic failure. Surface blowdown line: provides a means for removing foam or other lightweight non-condensable substances that tend to float on top of the water inside the boiler. Circulating pump: designed to circulate water back to the boiler after it has expelled some of its heat. Feedwater check valve or clack valve: a non-return stop valve in the feedwater line. This may be fitted to the side of the boiler, just below the water level, or to the top of the boiler. Top feed: in this design for feedwater injection, the water is fed to the top of the boiler. This can reduce boiler fatigue caused by thermal stress. By spraying the feedwater over a series of trays, the water is quickly heated, and this can reduce limescale. Desuperheater tubes or bundles: a series of tubes or bundles of tubes in the water drum or the steam drum designed to cool superheated steam, in order to supply auxiliary equipment that does not need, or may be damaged by, dry steam. Chemical injection line: a connection to add chemicals for controlling feedwater pH. Steam accessories These include the main steam stop valve, steam traps, and the main steam stop/check valve, which is used on multiple-boiler installations. Combustion accessories These include the fuel oil system (with fuel oil heaters), the gas system, and the coal system. Other essential items Pressure gauges, feed pumps, the fusible plug, insulation and lagging, the inspector's test pressure gauge attachment, the name plate, and the registration plate. Draught A fuel-heated boiler must provide air to oxidize its fuel. Early boilers provided this stream of air, or draught, through the natural action of convection in a chimney connected to the exhaust of the combustion chamber. Since the heated flue gas is less dense than the ambient air surrounding the boiler, the flue gas rises in the chimney, pulling denser, fresh air into the combustion chamber. Most modern boilers depend on mechanical draught rather than natural draught. This is because natural draught is subject to outside air conditions and the temperature of flue gases leaving the furnace, as well as the chimney height. All these factors make proper draught hard to attain and therefore make mechanical draught equipment much more reliable and economical. 
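The natural-draught mechanism described above can be estimated with the stack-effect relation: the available draught pressure is roughly the density difference between ambient air and flue gas, times gravitational acceleration, times chimney height. The sketch below uses the ideal-gas approximation and treats flue gas as hot air; the chimney height and temperatures are example values, not figures from the text.

```python
# Rough stack-effect estimate of natural draught (ideal-gas approximation,
# flue gas treated as hot air). All inputs are illustrative example values.
G = 9.81            # gravitational acceleration, m/s^2
P_ATM = 101_325.0   # ambient pressure, Pa
R_AIR = 287.0       # specific gas constant of air, J/(kg*K)

def air_density(temp_k: float) -> float:
    """Ideal-gas density of air at atmospheric pressure."""
    return P_ATM / (R_AIR * temp_k)

chimney_height = 30.0   # m (example)
t_ambient = 293.0       # K, about 20 degrees C
t_flue = 473.0          # K, about 200 degrees C

draught_pa = G * chimney_height * (air_density(t_ambient) - air_density(t_flue))
print(f"Available natural draught: {draught_pa:.0f} Pa "
      f"(~{draught_pa / 249:.2f} inches of water)")
```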
Types of draught can also be divided into induced draught, where exhaust gases are pulled out of the boiler; forced draught, where fresh air is pushed into the boiler; and balanced draught, where both effects are employed. Natural draught through the use of a chimney is a type of induced draught; mechanical draught can be induced, forced or balanced. There are two types of mechanical induced draught. The first is through the use of a steam jet. The steam jet, oriented in the direction of flue gas flow, induces flue gases into the stack and allows for a greater flue gas velocity, increasing the overall draught in the furnace. This method was common on steam-driven locomotives, which could not have tall chimneys. The second method is simply using an induced draught fan (ID fan), which removes flue gases from the furnace and forces the exhaust gas up the stack. Almost all induced draught furnaces operate with a slightly negative pressure. Mechanical forced draught is provided by means of a fan forcing air into the combustion chamber. Air is often passed through an air heater, which, as the name suggests, heats the air going into the furnace in order to increase the overall efficiency of the boiler. Dampers are used to control the quantity of air admitted to the furnace. Forced draught furnaces usually have a positive pressure. Balanced draught is obtained through use of both induced and forced draught. This is more common with larger boilers, where the flue gases have to travel a long distance through many boiler passes. The induced draught fan works in conjunction with the forced draught fan, allowing the furnace pressure to be maintained slightly below atmospheric.
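As the worked example promised in the boiler efficiency section, the sketch below substitutes numbers into the input-output (direct) formula given there. Only the pet coke GCV of 8,200 kcal/kg comes from the text; the steam flow, enthalpies and fuel rate are hypothetical figures chosen for illustration.

```python
# Direct (input-output) boiler efficiency, per the formula in the boiler
# efficiency section. All inputs except the pet coke GCV are hypothetical.
Q = 10_000.0   # steam flow, kg/h (example)
HG = 660.0     # enthalpy of saturated steam, kcal/kg (example)
HF = 60.0      # enthalpy of feed water, kcal/kg (example)
q = 900.0      # fuel firing rate, kg/h (example)
GCV = 8_200.0  # gross calorific value of pet coke, kcal/kg (from the text)

efficiency = Q * (HG - HF) / (q * GCV) * 100.0
print(f"Boiler efficiency (direct method): {efficiency:.1f}%")  # ~81.3%
```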
Technology
Heating and cooling
175734
https://en.wikipedia.org/wiki/Local%20anesthetic
Local anesthetic
A local anesthetic (LA) is a medication that causes absence of all sensation (including pain) in a specific body part without loss of consciousness, providing local anesthesia, as opposed to a general anesthetic, which eliminates all sensation in the entire body and causes unconsciousness. Local anesthetics are most commonly used to eliminate pain during or after surgery. When used on specific nerve pathways (local anesthetic nerve block), paralysis (loss of muscle function) can also be induced. Classification LAs are of two types: clinical LAs, comprising amino amide LAs and amino ester LAs, and synthetic LAs, the cocaine derivatives. Cocaine derivatives Synthetic cocaine-derived LAs differ from cocaine in that they have a much lower abuse potential and do not cause hypertension or vasoconstriction (with few exceptions). The suffix "-caine" at the end of these medication names is derived from the word "cocaine", because cocaine was formerly used as a local anesthetic. Examples Short duration of action and low potency: benzocaine, procaine, chloroprocaine, cocaine. Medium duration of action and medium potency: lidocaine, prilocaine. Long duration of action and high potency: tetracaine, bupivacaine, cinchocaine, ropivacaine. Medical uses Local anesthetics may be used to prevent and/or treat acute pain, to treat chronic pain, and as a supplement to general anesthesia. They are used in various techniques of local anesthesia, such as: topical anesthesia (surface anesthesia); topical administration of a cream, gel, ointment, liquid, or spray of anesthetic dissolved in DMSO or other solvents/carriers for deeper absorption; infiltration; brachial plexus block; epidural (extradural) block; spinal anesthesia (subarachnoid block); iontophoresis; diagnostic purposes (e.g. dibucaine); and anti-arrhythmic agents (e.g. lidocaine). Acute pain Even though acute pain can be managed using analgesics, conduction anesthesia may be preferable because of superior pain control and fewer side effects. For purposes of pain therapy, LA drugs are often given by repeated injection or continuous infusion through a catheter. LA drugs are also often combined with other agents such as opioids for synergistic analgesic action. Low doses of LA drugs can be sufficient so that muscle weakness does not occur and patients may be mobilized. Conduction anesthesia is used for acute pain in a range of typical settings. Chronic pain Chronic pain is a complex and often serious condition that requires diagnosis and treatment by an expert in pain medicine. LAs can be applied repeatedly or continuously for prolonged periods to relieve chronic pain, usually in combination with medication such as opioids, NSAIDs, and anticonvulsants. Though they can be easily performed, repeated local anesthetic blocks in chronic pain conditions are not recommended, as there is no evidence of long-term benefits. Surgery Virtually every part of the body can be anesthetized using conduction anesthesia. However, only a limited number of techniques are in common clinical use. Sometimes, conduction anesthesia is combined with general anesthesia or sedation for the patient's comfort and ease of surgery. However, many anesthetists, surgeons, patients and nurses believe that it is safer to perform major surgeries under local anesthesia than under general anesthesia. 
A wide variety of operations are typically performed under conduction anesthesia. Diagnostic tests Diagnostic tests such as bone marrow aspiration, lumbar puncture (spinal tap) and aspiration of cysts or other structures are made less painful by administration of a local anesthetic before the insertion of larger needles. Other uses Local anesthesia is also used during the insertion of IV devices, such as pacemakers and implantable defibrillators, ports used for giving chemotherapy medications, and hemodialysis access catheters. Topical anesthesia, in the form of lidocaine/prilocaine (EMLA), is most commonly used to enable relatively painless venipuncture (blood collection) and placement of intravenous cannulae. It may also be suitable for other kinds of punctures such as ascites drainage and amniocentesis. Surface anesthesia also facilitates some endoscopic procedures such as bronchoscopy (visualization of the lower airways) or cystoscopy (visualization of the inner surface of the bladder). Side effects Localized side effects Edema of the tongue, pharynx and larynx may develop as a side effect of local anesthesia. This could have a variety of causes, including trauma during injection, infection, an allergic reaction, haematoma or injection of irritating solutions such as cold-sterilization solutions. Usually there is tissue swelling at the point of injection. This is due to puncture of a vein, which allows blood to flow into loose tissues in the surrounding area. Blanching of the tissues in the area where the local anesthetic is deposited is also common. This gives the area a white appearance, as blood flow is prevented by vasoconstriction of arteries in the area. The vasoconstriction stimulus gradually wears off and the tissue subsequently returns to normal in less than two hours. The side effects of inferior alveolar nerve block include feeling tense, clenching of the fists and moaning. The duration of soft-tissue anesthesia is longer than that of pulpal anesthesia and is often associated with difficulty eating, drinking and speaking. Risks The risk of temporary or permanent nerve damage varies between different locations and types of nerve blocks. There is a risk of accidental damage to local blood vessels during injection of the local anesthetic solution. This is referred to as haematoma and can result in pain, trismus, swelling and/or discolouration of the region. The density of tissues surrounding the injured vessels is an important factor for haematoma. The chance of this occurring is greatest in a posterior superior alveolar nerve block or in a pterygomandibular block. Giving local anesthesia to patients with liver disease can have significant consequences. Thorough evaluation of the disease should be carried out to assess potential risk to the patient, as in significant liver dysfunction the half-life of amide local anesthetic agents may be drastically increased, thus increasing the risk of overdose. Local anesthetics and vasoconstrictors may be administered to pregnant patients; however, it is very important to be especially cautious when giving a pregnant patient any type of drug. Lidocaine can be safely used, but bupivacaine and mepivacaine should be avoided. Consultation with the obstetrician is vital before administering any type of local anesthetic to a pregnant patient. Recovery Permanent nerve damage after a peripheral nerve block is rare. Symptoms are likely to resolve within a few weeks. 
The vast majority of those affected (92–97%) recover within four to six weeks, and 99% of these people have recovered within a year. An estimated one in 5,000 to 30,000 nerve blocks results in some degree of permanent nerve damage. Symptoms may continue to improve for up to 18 months following injury.

Potential side effects
General systemic adverse effects are due to the pharmacological effects of the anesthetic agents used. The conduction of electric impulses follows a similar mechanism in peripheral nerves, the central nervous system, and the heart. The effects of local anesthetics are, therefore, not specific to signal conduction in peripheral nerves. Side effects on the central nervous system and the heart may be severe and potentially fatal. However, toxicity usually occurs only at plasma levels which are rarely reached if proper anesthetic techniques are adhered to. High plasma levels might arise, for example, when doses intended for epidural or supporting-tissue administration are accidentally delivered as an intravascular injection.

Emotional reactions
When patients are emotionally affected in the form of nervousness or fear, it can lead to vasovagal collapse. Anticipation of pain during administration activates the parasympathetic nervous system while inhibiting the orthosympathetic nervous system. The result is a dilation of arteries in muscles, which can lead to a reduction in circulating blood volume, inducing a temporary reduction of blood flow to the brain. Notable symptoms include restlessness, visible pallor, perspiration, and possible loss of consciousness. In severe cases, clonic cramps resembling an epileptic insult may occur. On the other hand, fear of administration can also result in accelerated, shallow breathing, or hyperventilation. The patient may feel a tingling sensation in the hands and feet, or a sense of light-headedness and increased chest pressure. Hence, it is crucial for the medical professional administering the local anesthesia, especially in the form of an injection, to ensure that the patient is in a comfortable setting and has any potential fears alleviated in order to avoid these possible complications.

Central nervous system
Depending on local tissue concentrations of local anesthetics, excitatory or depressant effects on the central nervous system may occur. Initial symptoms of systemic toxicity include ringing in the ears (tinnitus), a metallic taste in the mouth, tingling or numbness of the mouth, dizziness, and/or disorientation. At higher concentrations, a relatively selective depression of inhibitory neurons results in cerebral excitation, which may lead to more advanced symptoms including motor twitching in the periphery followed by grand mal seizures. It is reported that seizures are more likely to occur when bupivacaine is used, particularly in combination with chloroprocaine. A profound depression of brain functions may occur at even higher concentrations, which may lead to coma, respiratory arrest, and death. Such tissue concentrations may be due to very high plasma levels after intravenous injection of a large dose. Another possibility is direct exposure of the central nervous system through the cerebrospinal fluid, i.e., overdose in spinal anesthesia or accidental injection into the subarachnoid space in epidural anesthesia.

Cardiovascular system
Cardiac toxicity can result from improper injection of agent into a vessel.
Even with proper administration, some diffusion of the agent into the body from the site of application is inevitable, due to unforeseeable anatomical idiosyncrasies of the patient. This may affect the nervous system or cause the agent to enter the general circulation. However, infections are very seldom transmitted. Cardiac toxicity associated with overdose of intravascular injection of local anesthetic is characterized by hypotension, atrioventricular conduction delay, idioventricular rhythms, and eventual cardiovascular collapse. Although all local anesthetics potentially shorten the myocardial refractory period, bupivacaine blocks the cardiac sodium channels, thereby making it the most likely to precipitate malignant arrhythmias. Even levobupivacaine and ropivacaine (single-enantiomer derivatives), developed to ameliorate cardiovascular side effects, still harbor the potential to disrupt cardiac function. Toxicity from anesthetic combinations is additive.

Endocrine
The endocrine and metabolic systems show only slight adverse effects, with most cases being without clinical repercussions.

Immunologic allergy
Adverse reactions to local anesthetics (especially the esters) are not uncommon, but legitimate allergies are very rare. Allergic reactions to the esters are usually due to a sensitivity to their metabolite, para-aminobenzoic acid, and do not result in cross-allergy to amides. Therefore, amides can be used as alternatives in those patients. Nonallergic reactions may resemble allergy in their manifestations. In some cases, skin tests and provocative challenge may be necessary to establish a diagnosis of allergy. Cases of allergy to paraben derivatives, which are often added as preservatives to local anesthetic solutions, also occur.

Methemoglobinemia
Methemoglobinemia is a condition in which the iron in hemoglobin is altered, reducing its oxygen-carrying capability, which produces cyanosis and symptoms of hypoxia. Exposure to aniline-group chemicals such as benzocaine, lidocaine, and prilocaine can produce this effect, especially benzocaine. The systemic toxicity of prilocaine is comparatively low, but its metabolite, o-toluidine, is known to cause methemoglobinemia.

Second-generation effects
The application of local anesthetics during oocyte removal in in vitro fertilization has been subject to debate. Pharmacological concentrations of anesthetic agents have been found in follicular fluid. Clinical trials have not demonstrated any effects on pregnant women. However, there is some concern about the behavioral effects of lidocaine on offspring in rats. During pregnancy, it is not common for local anesthetics to have any adverse effect on the fetus. Despite this, the risk of toxicity may be higher in pregnancy, due to an increase in the unbound fraction of local anesthetic and physiological changes that increase the transfer of local anesthetic into the central nervous system. Hence, it is recommended that pregnant women receive a lower dose of local anesthetic to reduce any potential complications.

Treatment of overdose: "Lipid rescue"
Lipid emulsion therapy, or lipid rescue, is a method of treating toxicity that was invented by Dr. Guy Weinberg in 1998 and was not widely used until after the first published successful rescue in 2006. Evidence, including human case reports, indicates that Intralipid, a commonly available intravenous lipid emulsion, can be effective in treating severe cardiotoxicity secondary to local anesthetic overdose. However, the evidence at this point is still limited.
Though most case reports to date have used Intralipid, other emulsions, such as Liposyn and Medialipid, have also been shown to be effective. Ample supporting animal evidence and human case reports show successful use of lipid rescue in this way. In the UK, efforts have been made to publicize lipid rescue more widely, and by 2010 lipid rescue had been officially promoted as a treatment for local anesthetic toxicity by the Association of Anaesthetists of Great Britain and Ireland. One published case has been reported of successful treatment of refractory cardiac arrest in bupropion and lamotrigine overdose using lipid emulsion. The design of a 'homemade' lipid rescue kit has been described. Although the mechanism of action of lipid rescue is not completely understood, the added lipid in the blood stream may act as a sink, allowing for the removal of lipophilic toxins from affected tissues. This theory is compatible with two studies on lipid rescue for clomipramine toxicity in rabbits and with a clinical report on the use of lipid rescue in veterinary medicine to treat a puppy with moxidectin toxicosis.

Mechanism of action
All LAs are membrane-stabilizing drugs; they reversibly decrease the rate of depolarization and repolarization of excitable membranes (like nociceptors). Though many other drugs also have membrane-stabilizing properties, not all are used as LAs (propranolol, for example, has LA properties but is not used as one). LA drugs act mainly by inhibiting sodium influx through sodium-specific ion channels in the neuronal cell membrane, in particular the so-called voltage-gated sodium channels. When the influx of sodium is interrupted, an action potential cannot arise and signal conduction is inhibited. The receptor site is thought to be located at the cytoplasmic (inner) portion of the sodium channel. Local anesthetic drugs bind more readily to sodium channels in an activated state, so the onset of neuronal blockade is faster in rapidly firing neurons. This is referred to as state-dependent blockade. LAs are weak bases and are usually formulated as the hydrochloride salt to render them water-soluble. At a pH equal to the protonated base's pKa, the protonated (ionized) and unprotonated (un-ionized) forms of the molecule exist in equimolar amounts, but only the unprotonated base diffuses readily across cell membranes. Once inside the cell, the local anesthetic is in equilibrium with the formation of the protonated (ionized) form, which does not readily pass back out of the cell. This is referred to as "ion-trapping". In the protonated form, the molecule binds to the LA binding site on the inside of the ion channel near the cytoplasmic end. Most LAs work on the internal surface of the membrane: the drug has to penetrate the cell membrane, which is achieved best in the non-ionised form. This is exemplified by the permanently ionised LA RAC 421-II, which cannot diffuse across the cell membrane but, if injected into the cytosol of a nerve fibre, can still induce sodium-channel blockade and anesthetic effects. Acidosis, such as that caused by inflammation at a wound, partly reduces the action of LAs. This is partly because most of the anesthetic is then ionized and therefore unable to cross the cell membrane to reach its cytoplasmic-facing site of action on the sodium channel.
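The pH dependence described above follows directly from the Henderson-Hasselbalch relationship. As a rough illustration (a minimal sketch, not clinical guidance; the pKa values below are approximate textbook figures and should be checked against a reference), the fraction of a weak-base anesthetic in the membrane-permeable, un-ionized form can be computed as follows:

    def unionized_fraction(pka, ph):
        """Fraction of a weak base in the un-ionized (membrane-permeable) form.

        From the Henderson-Hasselbalch equation for a base:
        ionized/un-ionized = 10**(pKa - pH), so the un-ionized
        fraction is 1 / (1 + 10**(pKa - pH)).
        """
        return 1.0 / (1.0 + 10.0 ** (pka - ph))

    # Approximate textbook pKa values; verify against a reference before use.
    for name, pka in [("lidocaine", 7.9), ("bupivacaine", 8.1), ("procaine", 8.9)]:
        normal = unionized_fraction(pka, 7.4)   # normal tissue pH
        acidic = unionized_fraction(pka, 6.9)   # inflamed, acidotic tissue
        print(f"{name}: {normal:.1%} un-ionized at pH 7.4, {acidic:.1%} at pH 6.9")

For lidocaine this gives roughly 24% un-ionized at pH 7.4 but only about 9% at pH 6.9, which is the quantitative face of the clinical observation that local anesthetics work poorly in inflamed, acidic tissue.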
Sensitivity of nerve fibers to local anesthetics
For most patients, administration of local anesthetics causes the sensation of pain to be lost first, followed by temperature, touch, deep pressure, and finally motor function. The sensitivity of nerve fibers to blockade depends on a combination of diameter and myelination. The different sensitivities of fiber types to LA blockade are termed differential blockade. Myelinated fibers are more sensitive to blockade because conduction in them jumps between nodes of Ranvier, so blocking only a few consecutive nodes of Ranvier is enough to prevent action potential propagation. In unmyelinated nerves, by contrast, an entire length of the axon must be blocked. Regarding diameter, the generally accepted principle is that susceptibility to local anesthesia varies inversely with fiber diameter. In general, autonomic type B fibers, small unmyelinated type C fibers (pain sensation), and small myelinated Aδ fibers (pain and temperature sensations) are blocked before the larger myelinated Aγ, Aβ, and Aα fibers (mediating postural, touch, pressure, and motor information).

Techniques
Local anesthetics can block almost every nerve between the peripheral nerve endings and the central nervous system. The most peripheral technique is topical anesthesia to the skin or other body surface. Small and large peripheral nerves can be anesthetized individually (peripheral nerve block) or in anatomic nerve bundles (plexus anesthesia). Spinal anesthesia and epidural anesthesia act closest to the central nervous system. Injection of LAs is often painful. A number of methods can be used to decrease this pain, including buffering of the solution with bicarbonate and warming. Clinical techniques include:
Surface anesthesia is the application of an LA spray, solution, or cream to the skin or a mucous membrane; the effect is short-lasting and is limited to the area of contact.
Infiltration anesthesia is infiltration of LA into the tissue to be anesthetized; surface and infiltration anesthesia are collectively termed topical anesthesia.
Field block is subcutaneous injection of an LA in an area bordering on the field to be anesthetized.
Peripheral nerve block is injection of LA in the vicinity of a peripheral nerve to anesthetize that nerve's area of innervation.
Plexus anesthesia is injection of LA in the vicinity of a nerve plexus, often inside a tissue compartment that limits the diffusion of the drug away from the intended site of action. The anesthetic effect extends to the innervation areas of several or all nerves stemming from the plexus.
Epidural anesthesia is an LA injected into the epidural space, where it acts primarily on the spinal nerve roots; depending on the site of injection and the volume injected, the anesthetized area varies from limited areas of the abdomen or chest to large regions of the body.
Spinal anesthesia is an LA injected into the cerebrospinal fluid, usually at the lumbar spine (in the lower back), where it acts on spinal nerve roots and part of the spinal cord; the resulting anesthesia usually extends from the legs to the abdomen or chest.
Intravenous regional anesthesia (Bier's block) is when blood circulation of a limb is interrupted using a tourniquet (a device similar to a blood-pressure cuff), then a large volume of LA is injected into a peripheral vein. The drug fills the limb's venous system and diffuses into tissues, where peripheral nerves and nerve endings are anesthetized. The anesthetic effect is limited to the area that is excluded from blood circulation and resolves quickly once circulation is restored.
Local anesthesia of body cavities includes intrapleural anesthesia and intra-articular anesthesia.
Transincision (or transwound) catheter anesthesia uses a multilumen catheter inserted through an incision or wound and aligned across it on the inside as the incision or wound is closed, providing continuous administration of local anesthetic along the incision or wound. Dental-specific techniques include:

Vazirani–Akinosi technique
The Vazirani–Akinosi technique is also known as the closed-mouth mandibular nerve block. It is mostly used in patients who have limited opening of the mandible or in those who have trismus (spasm of the muscles of mastication). The nerves anesthetised in this technique are the inferior alveolar, incisive, mental, lingual and mylohyoid nerves. Dental needles are available in two lengths, short and long. As Vazirani–Akinosi is a local anesthetic technique which requires penetration of a significant thickness of soft tissues, a long needle is used. The needle is inserted into the soft tissue which covers the medial border of the mandibular ramus, in the region of the inferior alveolar, lingual and mylohyoid nerves. The positioning of the bevel of the needle is very important: it must be positioned away from the bone of the mandibular ramus and instead towards the midline.

Intraligamentary infiltration
Intraligamentary infiltration, also known as periodontal ligament injection or intraligamentary injection (ILI), is known as "the most universal of the supplemental injections". ILIs are usually administered when inferior alveolar nerve block techniques are inadequate or ineffective. ILIs are indicated for:
Single-tooth anesthesia
Low anesthetic dose
Contraindication for systemic anesthesia
Presence of systemic health problems
ILI utilization is expected to increase because dental patients prefer less soft-tissue anesthesia and dentists aim to reduce administration of the traditional inferior alveolar nerve block (INAB) for routine restorative procedures.
Injection methodology: The periodontal ligament space provides an accessible route to the cancellous alveolar bone, and the anesthetic reaches the pulpal nerve via natural perforations of the intraoral bone tissue.
Advantages of ILI over INAB: rapid onset (within 30 seconds); small dosage required (0.2–1.0 mL); limited area of numbness; lower intrinsic risks, such as neuropathy, hematoma, trismus/jaw sprain, and self-inflicted periodontal tissue injury; and decreased cardiovascular disturbances. Its usage as a secondary or supplementary anesthesia on the mandible has a reported success rate of above 90%.
Disadvantages: risk of temporary periodontal tissue damage; likelihood of bacteremia and endocarditis in at-risk populations; the need for appropriate pressure and correct needle placement for anesthetic success; a short duration of pulpal anesthesia, which limits the use of ILIs for restorative procedures that require a longer duration; postoperative discomfort; and injury to unerupted teeth, such as enamel hypoplasia and defects.
Technique description:
All plaque and calculus should be removed, optimally before the operative visit, to assist gingival tissue healing.
Before injection, disinfect the gingival sulcus with 0.2% chlorhexidine solution.
Administration of soft-tissue anesthesia is recommended prior to ILI administration; this helps to enhance patient comfort.
Needles of 27-gauge short or 30-gauge ultra-short size are usually utilized.
The needle is inserted along the long axis, at a 30-degree angle, of the mesial or distal root for single-rooted teeth, and on the mesial and distal roots of multi-rooted teeth. Bevel orientation toward the root allows easier advancement of the needle apically. When the needle reaches the space between the root and crestal bone, significant resistance is experienced.
Anesthetic deposition is recommended at 0.2 mL per root or site, over a minimum of 20 seconds. For the injection to succeed, the anesthetic must be administered under pressure, and it must not leak out of the sulcus into the mouth.
Wait a minimum of 10–15 seconds before withdrawing the needle, to permit complete deposition of the solution. This can be slower than for other injections, as there is pressure build-up from the anesthetic administration.
Blanching of the tissue is observed and may be more evident when vasoconstrictors are used. It is caused by a temporary obstruction of blood flow to the tissue.
Syringes: Standard syringes can be used. The intraligamentary syringe offers a mechanical advantage by using a trigger-grasp or click apparatus to employ a gear or lever that improves control and delivers increased force to push the anesthetic cartridge's rubber stopper forward, depositing the medication with greater ease. C-CLADs (computer-controlled local anesthetic delivery devices) can also be used. Their computer microprocessors allow control of fluid dynamics and anesthetic deposition, minimizing subjective flow rates and variability in pressure. This results in enhanced hydrodynamic diffusion of the solution into bone or the target area of deposition, permitting larger amounts of anesthetic solution to be delivered during ILIs without increased tissue damage.
Things to note: ILIs are not recommended for patients with active periodontal inflammation, and they should not be administered at tooth sites with 5 mm or more of periodontal attachment loss.

Gow-Gates technique
The Gow-Gates technique is used to provide anesthesia to the mandible of the patient's mouth. With the aid of extraoral and intraoral landmarks, the needle is injected into the intraoral latero-anterior surface of the condyle, staying clear below the insertion of the lateral pterygoid muscle. The extraoral landmarks used for this technique are the lower border of the ear tragus, the corners of the mouth, and the angulation of the tragus on the side of the face. Biophysical forces (pulsation of the maxillary artery, muscular function of jaw movement) and gravity aid the diffusion of anesthetic to fill the whole pterygomandibular space. All three oral sensory parts of the mandibular branch of the trigeminal nerve, as well as other sensory nerves in the region, come into contact with the anesthetic, and this reduces the need to anesthetise supplementary innervation. In comparison with other regional block methods of anesthetising the lower jaw, the Gow-Gates technique has a higher success rate in fully anesthetising the lower jaw. One study found that of 1,200 patients receiving injections via the Gow-Gates technique, only 2 did not obtain complete anesthesia.

Types
Local anesthetic solutions for injection typically consist of:
The local anesthetic agent itself
A vehicle, which is usually water-based or just sterile water
Possibly a vasoconstrictor (see below)
A reducing agent (antioxidant): e.g., if epinephrine is used, then sodium metabisulfite is used as a reducing agent
A preservative, e.g.
methylparaben
A buffer
Esters are prone to producing allergic reactions, which may necessitate the use of an amide. The name of each clinical local anesthetic has the suffix "-caine". Most ester LAs are metabolized by pseudocholinesterase, while amide LAs are metabolized in the liver. This can be a factor in choosing an agent in patients with liver failure, although, since cholinesterases are produced in the liver, physiologically (e.g. very young or very old individuals) or pathologically (e.g. cirrhosis) impaired hepatic metabolism is also a consideration when using esters.
Sometimes, LAs are combined, e.g.:
Lidocaine/prilocaine (EMLA, eutectic mixture of local anesthetics)
Lidocaine/tetracaine (Rapydan)
TAC
LA solutions for injection are sometimes mixed with vasoconstrictors (a combination drug) to increase the duration of local anesthesia by constricting the blood vessels, thereby safely concentrating the anesthetic agent for an extended duration, as well as reducing hemorrhage. Because the vasoconstrictor temporarily reduces the rate at which the systemic circulation removes the local anesthetic from the area of the injection, the maximum doses of LAs when combined with a vasoconstrictor are higher than for the same LA without any vasoconstrictor. Occasionally, cocaine is administered for this purpose. Examples include:
Prilocaine hydrochloride and epinephrine (trade name Citanest Forte)
Lidocaine, bupivacaine, and epinephrine (recommended final concentrations of 0.5%, 0.25%, and 0.5%, respectively)
Iontocaine, consisting of lidocaine and epinephrine
Septocaine (trade name Septodont), a combination of articaine and epinephrine
One combination product of this type is used topically for surface anaesthesia, TAC (5–12% tetracaine, 1/2000 (0.05%, 500 ppm, 0.5 per mille) adrenaline, and 4 or 10% cocaine).
Using LA with a vasoconstrictor is safe in regions supplied by end arteries. The commonly held belief that LA with a vasoconstrictor causes necrosis in extremities such as the nose, ears, fingers, and toes (due to constriction of end arteries) is not supported by the evidence: no case of such necrosis has been reported since the introduction of commercial lidocaine with epinephrine in 1948.
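The dilution notations above convert straightforwardly between one another; a quick sketch of the arithmetic (generic unit conversion only, not specific to any product; the mg/mL line assumes the usual 1 g per N mL convention for 1:N solutions):

    def ratio_to_units(ratio_denominator):
        """Convert a 1:N dilution (e.g. 1:2000 adrenaline) to other common units."""
        fraction = 1.0 / ratio_denominator
        return {
            "percent": fraction * 100,       # 1:2000 -> 0.05 %
            "per_mille": fraction * 1000,    # 1:2000 -> 0.5 per mille
            "ppm": fraction * 1_000_000,     # 1:2000 -> 500 ppm
            "mg_per_ml": fraction * 1000,    # 1 g per 2000 mL -> 0.5 mg/mL
        }

    print(ratio_to_units(2000))
    # {'percent': 0.05, 'per_mille': 0.5, 'ppm': 500.0, 'mg_per_ml': 0.5}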
Ester group
Benzocaine
Chloroprocaine
Cocaine
Cyclomethycaine
Dimethocaine (Larocaine)
Piperocaine
Propoxycaine
Procaine (Novocaine)
Proparacaine
Tetracaine (Amethocaine)

Amide group
Articaine
Bupivacaine
Cinchocaine (Dibucaine)
Etidocaine
Levobupivacaine
Lidocaine (Lignocaine)
Mepivacaine
Prilocaine
Ropivacaine
Trimecaine

Naturally derived
Saxitoxin
Neosaxitoxin
Tetrodotoxin
Menthol
Eugenol
Cocaine
Spilanthol
Most naturally occurring local anesthetics, with the exceptions of menthol, eugenol, and cocaine, are neurotoxins and have the suffix "-toxin" in their names. Cocaine binds to the intracellular side of the channels, while saxitoxin, neosaxitoxin, and tetrodotoxin bind to the extracellular side of sodium channels.

History
In Peru, the ancient Incas are believed to have used the leaves of the coca plant as a local anesthetic, in addition to exploiting its stimulant properties. Coca was also used for slave payment and is thought to have played a role in the subsequent destruction of Inca culture when the Spaniards realized the effects of chewing the coca leaves and took advantage of them. Cocaine was first used as a local anesthetic in 1884. The search for a less toxic and less addictive substitute led to the development of the aminoester local anesthetics stovaine in 1903 and procaine in 1904. Since then, several synthetic local anesthetic drugs have been developed and put into clinical use, notably lidocaine in 1943, bupivacaine in 1957, and prilocaine in 1959. The introduction of local anaesthesia into clinical use is credited to the Vienna School, which included Sigmund Freud (1856–1939), Carl Koller (1857–1944) and Leopold Konigstein (1850–1942). They introduced local anaesthesia, using cocaine, through self-experimentation on their oral mucosa before introducing it to animal or human experimentation. The Vienna School first used cocaine as a local anaesthetic in ophthalmology, and it was soon incorporated into ophthalmologic practice. In 1885, Dr. Halsted and Dr. Hall, in the United States, described an intraoral anesthetic technique of blocking the inferior alveolar nerve and the antero-superior dental nerve using 4% cocaine. Shortly after the first use of cocaine for topical anesthesia, blocks on peripheral nerves were described. Brachial plexus anesthesia by percutaneous injection through axillary and supraclavicular approaches was developed in the early 20th century. The search for the most effective and least traumatic approach for plexus anesthesia and peripheral nerve blocks continues to this day. In recent decades, continuous regional anesthesia using catheters and automatic pumps has evolved as a method of pain therapy. Intravenous regional anesthesia was first described by August Bier in 1908. This technique is still in use and is remarkably safe when drugs of low systemic toxicity such as prilocaine are used. Spinal anesthesia was first used in 1885, but not introduced into clinical practice until 1899, when August Bier subjected himself to a clinical experiment in which he observed the anesthetic effect, but also the typical side effect of post-puncture headache. Within a few years, spinal anesthesia became widely used for surgical anesthesia and was accepted as a safe and effective technique. Although atraumatic (noncutting-tip) cannulae and modern drugs are used today, the technique has otherwise changed very little over many decades. Epidural anesthesia by a caudal approach had been known in the early 20th century, but a well-defined technique using lumbar injection was not developed until 1921, when Fidel Pagés published his article "Anestesia Metamérica". This technique was popularized in the 1930s and 1940s by Achille Mario Dogliotti. With the advent of thin, flexible catheters, continuous infusion and repeated injections have become possible, making epidural anesthesia still a highly successful technique. Besides its many uses in surgery, epidural anesthesia is particularly popular in obstetrics for the treatment of labor pain.
Biology and health sciences
Anesthetics
Health
175859
https://en.wikipedia.org/wiki/Plasma%20display
Plasma display
A plasma display panel is a type of flat-panel display that uses small cells containing plasma: ionized gas that responds to electric fields. Plasma televisions were the first large (over 32 inches/81 cm diagonal) flat-panel displays to be released to the public. Until about 2007, plasma displays were commonly used in large televisions. By 2013, they had lost nearly all market share due to competition from low-cost liquid crystal displays (LCDs). Manufacturing of plasma displays for the United States retail market ended in 2014, and manufacturing for the Chinese market ended in 2016. Plasma displays are obsolete, having been superseded in most if not all respects by OLED displays. Competing display technologies include the cathode-ray tube (CRT), organic light-emitting diode (OLED), CRT projectors, AMLCD, digital light processing (DLP), SED-tv, LED display, field emission display (FED), and quantum dot display (QLED).

History
Early development
Kálmán Tihanyi, a Hungarian engineer, described a proposed flat-panel plasma display system in a 1936 paper. The first practical plasma video display was co-invented in 1964 at the University of Illinois at Urbana–Champaign by Donald Bitzer, H. Gene Slottow, and graduate student Robert Willson for the PLATO computer system. The goal was to create a display with inherent memory, to reduce the cost of the terminals. The original neon-orange monochrome Digivue display panels built by glass producer Owens-Illinois were very popular in the early 1970s because they were rugged and needed neither memory nor circuitry to refresh the images. A long period of sales decline followed in the late 1970s because semiconductor memory made CRT displays cheaper than the US$2,500 PLATO plasma displays. Nevertheless, the plasma displays' relatively large screen size and 1-inch (25.4 mm) thickness made them suitable for high-profile placement in lobbies and stock exchanges. Burroughs Corporation, a maker of adding machines and computers, developed the Panaplex display in the early 1970s. The Panaplex display, generically referred to as a gas-discharge or gas-plasma display, uses the same technology as later plasma video displays, but began life as a seven-segment display for use in adding machines. Panaplex displays became popular for their bright orange luminous look and found nearly ubiquitous use throughout the late 1970s and into the 1990s in cash registers, calculators, pinball machines, aircraft avionics such as radios, navigational instruments, and stormscopes; in test equipment such as frequency counters and multimeters; and generally in anything that had previously used nixie tube or numitron displays with a high digit count. These displays were eventually replaced by LEDs because of the LEDs' low current draw and module flexibility, but plasma segment displays are still found in some applications where high brightness is desired, such as pinball machines and avionics.

1980s
In 1983, IBM introduced an orange-on-black monochrome display (Model 3290 Information Panel) which was able to show up to four simultaneous IBM 3270 terminal sessions. By the end of the decade, orange monochrome plasma displays were used in a number of high-end AC-powered portable computers, such as the Ericsson Portable PC (the first such use, in 1985), the Compaq Portable 386 (1987) and the IBM P75 (1990). Plasma displays had a better contrast ratio, a better viewing angle, and less motion blur than the LCDs available at the time, and were used until the introduction of active-matrix color LCD displays in 1992.
Due to heavy competition from monochrome LCDs used in laptops and the high costs of plasma display technology, in 1987 IBM planned to shut down its factory in Kingston, New York, the largest plasma plant in the world, in favor of manufacturing mainframe computers, which would have left development to Japanese companies. Dr. Larry F. Weber, a University of Illinois ECE PhD (in plasma display research) and staff scientist working at CERL (home of the PLATO system), co-founded Plasmaco with Stephen Globus and IBM plant manager James Kehoe, and bought the plant from IBM for US$50,000. Weber stayed in Urbana as CTO until 1990, then moved to upstate New York to work at Plasmaco.

1990s
In 1992, Fujitsu introduced the world's first full-color plasma display. It was based on technology created at the University of Illinois at Urbana–Champaign and NHK Science & Technology Research Laboratories. In 1994, Weber demonstrated a color plasma display at an industry convention in San Jose. Panasonic Corporation began a joint development project with Plasmaco, which led in 1996 to the purchase of Plasmaco, its color AC technology, and its American factory for US$26 million. In 1995, Fujitsu introduced the first 42-inch (107 cm) plasma display panel; it had 852×480 resolution and was progressively scanned. Two years later, Philips introduced at CES and CeBIT the first large commercially available flat-panel TV, using the Fujitsu panels. Philips planned to sell it for 70,000 French francs. It was released as the Philips 42PW9962 and was available at four Sears locations in the US for $14,999, including in-home installation. Pioneer and Fujitsu also began selling plasma televisions that year, and other manufacturers followed. By the year 2000, prices had dropped to $10,000.

2000s
In the year 2000, the first 60-inch (152 cm) plasma display was developed by Plasmaco. Panasonic was also reported to have developed a process to make plasma displays using ordinary window glass instead of the much more expensive "high strain point" glass. High-strain-point glass is made similarly to conventional float glass, but it is more heat-resistant, deforming only at higher temperatures. High-strain-point glass is normally necessary because plasma displays have to be baked during manufacture to dry the rare-earth phosphors after they are applied to the display. However, high-strain-point glass may be less scratch-resistant. Until the early 2000s, plasma displays were the most popular choice for HDTV flat-panel displays, as they had many benefits over LCDs. Beyond plasma's deeper blacks, increased contrast, faster response time, greater color spectrum, and wider viewing angle, plasma panels were also made much bigger than LCDs, and it was believed that LCDs were suited only to smaller-sized televisions. Plasma had overtaken rear-projection systems in 2005. However, improvements in LCD fabrication narrowed the technological gap. The increased size, lower weight, falling prices, and often lower electrical power consumption of LCDs made them competitive with plasma television sets. In 2006, LCD prices started to fall rapidly and their screen sizes increased, although plasma televisions maintained a slight edge in picture quality and a price advantage for sets at the critical 42-inch size and larger. By late 2006, several vendors were offering 42-inch LCDs, albeit at a premium price, encroaching upon plasma's only stronghold. More decisively, LCDs offered higher resolutions and true 1080p support, while plasmas were stuck at 720p, which made up for the price difference.
In late 2006, analysts noted that LCDs had overtaken plasmas, particularly in the large-screen segment where plasma had previously gained market share. Another industry trend was the consolidation of plasma display manufacturers, with around 50 brands available but only five manufacturers. In the first quarter of 2008, a comparison of worldwide TV sales broke down to 22.1 million for direct-view CRT, 21.1 million for LCD, 2.8 million for plasma, and 0.1 million for rear projection. When the sales figures for the 2007 Christmas season were finally tallied, analysts were surprised to find that LCD had outsold not only plasma but also CRTs during the same period. This development drove competing large-screen systems from the market almost overnight. The February 2009 announcement that Pioneer Electronics was ending production of plasma screens was widely considered a tipping point in the technology's history as well. Screen sizes have increased since the introduction of plasma displays. The largest plasma video display in the world at the 2008 Consumer Electronics Show in Las Vegas, Nevada, was a 150-inch unit manufactured by Matsushita Electric Industrial (Panasonic).

2010s
At the 2010 Consumer Electronics Show in Las Vegas, Panasonic introduced their 152-inch 2160p 3D plasma. In 2010, Panasonic shipped 19.1 million plasma TV panels, and shipments of plasma TVs reached 18.2 million units globally. Since that time, shipments of plasma TVs have declined substantially. This decline has been attributed to competition from liquid crystal (LCD) televisions, whose prices fell more rapidly than those of plasma TVs. In late 2013, Panasonic announced that they would stop producing plasma TVs from March 2014 onwards. In 2014, LG and Samsung discontinued plasma TV production as well, effectively killing the technology, probably because of falling demand.

Design
A panel of a plasma display typically comprises millions of tiny compartments between two panels of glass. These compartments, or "bulbs" or "cells", hold a mixture of noble gases and a minuscule amount of another gas (e.g., mercury vapor). Just as in the fluorescent lamps over an office desk, when a high voltage is applied across the cell, the gas in the cells forms a plasma. As electrons flow through the plasma, some of them strike mercury atoms, momentarily raising the energy level of the atom until the excess energy is shed. Mercury sheds the energy as ultraviolet (UV) photons. The UV photons then strike phosphor that is painted on the inside of the cell. When a UV photon strikes a phosphor molecule, it momentarily raises the energy level of an outer-orbit electron in the phosphor molecule, moving the electron from a stable to an unstable state; the electron then sheds the excess energy as a photon at a lower energy level than UV light. The lower-energy photons are mostly in the infrared range, but about 40% are in the visible-light range, so the input energy is converted mostly to infrared light but also to visible light. The screen heats up during operation. Depending on the phosphors used, different colors of visible light can be achieved. Each pixel in a plasma display is made up of three cells comprising the primary colors of visible light. Varying the voltage of the signals to the cells thus allows different perceived colors.
The long electrodes are stripes of electrically conducting material that also lie between the glass plates, in front of and behind the cells. The "address electrodes" sit behind the cells, along the rear glass plate, and can be opaque. The transparent display electrodes are mounted in front of the cells, along the front glass plate. The electrodes are covered by an insulating protective layer, and a magnesium oxide layer may be present to protect the dielectric layer and to emit secondary electrons. Control circuitry charges the electrodes that cross paths at a cell, creating a voltage difference between front and back. Some of the atoms in the gas of a cell then lose electrons and become ionized, which creates an electrically conducting plasma of atoms, free electrons, and ions. The collisions of the flowing electrons in the plasma with the inert gas atoms lead to light emission; such light-emitting plasmas are known as glow discharges. In a monochrome plasma panel, the gas is mostly neon, and the color is the characteristic orange of a neon-filled lamp (or sign). Once a glow discharge has been initiated in a cell, it can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes, even after the ionizing voltage is removed. To erase a cell, all voltage is removed from a pair of electrodes. This type of panel has inherent memory. A small amount of nitrogen is added to the neon to increase hysteresis and thus help with the memory effect. Plasma panels may also be built without nitrogen, using xenon, neon, argon, and helium instead, with mercury used in some early displays. In color panels, the back of each cell is coated with a phosphor. The ultraviolet photons emitted by the plasma excite these phosphors, which give off visible light with colors determined by the phosphor materials. This aspect is comparable to fluorescent lamps and to the neon signs that use colored phosphors. Every pixel is made up of three separate subpixel cells, each with a different colored phosphor: one subpixel has a red-light phosphor, one has a green-light phosphor, and one has a blue-light phosphor. These colors blend together to create the overall color of the pixel, the same as in a triad of a shadow-mask CRT or a color LCD. Plasma panels use pulse-width modulation (PWM) to control brightness: by varying the pulses of current flowing through the different cells thousands of times per second, the control system can increase or decrease the intensity of each subpixel color to create billions of different combinations of red, green and blue. In this way, the control system can produce most of the visible colors.
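To make the PWM idea concrete, here is a minimal sketch (illustrative only; real plasma drivers split each frame into weighted subfields with their own addressing and sustain phases, and the 8-bit depth and function names here are assumptions):

    def subfield_weights(bits=8):
        """Binary-weighted subfield durations for one video frame: 1, 2, 4, ... units."""
        return [1 << i for i in range(bits)]

    def lit_subfields(level, bits=8):
        """Choose which subfields fire so total lit time is proportional to `level`.

        `level` is the 0-255 intensity for one subpixel; bit i of `level` decides
        whether the subfield of weight 2**i fires. The eye integrates the pulses
        into a perceived brightness.
        """
        assert 0 <= level < (1 << bits)
        return [bool((level >> i) & 1) for i in range(bits)]

    weights = subfield_weights()
    level = 180                     # desired red-subpixel intensity
    fires = lit_subfields(level)
    lit_time = sum(w for w, f in zip(weights, fires) if f)
    print(f"subfields fired: {fires}, lit time {lit_time}/255 of maximum")
    # Three 8-bit subpixels give 256**3, about 16.7 million, combinations; panels
    # using more subfields or dithering reach the billions of combinations cited above.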
Plasma displays use the same phosphors as CRTs, which accounts for the extremely accurate color reproduction when viewing television or computer video images (which use an RGB color system designed for CRT displays). To produce light, the cells need to be driven at a relatively high voltage (~300 volts), and the pressure of the gases inside the cell needs to be low (~500 torr). Plasma displays have a wide color gamut and can be produced in fairly large sizes, up to about 150 inches (3.8 m) diagonally. They had a very low-luminance "dark-room" black level compared with the lighter grey of the unilluminated parts of an LCD screen. (As plasma panels are locally lit and do not require a backlight, blacks are blacker on plasmas and greyer on LCDs.) LED-backlit LCD televisions have been developed to reduce this distinction. The display panel itself is only a few centimetres thick, generally allowing the device's total thickness (including electronics) to remain modest. Power consumption varies greatly with picture content, with bright scenes drawing significantly more power than darker ones; this is also true for CRTs as well as for modern LCDs, where the LED backlight brightness is adjusted dynamically. The plasma that illuminates the screen can reach very high temperatures. Typical power consumption is around 400 watts for a 50-inch screen. Most screens are set to "vivid" mode by default at the factory (which maximizes the brightness and raises the contrast so the image on the screen looks good under the extremely bright lights that are common in big-box stores); this draws at least twice the power (around 500–700 watts) of a "home" setting of less extreme brightness. The lifetime of the last generation of plasma displays was estimated at 100,000 hours (11 years) of actual display time, or 27 years at 10 hours per day. This is the estimated time over which maximum picture brightness degrades to half the original value. Plasma screens are made out of glass, which may result in glare on the screen from nearby light sources. Plasma display panels cannot be economically manufactured in screen sizes smaller than 32 inches (81 cm). Although a few companies have been able to make plasma enhanced-definition televisions (EDTVs) this small, even fewer have made 32-inch (81 cm) plasma HDTVs. With the trend toward large-screen television technology, the 32-inch screen size was rapidly disappearing by mid-2009. Though considered bulky and thick compared with their LCD counterparts, some sets, such as Panasonic's Z1 and Samsung's B860 series, are slim enough to be comparable to LCDs in this respect. Plasma displays are generally heavier than LCDs and may require more careful handling, such as being kept upright. Plasma displays also use more electrical power, on average, than an LCD TV using an LED backlight; older CCFL backlights for LCD panels used quite a bit more power, and older plasma TVs used quite a bit more power than recent models. Plasma displays do not work as well at high altitudes, due to the pressure differential between the gases inside the screen and the air pressure at altitude, which may cause a buzzing noise; manufacturers rate their screens to indicate the altitude parameters. For those who wish to listen to AM radio, or who are amateur radio operators (hams) or shortwave listeners (SWLs), the radio frequency interference (RFI) from these devices can be irritating or disabling. In their heyday, plasma displays were less expensive for the buyer per square inch than LCDs, particularly when considering equivalent performance. Plasma displays have wider viewing angles than LCDs; images do not suffer from degradation at off-axis viewing angles as they do on LCDs. LCDs using IPS technology have the widest angles among LCDs, but they do not equal the range of plasma, primarily due to "IPS glow", a generally whitish haze that appears due to the nature of the IPS pixel design. Plasma displays also have less visible motion blur, thanks in large part to very high refresh rates and faster response times, contributing to superior performance when displaying content with significant amounts of rapid motion, such as auto racing, hockey, and baseball. Plasma displays have superior uniformity to LCD panel backlights, which nearly always produce uneven brightness levels, although this is not always noticeable.
High-end computer monitors have technologies to try to compensate for the uniformity problem.

Contrast ratio
Contrast ratio is the difference between the brightest and darkest parts of an image, measured in discrete steps, at any given moment. Generally, the higher the contrast ratio, the more realistic the image is (though the "realism" of an image depends on many factors, including color accuracy, luminance linearity, and spatial linearity). Contrast ratios for plasma displays are often advertised as high as 5,000,000:1. On the surface, this is a significant advantage of plasma over most other display technologies, a notable exception being organic light-emitting diode. Although there are no industry-wide guidelines for reporting contrast ratio, most manufacturers follow either the ANSI standard or perform a full-on-full-off test. The ANSI standard uses a checkered test pattern whereby the darkest blacks and the lightest whites are measured simultaneously, yielding the most accurate "real-world" ratings. In contrast, a full-on-full-off test measures the ratio using a pure black screen and a pure white screen, which gives higher values but does not represent a typical viewing scenario. Some displays, using many different technologies, have some "leakage" of light, through either optical or electronic means, from lit pixels to adjacent pixels, so that dark pixels near bright ones appear less dark than they do during a full-off display. Manufacturers can further artificially improve the reported contrast ratio by increasing the contrast and brightness settings to achieve the highest test values. However, a contrast ratio generated by this method is misleading, as content would be essentially unwatchable at such settings. Each cell on a plasma display must be precharged before it is lit, otherwise the cell would not respond quickly enough. Precharging normally increases power consumption, so energy-recovery mechanisms may be in place to avoid an increase in power consumption. This precharging means the cells cannot achieve a true black, whereas an LED-backlit LCD panel can actually turn off parts of the backlight, in "spots" or "patches" (this technique, however, does not prevent the accumulated passive light of adjacent lamps, and reflections within the panel, from brightening nominally black areas). Some manufacturers have reduced the precharge and the associated background glow to the point where black levels on modern plasmas are starting to come close to those of some high-end CRTs that Sony and Mitsubishi produced ten years before the comparable plasma displays. With an LCD, black pixels are generated by a light-polarization method, and many panels are unable to completely block the underlying backlight. More recent LCD panels using LED illumination can automatically reduce the backlighting on darker scenes, though this method cannot be used in high-contrast scenes, leaving some light showing from black parts of an image with bright parts, such as (at the extreme) a solid black screen with one fine intense bright line. This is called a "halo" effect, which has been minimized on newer LED-backlit LCDs with local dimming. Edge-lit models cannot compete with this, as their light is distributed behind the panel via a light guide. Plasma displays are capable of producing deeper blacks than LCDs, allowing for a superior contrast ratio.
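As a rough sketch of how the two reporting methods described above can diverge (the luminance numbers are made up for illustration, not measurements of any particular panel):

    import statistics

    def full_on_full_off_ratio(white_nits, black_nits):
        """Ratio of a pure white screen's luminance to a pure black screen's."""
        return white_nits / black_nits

    def ansi_ratio(white_patches, black_patches):
        """ANSI method: average the white and black patches of a checkerboard shown
        simultaneously, so light leakage into the dark patches is captured."""
        return statistics.mean(white_patches) / statistics.mean(black_patches)

    # Hypothetical panel: true black of 0.01 nits when fully off, but leakage
    # raises the black patches to ~0.5 nits when white patches are lit beside them.
    print(full_on_full_off_ratio(500.0, 0.01))   # 50000.0  (the advertised figure)
    print(ansi_ratio([480.0] * 8, [0.5] * 8))    # 960.0    (the "real-world" figure)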
Earlier-generation displays (circa 2006 and prior) had phosphors that lost luminosity over time, resulting in a gradual decline of absolute image brightness. Newer models have advertised lifespans exceeding 100,000 hours (11 years), far longer than older CRTs.

Screen burn-in
Image burn-in occurs on CRTs and plasma panels when the same picture is displayed for long periods. This causes the phosphors to overheat, losing some of their luminosity and producing a "shadow" image that is visible with the power off. Burn-in is especially a problem on plasma panels because they run hotter than CRTs. Early plasma televisions were plagued by burn-in, making them poorly suited to video games or anything else that displayed static images. Plasma displays also exhibit another image-retention issue which is sometimes confused with screen burn-in damage. In this mode, when a group of pixels is run at high brightness (when displaying white, for example) for an extended period, a charge build-up in the pixel structure occurs and a ghost image can be seen. However, unlike burn-in, this charge build-up is transient and self-corrects after the image condition that caused the effect has been removed and a long enough period has passed (with the display either off or on). Plasma manufacturers have tried various ways of reducing burn-in, such as using gray pillarboxes, pixel orbiters, and image-washing routines. Recent models have a pixel orbiter that moves the entire picture more slowly than is noticeable to the human eye, which reduces the effect of burn-in but does not prevent it. None to date has eliminated the problem, and all plasma manufacturers continue to exclude burn-in from their warranties.

Screen resolution
Fixed-pixel displays such as plasma TVs scale the video image of each incoming signal to the native resolution of the display panel. The most common native resolutions for plasma display panels are 852×480 (EDTV), 1366×768, and 1920×1080 (HDTV). As a result, picture quality varies depending on the performance of the video scaling processor and the upscaling and downscaling algorithms used by each display manufacturer. Early plasma televisions were enhanced-definition (ED), with a native resolution of 840×480 (discontinued) or 852×480, and down-scaled their incoming high-definition video signals to match their native display resolutions. The following ED resolutions were common prior to the introduction of HD displays, but have long been phased out in favor of HD displays, in part because the overall pixel count of ED displays is lower than the pixel count of SD PAL displays (852×480 vs 720×576, respectively):
840×480p
852×480p
Early high-definition (HD) plasma displays had a resolution of 1024×1024 and were alternate-lighting-of-surfaces (ALiS) panels made by Fujitsu and Hitachi. These were interlaced displays with non-square pixels. Later HDTV plasma televisions usually have a resolution of 1024×768, found on many 42-inch (107 cm) plasma screens; 1280×768 or 1366×768, found on 50-, 60-, and 65-inch plasma screens; or 1920×1080, found on plasma screen sizes from 42 to 103 inches (107–262 cm). These displays are usually progressive displays with non-square pixels, and will up-scale and de-interlace their incoming standard-definition signals to match their native display resolutions. 1024×768 resolution requires that 720p content be downscaled in one direction and upscaled in the other.
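The mixed up/down scaling on a 1024×768 panel follows directly from the per-axis scale factors; a quick sketch (simple arithmetic, nothing panel-specific assumed):

    w_in, h_in = 1280, 720      # 720p source frame
    w_out, h_out = 1024, 768    # native panel resolution

    sx = w_out / w_in           # 0.800 -> horizontal downscale
    sy = h_out / h_in           # 1.067 -> vertical upscale
    print(f"horizontal scale {sx:.3f}, vertical scale {sy:.3f}")
    # To keep the 16:9 picture shape, the pixel aspect ratio must absorb the
    # ratio sy/sx = 4/3, which is why these panels use non-square pixels.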
Notable manufacturers
Fujitsu (only produced panels)
Chunghwa Picture Tubes (only produced panels)
Formosa Plastics (only produced panels)
Hitachi (produced panels)
LG (produced panels)
Panasonic Viera (produced panels)
Pioneer (produced panels)
Samsung (produced panels)
Toshiba (produced panels)

Environmental impact
Plasma screens use significantly more energy than CRT and LCD screens.
Technology
Media and communication: Basics
null
175875
https://en.wikipedia.org/wiki/Critical%20mass
Critical mass
In nuclear engineering, a critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, its nuclear fission cross-section), density, shape, enrichment, purity, temperature, and surroundings. The concept is important in nuclear weapon design.

Point of criticality
When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state, in which there is no increase or decrease in power, temperature, or neutron population. A numerical measure of criticality is the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material.
A subcritical mass is a mass that does not have the ability to sustain a fission chain reaction. A population of neutrons introduced to a subcritical assembly will exponentially decrease. In this case, known as subcriticality, k < 1.
A critical mass is a mass of fissile material that self-sustains a fission chain reaction. In this case, known as criticality, k = 1. A steady rate of spontaneous fission causes a proportionally steady level of neutron activity.
A supercritical mass is a mass in which, once fission has started, it will proceed at an increasing rate. In this case, known as supercriticality, k > 1. The constant of proportionality increases as k increases. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature/power level or destroy itself. (A toy numerical illustration of these three regimes appears below.)
Due to spontaneous fission, a supercritical mass will undergo a chain reaction. For example, a spherical critical mass of pure uranium-235 (235U), about 52 kilograms, would experience around 15 spontaneous fission events per second. The probability that one such event will cause a chain reaction depends on how much the mass exceeds the critical mass. If uranium-238 (238U) is present, the rate of spontaneous fission will be much higher. Fission can also be initiated by neutrons produced by cosmic rays.
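The toy illustration promised above (a minimal sketch: it counts neutrons generation by generation with a fixed k, ignoring delayed neutrons, geometry, and feedback effects; the starting population and generation count are arbitrary):

    def neutron_population(k, n0, generations):
        """Neutron count per generation when each neutron yields k successors on average."""
        pops = [n0]
        for _ in range(generations):
            pops.append(pops[-1] * k)
        return pops

    for k in (0.95, 1.00, 1.05):   # subcritical, critical, supercritical
        final = neutron_population(k, n0=1000.0, generations=100)[-1]
        print(f"k = {k}: {final:.3g} neutrons after 100 generations")
    # k = 0.95 -> ~5.9 neutrons (dying out); k = 1.00 -> steady 1000;
    # k = 1.05 -> ~1.3e5 neutrons (exponential growth).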
Changing the point of criticality
The mass at which criticality occurs may be changed by modifying certain attributes, such as fuel, shape, temperature, density, and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. The following examples outline only the simplest ideal cases.

Varying the amount of fuel
It is possible for a fuel assembly to be critical at near-zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for only one neutron generation (fuel consumption then makes the assembly subcritical again). Similarly, if the perfect quantity of fuel were added to a slightly subcritical mass to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example: 1 K above the ambient temperature) and then decrease back to the ambient temperature after a period of time, because fuel consumed during fission brings the assembly back to subcriticality once again.

Changing the shape
A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical. Conversely, changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical.

Changing the temperature
A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As fuel temperature increases, neutrons of a given energy appear faster and thus fission/absorption is less likely. This is not unrelated to Doppler broadening of the 238U resonances but is common to all fuels, absorbers, and configurations. Neglecting the very important resonances, the total neutron cross-section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over/under moderation in LWRs is a different topic). Thermal expansion associated with temperature increase also contributes a negative coefficient of reactivity, since fuel atoms move farther apart. A mass that is exactly critical at room temperature would be sub-critical in an environment anywhere above room temperature due to thermal expansion alone.

Varying the density of the mass
The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension or by changing the crystal structure (see allotropes of plutonium). An ideal mass will become subcritical if allowed to expand; conversely, the same mass will become supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched uranium-235 at room temperature, for example), at an exactly critical state it will become subcritical if warmed to lower density, or supercritical if cooled to higher density. Such a material is said to have a negative temperature coefficient of reactivity, indicating that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases.

Use of a neutron reflector
Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. This reduces the number of neutrons which escape the fissile material, resulting in increased reactivity.

Use of a tamper
In a bomb, a dense shell of material surrounding the fissile core will contain, via inertia, the expanding fissioning material, which increases the efficiency. This is known as a tamper. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection from light elements, as in a reactor), the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high-energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration.

Critical size
The critical size is the minimum size of a nuclear reactor core or nuclear weapon that can be made for a specific geometrical arrangement and material composition.
The critical size must at least include enough fissionable material to reach critical mass. If the size of the reactor core is less than a certain minimum, too many fission neutrons escape through its surface and the chain reaction is not sustained. Critical mass of a bare sphere The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density have been published for a number of actinides. Most information on bare sphere masses is considered classified, since it is critical to nuclear weapons design, but some documents have been declassified. The critical mass for lower-grade uranium depends strongly on the grade, rising steeply as the enrichment falls from 45% to 19.75% to 15% 235U. In all of these cases, however, the use of a neutron reflector such as beryllium can substantially reduce the required mass. The critical mass is inversely proportional to the square of the density. If the density is 1% more and the mass 2% less, then the volume is 3% less and the diameter 1% less. The probability for a neutron per cm travelled to hit a nucleus is proportional to the density. It follows that 1% greater density means that the distance travelled before leaving the system is 1% less. This is something that must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities. Note that not all neutrons contribute to the chain reaction. Some escape and others undergo radiative capture. Let q denote the probability that a given neutron induces fission in a nucleus. Consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Then, criticality occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q. Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is $\ell = (n\sigma)^{-1}$, where n is the nuclear number density. Most interactions are scattering events, so that a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps: $R_c \approx \ell\sqrt{s+1} = \frac{\sqrt{s+1}}{n\sigma}$. Note again, however, that this is only a rough estimate. In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to $1 = f\,\frac{\sigma}{m\sqrt{s+1}}\,\rho^{2/3} M^{1/3}$, which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density. Alternatively, one may restate this more succinctly in terms of the areal density of mass, $\Sigma = \rho R_c$: $1 = f'\,\frac{\sigma\,\Sigma}{m\sqrt{s+1}}$, where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ.
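The random-walk estimate above is straightforward to evaluate numerically. The following sketch applies it to illustrative inputs; the cross section, the scatters-per-fission count, and the density are assumed round numbers chosen only to show the order of magnitude and the 1/ρ² scaling, not evaluated nuclear data.

# Order-of-magnitude evaluation of the bare-sphere estimate derived above.
# All numerical inputs are illustrative assumptions, not evaluated nuclear data.
import math

AVOGADRO = 6.022e23  # atoms per mole

def critical_estimate(atomic_mass_g, density_g_cm3, sigma_barns, s):
    """Rough critical radius (cm) and mass (kg) from R_c ~ mean free path * sqrt(s + 1)."""
    n = density_g_cm3 / atomic_mass_g * AVOGADRO      # nuclei per cm^3
    mean_free_path = 1.0 / (n * sigma_barns * 1e-24)  # 1 barn = 1e-24 cm^2
    radius = mean_free_path * math.sqrt(s + 1)
    mass_kg = (4.0 / 3.0) * math.pi * radius**3 * density_g_cm3 / 1000.0
    return radius, mass_kg

# Assumed values loosely representative of 235U metal.
r, m = critical_estimate(atomic_mass_g=235.0, density_g_cm3=18.7, sigma_barns=7.0, s=5.0)
print(f"estimated critical radius ~ {r:.1f} cm, mass ~ {m:.0f} kg")

# Doubling the density cuts the estimated mass by a factor of four,
# matching the 1/density^2 scaling derived in the text.
r2, m2 = critical_estimate(235.0, 2 * 18.7, 7.0, 5.0)
print(f"at twice the density: mass ~ {m2:.0f} kg ({m / m2:.1f}x smaller)")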
For example, for a bare solid sphere of 239Pu criticality is at 320 kg/m², regardless of density, and for 235U at 550 kg/m². In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold. This is applied in implosion-type nuclear weapons, where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require. Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black, then gradually you are able to see through them without any trouble. This is not because the total scattering cross section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length L on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of L, and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger. Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of fission cross sections and (2) calculation of geometric effects. The latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam. In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. The calculation can also be performed by assuming a continuum approximation for the neutron transport, which reduces it to a diffusion problem. However, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable. Finally, note that for some idealized geometries, the critical mass might formally be infinite, and other parameters are used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value. Criticality in nuclear weapon design Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium gun-type bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size either because they are too small or unfavorably shaped. To produce detonation, the pieces of uranium are brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a 'doughnut') down a gun barrel onto another piece (a 'spike'). This design is referred to as a gun-type fission weapon. A theoretical 100% pure 239Pu weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" 239Pu is contaminated with a small amount of 240Pu, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would suffer a premature chain reaction (predetonation) before the masses of plutonium would be in a position for a full-fledged explosion to occur.
Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow. Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt critical configuration. This is known as an implosion-type weapon. Prompt criticality The event of fission must release, on the average, more than one free neutron of the desired energy level in order to sustain a chain reaction, and each must find other nuclei and cause them to fission. Most of the neutrons released from a fission event come immediately from that event, but a fraction of them come later, when the fission products decay, which may be on the average from microseconds to minutes later. This is fortunate for atomic power generation, for without this delay "going critical" would be an immediately catastrophic event, as it is in a nuclear bomb, where upwards of 80 generations of chain reaction occur in less than a microsecond, far too fast for a human, or even a machine, to react. Physicists recognize two significant points in the gradual increase of neutron flux: critical, where the chain reaction becomes self-sustaining thanks to the contributions of both kinds of neutron generation, and prompt critical, where the immediate "prompt" neutrons alone will sustain the reaction without need for the decay neutrons. Nuclear power plants operate between these two points of reactivity, while above the prompt critical point is the domain of nuclear weapons, pulsed reactor designs such as TRIGA research reactors and the pulsed nuclear thermal rocket, and some nuclear power accidents, such as the 1961 US SL-1 accident and the 1986 Soviet Chernobyl disaster.
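The quoted timescale is easy to check with a back-of-the-envelope calculation. In the sketch below, the multiplication factor of 2 per generation and the 10-nanosecond prompt-neutron generation time are illustrative assumptions; only the figure of 80 generations comes from the text.

# Why prompt supercriticality outruns any control system: compound an assumed
# multiplication factor k over 80 generations of an assumed 10 ns each.
k = 2.0              # assumed neutrons causing further fission, per fission
gen_time_s = 1e-8    # assumed prompt-neutron generation time (10 ns)
generations = 80

growth = k ** generations
elapsed_us = generations * gen_time_s * 1e6
print(f"after {generations} generations (~{elapsed_us:.1f} microseconds), "
      f"one initial neutron has become ~{growth:.2e}")
# ~1.2e24 neutrons in under a microsecond, far too fast for a human or a
# machine to react, which is why delayed neutrons matter for reactor control.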
Physical sciences
Nuclear physics
Physics
175959
https://en.wikipedia.org/wiki/Mains%20electricity
Mains electricity
Mains electricity, also known as utility power, grid power, domestic power, or wall power (or, in some parts of Canada, hydro), is a general-purpose alternating-current (AC) electric power supply. It is the form of electrical power that is delivered to homes and businesses through the electrical grid in many parts of the world. People use this electricity to power everyday items (such as domestic appliances, televisions and lamps) by plugging them into a wall outlet. The voltage and frequency of electric power differ between regions. In much of the world, a voltage (nominally) of 230 volts and frequency of 50 Hz is used. In North America, the most common combination is 120 V and a frequency of 60 Hz. Other combinations exist, for example, 230 V at 60 Hz. Travellers' portable appliances may be inoperative or damaged by foreign electrical supplies. Non-interchangeable plugs and sockets in different regions provide some protection from accidental use of appliances with incompatible voltage and frequency requirements. Terminology In the US, mains electric power is referred to by several names including "utility power", "household power", "household electricity", "house current", "powerline", "domestic power", "wall power", "line power", "wall current", "AC power", "city power", "street power", and "120 (one twenty)". In the UK, mains electric power is generally referred to as "the mains". More than half of the power generated in Canada is hydroelectricity, and mains electricity is often referred to as "hydro" in some regions of the country. This is also reflected in names of current and historical electricity utilities such as Hydro-Québec, BC Hydro, Manitoba Hydro, Hydro One (Ontario), and Newfoundland and Labrador Hydro. Power systems Worldwide, many different mains power systems are found for the operation of household and light commercial electrical appliances and lighting. The different systems are primarily characterized by their voltage; frequency; plugs and sockets (receptacles or outlets); earthing system (grounding); protection against overcurrent damage (e.g., due to short circuit), electric shock, and fire hazards; and parameter tolerances. All these parameters vary among regions. The voltages are generally in the range 100–240 V (always expressed as root-mean-square voltage). The two commonly used frequencies are 50 Hz and 60 Hz. Single-phase or three-phase power is most commonly used today, although two-phase systems were used early in the 20th century. Foreign enclaves, such as large industrial plants or overseas military bases, may have a different standard voltage or frequency from the surrounding areas. Some city areas may use standards different from those of the surrounding countryside (e.g. in Libya). Regions in an effective state of anarchy may have no central electrical authority, with electric power provided by incompatible private sources. Many other combinations of voltage and utility frequency were formerly used, with frequencies between 25 Hz and 133 Hz and voltages from 100 V to 250 V. Direct current (DC) has been displaced by alternating current (AC) in public power systems, but DC was still used in some city areas to the end of the 20th century. The modern combinations of 230 V/50 Hz and 120 V/60 Hz, listed in IEC 60038, did not apply in the first few decades of the 20th century and are still not universal.
Industrial plants with three-phase power will have different, higher voltages installed for large equipment (and different sockets and plugs), but the common voltages listed here would still be found for lighting and portable equipment. Common uses of electricity Electricity is used for lighting, heating, cooling, electric motors and electronic equipment. The US Energy Information Administration (EIA) has published a breakdown of U.S. residential sector electricity consumption by major end uses in 2021. Its footnotes note that the home entertainment category includes televisions, set-top boxes, home theatre systems, DVD players, and video game consoles; that the computing category includes desktop and laptop computers, monitors, and networking equipment; that one heating category does not include water heating; and that a miscellaneous category includes small electric devices, heating elements, exterior lights, outdoor grills, pool and spa heaters, backup electricity generators, and motors not listed elsewhere, but not electric vehicle charging. Electronic appliances such as computers or television sets typically use an AC-to-DC converter or AC adapter to power the device. This is often capable of operation with a wide range of voltage and with both common power frequencies. Other AC applications usually have much more restricted input ranges. Building wiring Portable appliances use single-phase electric power, with two or three wired contacts at each outlet. Two wires (neutral and live/active/hot) carry current to operate the device. A third wire, not always present, connects conductive parts of the appliance case to earth ground. This protects users from electric shock if live internal parts accidentally contact the case. In northern and central Europe, residential electrical supply is commonly 400 V three-phase electric power, which gives 230 V between any single phase and neutral; house wiring may be a mix of three-phase and single-phase circuits, but three-phase residential use is rare in the UK. High-power appliances such as kitchen stoves, water heaters and heavy household tools like log splitters may be supplied from the 400 V three-phase power supply. Small portable electrical equipment is connected to the power supply through flexible cables terminated in a plug, which is inserted into a fixed receptacle (socket). Larger household electrical equipment and industrial equipment may be permanently wired to the fixed wiring of the building. For example, in North American homes a window-mounted self-contained air conditioner unit would be connected to a wall plug, whereas the central air conditioning for a whole home would be permanently wired. Larger plug and socket combinations are used for industrial equipment carrying larger currents, higher voltages, or three-phase electric power. Circuit breakers and fuses are used to detect short circuits between the line and neutral or ground wires or the drawing of more current than the wires are rated to handle (overload protection) to prevent overheating and possible fire. These protective devices are usually mounted in a central panel—most commonly a distribution board or consumer unit—in a building, but some wiring systems also provide a protection device at the socket or within the plug. Residual-current devices, also known as ground-fault circuit interrupters and appliance leakage current interrupters, are used to detect ground faults—flow of current in other than the neutral and line wires (like the ground wire or a person). When a ground fault is detected, the device quickly cuts off the circuit.
Voltage levels Most of the world's population (Europe, Africa, Asia, Australia, New Zealand, and much of South America) uses a supply that is within 6% of 230 V. In the United Kingdom the nominal supply voltage is 230 V +10%/−6% to accommodate the fact that most transformers are in fact still set to 240 V. The 230 V standard has become widespread so that 230 V equipment can be used in most parts of the world with the aid of an adapter or a change to the equipment's plug to the standard for the specific country. The United States and Canada use a supply voltage of 120 volts ± 6%. Japan, Taiwan, Saudi Arabia, North America, Central America and some parts of northern South America use a voltage between 100 V and 127 V. However, most households in Japan are supplied with split-phase electric power, as in the United States, and can obtain 200 V by using the two opposite-phase legs together. Brazil is unusual in having both 127 V and 220 V systems at 60 Hz and also permitting interchangeable plugs and sockets. Saudi Arabia and Mexico have mixed voltage systems; in residential and light commercial buildings both countries use 127 volts, with 220 volts at 60 Hz in commercial and industrial applications. The Saudi government approved plans in August 2010 to transition the country to a totally 230/400-volt 60 Hz system. Measuring voltage A distinction should be made between the voltage at the point of supply (nominal voltage at the point of interconnection between the electrical utility and the user) and the voltage rating of the equipment (utilization or load voltage). Typically the utilization voltage is 3% to 5% lower than the nominal system voltage; for example, a nominal 208 V supply system will be connected to motors with "200 V" on their nameplates. This allows for the voltage drop between equipment and supply. Voltages in this article are the nominal supply voltages, and equipment used on these systems will carry slightly lower nameplate voltages. Power distribution system voltage is nearly sinusoidal in nature. Voltages are expressed as root mean square (RMS) voltage. Voltage tolerances are for steady-state operation. Momentary heavy loads, or switching operations in the power distribution network, may cause short-term deviations out of the tolerance band, and storms and other unusual conditions may cause even larger transient variations. In general, power supplies derived from large networks with many sources are more stable than those supplied to an isolated community with perhaps only a single generator. Choice of voltage The choice of supply voltage is due more to historical reasons than optimization of the electric power distribution system—once a voltage is in use and equipment using this voltage is widespread, changing voltage is a drastic and expensive measure. A 230 V distribution system will use less conductor material than a 120 V system to deliver a given amount of power because the current, and consequently the resistive loss, is lower. While large heating appliances can use smaller conductors at 230 V for the same output rating, few household appliances use anything like the full capacity of the outlet to which they are connected. Minimum wire size for hand-held or portable equipment is usually restricted by the mechanical strength of the conductors. Many areas, such as the US, which use (nominally) 120 V, make use of three-wire, split-phase 240 V systems to supply large appliances.
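The conductor-material argument can be made concrete with a small sketch. The 2 kW load and the 0.1 Ω round-trip cable resistance below are made-up illustrative values; only the nominal voltages come from the text.

# Current and resistive cable loss for the same appliance power at different
# nominal supply voltages. Load power and cable resistance are illustrative.
def line_loss(power_w, volts, cable_ohms=0.1):
    amps = power_w / volts           # current for a given power (P = V * I)
    loss_w = amps ** 2 * cable_ohms  # resistive loss grows as I^2 (P = I^2 * R)
    return amps, loss_w

for v in (120, 230, 240):
    amps, loss = line_loss(2000, v)
    print(f"{v:3d} V: {amps:5.1f} A, cable loss {loss:5.1f} W")
# Roughly halving the current (120 V -> 240 V) cuts the I^2*R loss by about a
# factor of four, which is why split-phase 240 V serves large US appliances.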
In the split-phase system, a 240 V supply has a centre-tapped neutral to give two 120 V supplies, which can also supply 240 V to loads connected between the two line wires. Three-phase systems can be connected to give various combinations of voltage, suitable for use by different classes of equipment. Where both single-phase and three-phase loads are served by an electrical system, the system may be labelled with both voltages such as 120/208 or 230/400 V, to show the line-to-neutral voltage and the line-to-line voltage. Large loads are connected for the higher voltage. Other three-phase voltages, up to 830 volts, are occasionally used for special-purpose systems such as oil well pumps. Large industrial motors (say, more than 250 hp or 150 kW) may operate on medium voltage. On 60 Hz systems a standard for medium voltage equipment is 2,400/4,160 V whereas 3,300 V is the common standard for 50 Hz systems. Standardization Until 1987, mains voltage in large parts of Europe, including Germany, Austria and Switzerland, was 220 V, while the UK used 240 V. Standard ISO IEC 60038:1983 defined the new standard European voltage to be 230 V. From 1987 onwards, a step-wise shift towards 230 V was implemented. From 2009 on, the voltage is permitted to be 230 V ±10%. No change in voltage was required by either the Central European or the UK system, as both 220 V and 240 V fall within the 230 V tolerance bands (230 V ±6%). Usually a voltage of 230 V ±3% is maintained. Some areas of the UK still have 250 volts for legacy reasons, but these also fall within the 10% tolerance band of 230 volts. In practice, this allowed countries to continue supplying the same voltage as before (220 or 240 V), at least until existing supply transformers are replaced. Equipment (with the exception of filament bulbs) used in these countries is designed to accept any voltage within the specified range. In 2000, Australia converted to 230 V as the nominal standard with a tolerance of +10%/−6%, superseding the old 240 V standard, AS 2926-1987. The tolerance was increased in 2022 to ±10% with the release of AS IEC 60038:2022. The utilization voltage available at an appliance may be below this range, due to voltage drops within the customer installation. As in the UK, 240 V is within the allowable limits and "240 volt" is a synonym for mains in Australian and British English. In the United States and Canada, national standards specify that the nominal voltage at the source should be 120 V and allow a range of 114 V to 126 V (RMS) (−5% to +5%). Historically, 110 V, 115 V and 117 V have been used at different times and places in North America. Mains power is sometimes spoken of as 110 V; however, 120 V is the nominal voltage. In Japan, the electrical power supply to households is at 100 and 200 V. Eastern and northern parts of Honshū (including Tokyo) and Hokkaidō have a frequency of 50 Hz, whereas western Honshū (including Nagoya, Osaka, and Hiroshima), Shikoku, Kyūshū and Okinawa operate at 60 Hz. The boundary between the two regions contains four back-to-back high-voltage direct-current (HVDC) substations which interconnect the power between the two grid systems; these are Shin Shinano, Sakuma Dam, Minami-Fukumitsu, and the Higashi-Shimizu Frequency Converter. To accommodate the difference, frequency-sensitive appliances marketed in Japan can often be switched between the two frequencies. History The world's first public electricity supply was a water-wheel-driven system constructed in the small English town of Godalming in 1881.
It was an alternating current (AC) system using a Siemens alternator supplying power for both street lights and consumers at two voltages, 250 V for arc lamps and 40 V for incandescent lamps. The world's first large-scale central plant—Thomas Edison's steam-powered station at Holborn Viaduct in London—started operation in January 1882, providing direct current (DC) at 110 V. The Holborn Viaduct station was used as a proof of concept for the construction of the much larger Pearl Street Station in New York, the world's first permanent commercial central power plant. The Pearl Street Station also provided DC at 110 V, considered to be a "safe" voltage for consumers, beginning 4 September 1882. AC systems started appearing in the US in the mid-1880s, using higher distribution voltage stepped down via transformers to the same 110 V customer utilization voltage that Edison used. In 1883, Edison patented a three-wire distribution system to allow DC generation plants to serve a wider radius of customers and save on copper costs. By connecting two groups of 110 V lamps in series, more load could be served by the same size conductors run with 220 V between them; a neutral conductor carried any imbalance of current between the two sub-circuits. AC circuits adopted the same form during the war of the currents, allowing lamps to be run at around 110 V and major appliances to be connected to 220 V. Nominal voltages gradually crept upward to 112 V and 115 V, or even 117 V. After World War II the standard voltage in the U.S. became 117 V, but many areas lagged behind even into the 1960s. In 1954, the American National Standards Institute (ANSI) published C84.1, "American National Standard for Electric Power Systems and Equipment – Voltage Ratings (60 Hertz)". This standard established a 120-volt nominal system and two ranges for service voltage and utilization voltage variations. Today, virtually all American homes and businesses have access to 120 and 240 V at 60 Hz. Both voltages are available on the three wires (two "hot" legs of opposite phase and one "neutral" leg). In 1899, the Berliner Elektrizitäts-Werke (BEW), a Berlin electrical utility, decided to greatly increase its distribution capacity by switching to 220 V nominal distribution, taking advantage of the higher voltage capability of newly developed metal filament lamps. The company was able to offset the cost of converting the customer's equipment by the resulting saving in distribution conductor cost. This became the model for electrical distribution in Germany and the rest of Europe, and the 220 V system became common. North American practice remained with voltages near 110 V for lamps. In the first decade after the introduction of alternating current in the US (from the early 1880s to about 1893) a variety of different frequencies were used, with each electric provider setting their own, so that no single one prevailed. The most common frequency was 133⅓ Hz. The rotation speed of induction generators and motors, the efficiency of transformers, and flickering of carbon arc lamps all played a role in frequency setting. Around 1893 the Westinghouse Electric Company in the United States and AEG in Germany decided to standardize their generation equipment on 60 Hz and 50 Hz respectively, eventually leading to most of the world being supplied at one of these two frequencies. Today most 60 Hz systems deliver nominal 120/240 V, and most 50 Hz nominally 230 V.
The significant exceptions are in Brazil, which has a synchronized 60 Hz grid with both 127 V and 220 V as standard voltages in different regions, and Japan, which has two frequencies: 50 Hz for East Japan and 60 Hz for West Japan. Voltage regulation To maintain the voltage at the customer's service within the acceptable range, electrical distribution utilities use regulating equipment at electrical substations or along the distribution line. At a substation, the step-down transformer will have an automatic on-load tap changer, allowing the ratio between transmission voltage and distribution voltage to be adjusted in steps. For long (several kilometres) rural distribution circuits, automatic voltage regulators may be mounted on poles of the distribution line. These are autotransformers, again, with on-load tap changers to adjust the ratio depending on the observed voltage changes. At each customer's service, the step-down transformer has up to five taps to allow some range of adjustment, usually ±5% of the nominal voltage. Since these taps are not automatically controlled, they are used only to adjust the long-term average voltage at the service and not to regulate the voltage seen by the utility customer. Power quality The stability of the voltage and frequency supplied to customers varies among countries and regions. "Power quality" is a term describing the degree of deviation from the nominal supply voltage and frequency. Short-term surges and drop-outs affect sensitive electronic equipment such as computers and flat-panel displays. Longer-term power outages, brownouts and blackouts and low reliability of supply generally increase costs to customers, who may have to invest in uninterruptible power supply or stand-by generator sets to provide power when the utility supply is unavailable or unusable. Erratic power supply may be a severe economic handicap to businesses and public services which rely on electrical machinery, illumination, climate control and computers. Even the best quality power system may have breakdowns or require servicing. As such, companies, governments and other organizations sometimes have backup generators at sensitive facilities, to ensure that power will be available even in the event of a power outage or black out. Power quality can also be affected by distortions of the current or voltage waveform in the form of harmonics of the fundamental (supply) frequency, or non-harmonic (inter)modulation distortion such as that caused by electromagnetic interference. In contrast, harmonic distortion is usually caused by conditions of the load or generator. In multi-phase power, phase shift distortions caused by imbalanced loads can occur.
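As a small numerical illustration of the RMS convention and of harmonic distortion, the sketch below builds one cycle of a 230 V, 50 Hz waveform with an added third harmonic; the 5% harmonic content is an arbitrary illustrative value (requires NumPy).

# One cycle of a nominally 230 V RMS, 50 Hz supply with 5% third-harmonic
# distortion (illustrative value), showing peak vs. RMS voltage.
import numpy as np

t = np.linspace(0.0, 0.02, 20000, endpoint=False)  # one 50 Hz cycle, in seconds
v_peak = 230.0 * np.sqrt(2.0)                      # ~325 V peak for 230 V RMS
v = v_peak * np.sin(2 * np.pi * 50 * t) + 0.05 * v_peak * np.sin(2 * np.pi * 150 * t)

rms = np.sqrt(np.mean(v ** 2))                     # root mean square of the samples
print(f"peak ~ {v.max():.0f} V, RMS ~ {rms:.1f} V, third-harmonic content 5%")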
Technology
Power transmission
null
176052
https://en.wikipedia.org/wiki/Molecular%20evolution
Molecular evolution
Molecular evolution describes how inherited DNA and/or RNA change over evolutionary time, and the consequences of this for proteins and other components of cells and organisms. Molecular evolution is the basis of phylogenetic approaches to describing the tree of life. Molecular evolution overlaps with population genetics, especially on shorter timescales. Topics in molecular evolution include the origins of new genes, the genetic nature of complex traits, the genetic basis of adaptation and speciation, the evolution of development, and patterns and processes underlying genomic changes during evolution. History The history of molecular evolution starts in the early 20th century with comparative biochemistry, and the use of "fingerprinting" methods such as immune assays, gel electrophoresis, and paper chromatography in the 1950s to explore homologous proteins. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the most recent common ancestor. The surprisingly large amount of molecular divergence within and between species inspired the neutral theory of molecular evolution in the late 1960s. Neutral theory also provided a theoretical basis for the molecular clock, although this is not needed for the clock's validity. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life. The Society for Molecular Biology and Evolution was founded in 1982. Molecular phylogenetics Molecular phylogenetics uses DNA, RNA, or protein sequences to resolve questions in systematics, i.e. about their correct scientific classification from the point of view of evolutionary history. The result of a molecular phylogenetic analysis is expressed in a phylogenetic tree. Phylogenetic inference is conducted using data from DNA sequencing. These sequences are aligned to identify which sites are homologous. A substitution model describes what patterns are expected to be common or rare. Sophisticated computational inference is then used to generate one or more plausible trees. Some phylogenetic methods account for variation among sites and among tree branches. Different genes, e.g. hemoglobin vs. cytochrome c, generally evolve at different rates. These rates are relatively constant over time (e.g., hemoglobin does not evolve at the same rate as cytochrome c, but hemoglobins from humans, mice, etc. do have comparable rates of evolution), although rapid evolution along one branch can indicate increased directional selection on that branch. Purifying selection causes functionally important regions to evolve more slowly, and substitutions involving similar amino acids occur more often than dissimilar substitutions. Gene family evolution Gene duplication can produce multiple homologous proteins (paralogs) within the same species. Phylogenetic analysis of proteins has revealed how proteins evolve and change their structure and function over time. For example, ribonucleotide reductase (RNR) has evolved a multitude of structural and functional variants. Class I RNRs use a ferritin subunit and differ by the metal they use as cofactors. In class II RNRs, the thiyl radical is generated using an adenosylcobalamin cofactor, and these enzymes do not require additional subunits (unlike class I, which does).
In class III RNRs, the thiyl radical is generated using S-adenosylmethionine bound to a [4Fe-4S] cluster. That is, within a single family of proteins numerous structural and functional mechanisms can evolve. In a proof-of-concept study, Bhattacharya and colleagues converted myoglobin, a non-enzymatic oxygen storage protein, into a highly efficient Kemp eliminase using only three mutations. This demonstrates that only a few mutations are needed to radically change the function of a protein. Directed evolution is the attempt to engineer proteins using methods inspired by molecular evolution. Molecular evolution at one site Change at one locus begins with a new mutation, which might become fixed due to some combination of natural selection, genetic drift, and gene conversion. Mutation Mutations are permanent, transmissible changes to the genetic material (DNA or RNA) of a cell or virus. Mutations result from errors in DNA replication during cell division and by exposure to radiation, chemicals, other environmental stressors, viruses, or transposable elements. When point mutations to just one base-pair of the DNA fall within a region coding for a protein, they are characterized by whether they are synonymous (do not change the amino acid sequence) or non-synonymous. Other types of mutations modify larger segments of DNA and can cause duplications, insertions, deletions, inversions, and translocations. The distribution of rates for diverse kinds of mutations is called the "mutation spectrum". Mutations of different types occur at widely varying rates. Point mutation rates for most organisms are very low, roughly 10−9 to 10−8 per site per generation, though some viruses have higher mutation rates on the order of 10−6 per site per generation. Transitions (A ↔ G or C ↔ T) are more common than transversions (purine (adenine or guanine) ↔ pyrimidine (cytosine or thymine, or in RNA, uracil)). Perhaps the most common type of mutation in humans is a change in the length of a short tandem repeat (e.g., the CAG repeats underlying various disease-associated mutations). Such STR mutations may occur at rates on the order of 10−3 per generation. Different frequencies of different types of mutations can play an important role in evolution via bias in the introduction of variation (arrival bias), contributing to parallelism, trends, and differences in the navigability of adaptive landscapes. Mutation bias makes systematic or predictable contributions to parallel evolution. Since the 1960s, genomic GC content has been thought to reflect mutational tendencies. Mutational biases also contribute to codon usage bias. Although such hypotheses are often associated with neutrality, recent theoretical and empirical results have established that mutational tendencies can influence both neutral and adaptive evolution via bias in the introduction of variation (arrival bias). Selection Selection can occur when an allele confers greater fitness, i.e. greater ability to survive or reproduce, on the average individual that carries it. A selectionist approach emphasizes, for example, that biases in codon usage are due at least in part to the ability of even weak selection to shape molecular evolution. Selection can also operate at the gene level at the expense of organismal fitness, resulting in intragenomic conflict. This is because there can be a selective advantage for selfish genetic elements in spite of a host cost.
Examples of such selfish elements include transposable elements, meiotic drivers, and selfish mitochondria. Selection can be detected using the Ka/Ks ratio or the McDonald–Kreitman test. Rapid adaptive evolution is often found for genes involved in intragenomic conflict, sexually antagonistic coevolution, and the immune system. Genetic drift Genetic drift is the change of allele frequencies from one generation to the next due to stochastic effects of random sampling in finite populations. These effects can accumulate until a mutation becomes fixed in a population. For neutral mutations, the rate of fixation per generation is equal to the mutation rate per replication. A relatively constant mutation rate thus produces a constant rate of change per generation (molecular clock). Slightly deleterious mutations, with a selection coefficient smaller in magnitude than a threshold value of one over the effective population size, can also fix; a numerical illustration of this appears below. Many genomic features have been ascribed to accumulation of nearly neutral detrimental mutations as a result of small effective population sizes. With a smaller effective population size, a larger variety of mutations will behave as if they are neutral due to inefficiency of selection. Gene conversion Gene conversion occurs during recombination, when nucleotide damage is repaired using a homologous genomic region as a template. It can be a biased process, i.e. one allele may have a higher probability of being the donor than the other in a gene conversion event. In particular, GC-biased gene conversion tends to increase the GC-content of genomes, particularly in regions with higher recombination rates. There is also evidence for GC bias in the mismatch repair process. It is thought that this may be an adaptation to the high rate of methyl-cytosine deamination, which can lead to C→T transitions. The dynamics of biased gene conversion resemble those of natural selection, in that a favored allele will tend to increase exponentially in frequency when rare. Genome architecture Genome size Genome size is influenced by the amount of repetitive DNA as well as the number of genes in an organism. Some organisms, such as most bacteria, Drosophila, and Arabidopsis, have particularly compact genomes with little repetitive content or non-coding DNA. Other organisms, like mammals or maize, have large amounts of repetitive DNA, long introns, and substantial spacing between genes. The C-value paradox refers to the lack of correlation between organism 'complexity' and genome size. Explanations for the so-called paradox are two-fold. First, repetitive genetic elements can comprise large portions of the genome for many organisms, thereby inflating the DNA content of the haploid genome. Repetitive genetic elements are often descended from transposable elements. Secondly, the number of genes is not necessarily indicative of the number of developmental stages or tissue types in an organism. An organism with few developmental stages or tissue types may have large numbers of genes that influence non-developmental phenotypes, inflating gene content relative to developmental gene families. Neutral explanations for genome size suggest that when population sizes are small, many mutations become nearly neutral. Hence, in small populations repetitive content and other 'junk' DNA can accumulate without placing the organism at a competitive disadvantage. There is little evidence to suggest that genome size is under strong widespread selection in multicellular eukaryotes.
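The nearly neutral threshold invoked above can be illustrated numerically with Kimura's diffusion-approximation formula for the probability that a new mutation with selection coefficient s eventually fixes, P_fix = (1 − e^(−2s)) / (1 − e^(−4Ns)). The selection coefficient and population sizes in the sketch below are arbitrary example values.

# How effective population size sets the "nearly neutral" zone, using Kimura's
# diffusion approximation for the fixation probability of a new mutation.
import math

def p_fix(s, n_e):
    if s == 0:
        return 1.0 / (2 * n_e)  # neutral limit: initial frequency of a new allele
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * n_e * s))

s = -1e-5  # a slightly deleterious mutation (arbitrary example value)
for n_e in (1_000, 100_000):
    print(f"Ne={n_e:>7}: P_fix={p_fix(s, n_e):.3e} (neutral: {1 / (2 * n_e):.3e})")
# In the small population the deleterious mutation fixes almost as often as a
# neutral one; in the large population selection removes it efficiently.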
Genome size, independent of gene content, correlates poorly with most physiological traits, and many eukaryotes, including mammals, harbor very large amounts of repetitive DNA. However, birds likely have experienced strong selection for reduced genome size in response to the changing energetic needs of flight. Birds, unlike humans, produce nucleated red blood cells, and larger nuclei lead to lower levels of oxygen transport. Bird metabolism is far higher than that of mammals, due largely to flight, and oxygen needs are high. Hence, most birds have small, compact genomes with few repetitive elements. Indirect evidence suggests that the non-avian theropod dinosaur ancestors of modern birds also had reduced genome sizes, consistent with endothermy and the high energetic needs of running speed. Many bacteria have also experienced selection for small genome size, as time of replication and energy consumption are so tightly correlated with fitness. Chromosome number and organization The ant Myrmecia pilosula has only a single pair of chromosomes, whereas the adder's-tongue fern Ophioglossum reticulatum has up to 1260 chromosomes. The number of chromosomes in an organism's genome does not necessarily correlate with the amount of DNA in its genome. The genome-wide amount of recombination is directly controlled by the number of chromosomes, with one crossover per chromosome or per chromosome arm, depending on the species. Changes in chromosome number can play a key role in speciation, as differing chromosome numbers can serve as a barrier to reproduction in hybrids. Human chromosome 2 was created from a fusion of two ancestral ape chromosomes (which remain separate in chimpanzees) and still contains central telomeres as well as a vestigial second centromere. Polyploidy, especially allopolyploidy, which occurs often in plants, can also result in reproductive incompatibilities with parental species. Agrodiaetus blue butterflies have diverse chromosome numbers ranging from n=10 to n=134 and additionally have one of the highest rates of speciation identified to date. Ciliate genomes house each gene in an individual chromosome. Organelles In addition to the nuclear genome, endosymbiont organelles contain their own genetic material. Mitochondrial and chloroplast DNA varies across taxa, but membrane-bound proteins, especially electron transport chain constituents, are most often encoded in the organelle. Chloroplasts and mitochondria are maternally inherited in most species, as the organelles must pass through the egg. In a rare departure, some species of mussels are known to inherit mitochondria from father to son. Origins of new genes New genes arise from several different genetic mechanisms including gene duplication, de novo gene birth, retrotransposition, chimeric gene formation, recruitment of non-coding sequence into an existing gene, and gene truncation. Gene duplication initially leads to redundancy. However, duplicated gene sequences can mutate to develop new functions or specialize so that the new gene performs a subset of the original ancestral functions. Retrotransposition duplicates genes by copying mRNA to DNA and inserting it into the genome. Retrogenes generally insert into new genomic locations, lack introns, and sometimes develop new expression patterns and functions. Chimeric genes form when duplication, deletion, or incomplete retrotransposition combine portions of two different coding sequences to produce a novel gene sequence. Chimeras often cause regulatory changes and can shuffle protein domains to produce novel adaptive functions.
De novo gene birth can give rise to protein-coding genes and non-coding genes from previously non-functional DNA. For instance, Levine and colleagues reported the origin of five new genes in the D. melanogaster genome. Similar de novo origin of genes has also been shown in other organisms such as yeast, rice, and humans. De novo genes may evolve from spurious transcripts that are already expressed at low levels. Constructive neutral evolution Constructive neutral evolution (CNE) explains how complex systems can emerge and spread through a population by neutral transitions, via the principles of excess capacity, presuppression, and ratcheting; it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities. Journals and societies The Society for Molecular Biology and Evolution publishes the journals Molecular Biology and Evolution and Genome Biology and Evolution and holds an annual international meeting. Other journals dedicated to molecular evolution include the Journal of Molecular Evolution and Molecular Phylogenetics and Evolution. Research in molecular evolution is also published in journals of genetics, molecular biology, genomics, systematics, and evolutionary biology.
Biology and health sciences
Genetics
Biology
176124
https://en.wikipedia.org/wiki/Orient%20Express
Orient Express
The Orient Express was a long-distance passenger luxury train service created in 1883 by the Belgian company Compagnie Internationale des Wagons-Lits (CIWL) that operated until 2009. The train traveled the length of continental Europe, with terminal stations in Paris in the northwest and Istanbul in the southeast, and branches extending service to Athens, Brussels, and London. The Orient Express embarked on its initial journey on June 5, 1883, from Paris to Vienna, eventually extending to Istanbul, thus connecting the western and eastern extremities of Europe. The route saw alterations and expansions, including the introduction of the Simplon Orient Express following the opening of the Simplon Tunnel in 1919, enhancing the service's allure and importance. Several routes concurrently used the Orient Express name, or variations. Although the original Orient Express was simply a normal international railway service, the name became synonymous with intrigue and luxury rail travel. The city names most prominently served and associated with the Orient Express are Paris and Istanbul, the original termini of the timetabled service. The rolling stock of the Orient Express changed many times. However, post-World War II, the Orient Express struggled to maintain its preeminence amid changing geopolitical landscapes and the rise of air travel. The route stopped serving Istanbul in 1977, cut back to a through overnight service from Paris to Bucharest, which was cut back further in 1991 to Budapest, then in 2001 to Vienna, before departing for the last time from Paris on 8 June 2007. After this, the route, still called the Orient Express, was shortened to start from Strasbourg, leaving daily after the arrival of a TGV from Paris. On 14 December 2009, the Orient Express ceased to operate entirely and the route disappeared from European railway timetables, a "victim of high-speed trains and cut-rate airlines". In contemporary times, the legacy of the Orient Express has been revived through private ventures like the Venice Simplon-Orient-Express, initiated by James Sherwood in 1982, which offers nostalgic journeys through Europe in restored 1920s and 1930s CIWL carriages, including the original route from Paris to Istanbul. Since December 2021, an ÖBB Nightjet runs three times per week on the Paris–Vienna route, although not branded as Orient Express. In late 2026, Accor will launch its own Orient Express with journeys from Paris to Istanbul. Train Eclair de luxe (the "test" train) In 1882, Georges Nagelmackers, a Belgian banker's son, invited guests to a railway trip on his Train Eclair de luxe ("lightning luxury train"). The train left Paris Gare de l'Est on Tuesday, 10 October 1882, just after 18:30 and arrived in Vienna the next day at 23:20. The return trip left Vienna on Friday, 13 October at 16:40 and, as planned, re-entered the Gare de Strasbourg at 20:00 on Saturday 14 October. Georges Nagelmackers was the founder of Compagnie Internationale des Wagons-Lits (CIWL), which expanded its luxury trains, travel agencies and hotels all over Europe, Asia, and North Africa. Its most famous train remains the Orient Express. The train was composed of: a baggage car; a sleeping coach with 16 beds (with bogies); a sleeping coach with 14 beds (3 axles); a restaurant coach (nr. 107); two sleeping coaches with 13 beds each (3 axles); and a second baggage car (the complete train weighed 101 tons). The first menu on board (10 October 1882): oysters, soup with Italian pasta, turbot with green sauce, chicken 'à la chasseur', fillet of beef with 'château' potatoes, 'chaud-froid' of game animals, lettuce, chocolate pudding, buffet of desserts. Routes History On 5 June 1883, the first Express d'Orient left Paris for Vienna via Munich. Vienna remained the terminus until 4 October 1883, when the route was extended to Giurgiu, Romania. At Giurgiu, passengers were ferried across the Danube to Ruse, Bulgaria, to pick up another train to Varna. They then completed their journey to Constantinople, as the city was still commonly called in the west at the time, by ferry. In 1885, another route began operations, this time reaching Constantinople via rail from Vienna to Belgrade and Niš, by carriage to Plovdiv, and by rail again to Istanbul. On 1 June 1889, the first direct train to Constantinople left Paris from Gare de l'Est. Istanbul, as it became known in English by the 1930s, remained its easternmost stop until 19 May 1977. The eastern terminus was the Sirkeci Terminal by the Golden Horn. Ferry service from piers next to the terminal would take passengers across the Bosphorus to Haydarpaşa Terminal, the terminus of the Asian lines of the Ottoman Railways. The train was officially renamed the Orient Express in 1891. The onset of the First World War in 1914 saw Orient Express services suspended. They resumed at the end of hostilities in 1918, and in 1919 the opening of the Simplon Tunnel allowed the introduction of a more southerly route via Milan, Venice, and Trieste. The service on this route was known as the Simplon Orient Express, and it ran in addition to continuing services on the old route. The Treaty of Saint-Germain contained a clause requiring Austria to accept this train: formerly, Austria allowed international services to pass through Austrian territory (which included Trieste at the time) only if they ran via Vienna. The Simplon Orient Express soon became the most important rail route between Paris and Istanbul. The 1930s saw the Orient Express services at their most popular, with three parallel services running: the Orient Express, the Simplon Orient Express, and also the Arlberg Orient Express, which ran via the Arlberg railway between Zürich and Innsbruck to Budapest, with sleeper cars running onwards from there to Bucharest and Athens. During this time, the Orient Express acquired its reputation for comfort and luxury, carrying sleeping cars with permanent service and restaurant cars known for the quality of their cuisine. Royalty, nobles, diplomats, business people, and the bourgeoisie in general patronized it. Each of the Orient Express services also incorporated sleeping cars which had run from Calais to Paris, thus extending the service from one end of continental Europe to the other. The start of the Second World War in 1939 again interrupted the service, which did not resume until 1945. During the war, the German Mitropa company had run some services on the route through the Balkans, but Yugoslav Partisans frequently sabotaged the track, forcing a stop to this service. Following the end of the war, normal services resumed except on the Athens leg, where the closure of the border between Yugoslavia and Greece prevented services from running.
That border re-opened in 1951, but the closure of the Bulgarian–Turkish border from 1951 to 1952 prevented services running to Istanbul during that time. As the Iron Curtain fell across Europe, the service continued to run, but the Communist nations increasingly replaced the Wagon-Lits cars with carriages run by their own railway services. By 1962, the original Orient Express and Arlberg Orient Express had stopped running, leaving only the Simplon Orient Express. This was replaced in 1962 by a slower service called the Direct Orient Express, which ran daily cars from Paris to Belgrade, and twice-weekly services from Paris to Istanbul and Athens. In 1971, the Wagon-Lits company stopped running carriages itself and earning revenue from a ticket supplement. Instead, it sold or leased all its carriages to the various national railway companies, but continued to provide staff for the carriages. 1976 saw the withdrawal of the Paris–Athens direct service, and in 1977, the Direct Orient Express was withdrawn completely, with the last Paris–Istanbul service running on 19 May of that year. The withdrawal of the Direct Orient Express was thought by many to signal the end of the Orient Express as a whole, but in fact a service under this name continued to run from Paris to Bucharest as before (via Strasbourg, Munich, Vienna, and Budapest). However, a through sleeping car from Paris to Bucharest operated only until 1982, and a through seating car operated only seasonally. This meant that, as the Paris–Budapest and Vienna–Bucharest coaches merely overlapped, a through journey was only possible by changing carriages, despite the unchanged name and numbering of the train. In 1991 the Budapest–Bucharest leg of the train was discontinued, making Budapest the new terminus. In the summer seasons of 1999 and 2000 a sleeping car from Bucharest to Paris reappeared, running twice a week and now operated by CFR. This continued until 2001, when the service was cut back to just Paris–Vienna, as a EuroNight train, though the coaches were actually attached to a regular Paris–Strasbourg express for that leg of the journey. This service continued daily, listed in the timetables under the name Orient Express, until 8 June 2007. With the opening of the LGV Est Paris–Strasbourg high-speed rail line on 10 June 2007, the Orient Express service was further cut back to Strasbourg–Vienna, departing nightly at 22:20 from Strasbourg, and still bearing the name, but lost the train numbers 262/263 which it had borne for decades. The remnant of the original train had a convenient connection to the Strasbourg–Paris TGV, but its less flexible fares made the route less attractive. In the final years, through coaches between Vienna and Karlsruhe (continuing first to Dortmund, then to Amsterdam, and finally to Frankfurt) were attached. The last train with the name Orient-Express (now with a hyphen) departed from Vienna on 10 December 2009, and one day later from Strasbourg. On 13 December 2021, an ÖBB Nightjet train began running three times per week on the Paris–Vienna route, although it is not branded as Orient Express. One of the last known CIWL teak sleeping cars from the period before the First World War can be seen at the former Amfikleia station site in Greece. Privately run trains using the name In 1976, the Swiss travel company Intraflug AG first rented, then later bought, several CIWL carriages. They were operated as the Nostalgie Istanbul Orient Express by Seattle-based Society Expeditions.
The route went first from Zürich to Istanbul, following the route of the Arlberg Orient Express. In 1983, the 100th anniversary of the Orient Express was celebrated by extending the route to run from Paris to Istanbul. The train ceased operations in 2007. Belmond In 1982, the Venice Simplon-Orient-Express was established by businessman James Sherwood as a private venture and is currently owned and operated by Belmond. It operates restored 1920s and 1930s carriages on routes around Europe. It also offered a connecting service from London to Folkestone on the British Pullman, using similarly restored vintage British Pullman cars, but it was announced in April 2023 that, due to complications ensuing from Brexit, this would cease, and travelers from London would have to take the Eurostar to Paris in order to join the Orient Express. The Venice Simplon-Orient-Express operates from March to December and is aimed at leisure travellers. Tickets start at US$3,262 per person, and it operates on multiple routes, most notably Paris–Istanbul via Vienna and Budapest. Despite its name, the train runs via the Brenner Pass instead of the Simplon Tunnel. Belmond also offers a similarly themed luxury train in Singapore, Malaysia and Thailand, called the Eastern and Oriental Express. Sherwood also operated a chain of Orient Express-branded luxury hotels, licensed from SNCF, owner of the Orient Express branding. The chain was renamed Belmond in 2014 when the branding license ended. Accor In 2017, Accor purchased a 50% stake in the Orient Express brand from SNCF for the right to use the name. In 2018, Accor began renovation work on 17 CIWL carriages from the defunct Nostalgie Istanbul Orient Express, which date back to the 1920s and 1930s. It will carry passengers between Paris and Istanbul beginning in late 2026. In popular culture The glamour and rich history of the Orient Express have frequently lent themselves to the plots of books and films and to the subject of television documentaries. Literature Dracula (1897) by Bram Stoker: while Count Dracula escapes from England to Varna by sea, the group sworn to destroy him travels to Paris and takes the Orient Express, arriving in Varna ahead of him. Gentlemen Prefer Blondes (1925) by Anita Loos, wherein Lorelei and her friend Dorothy take a journey on the Oriental Express from Paris to Central Europe. Stamboul Train (1932) by Graham Greene. The short story "Have You Got Everything You Want?" (1933) by Agatha Christie. Murder on the Orient Express (1934), one of the most famous works by Agatha Christie, takes place on the Simplon Orient Express. Oriënt-Express (1934), a novel by A. den Doolaard, set in North Macedonia. From Russia, with Love (1957), a James Bond novel by Ian Fleming, sees Bond travel from Istanbul to Venice aboard the Simplon Orient Express. Travels with My Aunt (1969) by Graham Greene. Paul Theroux devotes a chapter of The Great Railway Bazaar (1975) to his journey from Paris to Istanbul on the Direct-Orient Express. Neither Here nor There: Travels in Europe (1991) by Bill Bryson describes riding the train in 1973, when it was a run-down and neglected route. The Orient Express (1992), a novel by Gregor von Rezzori, follows a European American who, having ridden the original Orient Express in his youth, returns late in life to ride the refurbished version. Flashman and the Tiger (1999) by George MacDonald Fraser: Harry Paget Flashman travels on the train's first journey as a guest of the journalist Henri Blowitz.
The Orient Express appeared in the 2004 novel Lionboy and its sequel Lionboy: The Chase by Zizou Corder. Charlie Ashanti was stowing away on the train on his way to Venice when he met King Boris of Bulgaria.
The short story "On the Orient, North" by Ray Bradbury.
The Orient Express appears as a technologically advanced (for its time) train in the book Behemoth by Scott Westerfeld.
Thea Stilton and the Mystery on the Orient Express by Elisabetta Dami.
Madness on the Orient Express, an anthology of horror stories, all connected to the Orient Express, edited by James Lowder.
First Class Murder (2015) by Robin Stevens, from the Murder Most Unladylike series, is set on the Orient Express.
The Oriënt-Express served as the venue for a chess game described in the 1997 novel The Lüneburg Variation by Paolo Maurensig.
One of the criminal mysteries solved by Randall Garrett's alternative-history detective Lord Darcy takes place on a luxurious cross-Europe train manifestly modeled on the Orient Express, though in this setting its final destination is Naples rather than Istanbul.
Film
Orient Express (1934), film adaptation of Graham Greene's Stamboul Train.
Orient Express (1944), German film about a murder on the train.
Sleeping Car to Trieste (1948), film by the Rank Organisation, story by Clifford Grey (copyright Two Cities Films Ltd.): a stolen diplomatic document is the quest of various groups on the Orient Express from Paris to Trieste.
Orient Express (1954), whose plot revolves around a two-day stop at a village in the Alps by passengers on the Orient Express.
From Russia with Love (1963): James Bond, along with Bond girl Tatiana Romanova and ally Ali Kerim Bey, tries to travel on the Orient Express from Istanbul to Trieste, but complications involving SPECTRE assassin Red Grant force Bond and Tatiana to jump off the train in Yugoslav Istria.
(1968): thriller, made for television, starring Gene Barry.
Travels with My Aunt (1972): Henry Pulling accompanies his aunt, Augusta Bertram, on a trip from London to Turkey. The two board the Orient Express in Paris; the train takes them to Turkey (though they disembark briefly at the Milan stop).
Murder on the Orient Express, the Agatha Christie novel, has been adapted into films in 1974, 2001, and 2017.
Romance on the Orient Express (1985): TV movie with Cheryl Ladd.
102 Dalmatians (2000).
Death, Deceit and Destiny Aboard the Orient Express (2000).
Around the World in 80 Days (2004): Mr Fogg travels on the train to Istanbul.
Orient Express (2004).
Murder on the Orient Express (2017).
Murder Mystery (2019): in the final scene, Nick and Audrey Spitz are travelling on the Orient Express.
Mission: Impossible – Dead Reckoning Part One (2023): the third act of the film primarily takes place on the Orient Express, bound from Venice to Innsbruck.
Television
Orient Express was a syndicated TV series in the early- to mid-1950s. Filmed in Europe, its half-hour dramas featured such stars as Paul Lukas, Jean-Pierre Aumont, Geraldine Brooks and Erich von Stroheim.
In "The Orient Express" (episode 48 of The World of Commander McBragg cartoon series), the Commander tells the story of how he once rode on that fabled train, dodging several assassination attempts on his life en route.
In the Pink Panther cartoon "Pinkfinger", the Pink Panther tries to be a secret agent and is almost blown up by a bomb on the Orient Express.
Daylight Robbery on the Orient Express, an episode of the award-winning British comedy television series The Goodies, was first broadcast on 5 October 1976 and is partially set aboard the train.
Mystery on the Orient Express: a television special featuring illusionist David Copperfield. During the special, Copperfield rode aboard the train and, at its conclusion, made the dining car seemingly disappear.
Il treno d'Istanbul ("The Istanbul Train", 1980): a Hungarian–Italian television adaptation of Graham Greene's Stamboul Train (1932).
"Minder on the Orient Express" (1985): a special episode of the long-running ITV sitcom Minder.
Whicker's World – Aboard the Orient Express: travel journalist Alan Whicker joined the inaugural service of the Venice Simplon-Orient-Express to Venice in 1982, interviewing invited guests and celebrities along the way.
Gavin Stamp's Orient Express: in 2007, the UK's Five broadcast an arts/travel series which saw the historian journey from Paris to Istanbul along the old Orient Express route.
The 1987 cartoon Teenage Mutant Ninja Turtles had an episode titled "Turtles on the Orient Express"; as the title suggests, it is primarily set on the train.
A 1993 advert for Bisto Fuller Flavour Gravy Granules featured the train with a young couple.
The 1995 cartoon Madeline had an episode titled "Madeline on the Orient Express", in which a chef stole a snake.
The episode "Emergence" of the science-fiction television series Star Trek: The Next Generation partially takes place on a holodeck representation of the Orient Express.
On the 15 May 2007 broadcast of Jeopardy!, the show's theme music "Think" was played by a person on the train's piano, since the Final Jeopardy clue was about the Orient Express.
In the British soap opera EastEnders, in 1986, characters Den and Angie Watts spent their honeymoon on the train.
"Aboard the Orient Express", Get Smart series 1, episode 13, is set on the Orient Express.
In one episode of the British cartoon series Danger Mouse, called "Danger Mouse on the Orient Express" (a parody of Murder on the Orient Express), Danger Mouse and Penfold travel on the train on their way back to London from Venice. Danger Mouse's arch-enemy Greenback is also on the train.
In an episode of the television series Chuck, Chuck and Sarah decide to go AWOL and take a trip on the Orient Express.
At the end of the Doctor Who episode "The Big Bang", the Doctor receives a call for help from the "Orient Express — in space". This setting is reused four years later in the episode "Mummy on the Orient Express", which includes a reference to the ending of "The Big Bang".
In episode 15 of the U.S. television series Forever, Dr Henry Morgan travelled from Budapest to Istanbul with his wife Abigail Morgan on their honeymoon in 1955. He performed an appendectomy on a member of the fictional Urkesh royalty.
The Backyardigans episode "Le Master of Disguise" features the Orient Express, showing Uniqua, Pablo, Austin, Tasha and Tyrone travelling from Paris to Istanbul.
The series Agatha Christie's Poirot, which adapted the entirety of Christie's works featuring Hercule Poirot as played by David Suchet, included an adaptation of Murder on the Orient Express as part of its 2010 episodes.
In Michael Palin's Around the World in Eighty Days (1988), Palin travelled on the Orient Express in episode 1 from London Victoria to Innsbruck, using a ferry across the English Channel from Folkestone. The train did not continue on to Venice because of a strike on the Italian railways.
Music
Alex Otterlei's "Horror on the Orient Express" is inspired by the Call of Cthulhu RPG. The integral symphonic version was released on CD in 2002, and a 26-minute Suite for Concert Band was published in 2012.
Orient Expressions, a musical group from Turkey who combine traditional Turkish music with elements of electronica, take their name from the train service.
The Jean Michel Jarre album The Concerts in China has a track entitled "Orient Express" (track 1 of disc 2), though its relation to the train is unknown.
A concert band piece, Orient Express, was written by Philip Sparke.
There was a band based in Hawaii called Liz Damon's Orient Express.
Games
The role-playing game Call of Cthulhu (1981) used the train for one of its more famous campaigns, Horror on the Orient Express.
The TSR role-playing game Top Secret had a 1983 module based on the train titled "Operation Orient Express".
Just Games released a murder-mystery board game (1985) called Orient Express, using the famous train route as a backdrop for solving murders. The game is based on the novel Murder on the Orient Express by Agatha Christie.
Heart of China (1991), a computer game, has a final sequence on the Orient Express; an action scene takes place on the roof.
In "The Gold Old Bad Days", a 1994 season 1 episode of Where on Earth Is Carmen Sandiego?, Carmen Sandiego and her V.I.L.E. gang are given a challenge by The Player to do something low-tech: a robbery. Carmen's goal is the train.
The Last Express (1997), an adventure game by Jordan Mechner, is a murder mystery set around the last ride of the Orient Express before it suspended operations at the start of World War I. Robert Cath, an American doctor wanted by French police on suspicion of murdering an Irish police officer, becomes involved in a maelstrom of treachery, lies, political conspiracies, personal interests, romance and murder. The game has 30 characters representing a cross-section of European forces at the time.
In the game Crash Bandicoot 3: Warped (1998) for PS1, the third level (which is Asian-themed) is named Orient Express.
The Orient Express was featured in two scenarios in the Railroad Tycoon series: in Railroad Tycoon II (1998), players get to connect Paris to Constantinople in a territory-buying challenge; in Railroad Tycoon 3 (2003), players need to connect Vienna to Istanbul.
The train is featured in Microsoft Train Simulator (2001), where its route is a section from Innsbruck to Sankt Anton am Arlberg in Austria.
The Orient Express cars were made available for download for use in Auran's Trainz Railroad Simulator 2004 or later versions by the content-creation group FMA.
The video game adaptation of From Russia with Love includes scenes aboard the Orient Express.
The Adventure Company developed a point-and-click adventure based on Agatha Christie's novel, Agatha Christie: Murder on the Orient Express (2006).
The first scenes of The Raven: Legacy of a Master Thief, a 2013 game for PC, involve a mystery set amongst train carriages inspired by the Orient Express.
The entire Orient Express set was used in the Facebook game TrainStation (2010).
The Orient Express is a usable engine and caboose in the mobile game Tiny Rails (2016).
In Euro Truck Simulator 2 (2012) there is an achievement called Orient Express, requiring players to complete deliveries between the following city pairs: Paris–Strasbourg, Strasbourg–Munich, Munich–Vienna, Vienna–Budapest, Budapest–Bucharest, Bucharest–Istanbul.
Train Simulator features several routes of the Arlberg-Orient Express, from London to Faversham, Bludenz to Innsbruck, and a few lines around Salzburg, as well as a small section of the Simplon-Orient Express in Ljubljana. It also features part of the ÖBB EN Orient Express and the original Orient Express line between Strasbourg and Munich.
Polymer physics
Polymer physics is the field of physics that studies polymers: their fluctuations and mechanical properties, as well as the kinetics of reactions involving degradation of polymers and polymerisation of monomers. While it focuses on the perspective of condensed matter physics, polymer physics was originally a branch of statistical physics. Polymer physics and polymer chemistry are also related to the field of polymer science, which is considered the applied part of the study of polymers.
Polymers are large molecules and thus are too complicated to treat with deterministic methods. Yet statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are described efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite). Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires the use of principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on.
The statistical approach to polymer physics is based on an analogy between polymer behavior and either Brownian motion or another type of random walk, the self-avoiding walk. The simplest possible polymer model is the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods such as size-exclusion chromatography, viscometry, dynamic light scattering, and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP) for determining the chemical, physical, and material properties of polymers. These experimental methods inform the mathematical modeling of polymers and give a better understanding of their properties.
Paul Flory is considered the first scientist to establish the field of polymer physics. French scientists have contributed since the 1970s (e.g. Pierre-Gilles de Gennes, J. des Cloizeaux). Doi and Edwards wrote a famous book on polymer physics. The Soviet/Russian school of physics (I. M. Lifshitz, A. Yu. Grosberg, A. R. Khokhlov, V. N. Pokrovskii) has also been very active in the development of polymer physics.
Models
Models of polymer chains are split into two types: "ideal" models and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between monomers effectively cancel out. Ideal chain models provide a good starting point for the investigation of more complex systems and are better suited to equations with more parameters.
Ideal chains
The freely-jointed chain is the simplest model of a polymer. In this model, fixed-length polymer segments are linearly connected, and all bond and torsion angles are equiprobable. The polymer can therefore be described by a simple random walk and an ideal chain. The model can be extended to include extensible segments in order to represent bond stretching.
The freely-rotating chain improves on the freely-jointed chain model by taking into account that polymer segments make a fixed bond angle to neighbouring units because of specific chemical bonding. Under this fixed angle, the segments are still free to rotate and all torsion angles are equally likely.
The hindered rotation model assumes that the torsion angle is hindered by a potential energy.
This makes the probability of each torsion angle θ proportional to a Boltzmann factor: P(θ) ∝ exp(−U(θ)/k_B T), where U(θ) is the potential determining the probability of each value of θ.
In the rotational isomeric state model, the allowed torsion angles are determined by the positions of the minima in the rotational potential energy. Bond lengths and bond angles are constant.
The worm-like chain is a more complex model. It takes the persistence length into account. Polymers are not completely flexible; bending them requires energy. At length scales below the persistence length, the polymer behaves more or less like a rigid rod.
The finite extensible nonlinear elastic (FENE) model takes into account non-linearity for finite chains. It is used in computational simulations.
Real chains
Interactions between chain monomers can be modelled as excluded volume. This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics from simple random walks.
Solvent and temperature effect
The statistics of a single polymer chain depend upon the solubility of the polymer in the solvent. For a solvent in which the polymer is very soluble (a "good" solvent), the chain is more expanded, while for a solvent in which the polymer is insoluble or barely soluble (a "bad" solvent), the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in a good solvent the chain swells in order to maximize the number of polymer–fluid contacts. For this case the radius of gyration is approximated using Flory's mean field approach, which yields the scaling R_g ∝ N^ν, where R_g is the radius of gyration of the polymer, N is the number of bond segments (equal to the degree of polymerization) of the chain, and ν is the Flory exponent. For a good solvent, ν ≈ 3/5; for a poor solvent, ν = 1/3. Therefore, a polymer in good solvent has a larger size and behaves like a fractal object; in bad solvent it behaves like a solid sphere. In the so-called θ (theta) solvent, ν = 1/2, which is the result of a simple random walk: the chain behaves as if it were an ideal chain.
The quality of a solvent also depends on temperature. For a flexible polymer, low temperature may correspond to poor quality, while high temperature makes the same solvent good. At a particular temperature called the theta (θ) temperature, the chain behaves as an ideal chain.
Excluded volume interaction
The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction. The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain.
Flexibility and reptation
Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. At length scales smaller than 50 nm, it behaves more or less like a rigid rod. At length scales much larger than 50 nm, it behaves like a flexible chain.
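The ideal-chain scaling quoted above (ν = 1/2) is easy to check numerically. The following is a minimal Monte Carlo sketch, not taken from any particular polymer library; the segment length b = 1 and the sample counts are arbitrary illustrative choices.

```python
import numpy as np

def freely_jointed_chain(n_bonds, b=1.0, rng=None):
    """One conformation of an ideal (freely jointed) chain: n_bonds
    segments of fixed length b pointing in uniformly random directions."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.normal(size=(n_bonds, 3))                  # isotropic directions
    v *= b / np.linalg.norm(v, axis=1, keepdims=True)  # rescale to length b
    return np.vstack([np.zeros(3), np.cumsum(v, axis=0)])  # monomer positions

def mean_square_end_to_end(n_bonds, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([np.sum(freely_jointed_chain(n_bonds, rng=rng)[-1] ** 2)
                    for _ in range(samples)])

# Ideal-chain prediction: <R^2> = N * b^2, i.e. R ~ N^(1/2), so nu = 1/2.
for n in (50, 100, 200):
    print(n, mean_square_end_to_end(n))  # each value should be close to n
```

Doubling N should roughly double the printed mean-square end-to-end distance, consistent with ν = 1/2; a self-avoiding walk would grow faster, with ν ≈ 3/5.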
Reptation is the thermal motion of very long, linear, entangled macromolecules in polymer melts or concentrated polymer solutions. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. The consistent theory of thermal motion of polymers was given by Vladimir Pokrovskii. Similar phenomena also occur in proteins.
Example model (simple random-walk, freely jointed)
The study of long-chain polymers has been a source of problems within the realm of statistical mechanics since about the 1950s. One of the reasons scientists were interested in their study is that the equations governing the behavior of a polymer chain are independent of the chain chemistry. What is more, the governing equation turns out to be a random walk, or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it.
Random walks in time
The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk.
Consider a toy problem of a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of +b or −b (b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let us start by considering the statistics of the steps the toy train takes (where S_i is the i-th step taken):
⟨S_i⟩ = 0, due to a priori equal probabilities;
⟨S_i S_j⟩ = b² δ_ij.
The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices i and j are different, the result is 0, but if i = j the Kronecker delta is 1, so the correlation function returns a value of b². This makes sense, because if i = j then we are considering the same step. Rather trivially, then, it can be shown that the average displacement of the train on the x-axis is 0:
⟨x⟩ = ⟨Σᵢ S_i⟩ = Σᵢ ⟨S_i⟩ = 0.
As stated, ⟨S_i⟩ = 0, so the sum is still 0. The same method can be used to calculate the root-mean-square value of the problem:
x_rms = √⟨x²⟩ = b√N.
From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where N is simply the number of steps moved (loosely connected with time) and b is the characteristic step length. As a consequence, we can consider diffusion as a random-walk process.
Random walks in space
Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long-chain polymers.
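Before treating chains in space, the two random-walk-in-time results above, ⟨x⟩ = 0 and x_rms = b√N, can be verified with a short simulation. This is an illustrative sketch only; the step length, step count, and walker count are arbitrary assumptions, not values from the text.

```python
import numpy as np

# Monte Carlo check of the 1D coin-flip walk: each step is +b or -b with
# equal probability, so <x> = 0 and x_rms = sqrt(<x^2>) = b * sqrt(N).
rng = np.random.default_rng(42)
b, n_steps, n_walkers = 1.0, 400, 20_000   # arbitrary illustrative values

steps = rng.choice([-b, b], size=(n_walkers, n_steps))
x = steps.sum(axis=1)                      # final displacement of each walker

print("mean displacement:", x.mean())                # ~ 0
print("rms displacement:", np.sqrt((x**2).mean()))   # ~ b*sqrt(400) = 20
```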
There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and are free to lie on top of one another. The former type is most applicable to physical systems, but its solutions are harder to obtain from first principles.
By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is
R = Σᵢ r_i,
where r_i is the vector position of the i-th link in the chain. As a result of the central limit theorem, if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements about the statistics of the links themselves:
⟨r_i⟩ = 0, by the isotropy of space;
⟨r_i · r_j⟩ = b² δ_ij, since all the links in the chain are uncorrelated with one another.
Using the statistics of the individual links, it is easily shown that ⟨R²⟩ = Nb². Notice this last result is the same as that found for random walks in time.
Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form:
P(R) = (3 / (2πNb²))^(3/2) exp(−3R² / (2Nb²)).
What use is this to us? Recall that, according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value:
Ω(R) = c P(R),
where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to R = 0. Physically this amounts to there being more microstates with an end-to-end vector of 0 than any other. Now, by considering the entropy S(R) = k_B ln Ω(R) and the Helmholtz free energy F = E − TS (where the internal energy E of an ideal chain does not depend on R), it can be shown that
F(R) = (3 k_B T / (2Nb²)) R² + constant,
which has the same form as the potential energy of a spring, obeying Hooke's law. This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long-chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston.
It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material.
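To attach numbers to the entropic spring result, the sketch below evaluates the effective spring constant k = 3k_B T/(Nb²) implied by F(R) above. The chain parameters (N = 1000 segments of length b = 0.5 nm at T = 300 K) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Entropic spring constant of an ideal chain, k = 3*kB*T / (N*b^2),
# for assumed, illustrative parameters.
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0            # temperature, K
N = 1000             # number of segments (assumed)
b = 0.5e-9           # segment length, m (assumed)

k = 3 * kB * T / (N * b**2)
print(f"entropic spring constant: {k:.3e} N/m")   # ~ 5e-5 N/m

# Restoring force at an extension of R = 50 nm (Hooke-like regime):
R = 50e-9
print(f"force at R = 50 nm: {k * R:.3e} N")       # ~ a few piconewtons
```

The piconewton-scale forces this gives are the right order of magnitude for single-molecule stretching experiments, which is one reason the entropic spring picture is so widely used.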
Microsoft Outlook
Microsoft Outlook is a personal information manager software system from Microsoft, available as a part of the Microsoft 365 software suites. Primarily popular as an email client for businesses, Outlook also includes functions such as calendaring, task managing, contact managing, note-taking, journal logging, web browsing, and RSS news aggregation. Individuals can use Outlook as a stand-alone application; organizations can deploy it as multi-user software (through Microsoft Exchange Server or SharePoint) for shared functions such as mailboxes, calendars, folders, data aggregation (i.e., SharePoint lists), and appointment scheduling.
Apart from the paid desktop software for Windows and Mac that this article covers, the Outlook name also applies to several other current products:
Outlook on the web, formerly Outlook Web App, a web version of Microsoft Outlook, included in Microsoft 365, Exchange Server, and Exchange Online (domain outlook.office365.com)
Outlook for Windows, a new, free Outlook application preloaded with Windows 11 from 2024
Outlook Mobile, a mobile app version of Outlook
Outlook.com, formerly Hotmail, a free personal email service offered by Microsoft alongside a webmail client (domain outlook.live.com)
Versions
Outlook replaced Microsoft's previous scheduling and email clients, Schedule+ and Exchange Client. Outlook 98 and Outlook 2000 offer two configurations:
Internet Mail Only (aka IMO mode): a lighter application mode with specific emphasis on POP3 and IMAP accounts, including a lightweight fax application.
Corporate Workgroup (aka CW mode): a full MAPI client with specific emphasis on Microsoft Exchange accounts.
Perpetual versions of Microsoft Outlook include the following.
Microsoft Outlook
Microsoft Outlook is an email and personal information manager primarily used in professional settings. As part of the Microsoft Office suite, it offers email management, contact storage, calendar scheduling, and task tracking. Outlook can function independently or as part of a larger Microsoft ecosystem, including integration with SharePoint for file sharing. While it stores email data locally for offline access, newer versions restrict link opening to Microsoft's own browsers. The latest versions have also been criticized on privacy grounds, as the new Outlook synchronizes passwords, mail, and other account data to Microsoft's servers.
Outlook 2002
Outlook 2002 introduced these new features:
Autocomplete for email addresses
Colored categories for calendar items
Group schedules
Hyperlink support in email subject lines
Native support for Outlook.com (formerly Hotmail)
Improved search functionality, including the ability to stop a search and resume it later
Lunar calendar support
MSN Messenger integration
Performance improvements
Preview pane improvements, including the ability to open hyperlinks, respond to meeting requests, and display email properties without opening a message
Reminder window that consolidates all reminders for appointments and tasks in a single view
Retention policies for documents and email
Security improvements, including the automatic blocking of potentially unsafe attachments and of programmatic access to information in Outlook: SP1 introduced the ability to view all non-digitally-signed or unencrypted email as plain text; SP2 allows users to prevent, through the Registry, the addition of new email accounts or the creation of new Personal Storage Tables; SP3 updates the object model guard security for applications that access messages and other items.
Smart tags when Word is configured as the default email editor. This option was available only when the versions of Outlook and Word were the same, i.e. both were 2002.
Outlook 2003
Outlook 2003 introduced these new features:
Autocomplete suggestions from a single character
Cached Exchange mode
Colored (quick) flags
Desktop Alert
Email filtering to combat spam
Images in HTML mail are blocked by default, to prevent spammers from determining via web beacon whether an email address is active; SP1 introduced the ability to block email based on country-code top-level domains; SP2 introduced anti-phishing functionality that automatically disables hyperlinks present in spam
Expandable distribution lists
Information rights management
Intrinsic support for tablet PC functionality (e.g., handwriting recognition)
Reading pane
Search folders
Unicode support
Outlook 2007
Features that debuted in Outlook 2007 include:
Attachment preview, with which the contents of attachments can be previewed before opening. Supported file types include Excel, PowerPoint, Visio, and Word files. If Outlook 2007 is installed on Windows Vista, audio and video files can be previewed. If a compatible PDF reader such as Adobe Acrobat 8.1 is installed, PDF files can also be previewed.
Auto Account Setup, which allows users to enter a username and password for an email account without entering a server name, port number, or other information
Calendar sharing improvements, including the ability to export a calendar as an HTML file—for viewing by users without Outlook—and the ability to publish calendars to an external service (e.g., Office Web Apps) with an online provider (e.g., Microsoft account)
Colored categories with support for user roaming, which replace the colored (quick) flags introduced in Outlook 2003
Improved email spam filtering and anti-phishing features
Postmarks, intended to reduce spam by making it difficult and time-consuming to send
Information rights management improvements, with Windows Rights Management Services and managed policy compliance integration with Exchange Server 2007
Japanese Yomi name support for contacts
Multiple calendars can be overlaid with one another to assess details such as potential scheduling conflicts
Ribbon (Office Fluent) interface
Outlook Mobile Service support, which allowed multimedia and SMS text messages to be sent directly to mobile phones
Instant search through Windows Search, an index-based desktop search platform; instant search functionality is also available in Outlook 2002 and Outlook 2003 if these versions are installed alongside Windows Search
Integrated RSS aggregation
Support for Windows SideShow with the introduction of a calendar gadget
To-Do Bar that consolidates calendar information, flagged email, and tasks from OneNote 2007, Outlook 2007, Project 2007, and Windows SharePoint Services 3.0 websites within a central location
The ability to export items as PDF or XPS files
Unified messaging support with Exchange Server 2007, including features such as missed-call notifications, and voicemail with voicemail preview and Windows Media Player
Word 2007 replaces Internet Explorer as the default viewer for HTML email, and becomes the default email editor in this and all subsequent versions.
Outlook 2010
Features that debuted in Outlook 2010 include:
Additional command-line switches
An improved conversation view that groups messages based on different criteria, regardless of originating folders
IMAP messages are sent to the Deleted Items folder, eliminating the need to mark messages for future deletion
Notification when an email is about to be sent without a subject
Quick Steps, individual collections of commands that allow users to perform multiple actions simultaneously
Ribbon interface in all views
Search Tools contextual tab on the ribbon, which appears when performing searches and includes basic or advanced criteria filters
Social Connector to connect to various social networks and aggregate appointments, contacts, communication history, and file attachments
Spell check in additional areas of the user interface
Support for multiple Exchange accounts in a single Outlook profile
The ability to schedule a meeting with a contact by replying to an email message
To-Do Bar enhancements, including visual indicators for conflicts and unanswered meeting requests
Voicemail transcripts for Unified Messaging communications
Zooming user interface for calendar and mail views
Outlook 2013
Features that debuted in Outlook 2013, which was released on January 29, 2013, include:
Attachment reminder
Exchange ActiveSync (EAS)
Add-in resiliency
Cached Exchange mode improvements
IMAP improvements
Outlook data file (.ost) compression
People hub
Startup performance improvements
Outlook 2016
Features that debuted in Outlook 2016 include:
Attachment link to cloud resource
Groups redesign
Search cloud
Clutter folder
Email Address Internationalization
Scalable Vector Graphics
Outlook 2019
Features that debuted in Outlook 2019 include:
Focused Inbox
Add multiple time zones
Listen to your emails
Easier email sorting
Automatic download of cloud attachments
True Dark Mode (version 1907 onward)
Macintosh
Microsoft made several versions of Outlook for older Mac computers, but only for email accounts on specific company servers (Exchange); it was not part of the regular Microsoft Office package for Mac. Microsoft Entourage was Microsoft's email application for Mac. It was similar to Outlook but at first did not work well with Exchange email; over time it became better at handling Exchange, but it remained a distinct program from Outlook. Entourage was replaced by Outlook for Mac 2011, which features greater compatibility and parity with Outlook for Windows than Entourage offered. It is the first native version of Outlook for macOS.
Outlook 2011 initially supported Mac OS X's Sync Services only for contacts, not events, tasks or notes. It also does not have a Project Manager equivalent to that in Entourage. With Service Pack 1 (v 14.1.0), published on April 12, 2011, Outlook gained the ability to sync calendars, notes and tasks with Exchange 2007 and Exchange 2010.
On October 31, 2014, Microsoft released Outlook for Mac (v15.3 build 141024) with Office 365 (a software-as-a-service licensing program that makes Office programs available as soon as they are developed). Outlook for Mac 15.3 improves upon its predecessors with:
Better performance and reliability as a result of a new threading model and database improvements.
A new, modern user interface with improved scrolling and agility when switching between Ribbon tabs.
Online archive support for searching Exchange (online or on-premises) archived mail.
Master Category List support and enhancements, delivering access to category lists (name and color) and sync between Mac, Microsoft Windows and OWA clients.
Office 365 push email support for real-time email delivery.
Faster first-run and email-download experience with improved Exchange Web Services syncing.
The "New Outlook for Mac" client, included with version 16.42 and above, became available for "Early Insider" testers in the fall of 2019, with a public "Insider" debut in October 2020. It requires macOS 10.14 or greater and introduces a redesigned interface with significantly changed internals, including native search within the client that no longer depends on macOS Spotlight. Some Outlook features are still missing from the New Outlook client as it continues in development. To date, the Macintosh client has never had the capability of syncing Contact Groups/Personal Distribution Lists from Exchange, Microsoft 365 or Outlook.com accounts, something that the Windows and web clients have always supported. A UserVoice post created in December 2019 suggesting that the missing functionality be added has shown a "Planned" tag since October 2020.
In March 2023, Microsoft announced that Outlook for Mac would be available for free, meaning that users no longer need a Microsoft 365 subscription or an Office licence to use the program.
Phones and tablets
Outlook Mobile was first released in April 2014 by the venture-capital-backed startup Acompli, which was acquired by Microsoft in December 2014. On January 29, 2015, Acompli was re-branded as Outlook Mobile, sharing its name with the Microsoft Outlook desktop personal information manager and the Outlook.com email service. In January 2015, Microsoft released Outlook for phones and for tablets (v1.3 build) with Office 365. This was the first Outlook for these platforms with email, calendar, and contacts. On February 4, 2015, Microsoft acquired Sunrise Calendar; on September 13, 2016, Sunrise ceased to operate, and an update was released to Outlook Mobile that contained enhancements to its calendar functions.
Similar to its desktop counterpart, Outlook Mobile offers an aggregation of attachments and files stored on cloud storage platforms; a "focused inbox" highlights messages from frequent contacts, and calendar events, files, and locations can be embedded in messages without switching apps. The app supports a number of email platforms and services, including Outlook.com, Microsoft Exchange and Google Workspace (formerly G Suite), among others.
Outlook Mobile is designed to consolidate functionality that would normally be found in separate apps on mobile devices, similar to personal information managers on personal computers. It is designed around four "hubs" for different tasks: "Mail", "Calendar", "Files" and "People". The "People" hub lists frequently and recently used contacts and aggregates recent communications with them, and the "Files" hub aggregates recent attachments from messages and can also integrate with other online storage services such as Dropbox, Google Drive, and OneDrive. To facilitate indexing of content for search and other features, emails and other information are stored on external servers.
Outlook Mobile supports a large number of different email services and platforms, including Exchange, iCloud, Gmail, Google Workspace (formerly G Suite), Outlook.com, and Yahoo! Mail. The app supports multiple email accounts at once.
Emails are divided into two inboxes: the "Focused" inbox displays messages of high importance and those from frequent contacts, while all other messages are displayed within an "Other" section. Files, locations, and calendar events can be embedded into email messages. Swiping gestures can be used for deleting messages. Like the desktop Outlook, Outlook Mobile allows users to see appointment details, respond to Exchange meeting invites, and schedule meetings. It also incorporates the three-day view and "Interesting Calendars" features from Sunrise. Files in the Files tab are not stored offline; they require Internet access to view.
Security
Outlook Mobile temporarily stores and indexes user data (including email, attachments, calendar information, and contacts), along with login credentials, in a "secure" form on Microsoft Azure servers located in the United States. On Exchange accounts, these servers identify as a single Exchange ActiveSync user in order to fetch email. Additionally, the app does not support mobile device management, nor does it allow administrators to control how third-party cloud storage services are used with the app to interact with their users. Concerns surrounding these security issues have prompted some organizations, including the European Parliament, to block the app on their Exchange servers. Microsoft maintains a separate, pre-existing Outlook Web Access app for Android and iOS.
Outlook Groups
Outlook Groups was a mobile application for Windows Phone, Windows 10 Mobile, Android and iOS that could be used with an Office 365 domain Microsoft account, e.g. a work or school account. It was designed to take existing email threads and turn them into a group-style conversation. The app let users create groups, mention their contacts, share Office documents via OneDrive and work on them together, and participate in an email conversation. The app also allowed the finding and joining of other Outlook Groups. It was tested internally at Microsoft and launched September 18, 2015, for Windows Phone 8.1 and Windows 10 Mobile users. After its initial launch on Microsoft's own platforms, the application was launched for Android and iOS on September 23, 2015. An update on September 30, 2015, introduced a deep-linking feature and fixed a bug that blocked the "send" button from working. In March 2016, Microsoft added the ability to attach multiple images and the most recently used document to group messages, as well as the option to delete conversations within the application.
Outlook Groups was retired by Microsoft on May 1, 2018. The functionality was replaced by adding the "Groups node" to the folder list within the Outlook mobile app.
Internet standards compliance
HTML rendering
Outlook 2007 was the first version of Outlook to switch from the Internet Explorer rendering engine to Microsoft Word 2007's. This meant that HTML and Cascading Style Sheets (CSS) features not handled by Word were no longer supported. On the other hand, HTML messages composed in Word render as they appeared to the author. This affects the publishing of newsletters and reports, because they frequently use intricate HTML and CSS to form their layout. For example, forms can no longer be embedded in an Outlook email.
Support of CSS properties and HTML attributes
Outlook for Windows has very limited CSS support compared to various other email clients.
Neither the CSS1 (1996) nor the CSS2 (1998) specification is fully implemented, and many CSS properties can only be used with certain HTML elements to achieve the desired effect. Some HTML attributes help achieve proper rendering of emails in Outlook, but most of these attributes were already deprecated in the HTML 4.0 specification (1997). In order to achieve the best compatibility with Outlook, most HTML emails are created using multiple boxed tables, as the table element and its sub-elements support the width and height properties in Outlook. No improvements have been made towards a more standards-compliant email client since the release of Outlook 2007.
Transport Neutral Encapsulation Format
Outlook and Exchange Server internally handle messages, appointments, and items as objects in a data model which is derived from the old proprietary Microsoft Mail system, the Rich Text Format from Microsoft Word, and the complex OLE general data model. When these programs interface with other protocols such as the various Internet and X.400 protocols, they try to map this internal model onto those protocols in a way that can be reversed if the ultimate recipient is also running Outlook or Exchange. This focus on the possibility that emails and other items will ultimately be converted back to Microsoft Mail format is so extreme that if Outlook/Exchange cannot figure out a way to encode the complete data in the standard format, it simply encodes the entire message/item in a proprietary binary format called Transport Neutral Encapsulation Format (TNEF) and sends this as an attached file (usually named "winmail.dat") alongside an otherwise incomplete rendering of the mail/item. If the recipient is Outlook/Exchange, it can simply discard the incomplete outer message and use the encapsulated data directly, but if the recipient is any other program, the message received will be incomplete, because the data in the TNEF attachment will be of little use without the Microsoft software for which it was created. As a workaround, numerous tools for partially decoding TNEF files exist.
Calendar compatibility
Outlook does not fully support data and syncing specifications for calendaring and contacts, such as iCalendar, CalDAV, SyncML, and vCard 3.0. Outlook 2007 claims to be fully iCalendar compliant; however, it does not support all core objects, such as VTODO or VJOURNAL. Also, Outlook supports vCard 2.1 and does not support multiple contacts in the vCard format as a single file. Outlook has also been criticized for having proprietary "Outlook extensions" to these Internet standards.
.msg format
Outlook (both the web version and recent non-web versions) promotes the use of a proprietary .msg format to save individual emails, instead of the standard .eml format. Messages use the .msg format by default when saved to disk or forwarded as attachments. Compatibility with past or future Outlook versions is neither documented nor guaranteed; the format has seen over 10 versions released since version 1 in 2008. The standard .eml format replicates the format of emails as used for transmission and is therefore compatible with any email client which uses the normal protocols. Standards-compliant email clients, like Mozilla Thunderbird, use additional headers to store software-specific information related, e.g., to the local storage of the email, while keeping the file plain text, so that it can be read in any text editor and searched or indexed like any document by any other software.
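As a quick illustration of that portability, a message saved in the open .eml format can be parsed with nothing beyond Python's standard library. This is a generic sketch; "message.eml" is only a placeholder file name.

```python
from email import policy
from email.parser import BytesParser

# Parse a standard RFC 822-style .eml message using only the standard library.
# "message.eml" is a placeholder path for any mail saved in the open format.
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(msg["From"], msg["Subject"])            # ordinary text headers

# Extract the plain-text body part, if one is present.
body = msg.get_body(preferencelist=("plain",))
if body is not None:
    print(body.get_content())
```

A proprietary .msg file, by contrast, is a binary OLE compound document and cannot be read this way without format-specific tooling.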
Security concerns
As part of its Trustworthy Computing initiative, Microsoft took corrective steps to fix Outlook's reputation in Office Outlook 2003. Among the most publicized security features are that Office Outlook 2003 does not automatically load images in HTML emails or permit opening executable attachments by default, and includes a built-in junk mail filter. Service Pack 2 augmented these features and added an anti-phishing filter.
Outlook add-ins
Outlook add-ins are small additional programs for the Microsoft Outlook application, mainly intended to add new functional capabilities to Outlook and automate various routine operations. The term also refers to programs whose main function is to work on Outlook files, such as synchronization or backup utilities. Outlook add-ins may be developed in Microsoft Visual Studio or third-party tools such as Add-in Express. Outlook add-ins are not supported in Outlook Web App.
From Outlook 97 on, Exchange Client Extensions are supported in Outlook. Outlook 2000 and later support specific COM components called Outlook Add-ins. The exact features supported (such as .NET components) were extended with each subsequent release.
SalesforceIQ Inbox for Outlook
In March 2016, Salesforce announced that its relationship intelligence platform, SalesforceIQ, would be able to seamlessly integrate with Outlook. SalesforceIQ works from inside the Outlook inbox, providing data from CRM, email, and customer social profiles. It also provides recommendations within the inbox on various aspects such as appointment scheduling, contacts, and responses.
Hotmail Connector
Microsoft Outlook Hotmail Connector (formerly Microsoft Office Outlook Connector) is a discontinued and defunct free add-in for Microsoft Outlook 2003, 2007 and 2010, intended to integrate Outlook.com (formerly Hotmail) into Microsoft Outlook. It uses DeltaSync, a proprietary Microsoft communications protocol that was formerly used by Hotmail. In version 12, access to tasks and notes and online synchronization with MSN Calendar was only available to MSN subscribers with paid premium accounts. Version 12.1, released in December 2008 as an optional upgrade, uses Windows Live Calendar instead of the former MSN Calendar. This meant that calendar features became free for all users, except for task synchronization, which became unavailable. In April 2009, version 12.1 became a required upgrade to continue using the service, as part of a migration from MSN Calendar to Windows Live Calendar.
Microsoft Outlook 2013 and its newer versions have intrinsic support for accessing Outlook.com and its calendar over the Exchange ActiveSync (EAS) protocol, while older versions of Microsoft Outlook can read and synchronize Outlook.com emails over the IMAP protocol.
Social Connector
Outlook Social Connector was a free add-in for Microsoft Outlook 2003 and 2007 by Microsoft that allowed integration of social networks such as Facebook, LinkedIn and Windows Live Messenger into Microsoft Outlook. It was first introduced on November 18, 2009. Starting with Microsoft Office 2010, Outlook Social Connector is an integral part of Outlook.
CardDAV and CalDAV Connector
Since Microsoft Outlook does not natively support the CalDAV and CardDAV protocols, various third-party software vendors have developed Outlook add-ins to enable users to synchronize with CalDAV and CardDAV servers. CalConnect maintains a list of software that enables users to synchronize their calendars with CalDAV servers and their contacts with CardDAV servers.
Importing from other email clients
Traditionally, Outlook supported importing messages from Outlook Express and Lotus
Enantiomer
In chemistry, an enantiomer (/ɪˈnænti.əmər, ɛ-, -oʊ-/ ih-NAN-tee-ə-mər), also known as an optical isomer, antipode, or optical antipode, is one of a pair of molecular entities which are mirror images of each other and non-superposable. Enantiomer molecules are like right and left hands: one cannot be superposed onto the other without first being converted to its mirror image. Enantiomerism is solely a relationship of chirality, reflecting the permanent three-dimensional relationships among the atoms of a molecule or other chemical structure: no amount of re-orientation of a molecule as a whole, nor any conformational change, converts one chemical into its enantiomer. Chemical structures with chirality rotate plane-polarized light. A mixture of equal amounts of each enantiomer, a racemic mixture or a racemate, does not rotate light.
Stereoisomers include both enantiomers and diastereomers. Diastereomers, like enantiomers, share the same molecular formula and are also non-superposable onto each other; however, they are not mirror images of each other.
Naming conventions
There are three common naming conventions for specifying one of the two enantiomers (the absolute configuration) of a given chiral molecule: the R/S system is based on the geometry of the molecule; the (+)- and (−)- system (also written using the obsolete equivalents d- and l-) is based on its optical rotation properties; and the D/L system is based on the molecule's relationship to the enantiomers of glyceraldehyde.
The R/S system describes the molecule's geometry with respect to a chiral center. A configuration is assigned to a molecule according to the Cahn–Ingold–Prelog priority rules, in which the group or atom with the largest atomic number receives the highest priority and the group or atom with the smallest atomic number receives the lowest priority.
The (+) or (−) symbol is used to specify a molecule's optical rotation — the direction in which the polarization of light rotates as it passes through a solution containing the molecule. When a molecule is denoted dextrorotatory, it rotates the plane of polarized light clockwise and can also be denoted as (+). When it is denoted levorotatory, it rotates the plane of polarized light counterclockwise and can also be denoted as (−).
The Latin words for left are laevus and sinister, and the word for right is dexter (or rectus in the sense of correct or virtuous). The English word right is a cognate of rectus. This is the origin of the D/L and R/S notations, and of the prefixes levo- and dextro- in common names. The prefix ar-, from the Latin rectus (right), is applied to the right-handed version; es-, from the Latin sinister (left), to the left-handed molecule. Example: ketamine, arketamine, esketamine.
Chirality centers
The asymmetric atom is called a chirality center, a type of stereocenter. A chirality center is also called a chiral center or an asymmetric center. Some sources use the terms stereocenter, stereogenic center, stereogenic atom or stereogen to refer exclusively to a chirality center, while others use the terms more broadly to refer also to centers that result in diastereomers (stereoisomers that are not enantiomers). Compounds that contain exactly one (or any odd number) of asymmetric atoms are always chiral. However, compounds that contain an even number of asymmetric atoms sometimes lack chirality because the centers are arranged in mirror-symmetric pairs; such compounds are known as meso compounds.
For instance, meso-tartaric acid has two asymmetric carbon atoms, but it does not exhibit enantiomerism because there is a mirror-symmetry plane. Conversely, there exist forms of chirality that do not require asymmetric atoms, such as axial, planar, and helical chirality. Even though a chiral molecule lacks reflection (Cs) and rotoreflection (S2n) symmetries, it can have other molecular symmetries, and its symmetry is described by one of the chiral point groups: Cn, Dn, T, O, or I. For example, hydrogen peroxide is chiral and has C2 (two-fold rotational) symmetry. A common chiral case is the point group C1, meaning no symmetries, which is the case for lactic acid.
Examples
A well-known example of a chiral drug is the sedative thalidomide, which was sold in a number of countries around the world from 1957 until 1961. It was withdrawn from the market when it was found to cause birth defects. One enantiomer caused the desirable sedative effects, while the other, unavoidably present in equal quantities, caused birth defects.
The herbicide mecoprop is a racemic mixture, with the (R)-(+)-enantiomer ("Mecoprop-P", "Duplosan KV") possessing the herbicidal activity.
Another example is the antidepressant drugs escitalopram and citalopram. Citalopram is a racemate [a 1:1 mixture of (S)-citalopram and (R)-citalopram]; escitalopram [(S)-citalopram] is a pure enantiomer. The dosages for escitalopram are typically half of those for citalopram. Here, escitalopram is called a chiral switch of citalopram.
Chiral drugs
Enantiopure compounds consist of only one of the two enantiomers. Enantiopurity is of practical importance, since such compositions can have improved therapeutic efficacy. The switch from a racemic drug to an enantiopure drug is called a chiral switch. In many cases, the enantiomers have distinct effects. One case is that of propoxyphene, whose enantiomeric pair is sold separately by Eli Lilly and Company: one of the pair is dextropropoxyphene, an analgesic agent (Darvon), and the other is levopropoxyphene, an effective antitussive (Novrad). The trade names DARVON and NOVRAD are themselves mirror images, reflecting the chemical relationship. In other cases, there may be no clinical benefit to the patient. In some jurisdictions, single-enantiomer drugs are separately patentable from the racemic mixture. It is possible that only one of the enantiomers is active, or it may be that both are active, in which case separating the mixture has no objective benefit but extends the drug's patentability.
Enantioselective preparations
In the absence of an effective enantiomeric environment (precursor, chiral catalyst, or kinetic resolution), separation of a racemic mixture into its enantiomeric components is impossible, although certain racemic mixtures spontaneously crystallize in the form of a racemic conglomerate, in which crystals of the enantiomers are physically segregated and may be separated mechanically. However, most racemates form crystals containing both enantiomers in a 1:1 ratio. In his pioneering work, Louis Pasteur was able to isolate the isomers of sodium ammonium tartrate because the individual enantiomers crystallize separately from solution; equal amounts of the enantiomorphic crystals are produced, but the two kinds of crystals can be separated with tweezers. This behavior is unusual. A less common method is enantiomer self-disproportionation.
The second strategy is asymmetric synthesis: the use of various techniques to prepare the desired compound in high enantiomeric excess. Techniques encompassed include the use of chiral starting materials (chiral pool synthesis), the use of chiral auxiliaries and chiral catalysts, and the application of asymmetric induction. The use of enzymes (biocatalysis) may also produce the desired compound.
A third strategy is enantioconvergent synthesis: the synthesis of one enantiomer from a racemic precursor, utilizing both enantiomers. By making use of a chiral catalyst, both enantiomers of the reactant are converted into a single enantiomer of product.
Enantiomers may not be isolable if there is an accessible pathway for racemization (interconversion between enantiomorphs to yield a racemic mixture) at a given temperature and timescale. For example, amines with three distinct substituents are chiral, but with few exceptions (e.g. substituted N-chloroaziridines), they rapidly undergo "umbrella inversion" at room temperature, leading to racemization. If the racemization is fast enough, the molecule can often be treated as an achiral, averaged structure.
Parity violation
For all intents and purposes, each enantiomer in a pair has the same energy. However, theoretical physics predicts that, due to parity violation of the weak nuclear force (the only force in nature that can "tell left from right"), there is actually a minute difference in energy between enantiomers (on the order of 10⁻¹² eV, or 10⁻¹⁰ kJ/mol, or less) arising from the weak neutral current mechanism. This difference in energy is far smaller than energy changes caused by even small changes in molecular conformation, and far too small to measure by current technology; it is therefore chemically inconsequential.
In the sense used by particle physicists, the "true" enantiomer of a molecule, which has exactly the same mass–energy content as the original molecule, is a mirror image that is also built from antimatter (antiprotons, antineutrons, and positrons). Throughout this article, "enantiomer" is used only in the chemical sense of compounds of ordinary matter that are not superposable on their mirror image.
Quasi-enantiomers
Quasi-enantiomers are molecular species that are not strictly enantiomers but behave as if they were. In quasi-enantiomers, the majority of the molecule is reflected, but an atom or group within the molecule is changed to a similar atom or group. Quasi-enantiomers can also be defined as molecules that have the potential to become enantiomers if an atom or group in the molecule is replaced. An example of quasi-enantiomers would be (S)-bromobutane and (R)-iodobutane: the true enantiomers of (S)-bromobutane and (R)-iodobutane would be (R)-bromobutane and (S)-iodobutane, respectively. Quasi-enantiomers also produce quasi-racemates, which are similar to normal racemates (see racemic mixture) in that they form an equal mixture of quasi-enantiomers. Though they are not considered actual enantiomers, the naming convention for quasi-enantiomers follows the same trend as for enantiomers when assigning (R) and (S) configurations, which are determined on a geometrical basis (see Cahn–Ingold–Prelog priority rules). Quasi-enantiomers have applications in parallel kinetic resolution.
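The R/S assignments described under "Naming conventions" can also be reproduced programmatically. The sketch below uses the open-source RDKit cheminformatics toolkit (an assumed dependency, not mentioned above) to locate the chirality center of (S)-lactic acid and report its CIP label:

```python
from rdkit import Chem

# (S)-lactic acid as isomeric SMILES; the @ tag encodes the
# three-dimensional arrangement at the single chirality center.
mol = Chem.MolFromSmiles("C[C@H](O)C(=O)O")

# Assign CIP (Cahn-Ingold-Prelog) stereo descriptors from the SMILES.
Chem.AssignStereochemistry(mol, cleanIt=True, force=True)

# Expected to report one center with an 'S' label, e.g. [(1, 'S')].
print(Chem.FindMolChiralCenters(mol, includeUnassigned=True))
```

Swapping the @ tag for @@ in the SMILES gives the mirror-image molecule, and the same call should then report the (R) configuration.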
Physical sciences
Stereochemistry
Chemistry
176332
https://en.wikipedia.org/wiki/Clownfish
Clownfish
Clownfish or anemonefish are fishes from the subfamily Amphiprioninae in the family Pomacentridae. Thirty species of clownfish are recognized: one in the genus Premnas, and the remainder in the genus Amphiprion. In the wild, they all form symbiotic mutualisms with sea anemones. Depending on the species, anemonefish are overall yellow, orange, or a reddish or blackish color, and many show white bars or patches. The largest can reach a length of , while the smallest barely achieve . Distribution and habitat Anemonefish are endemic to the warmer waters of the Indian Ocean, including the Red Sea, and the Pacific Ocean, including the Great Barrier Reef, Southeast Asia, Japan, and the Indo-Malaysian region. While most species have restricted distributions, others are widespread. Anemonefish typically live at the bottom of shallow seas in sheltered reefs or in shallow lagoons. No anemonefish are found in the Atlantic. Diet Anemonefish are omnivorous and can feed on undigested food from their host anemones, and the fecal matter from the anemonefish provides nutrients to the sea anemone. Anemonefish primarily feed on small zooplankton from the water column, such as copepods and tunicate larvae; a small portion of their diet comes from algae, with the exception of Amphiprion perideraion, which feeds primarily on algae. Symbiosis and mutualism Anemonefish and sea anemones have a symbiotic, mutualistic relationship, each providing many benefits to the other. The individual species are generally highly host specific. The sea anemone protects the anemonefish from predators, as well as providing food through the scraps left from the anemone's meals and occasional dead anemone tentacles, and functions as a safe nest site. In return, the anemonefish defends the anemone from its predators and parasites. The anemone also picks up nutrients from the anemonefish's excrement. The nitrogen excreted from anemonefish increases the number of algae incorporated into the tissue of their hosts, which aids the anemone in tissue growth and regeneration. The activity of the anemonefish results in greater water circulation around the sea anemone, and it has been suggested that their bright coloring might lure small fish to the anemone, which then catches them. Studies on anemonefish have found that they alter the flow of water around sea anemone tentacles by certain behaviors and movements such as "wedging" and "switching". Aeration of the host anemone tentacles benefits the metabolism of both partners, mainly by increasing anemone body size and both anemonefish and anemone respiration. Bleaching of the host anemone can occur when warm temperatures cause a reduction in algal symbionts within the anemone. Bleaching of the host can cause a short-term increase in the metabolic rate of resident anemonefish, probably as a result of acute stress. Over time, however, there appears to be a down-regulation of metabolism and a reduced growth rate for fish associated with bleached anemones. These effects may stem from reduced food availability (e.g. anemone waste products, symbiotic algae) for the anemonefish. Several theories are given about how they can survive the sea anemone venom: The mucus coating of the fish may be based on sugars rather than proteins. This would mean that anemones fail to recognize the fish as a potential food source and do not fire their nematocysts, or sting organelles.
The coevolution of certain species of anemonefish with specific anemone host species may have allowed the fish to evolve an immunity to the nematocysts and toxins of their hosts. Amphiprion percula may develop resistance to the toxin from Heteractis magnifica, but it is not totally protected, since it was shown experimentally to die when its skin, devoid of mucus, was exposed to the nematocysts of its host. Anemonefish are the best-known example of fish that are able to live among the venomous sea anemone tentacles, but several others occur, including juvenile threespot dascyllus, certain cardinalfish (such as Banggai cardinalfish), incognito (or anemone) goby, and juvenile painted greenling. Reproduction In a group of anemonefish, a strict dominance hierarchy exists. The largest and most aggressive female is found at the top. Only two anemonefish, a male and a female, in a group reproduce – through external fertilization. Anemonefish are protandrous sequential hermaphrodites, meaning they develop into males first, and when they mature, they become females. If the female anemonefish is removed from the group, such as by death, one of the largest and most dominant males becomes a female. The remaining males move up a rank in the hierarchy. Rank in the hierarchy is thus determined by size and by order of joining the group, not by sex. Anemonefish lay eggs on any flat surface close to their host anemones. In the wild, anemonefish spawn around the time of the full moon. Depending on the species, they can lay hundreds or thousands of eggs. The male parent guards the eggs until they hatch about 6–10 days later, typically two hours after dusk. Parental investment Anemonefish colonies usually consist of the reproductive male and female and a few male juveniles, which help tend the colony. Although multiple males cohabit an environment with a single female, polygamy does not occur and only the adult pair exhibits reproductive behavior. However, if the female dies, the social hierarchy shifts, with the breeding male exhibiting protandrous sex reversal to become the breeding female. The largest juvenile then becomes the new breeding male after a period of rapid growth. The existence of protandry in anemonefish may rest on nonbreeders modulating their phenotype in a way that causes breeders to tolerate them. This strategy prevents conflict by reducing competition between males for one female. For example, by purposefully modifying their growth rate to remain small and submissive, the juveniles in a colony present no threat to the fitness of the adult male, thereby protecting themselves from being evicted by the dominant fish. The reproductive cycle of anemonefish is often correlated with the lunar cycle. Rates of spawning for anemonefish peak around the first and third quarters of the moon. The timing of this spawn means that the eggs hatch around the full moon or new moon periods. One explanation for this lunar clock is that spring tides produce the highest tides during full or new moons. Nocturnal hatching during high tide may reduce predation by allowing for a greater capacity for escape. Namely, the stronger currents and greater water volume during high tide protect the hatchlings by effectively sweeping them to safety. Before spawning, anemonefish exhibit increased rates of anemone and substrate biting, which help prepare and clean the nest for the spawn. The parents often clear an oval-shaped nest site of varying diameter for the clutch.
Fecundity, or reproductive rate, of the females usually ranges from 600 to 1,500 eggs, depending on the size of the female. In contrast to most animal species, the female only occasionally takes responsibility for the eggs, with males expending most of the time and effort. Male anemonefish care for their eggs by fanning and guarding them for 6 to 10 days until they hatch. In general, eggs develop more rapidly in a clutch when males fan properly, and fanning represents a crucial mechanism for successfully developing eggs. This suggests that males can control the success of hatching an egg clutch by investing different amounts of time and energy toward the eggs. For example, a male could choose to fan less in times of scarcity or fan more in times of abundance. Furthermore, males display increased alertness when guarding more valuable broods, or eggs in which paternity is guaranteed. Females, though, display generally less preference for parental behavior than males. All these suggest that males have increased parental investment towards eggs compared to females. After hatching, clownfish continue to develop in both body size and fins. If maintained at suitable temperatures, clownfish undergo proper development of their fins, which develop in the order pectoral < caudal < dorsal = anal < pelvic. The early larval stage is crucial to ensure a healthy progression of growth. Taxonomy Historically, anemonefish have been identified by morphological features and color pattern in the field, while in a laboratory, other features such as scalation of the head, tooth shape, and body proportions are used. These features have been used to group species into six complexes: percula, tomato, skunk, clarkii, saddleback, and maroon. Each of the fish in these complexes has a similar appearance. Genetic analysis has shown that these complexes are not monophyletic groups, particularly the 11 species in the A. clarkii group, where only A. clarkii and A. tricinctus are in the same clade, with A. allardi, A. bicinctus, A. chagosensis, A. chrysogaster, A. fuscocaudatus, A. latifasciatus, and A. omanensis being in an Indian clade, A. chrysopterus having a monospecific lineage, and A. akindynos in the Australian clade with A. mccullochi. Other significant differences are that A. latezonatus also has a monospecific lineage, and A. nigripes is in the Indian clade rather than with A. akallopisos, the skunk anemonefish. A. latezonatus is more closely related to A. percula and Premnas biaculeatus than to the saddleback fish with which it was previously grouped. Obligate mutualism was thought to be the key innovation that allowed anemonefish to radiate rapidly, with rapid and convergent morphological changes correlated with the ecological niches offered by the host anemones. The complexity of mitochondrial DNA structure shown by genetic analysis of the Australian clade suggested evolutionary connectivity among samples of A. akindynos and A. mccullochi that the authors theorize was the result of historical hybridization and introgression in the evolutionary past. Individuals of both species were detected in each of the two evolutionary groups, thus the species lacked reciprocal monophyly. No shared haplotypes were found between species.
Species Morphological diversity by complex In the aquarium Anemonefish make up approximately 43% of the global marine ornamental trade, and approximately 25% of the global trade comes from fish bred in captivity, while the majority is captured from the wild, accounting for decreased densities in exploited areas. Public aquaria and captive-breeding programs are essential to sustain their trade as marine ornamentals, and this has recently become economically feasible. It is one of a handful of marine ornamentals whose complete life cycle has been closed in captivity. Members of some anemonefish species, such as the maroon clownfish, become aggressive in captivity; others, like the false percula clownfish, can be kept successfully with other individuals of the same species. When a sea anemone is not available in an aquarium, the anemonefish may settle in some varieties of soft corals, or large polyp stony corals. Once an anemone or coral has been adopted, the anemonefish will defend it. Anemonefish, however, are not obligately tied to hosts, and can survive alone in captivity. Clownfish sold from captivity make up a small share (10%) of the total trade of these fishes. Designer clownfish, selectively bred variants of A. ocellaris, are much costlier, and demand for wild-caught specimens of the species has disrupted its coral reefs; the fish's attractive color and patterning have made it a prime target in wild trading. In popular culture In Disney Pixar's 2003 film Finding Nemo and its 2016 sequel Finding Dory, main characters Nemo, his father Marlin, and his mother Coral are clownfish of the species A. ocellaris. The popularity of anemonefish for aquaria increased following the film's release; it is the first film associated with an increase in the numbers of those captured in the wild.
Biology and health sciences
Acanthomorpha
Animals
176334
https://en.wikipedia.org/wiki/Pomacanthidae
Pomacanthidae
Marine angelfish are perciform fish of the family Pomacanthidae. They are found on shallow reefs in the tropical Atlantic, Indian, and mostly western Pacific Oceans. The family contains seven genera and about 86 species. They should not be confused with the freshwater angelfish, tropical cichlids of the Amazon Basin. Description With their bright colors and deep, laterally compressed bodies, marine angelfishes are some of the more conspicuous residents of the reef. They most closely resemble the butterflyfishes, a related family of similarly showy reef fish. Marine angelfish are distinguished from butterflyfish by the presence of strong preopercle spines (part of the gill covers) in the former. This feature also explains the family name Pomacanthidae; from the Greek πομα, poma meaning "cover" and ακάνθα, akantha meaning "thorn". Many species of marine angelfishes have streamer-like extensions of the soft dorsal and anal fins. The fish have small mouths, relatively large pectoral fins, and rounded to lunate tail fins. The largest species, the gray angelfish, Pomacanthus arcuatus, may reach a length of ; at the other extreme, members of the genus Centropyge do not exceed . A length of is typical for the rest of the family. The smaller species are popular amongst aquarists, whereas the largest species are occasionally sought as a food fish; however, ciguatera poisoning has been reported as a result of eating marine angelfish. Angelfish vary in color and are very hardy fish. When kept in aquariums, they can easily adapt to pH and hardness changes in water and can handle conditions that are not considered to be perfect. They are usually a long-living species and are easy to care for. They were very expensive in the aquarium trade when first discovered, but have become more popular and therefore less expensive. The queen angelfish grows to be . Despite its neon blue and yellow scales and iridescent purple and orange markings, it is surprisingly inconspicuous, hiding very well, and is very shy. As juveniles, some species are different colors than when they reach adulthood. For example, the blue angelfish is a vibrant, electric blue color with black and white stripes or spots as a juvenile; on reaching adulthood, it turns a grayish color with yellow and blue fins and dark spots on its body. Behavior The larger species are also quite bold and seemingly fearless; they are known to approach divers. While the majority adapts easily to captive life, some are specialist feeders which are difficult to maintain. Feeding habits can be strictly defined by genus, with Genicanthus species feeding on zooplankton and Centropyge preferring filamentous algae. Other species focus on sessile benthic invertebrates; sponges, tunicates, bryozoans, and hydroids are staples. On Caribbean coral reefs, angelfishes primarily eat sponges, and have an important role in preventing the overgrowth of reef-building corals by eating faster-growing sponge species. Most marine angelfishes restrict themselves to the shallows of the reef, seldom venturing deeper than . The recently described Centropyge abei is known to inhabit depths of . They are diurnal animals, hiding amongst the nooks and crevices of the reef by night. Some species are solitary in nature and form highly territorial mated pairs; others form harems with a single male dominant over several females. As juveniles, some species may eke out a living as cleaner fish. Reproduction Common to many species is a dramatic shift in coloration associated with maturity.
For example, young male ornate angelfish, Genicanthus bellus, have broad, black bands and are indistinguishable from females; as they mature, bright orange bands develop on the flanks and back. Thought to correspond to social rank, these color shifts are not necessarily confined to males; all marine angelfish species are known to be protogynous hermaphrodites. This means that if the dominant male of a harem is removed, a female will turn into a functional male. As pelagic spawners, marine angelfishes release many tiny buoyant eggs into the water which then become part of the plankton. The eggs float freely with the currents until hatching, with a high number falling victim to planktonic feeders. In aquariums, two fish usually will breed within their community but will harass other fish in the tank, so it is best they have their own tank with plenty of room. Characteristics The two-spined angelfish (Centropyge bispinosa), also known as the "coral beauty" or "dusky angelfish", has a vibrant blue or darkish purple body with a reddish-yellow underside that is usually covered in stripes. These stripes vary from purple to red and orange, and may even appear as spots. It is in high demand in the tropical aquarium trade, but is at low risk on the IUCN Red List of Threatened Species. The coral beauty is native to the Indo-Pacific Ocean, usually found in shallow reefy waters or sometimes in deep waters. They feed on algae and hide in coral reefs and lagoons in the wild. The two-spined angelfish usually reaches up to 3 inches and has a rounded caudal fin. In aquarium life, they nibble on corals and rocks and are considered to be starter fish. They have a high metabolism, so feeding only needs to occur every other day. The blue angelfish (Pomacanthus semicirculatus) is a vibrant, electric blue color with black and white stripes, and sometimes spots, as a juvenile. It turns a grayish color with dark spots and sometimes yellow and blue accents as an adult. It is found among stony and soft corals, and juveniles are more likely to be found in vibrantly colored corals. The dorsal and pelvic fins help with speed, and the fish tends to hide from predators in dark areas; its vibrant electric blue color may allow it to pose as toxic to predators. There are 13 different species in the genus Pomacanthus. They rarely travel in schools, can grow up to 40 cm, and can live up to 25 or more years. Taxonomy The Pomacanthidae is frequently placed within the large order Perciformes, but taxonomists have also placed the family within the order Acanthuriformes, alongside the Chaetodontidae and Acanthuridae, among others. Other authorities have resolved the family as incertae sedis. There are 88 species in eight genera. The more speciose genera are, generally speaking, widely distributed; however, some species, especially of the genus Centropyge, are range-restricted or endemic to specific islands or small island groups.
Biology and health sciences
Acanthomorpha
Animals
176354
https://en.wikipedia.org/wiki/Mineral%20wool
Mineral wool
Mineral wool is any fibrous material formed by spinning or drawing molten mineral or rock materials such as slag and ceramics. Applications of mineral wool include thermal insulation (as both structural insulation and pipe insulation), filtration, soundproofing, and hydroponic growth medium. Naming Mineral wool is also known as mineral cotton, mineral fiber, man-made mineral fiber (MMMF), and man-made vitreous fiber (MMVF). Specific mineral wool products are stone wool and slag wool. In Europe, the term also includes glass wool, which, together with ceramic fiber, is an entirely artificial fiber that can be made into different shapes and is spiky to the touch. History Slag wool was first made in 1840 in Wales by Edward Parry, "but no effort appears to have been made to confine the wool after production; consequently it floated about the works with the slightest breeze, and became so injurious to the men that the process had to be abandoned". A method of making mineral wool was patented in the United States in 1870 by John Player and first produced commercially in 1871 at Georgsmarienhütte in Osnabrück, Germany. The process involved blowing a strong stream of air across a falling flow of liquid iron slag, producing fibers similar to Pele's hair, the fine strands of volcanic slag from Kilauea created when strong winds blow the slag apart during an eruption. According to a mineral wool manufacturer, the first mineral wool intended for high-temperature applications was invented in the United States in 1942 but was not commercially viable until approximately 1953. More forms of mineral wool became available in the 1970s and 1980s. High-temperature mineral wool High-temperature mineral wool is a type of mineral wool created for use as high-temperature insulation and generally defined as being resistant to temperatures above 1,000 °C. This type of insulation is usually used in industrial furnaces and foundries. Because high-temperature mineral wool is costly to produce and has limited availability, it is almost exclusively used in high-temperature industrial applications and processes. Definitions Classification temperature is the temperature at which a certain amount of linear contraction (usually two to four percent) is not exceeded after a 24-hour heat treatment in an electrically heated laboratory oven in a neutral atmosphere. Depending on the type of product, the value may not exceed two percent for boards and shaped products and four percent for mats and papers. The classification temperature is specified in 50 °C steps starting at 850 °C and up to 1600 °C. The classification temperature does not mean that the product can be used continuously at this temperature. In the field, the continuous application temperature of amorphous high-temperature mineral wool (AES and ASW) is typically 100 °C to 150 °C below the classification temperature. Products made of polycrystalline wool can generally be used up to the classification temperature. Types There are several types of high-temperature mineral wool made from different types of minerals. The mineral chosen results in different material properties and classification temperatures. Alkaline earth silicate wool (AES wool) AES wool consists of amorphous glass fibers that are produced by melting a combination of calcium oxide (CaO), magnesium oxide (MgO), and silicon dioxide (SiO2). Products made from AES wool are generally used in equipment that continuously operates and in domestic appliances.
Some formulations of AES wool are bio-soluble, meaning they dissolve in bodily fluids within a few weeks and are quickly cleared from the lungs. Alumino silicate wool (ASW) Alumino silicate wool, also known as refractory ceramic fiber (RCF), consists of amorphous fibers produced by melting a combination of aluminum oxide (Al2O3) and silicon dioxide (SiO2), usually in a 50:50 weight ratio (see also VDI 3469 Parts 1 and 5, as well as TRGS 521). Products made of alumino silicate wool are generally used at application temperatures of greater than 900 °C, for equipment that operates intermittently, and in critical application conditions (see Technical Rules TRGS 619). Polycrystalline wool (PCW) Polycrystalline wool consists of fibers that contain aluminum oxide (Al2O3) at greater than 70 percent of the total materials and is produced by the sol–gel method from aqueous spinning solutions. The water-soluble green fibers obtained as a precursor are crystallized by means of heat treatment. Polycrystalline wool is generally used at application temperatures greater than 1300 °C and in critical chemical and physical application conditions. Kaowool Kaowool is a type of high-temperature mineral wool made from the mineral kaolin. It was one of the first types of high-temperature mineral wool invented and has been used into the 21st century. It can withstand temperatures close to . Manufacture Stone wool is a furnace product of molten rock at a temperature of about 1600 °C, through which a stream of air or steam is blown. More advanced production techniques are based on spinning molten rock in high-speed spinning heads, somewhat like the process used to produce cotton candy. The final product is a mass of fine, intertwined fibers with a typical diameter of 2 to 6 micrometers. Mineral wool may contain a binder, often a terpolymer, and an oil to reduce dusting. Use Though the individual fibers conduct heat very well, when pressed into rolls and sheets, their ability to partition air makes them excellent insulators and sound absorbers. Though not immune to the effects of a sufficiently hot fire, the fire resistance of fiberglass, stone wool, and ceramic fibers makes them common building materials when passive fire protection is required, being used as spray fireproofing, in stud cavities in drywall assemblies and as packing materials in firestops. Other uses are in resin bonded panels, as filler in compounds for gaskets, in brake pads, in plastics in the automotive industry, as a filtering medium, and as a growth medium in hydroponics. Mineral fibers are produced in the same way, without binder. The fiber as such is used as a raw material for its reinforcing purposes in various applications, such as friction materials, gaskets, plastics, and coatings. Hydroponics Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH. Conditioning methods include pre-soaking the mineral wool in a nutrient solution adjusted to pH 5.5 until it stops bubbling. High-temperature mineral wool High-temperature mineral wool is used primarily for insulation and lining of industrial furnaces and foundries, to improve efficiency and safety. It is also used to prevent the spread of fire.
The use of high-temperature mineral wool enables a more lightweight construction of industrial furnaces and other technical equipment than alternatives such as fire bricks, owing to its high heat resistance per unit weight, but it has the disadvantage of being more expensive than those alternatives. Safety of material The International Agency for Research on Cancer (IARC) reviewed the carcinogenicity of man-made mineral fibers in October 2002. The IARC Monographs working group concluded that only the more biopersistent materials remain classified by IARC as "possibly carcinogenic to humans" (Group 2B). These include refractory ceramic fibers, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials. In contrast, the more commonly used vitreous fiber wools produced since 2000, including insulation glass wool, stone wool, and slag wool, are considered "not classifiable as to carcinogenicity in humans" (Group 3). Highly biosoluble fibers that do not damage human cells are now produced. These newer materials have been tested for carcinogenicity and most are found to be noncarcinogenic. IARC elected not to make an overall evaluation of the newly developed fibers designed to be less biopersistent, such as the alkaline earth silicate or high-alumina, low-silica wools. This decision was made in part because no human data were available, although such fibers that have been tested appear to have low carcinogenic potential in experimental animals, and because the working group had difficulty in categorizing these fibers into meaningful groups based on chemical composition. The European Regulation (CE) No 1272/2008 on classification, labelling and packaging of substances and mixtures, updated by Regulation (CE) No 790/2009, does not classify mineral wool fibers as a dangerous substance if they fulfil the criteria defined in its Note Q. The European Certification Board for mineral wool products, EUCEB, certifies mineral wool products made of fibers fulfilling Note Q, ensuring that they have low biopersistence and are quickly cleared from the lungs. The certification is based on independent experts' advice and regular control of the chemical composition. Due to the mechanical effect of fibers, mineral wool products may cause temporary skin itching. To diminish this and to avoid unnecessary exposure to mineral wool dust, information on good practices is available on the packaging of mineral wool products with pictograms or sentences. Safe Use Instruction Sheets, similar to safety data sheets, are also available from each producer. People can be exposed to mineral wool fibers in the workplace by breathing them in, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for mineral wool fiber exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m3 total exposure and 3 fibers per cm3 over an 8-hour workday. Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) is a European Union regulation of 18 December 2006. REACH addresses the production and use of chemical substances, and their potential impacts on both human health and the environment.
A Substance Information Exchange Forum (SIEF) has been set up for several types of mineral wool. AES, ASW and PCW were registered before the first deadline of 1 December 2010 and can, therefore, be used on the European market. ASW/RCF is classified as a category 1B carcinogen. AES is exempted from carcinogen classification based on short-term in vitro study results. PCW wools are not classified; self-classification led to the conclusion that PCW are not hazardous. On 13 January 2010, some of the aluminosilicate refractory ceramic fibers and zirconia aluminosilicate refractory ceramic fibers were included in the candidate list of Substances of Very High Concern. In response to concerns raised about the definition and the dossier, two additional dossiers were posted on the ECHA website for consultation, resulting in two additional entries on the candidate list. This situation, with four entries for one substance or group of substances, is contrary to the intended REACH procedure. Aside from this situation, concerns raised during the two consultation periods remain valid. Regardless of the concerns raised, the inclusion of a substance in the candidate list immediately triggers the following legal obligations for manufacturers, importers and suppliers of articles containing that substance in a concentration above 0.1% (w/w): notification to ECHA (REACH Regulation Art. 7); provision of a safety data sheet (REACH Regulation Art. 31.1); and the duty to communicate safe use information or respond to customer requests (REACH Regulation Art. 33). Crystalline silica Amorphous high-temperature mineral wool (AES and ASW) is produced from a molten glass stream which is aerosolized by a jet of high-pressure air or by letting the stream impinge onto spinning wheels. The droplets are drawn into fibers; the mass of both fibers and remaining droplets cools very rapidly, so that no crystalline phases can form. When amorphous high-temperature mineral wool is installed and used in high-temperature applications such as industrial furnaces, at least one face may be exposed to conditions causing the fibers to partially devitrify. Depending on the chemical composition of the glassy fiber and the time and temperature to which the materials are exposed, different stable crystalline phases may form. In after-use high-temperature mineral wool, crystalline silica crystals are embedded in a matrix composed of other crystals and glasses. Experimental results on the biological activity of after-use high-temperature mineral wool have not demonstrated any hazardous activity that could be related to any form of silica they may contain. Substitutes for mineral wool in construction Due to mineral wool's non-degradability and potential health risks, substitute materials are being developed: hemp, flax, wool, wood, and cork insulations are the most prominent. Biodegradability and a more benign health profile are the main advantages of those materials. Their drawbacks when compared to mineral wool are their substantially lower mold resistance, higher combustibility, and slightly higher thermal conductivity (hemp insulation: 0.040 W/(m·K); mineral wool insulation: 0.030–0.045 W/(m·K)).
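To put the closing thermal-conductivity figures in perspective, steady-state conduction through a flat insulation layer follows Fourier's law, q = k·ΔT/d, with q in W/m². The sketch below compares the hemp and mineral wool values quoted above; the layer thickness, the temperature difference, and the representative mineral wool conductivity of 0.035 W/(m·K) are illustrative assumptions, not values from the text:

```python
def heat_flux(k: float, thickness_m: float, delta_t_k: float) -> float:
    """Steady-state heat flux (W/m^2) through a flat layer by Fourier's law:
    k in W/(m*K), thickness in m, temperature difference in K."""
    return k * delta_t_k / thickness_m

# Illustrative case: a 0.10 m layer with 20 K across it.
for name, k in [("hemp", 0.040), ("mineral wool", 0.035)]:
    print(f"{name:12s} {heat_flux(k, 0.10, 20.0):.1f} W/m^2")
# hemp          8.0 W/m^2
# mineral wool  7.0 W/m^2
```

The "slightly higher" conductivity of the bio-based materials thus translates into a modestly higher heat loss, or equivalently a somewhat thicker layer for the same insulating performance.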
Technology
Materials
null
176399
https://en.wikipedia.org/wiki/Zeeman%20effect
Zeeman effect
The Zeeman effect is the splitting of a spectral line into several components in the presence of a static magnetic field. It is caused by the interaction of the magnetic field with the magnetic moment of the atomic electron associated with its orbital motion and spin; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman, who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules. Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas. Discovery In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland's highest resolving diffraction gratings. Zeeman had read James Clerk Maxwell's article in Encyclopædia Britannica describing Michael Faraday's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not. When illuminated by a slit-shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10 kilogauss magnet around the flame, he observed a slight broadening of the sodium images. When Zeeman switched to cadmium as the source, he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz's then-new electron theory. In retrospect, we now know that the magnetic effects on sodium require quantum mechanical treatment. Zeeman and Lorentz were awarded the 1902 Nobel Prize; in his acceptance speech, Zeeman explained his apparatus and showed slides of the spectrographic images. Nomenclature Historically, one distinguishes between the normal and an anomalous Zeeman effect (discovered by Thomas Preston in Dublin, Ireland). The anomalous effect appears on transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague as to why he looked unhappy, he replied, "How can one look happy when he is thinking about the anomalous Zeeman effect?" At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect. In the modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect". Another rarely used term is the inverse Zeeman effect, referring to the Zeeman effect in an absorption spectral line. A similar effect, splitting of the nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect.
Theoretical presentation The total Hamiltonian of an atom in a magnetic field is $H = H_0 + V_M$, where $H_0$ is the unperturbed Hamiltonian of the atom, and $V_M$ is the perturbation due to the magnetic field: $V_M = -\vec{\mu} \cdot \vec{B}$, where $\vec{\mu}$ is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore, $\vec{\mu} \approx -\frac{g_J \mu_B}{\hbar} \vec{J}$, where $\mu_B$ is the Bohr magneton, $\vec{J}$ is the total electronic angular momentum, and $g_J$ is the Landé g-factor. A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum $\vec{L}$ and the spin angular momentum $\vec{S}$, with each multiplied by the appropriate gyromagnetic ratio: $\vec{\mu} = -\frac{\mu_B}{\hbar}(g_l \vec{L} + g_s \vec{S})$, where $g_l = 1$ and $g_s \approx 2.0023$ (the latter is called the anomalous gyromagnetic ratio; the deviation of the value from 2 is due to the effects of quantum electrodynamics). In the case of LS coupling, one can sum over all electrons in the atom: $g_J \vec{J} = \langle g_l \vec{L} + g_s \vec{S} \rangle$, where $\vec{L}$ and $\vec{S}$ are the total orbital momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum. If the interaction term $V_M$ is small (less than the fine structure), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, $V_M$ exceeds the LS coupling significantly (but is still small compared to $H_0$). In ultra-strong magnetic fields, the magnetic-field interaction may exceed $H_0$, in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases which are more complex than these limit cases. Weak field (Zeeman effect) If the spin–orbit interaction dominates over the effect of the external magnetic field, $\vec{L}$ and $\vec{S}$ are not separately conserved; only the total angular momentum $\vec{J} = \vec{L} + \vec{S}$ is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector $\vec{J}$. The (time-)"averaged" spin vector is then the projection of the spin onto the direction of $\vec{J}$: $\vec{S}_{avg} = \frac{(\vec{S} \cdot \vec{J})}{J^2} \vec{J}$, and for the (time-)"averaged" orbital vector: $\vec{L}_{avg} = \frac{(\vec{L} \cdot \vec{J})}{J^2} \vec{J}$. Thus, $\langle V_M \rangle = \frac{\mu_B}{\hbar} \left( g_l \frac{\vec{L} \cdot \vec{J}}{J^2} + g_s \frac{\vec{S} \cdot \vec{J}}{J^2} \right) \vec{J} \cdot \vec{B}$. Using $\vec{L} = \vec{J} - \vec{S}$ and squaring both sides, we get $\vec{S} \cdot \vec{J} = \frac{1}{2}(J^2 + S^2 - L^2) = \frac{\hbar^2}{2}[j(j+1) - l(l+1) + s(s+1)]$, and using $\vec{S} = \vec{J} - \vec{L}$ and squaring both sides, we get $\vec{L} \cdot \vec{J} = \frac{1}{2}(J^2 - S^2 + L^2) = \frac{\hbar^2}{2}[j(j+1) + l(l+1) - s(s+1)]$. Combining everything and taking $J_z = \hbar m_j$, we obtain the magnetic potential energy of the atom in the applied external magnetic field, $V_M = \mu_B B m_j \left[ 1 + (g_s - 1)\frac{j(j+1) - l(l+1) + s(s+1)}{2j(j+1)} \right]$, where the quantity in square brackets is the Landé g-factor $g_J$ of the atom (with $g_l = 1$ and $g_s \approx 2$) and $\hbar m_j$ is the z-component of the total angular momentum. For a single electron above filled shells, $s = \frac{1}{2}$ and $j = l \pm \frac{1}{2}$, and the Landé g-factor can be simplified into: $g_J = 1 \pm \frac{g_s - 1}{2l + 1}$. Taking $V_M$ to be the perturbation, the Zeeman correction to the energy is $E_Z^{(1)} = \langle V_M \rangle = g_J \mu_B B m_j$. Example: Lyman-alpha transition in hydrogen The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions $2P_{1/2} \to 1S_{1/2}$ and $2P_{3/2} \to 1S_{1/2}$. In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each ($m_j = \pm 1/2$) and the 2P3/2 level into 4 states ($m_j = \pm 3/2, \pm 1/2$). The Landé g-factors for the three levels are: $g_J = 2$ for $1S_{1/2}$ ($j = 1/2$, $l = 0$), $g_J = 2/3$ for $2P_{1/2}$ ($j = 1/2$, $l = 1$), and $g_J = 4/3$ for $2P_{3/2}$ ($j = 3/2$, $l = 1$). Note in particular that the size of the energy splitting is different for the different orbitals, because the $g_J$ values are different. Fine-structure splitting occurs even in the absence of a magnetic field, as it is due to spin–orbit coupling; the Zeeman effect produces additional splitting on top of it in the presence of a magnetic field. Strong field (Paschen–Back effect) The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field.
This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital ($\vec{L}$) and spin ($\vec{S}$) angular momenta. This effect is the strong-field limit of the Zeeman effect. When $s = 0$, the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back. When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume that the orbital and spin angular momenta are decoupled, so that $m_l$ and $m_s$ are good quantum numbers. This allows the expectation values of $L_z$ and $S_z$ to be easily evaluated for a state $|n, l, m_l, m_s\rangle$. The energies are simply $E_z = E_0 + \mu_B B (m_l + g_s m_s)$. The above may be read as implying that the LS-coupling is completely broken by the external field. However, $m_l$ and $m_s$ are still "good" quantum numbers. Together with the selection rules for an electric dipole transition, i.e., $\Delta s = 0$, $\Delta m_s = 0$, $\Delta l = \pm 1$, $\Delta m_l = 0, \pm 1$, this allows one to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the $\Delta m_l = 0, \pm 1$ selection rule. The splitting $\Delta E = \mu_B B \Delta m_l$ is independent of the unperturbed energies and electronic configurations of the levels being considered. More precisely, if $s \neq 0$, each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). First-order perturbation theory with these corrections yields a closed-form formula for the hydrogen atom in the Paschen–Back limit. Example: Lyman-alpha transition in hydrogen In this example, the fine-structure corrections are ignored. Intermediate field for j = 1/2 In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is $H = hA\, \vec{I} \cdot \vec{J} - \vec{\mu}_J \cdot \vec{B} - \vec{\mu}_I \cdot \vec{B}$, where $A$ is the hyperfine coupling constant (in Hz) at zero applied magnetic field, $\mu_B$ and $\mu_N$ are the Bohr magneton and nuclear magneton respectively, $\vec{J}$ and $\vec{I}$ are the electron and nuclear angular momentum operators and $g_J$ is the Landé g-factor: $g_J = g_L \frac{J(J+1) + L(L+1) - S(S+1)}{2J(J+1)} + g_S \frac{J(J+1) - L(L+1) + S(S+1)}{2J(J+1)}$. In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the $|F, m_F\rangle$ basis. In the high-field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of $|I, J, m_I, m_J\rangle$, or just $|m_I, m_J\rangle$, since $I$ and $J$ will be constant within a given level. To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the $|F, m_F\rangle$ and $|m_I, m_J\rangle$ basis states. For $J = 1/2$, the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi). Notably, the electric quadrupole interaction is zero for $L = 0$ ($J = 1/2$), so this formula is fairly accurate. We now utilize quantum mechanical ladder operators, which are defined for a general angular momentum operator $L$ as $L_\pm \equiv L_x \pm iL_y$. These ladder operators have the property $L_\pm |L, m_L\rangle = \sqrt{(L \mp m_L)(L \pm m_L + 1)}\, \hbar\, |L, m_L \pm 1\rangle$ as long as $m_L$ lies in the range $-L \le m_L \le L$ (otherwise, they return zero). Using the ladder operators $J_\pm$ and $I_\pm$, and the identity $\vec{I} \cdot \vec{J} = I_z J_z + \frac{1}{2}(I_+ J_- + I_- J_+)$, we can rewrite the Hamiltonian in terms of $I_z$, $J_z$, $I_\pm$, and $J_\pm$. We can now see that at all times, the total angular momentum projection $m_F = m_J + m_I$ will be conserved. This is because both $I_z$ and $J_z$ leave states with definite $m_J$ and $m_I$ unchanged, while $I_+ J_-$ and $I_- J_+$ either increase $m_I$ and decrease $m_J$ or vice versa, so the sum is always unaffected. Furthermore, since $J = 1/2$, there are only two possible values of $m_J$, which are $\pm 1/2$. Therefore, for every value of $m_F$ there are only two possible states, and we can define them as the basis: $|\pm\rangle \equiv |m_J = \pm 1/2,\, m_I = m_F \mp 1/2\rangle$. This pair of states is a two-level quantum mechanical system.
Now we can determine the matrix elements of the Hamiltonian in this basis. Solving for the eigenvalues of the resulting 2×2 matrix – as can be done by hand (see two-level quantum mechanical system), or more easily, with a computer algebra system – we arrive at the energy shifts: $\Delta E_{F = I \pm 1/2} = -\frac{h \Delta W}{2(2I+1)} + g_I \mu_N m_F B \pm \frac{h \Delta W}{2} \sqrt{1 + \frac{4 m_F x}{2I + 1} + x^2}$, where $\Delta W = A\left(I + \frac{1}{2}\right)$ is the splitting (in units of Hz) between two hyperfine sublevels in the absence of magnetic field $B$, and $x = \frac{(g_J \mu_B - g_I \mu_N) B}{h \Delta W}$ is referred to as the 'field strength parameter' (Note: for $m_F = \pm(I + 1/2)$ the expression under the square root is an exact square, and so the last term should be replaced by $\pm \frac{h \Delta W}{2}(1 \pm x)$). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an $s$ ($J = 1/2$) level. Note that the index $F$ in $\Delta E_{F = I \pm 1/2}$ should be considered not as the total angular momentum of the atom but as the asymptotic total angular momentum. It is equal to the total angular momentum only if $B = 0$; otherwise, eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different $F$ but equal $m_F$ (the only exceptions are $|F = I + 1/2, m_F = \pm F\rangle$). Applications Astrophysics George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, and to analyse the magnetic field geometries in other stars. Laser cooling The Zeeman effect is utilized in many laser cooling applications such as a magneto-optical trap and the Zeeman slower. Spintronics Zeeman-energy mediated coupling of spin and orbital motions is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance. Metrology Old high-precision frequency standards, i.e. hyperfine structure transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to said source, in a process known as degaussing. The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy. Biology A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. Nuclear spectroscopy The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy. Other Electron spin resonance spectroscopy is based on the Zeeman effect. Demonstrations The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening. With the magnet off, the sodium vapor source will block the lamp light; when the magnet is turned on, the lamp light will be visible through the vapor. The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet. Alternatively, salt (sodium chloride) on a ceramic stick can be placed in the flame of a Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission.
When salt is added to the Bunsen burner, it dissociates to give sodium and chlorine atoms. The sodium atoms are excited by photons from the sodium vapor lamp, with electrons excited from the 3s to 3p states, absorbing light in the process. The sodium vapor lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom; atoms of another element, such as chlorine, would not absorb this light and would cast no shadow. When a magnetic field is applied, the Zeeman effect splits the sodium spectral line into several components, which means the energy difference between the 3s and 3p atomic orbitals changes. Since the sodium vapor lamp no longer delivers precisely the right frequency, the light is not absorbed and passes through, causing the shadow to dim. As the magnetic field strength is increased, the shift in the spectral lines increases and more lamp light is transmitted.
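As a numerical illustration of the weak-field splitting described above, each sublevel shifts by ΔE = g_J·μ_B·B·m_J. A minimal sketch for the hydrogen Lyman-alpha levels treated earlier follows; the 1 T field strength is an illustrative choice:

```python
MU_B = 5.7883818060e-5  # Bohr magneton, in eV/T

def lande_g(j: float, l: float, s: float = 0.5, g_s: float = 2.0) -> float:
    """Landé g-factor: 1 + (g_s - 1) * [j(j+1) - l(l+1) + s(s+1)] / [2 j(j+1)]."""
    return 1.0 + (g_s - 1.0) * (j*(j+1) - l*(l+1) + s*(s+1)) / (2.0 * j*(j+1))

def zeeman_shift_ev(g_j: float, m_j: float, b_tesla: float) -> float:
    """Weak-field Zeeman shift of a single sublevel, in eV."""
    return g_j * MU_B * b_tesla * m_j

B = 1.0  # tesla (illustrative)
for label, j, l in [("1S1/2", 0.5, 0.0), ("2P1/2", 0.5, 1.0), ("2P3/2", 1.5, 1.0)]:
    g = lande_g(j, l)
    m_values = [m - j for m in range(int(2 * j) + 1)]  # -j, ..., +j
    shifts = [zeeman_shift_ev(g, m, B) for m in m_values]
    print(f"{label}: g_J = {g:.3f}, shifts (eV) = {shifts}")
# Reproduces g_J = 2, 2/3, and 4/3 for the three levels, as stated above;
# at 1 T the shifts are of order 1e-4 eV, tiny compared with the ~10 eV
# Lyman-alpha photon energy.
```

The sketch also makes the scale of the effect clear: even in a strong laboratory field, the Zeeman shifts are several orders of magnitude smaller than the transition energies themselves.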
Physical sciences
Atomic physics
Physics
176426
https://en.wikipedia.org/wiki/Terrier
Terrier
Terrier (from Latin terra, 'earth') is a type of dog originally bred to hunt vermin. A terrier is a dog of any one of many breeds or landraces of the terrier type, which are typically small, wiry, game, and fearless. Terriers are conventionally divided into five groups, which differ in shape and size. History Most terrier breeds were refined from older purpose-bred dogs. The gameness of the early hunting terriers was exploited by using them in sporting contests. Initially, terriers competed in events such as clearing a pit of rats; the dog that was fastest in killing all the rats won. In the 18th century, some terriers were crossed with hounds to improve their hunting, and some with fighting dog breeds to "intensify tenacity and increase courage". Some of the crosses with fighting dogs, bull and terrier crosses, were used in the blood sport of dog-fighting. Modern pet breeds such as the Miniature Bull Terrier are listed by the Fédération Cynologique Internationale (FCI) under Bull type terriers. Today, most terriers are kept as companion dogs and family pets. They are generally loyal and affectionate to their owners. Terrier types and groups In 18th-century Britain, only two types of terriers were recognized: long-legged and short-legged. Today, terriers are often informally categorized by size or by function. Hunting types are still used to find, track, or trail quarry, especially underground, and sometimes to bolt the quarry. Modern examples include the Jack Russell Terrier, the Jagdterrier, the Rat Terrier, and the Patterdale Terrier. There are also the short-legged terriers such as the Cairn Terrier, the Scottish Terrier, and the West Highland White Terrier, which were also used to kill small vermin. The original hunting terriers include the Fell Terrier (developed in northern England to assist in the killing of foxes) and the Hunt Terrier (developed in southern England to locate, kill or bolt foxes during a traditional mounted fox hunt). The various combinations of bulldog and terrier that were used for bull-baiting and dog-fighting in the late 19th century were later refined into separate breeds that combined both terrier and bulldog qualities. Except for the Boston Terrier, they are generally included in kennel clubs' Terrier Group. Breeders have bred modern bull-type terrier breeds, such as the Bull Terrier and Staffordshire Bull Terrier, into suitable family dogs and show terriers. Toy terriers have been bred from larger terriers and are shown in the Toy or Companion group. Included among these breeds are the English Toy Terrier and the Yorkshire Terrier. While small, they retain true terrier character and are not submissive "lap dogs". Other descendants of the bull and terrier types, such as the Asian Gull Terrier, are among the dogs still raised for dog-fighting. Appearance Terriers range greatly in appearance from very small, light-bodied, smooth-coated dogs such as the English Toy Terrier (Black and Tan), which weighs as little as , to the very large rough-coated Airedale Terriers, which can be up to or more. In 2004, the United Kennel Club recognized a new hairless breed of terrier derived from the Rat Terrier, called the American Hairless Terrier. Kennel club classification When competing in conformation shows, most kennel clubs, including the Fédération Cynologique Internationale, group pedigree terrier breeds together in their own terrier group; the Fédération Cynologique Internationale places terriers in Group 3.
Biology and health sciences
Dogs
null
176478
https://en.wikipedia.org/wiki/Riemann%20sum
Riemann sum
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after the nineteenth-century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule. It can also be applied for approximating the length of curves and other approximations. The sum is calculated by partitioning the region into shapes (rectangles, trapezoids, parabolas, or cubics—sometimes infinitesimally small) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution. Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral. Definition Let $f: [a, b] \to \mathbb{R}$ be a function defined on a closed interval $[a, b]$ of the real numbers, and let $P = (x_0, x_1, \dots, x_n)$ be a partition of $[a, b]$, that is $a = x_0 < x_1 < x_2 < \cdots < x_n = b$. A Riemann sum $S$ of $f$ over $[a, b]$ with partition $P$ is defined as $S = \sum_{i=1}^{n} f(x_i^*)\, \Delta x_i$, where $\Delta x_i = x_i - x_{i-1}$ and $x_i^* \in [x_{i-1}, x_i]$. One might produce different Riemann sums depending on which $x_i^*$'s are chosen. In the end this will not matter: if the function is Riemann integrable, all such sums converge to the same value as the maximum width of the summands approaches zero. Types of Riemann sums Specific choices of $x_i^*$ give different types of Riemann sums: If $x_i^* = x_{i-1}$ for all i, the method is the left rule and gives a left Riemann sum. If $x_i^* = x_i$ for all i, the method is the right rule and gives a right Riemann sum. If $x_i^* = (x_{i-1} + x_i)/2$ for all i, the method is the midpoint rule and gives a middle Riemann sum. If $f(x_i^*) = \sup f([x_{i-1}, x_i])$ (that is, the supremum of $f$ over $[x_{i-1}, x_i]$), the method is the upper rule and gives an upper Riemann sum or upper Darboux sum. If $f(x_i^*) = \inf f([x_{i-1}, x_i])$ (that is, the infimum of f over $[x_{i-1}, x_i]$), the method is the lower rule and gives a lower Riemann sum or lower Darboux sum. All these Riemann summation methods are among the most basic ways to accomplish numerical integration. Loosely speaking, a function is Riemann integrable if all Riemann sums converge as the partition "gets finer and finer". While not derived as a Riemann sum, taking the average of the left and right Riemann sums is the trapezoidal rule and gives a trapezoidal sum. It is one of the simplest of a very general way of approximating integrals using weighted averages. This is followed in complexity by Simpson's rule and Newton–Cotes formulas. Any Riemann sum on a given partition (that is, for any choice of $x_i^*$ between $x_{i-1}$ and $x_i$) is contained between the lower and upper Darboux sums. This forms the basis of the Darboux integral, which is ultimately equivalent to the Riemann integral. Riemann summation methods The four Riemann summation methods are usually best approached with subintervals of equal size. The interval $[a, b]$ is therefore divided into $n$ subintervals, each of length $\Delta x = \frac{b - a}{n}$. The points in the partition will then be $a,\ a + \Delta x,\ a + 2\Delta x,\ \dots,\ a + (n-1)\Delta x,\ b$. Left rule For the left rule, the function is approximated by its values at the left endpoints of the subintervals. This gives multiple rectangles with base $\Delta x$ and height $f(a + i\Delta x)$, for $i = 0, 1, \dots, n-1$.
Doing this for $i = 0, 1, \dots, n-1$, and summing the resulting areas gives $\Delta x \left[ f(a) + f(a + \Delta x) + \cdots + f(b - \Delta x) \right]$. The left Riemann sum amounts to an overestimation if f is monotonically decreasing on this interval, and an underestimation if it is monotonically increasing. The error of this formula will be at most $\frac{M_1 (b-a)^2}{2n}$, where $M_1$ is the maximum value of the absolute value of $f'(x)$ over the interval. Right rule For the right rule, the function is approximated by its values at the right endpoints of the subintervals. This gives multiple rectangles with base $\Delta x$ and height $f(a + i\Delta x)$, for $i = 1, 2, \dots, n$. Doing this for $i = 1, \dots, n$, and summing the resulting areas gives $\Delta x \left[ f(a + \Delta x) + f(a + 2\Delta x) + \cdots + f(b) \right]$. The right Riemann sum amounts to an underestimation if f is monotonically decreasing, and an overestimation if it is monotonically increasing. The error of this formula will be at most $\frac{M_1 (b-a)^2}{2n}$, where $M_1$ is the maximum value of the absolute value of $f'(x)$ over the interval. Midpoint rule For the midpoint rule, the function is approximated by its values at the midpoints of the subintervals. This gives $f(a + \frac{\Delta x}{2})$ for the first subinterval, $f(a + \frac{3\Delta x}{2})$ for the next one, and so on until $f(b - \frac{\Delta x}{2})$. Summing the resulting areas gives $\Delta x \left[ f(a + \tfrac{\Delta x}{2}) + f(a + \tfrac{3\Delta x}{2}) + \cdots + f(b - \tfrac{\Delta x}{2}) \right]$. The error of this formula will be at most $\frac{M_2 (b-a)^3}{24n^2}$, where $M_2$ is the maximum value of the absolute value of $f''(x)$ over the interval. This error is half of that of the trapezoidal sum; as such the middle Riemann sum is the most accurate approach to the Riemann sum. Generalized midpoint rule A generalized midpoint rule formula, also known as enhanced midpoint integration, expands the integral over each subinterval as a series in the even derivatives of the integrand evaluated at the subinterval midpoints. This formula is particularly efficient for numerical integration when the integrand is a highly oscillating function. Trapezoidal rule For the trapezoidal rule, the function is approximated by the average of its values at the left and right endpoints of the subintervals. Using the area formula $\tfrac{1}{2} h (b_1 + b_2)$ for a trapezium with parallel sides $b_1$ and $b_2$, and height $h$, and summing the resulting areas gives $\frac{\Delta x}{2} \left[ f(a) + 2f(a + \Delta x) + 2f(a + 2\Delta x) + \cdots + 2f(b - \Delta x) + f(b) \right]$. The error of this formula will be at most $\frac{M_2 (b-a)^3}{12n^2}$, where $M_2$ is the maximum value of the absolute value of $f''(x)$. The approximation obtained with the trapezoidal sum for a function is the same as the average of the left hand and right hand sums of that function. Connection with integration For a one-dimensional Riemann sum over domain $[a, b]$, as the maximum size of a subinterval shrinks to zero (that is, the limit of the norm of the subintervals goes to zero), some functions will have all Riemann sums converge to the same value. This limiting value, if it exists, is defined as the definite Riemann integral of the function over the domain: $\int_a^b f(x)\,dx = \lim_{\|\Delta x\| \to 0} \sum_{i=1}^{n} f(x_i^*)\, \Delta x_i$. For a finite-sized domain, if the maximum size of a subinterval shrinks to zero, this implies the number of subintervals goes to infinity. For finite partitions, Riemann sums are always approximations to the limiting value, and this approximation gets better as the partition gets finer. Increasing the number of subintervals (while lowering the maximum subinterval size) gives a better and better approximation to the "area" under the curve; for a smooth function, the left, right, and midpoint Riemann sums all converge to the same value as the number of subintervals goes to infinity. Example Taking an example, the area under the curve $y = x^2$ over [0, 2] can be procedurally computed using Riemann's method. The interval [0, 2] is firstly divided into $n$ subintervals, each of which is given a width of $\tfrac{2}{n}$; these are the widths of the Riemann rectangles (hereafter "boxes").
Example

Taking an example, the area under the curve $y = x^2$ over $[0, 2]$ can be procedurally computed using Riemann's method. The interval $[0, 2]$ is firstly divided into $n$ subintervals, each of which is given a width of $\tfrac{2}{n}$; these are the widths of the Riemann rectangles (hereafter "boxes"). Because the right Riemann sum is to be used, the sequence of $x$ coordinates for the boxes will be $x_1, x_2, \ldots, x_n$. Therefore, the sequence of the heights of the boxes will be $x_1^2, x_2^2, \ldots, x_n^2$. It is an important fact that $x_i = \tfrac{2i}{n}$, and $x_n = 2$. The area of each box will be $\tfrac{2}{n} \times x_i^2$, and therefore the $n$th right Riemann sum will be:
$$S = \frac{2}{n} \cdot \left(\frac{2}{n}\right)^2 + \cdots + \frac{2}{n} \cdot \left(\frac{2i}{n}\right)^2 + \cdots + \frac{2}{n} \cdot \left(\frac{2n}{n}\right)^2 = \frac{8}{n^3} \sum_{i=1}^{n} i^2 = \frac{8}{n^3} \cdot \frac{n(n+1)(2n+1)}{6} = \frac{8}{3} + \frac{4}{n} + \frac{4}{3n^2}.$$
If the limit is viewed as $n \to \infty$, it can be concluded that the approximation approaches the actual value of the area under the curve as the number of boxes increases. Hence:
$$\lim_{n \to \infty} S = \lim_{n \to \infty} \left( \frac{8}{3} + \frac{4}{n} + \frac{4}{3n^2} \right) = \frac{8}{3}.$$
This method agrees with the definite integral as calculated in more mechanical ways:
$$\int_0^2 x^2 \, dx = \left[ \frac{x^3}{3} \right]_0^2 = \frac{8}{3}.$$
Because the function is continuous and monotonically increasing over the interval, a right Riemann sum overestimates the integral by the largest amount (while a left Riemann sum would underestimate the integral by the largest amount). This fact, which is intuitively clear from the diagrams, shows how the nature of the function determines how accurately the integral is estimated. While simple, right and left Riemann sums are often less accurate than more advanced techniques of estimating an integral such as the trapezoidal rule or Simpson's rule. The example function has an easy-to-find anti-derivative, so estimating the integral by Riemann sums is mostly an academic exercise; however, it must be remembered that not all functions have anti-derivatives, so estimating their integrals by summation is practically important.

Higher dimensions

The basic idea behind a Riemann sum is to "break up" the domain via a partition into pieces, multiply the "size" of each piece by some value the function takes on that piece, and sum all these products. This can be generalized to allow Riemann sums for functions over domains of more than one dimension. While intuitively, the process of partitioning the domain is easy to grasp, the technical details of how the domain may be partitioned get much more complicated than the one-dimensional case and involve aspects of the geometrical shape of the domain.

Two dimensions

In two dimensions, the domain $A$ may be divided into a number of two-dimensional cells $A_i$ such that $A = \bigcup_i A_i$. Each cell then can be interpreted as having an "area" denoted by $\Delta A_i$. The two-dimensional Riemann sum is
$$S = \sum_{i=1}^{n} f(x_i^*, y_i^*)\, \Delta A_i,$$
where $(x_i^*, y_i^*) \in A_i$.

Three dimensions

In three dimensions, the domain $D$ is partitioned into a number of three-dimensional cells $D_i$ such that $D = \bigcup_i D_i$. Each cell then can be interpreted as having a "volume" denoted by $\Delta V_i$. The three-dimensional Riemann sum is
$$S = \sum_{i=1}^{n} f(x_i^*, y_i^*, z_i^*)\, \Delta V_i,$$
where $(x_i^*, y_i^*, z_i^*) \in D_i$.

Arbitrary number of dimensions

Higher-dimensional Riemann sums follow a similar pattern. An $n$-dimensional Riemann sum is
$$S = \sum_i f(P_i^*)\, \Delta V_i,$$
where $P_i^* \in V_i$, that is, it is a point in the $n$-dimensional cell $V_i$ with $n$-dimensional volume $\Delta V_i$.

Generalization

In high generality, Riemann sums can be written
$$S = \sum_i f(P_i)\, \mu(V_i),$$
where $P_i$ stands for any arbitrary point contained in the set $V_i$ and $\mu$ is a measure on the underlying set. Roughly speaking, a measure is a function that gives a "size" of a set, in this case the size of the set $V_i$; in one dimension this can often be interpreted as a length, in two dimensions as an area, in three dimensions as a volume, and so on.
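A sketch of the two-dimensional case, under the simplifying assumptions of a rectangular domain, a uniform grid of cells, and midpoint sample points (the function name and parameters are illustrative, not from the article):

```python
# Two-dimensional Riemann sum S = sum of f(x_i*, y_i*) * dA_i, where the
# domain [a, b] x [c, d] is cut into nx * ny rectangular cells of area dA.

def riemann_sum_2d(f, a, b, c, d, nx, ny):
    dx = (b - a) / nx
    dy = (d - c) / ny
    dA = dx * dy                     # area of each rectangular cell
    total = 0.0
    for i in range(nx):
        for j in range(ny):
            x = a + (i + 0.5) * dx   # midpoint sample point in cell (i, j)
            y = c + (j + 0.5) * dy
            total += f(x, y) * dA
    return total

# Integral of x^2 + y^2 over the unit square; the exact value is 2/3.
print(riemann_sum_2d(lambda x, y: x**2 + y**2, 0.0, 1.0, 0.0, 1.0, 200, 200))
```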
https://en.wikipedia.org/wiki/M1911%20pistol
M1911 pistol
The Colt M1911 (also known as 1911, Colt 1911, Colt .45, or Colt Government in the case of Colt-produced models) is a single-action, recoil-operated, semi-automatic pistol chambered for the .45 ACP cartridge. The pistol's formal U.S. military designation as of 1940 was Automatic Pistol, Caliber .45, M1911 for the original model adopted in March 1911, and Automatic Pistol, Caliber .45, M1911A1 for the improved M1911A1 model which entered service in 1926. The designation changed to Pistol, Caliber .45, Automatic, M1911A1 in the Vietnam War era. Designed by John Browning, the M1911 is the best-known of his designs to use the short recoil principle in its basic design. The pistol was widely copied, and this operating system rose to become the preeminent type of the 20th century and of nearly all modern centerfire pistols. It is popular with civilian shooters in competitive events such as the International Defensive Pistol Association and International Practical Shooting Confederation. The U.S. military procured around 2.7 million M1911 and M1911A1 pistols during its service life. The pistol served as the standard-issue sidearm for the United States Armed Forces from 1911 to 1985. It was widely used in World War I, World War II, the Korean War, and the Vietnam War. The M1911A1 was replaced by the adoption of the 9mm Beretta M9 pistol as the standard U.S. military sidearm in 1985. However, the U.S. Army did not officially replace the M1911A1 with the Beretta M9 until October 1986; production and procurement delays kept the 1911A1 in service with some units past 1989. The 1911A1 has never been completely phased out. Modernized derivative variants of the M1911 are still in use by some units of the U.S. Army Special Forces, U.S. Marine Corps and the U.S. Navy. History Early history and adaptations The M1911 pistol originated in the late 1890s as the result of a search for a suitable self-loading (or semi-automatic) pistol to replace the variety of revolvers in service at the time. The United States was adopting new firearms at a phenomenal rate; several new pistols and two all-new service rifles (M1892/96/98 Krag and M1895 Navy Lee), as well as a series of revolvers by Colt and Smith & Wesson for the Army and Navy, were adopted just in that decade. The next decade would see a similar pace, including the adoption of several more revolvers and an intensive search for a self-loading pistol that would culminate in the official adoption of the M1911 after the turn of the decade. Hiram S. Maxim had designed a self-loading rifle in the 1880s, but was preoccupied with machine guns. Nevertheless, the application of his principle of using cartridge energy to reload led to several self-loading pistols in 1896. The designs caught the attention of various militaries, each of which began programs to find a suitable one for their forces. In the U.S., such a program would lead to a formal test at the turn of the 20th century. During the end of 1899 and into 1900, a test of self-loading pistols was conducted, including entries from Mauser (C96 "Broomhandle"), Mannlicher (Mannlicher M1894), and Colt (Colt M1900). This led to a purchase of 1,000 DWM Luger pistols, chambered in 7.65mm Luger. During field trials, these ran into some problems, especially with stopping power. Other governments had made similar complaints. Consequently, DWM produced an enlarged version of the round, the 9×19mm Parabellum with fifty weapons chambered for it tested by the U.S. Army in 1903. 
American units fighting Tausūg guerrillas in the Moro Rebellion in Sulu during the Philippine–American War using the then-standard Colt M1892 revolver, .38 Long Colt, found it to be unsuitable for the rigors of jungle warfare, particularly in terms of stopping power, as the Moros had high battle morale and often used drugs to inhibit the sensation of pain. The U.S. Army briefly reverted to using the M1873 single-action revolver in .45 Colt caliber, which had been standard during the late 19th century; the heavier bullet was found to be more effective against charging tribesmen. The problems prompted the Chief of Ordnance, General William Crozier, to authorize further testing for a new service pistol. Following the 1904 Thompson-LaGarde pistol round effectiveness tests, Colonel John T. Thompson stated that the new pistol "should not be of less than .45 caliber" and would preferably be semi-automatic in operation. This led to the 1906 trials of pistols from six firearms manufacturing companies (namely, Colt, Bergmann, Deutsche Waffen- und Munitionsfabriken (DWM), Savage Arms, Knoble, Webley, and White-Merrill). Of the six designs submitted, three were eliminated early on, leaving only the Savage, Colt, and DWM designs chambered in the new .45 ACP (Automatic Colt Pistol) cartridge. These three still had issues that needed correction, but only Colt and Savage resubmitted their designs. There is some debate over the reasons for DWM's withdrawal—some say they felt there was bias and that the DWM design was being used primarily as a "whipping boy" for the Savage and Colt pistols, though this does not fit well with the earlier 1900 purchase of the DWM design over the Colt and Steyr entries. In any case, a series of field tests from 1907 to 1911 were held to decide between the Savage and Colt designs. Both designs were improved between each round of testing, leading up to the final test before adoption. Among the areas of success for the Colt was a test at the end of 1910 attended by its designer, John Browning. Six thousand rounds were fired from a single pistol over the course of two days. When the gun began to grow hot, it was simply immersed in water to cool it. The Colt gun passed with no reported malfunctions, while the Savage designs had 37. Service history Following its success in trials, the Colt pistol was formally adopted by the Army on March 29, 1911, when it was designated "Model of 1911", later changed in 1917 to "Model 1911", and then "M1911" in the mid-1920s. The Director of Civilian Marksmanship began manufacture of M1911 pistols for members of the National Rifle Association of America in August 1912. Approximately 100 pistols stamped "N.R.A." below the serial number were manufactured at Springfield Armory and by Colt. The M1911 was formally adopted by the U.S. Navy and Marine Corps in 1913. The .45 ACP "Model of 1911 U.S. Army" was used by both U.S. Army Cavalry troops and infantry soldiers during the United States' Punitive Expedition into Mexico against Pancho Villa in 1916. World War I By the beginning of 1917, a total of 68,533 M1911 pistols had been delivered to U.S. armed forces by Colt's Patent Firearms Manufacturing Company and the U.S. government's Springfield Armory. However, the need to greatly expand U.S. military forces and the resultant surge in demand for the firearm in World War I saw the expansion of manufacture to other contractors besides Colt and Springfield Armory, including Remington-UMC and North American Arms Co. of Quebec. 
Several other manufacturers were awarded contracts to produce the M1911, including the National Cash Register Company, the Savage Arms Company, the Caron Brothers Manufacturing of Montreal, the Burroughs Adding Machine Co., Winchester Repeating Arms Company, and the Lanston Monotype Company, but the signing of the Armistice resulted in the cancellation of the contracts before any pistols had been produced. Interwar changes Battlefield experience in World War I led to some more small external changes, completed in 1924. The new version received a modified type classification, M1911A1, in 1926 with a stipulation that M1911A1s should have serial numbers higher than 700,000 with lower serial numbers designated M1911. The M1911A1 changes to the original design consisted of a shorter trigger, cutouts in the frame behind the trigger, an arched mainspring housing, a longer grip safety spur (to prevent hammer bite), a wider front sight, a shortened hammer spur, and simplified grip checkering (eliminating the "Double Diamond" reliefs). These changes were subtle and largely intended to make the pistol easier to shoot for those with smaller hands. No significant internal changes were made, and parts remained interchangeable between the M1911 and the M1911A1. Working for the U.S. Ordnance Office, David Marshall Williams developed a .22 training version of the M1911 using a floating chamber to give the .22 long rifle rimfire recoil similar to the .45 version. As the Colt Service Ace, this was available both as a pistol and as a conversion kit for .45 M1911 pistols. Before World War II, 500 M1911s were produced under license by the Norwegian arms factory Kongsberg Vaapenfabrikk, as Automatisk Pistol Model 1912. Then, production moved to a modified version designated Pistol Model 1914 and unofficially known as "Kongsberg Colt". The Pistol M/1914 is noted for its unusual extended slide stop which was specified by Norwegian ordnance authorities. Twenty-two thousand were produced between 1914 and 1940 but production continued after the German occupation of Norway in 1940 and 10,000 were produced for the German armed forces as Pistole 657 (n). Between 1927 and 1966, 102,000 M1911 pistols were produced as Sistema Colt Modelo 1927 in Argentina, first by the Dirección General de Fabricaciones Militares. A similar gun, the Ballester–Molina, was also designed and produced. The M1911 and M1911A1 pistols were also ordered from Colt or produced domestically in modified form by several other nations, including Brazil (M1937 contract pistol), Mexico (M1911 Mexican contract pistol and the Obregón pistol), and Spain (private manufacturers Star and Llama). World War II World War II and the years leading up to it created a great demand. During the war, about 1.9 million units were procured by the U.S. Government for all forces, production being undertaken by several manufacturers, including Remington Rand (900,000 produced), Colt (400,000), Ithaca Gun Company (400,000), Union Switch & Signal (50,000), and Singer (500). New M1911A1 pistols were given a parkerized metal finish instead of bluing, and the wood grip panels were replaced with panels made of brown plastic. The M1911A1 was a favored small arm of both U.S. and allied military personnel during the war. In particular, the pistol was prized by some British commando units and Britain's highly covert Special Operations Executive, as well as South African Commonwealth forces. The M1911A1 pistol was produced in very large quantities during the war. 
At the end of hostilities the government cancelled all contracts for further production and made use of existing stocks of weapons to equip personnel. Many of these weapons had seen service use, and had to be rebuilt and refinished prior to being issued. From the mid-1920s to the mid-1950s thousands of 1911s and 1911A1s were refurbished at U.S. arsenals and service depots. These rebuilds consisted of anything from minor inspections to major overhauls. Pistols that were refurbished at government arsenals will usually be marked on the frame/receiver with the arsenal's initials, such as RIA for Rock Island Armory or SA for Springfield Armory. Among collectors today, the Singer-produced pistols in particular are highly prized, commanding high prices even in poor condition. General Officer's Model From 1943 to 1945 a fine-grade russet-leather M1916 pistol belt set was issued to some generals in the U.S. Army. It was composed of a leather belt, leather enclosed flap-holster with braided leather tie-down leg strap, leather two-pocket magazine pouch, and a rope lanyard. The metal buckle and fittings were in gilded brass. The buckle had the seal of the U.S. on the center (or "male") piece and a laurel wreath on the circular (or "female") piece. The pistol was a standard-issue M1911A1 that came with a cleaning kit and three magazines. From 1972 to 1981 a modified M1911A1 called the RIA M15 General Officer's Model was issued to general officers in the U.S. Army and U.S. Air Force. From 1982 to 1986 the regular M1911A1 was issued. Both came with a black leather belt, open holster with retaining strap, and a two-pocket magazine pouch. The metal buckle and fittings were similar to the M1916 General Officer's Model except it came in gold metal for the Army and in silver metal for the Air Force. Post–World War II usage After World War II, the M1911 continued to be a mainstay of the U.S. Armed Forces in the Korean War and the Vietnam War, where it was used extensively by tunnel rats. It was used during Desert Storm in specialized U.S. Army units and U.S. Navy Mobile Construction Battalions (Seabees), and has seen service in both Operation Iraqi Freedom and Operation Enduring Freedom, with U.S. Army Special Forces Groups and Marine Corps Force Reconnaissance Companies. However, by the late 1970s, the M1911A1 was acknowledged to be showing its age. Under political pressure from Congress to standardize on a single modern pistol design, the U.S. Air Force ran a Joint Service Small Arms Program to select a new semi-automatic pistol using the NATO-standard 9mm Parabellum pistol cartridge. After trials, the Beretta 92S-1 was chosen. The Army contested this result and subsequently ran its own competition in 1981, the XM9 trials, eventually leading to the official adoption of the Beretta 92F on January 14, 1985. By the late 1980s production was ramping up despite a controversial XM9 retrial and a separate XM10 reconfirmation that was boycotted by some entrants of the original trials, cracks in the frames of some pre-M9 Beretta-produced pistols, and despite a problem with slide separation using higher-than-specified-pressure rounds that resulted in injuries to some U.S. Navy special operations operatives. This last issue resulted in an updated model that includes additional protection for the user, the 92FS, and updates to the ammunition used. During the Gulf War of 1990–1991, M1911A1s were deployed with reserve component U.S. Army units sent to participate in Operations Desert Shield and Desert Storm. 
By the early 1990s, most M1911A1s had been replaced by the Beretta M9, though a limited number remain in use by special units. The U.S. Marine Corps (USMC) in particular were noted for continuing the use of M1911 pistols for selected personnel in MEU(SOC) and reconnaissance units (though the USMC also purchased over 50,000 M9 pistols.) For its part, the United States Special Operations Command (USSOCOM) issued a requirement for a .45 ACP pistol in the Offensive Handgun Weapon System (OHWS) trials. This resulted in the Heckler & Koch OHWS becoming the MK23 Mod 0 Offensive Handgun Weapon System (itself being heavily based on the 1911's basic field strip), beating the Colt OHWS, a much-modified M1911. Dissatisfaction with the stopping power of the 9 mm Parabellum cartridge used in the Beretta M9 has actually promoted re-adoption of pistols based on the .45 ACP cartridge such as the M1911 design, along with other pistols, among USSOCOM units in recent years, though the M9 has been predominant both within SOCOM and in the U.S. military in general. Both U.S. Army Special Forces Units and SFOD-D continue to use modernized M1911s, such as the M45 MEU(SOC) and a modified version of the Colt Rail Gun (a 1911 model with an integrated picatinny rail on the underside of the frame) designated as the M45A1 CQBP (Close Quarters Battle Pistol). Design Browning's basic M1911 design has seen very little change throughout its production life. The basic principle of the pistol is recoil operation. As the expanding combustion gases force the bullet down the barrel, they give reverse momentum to the slide and barrel which are locked together during this portion of the firing cycle. After the bullet has left the barrel, the slide and barrel continue rearward a short distance. At this point, a link pivots the rear of the barrel down, out of locking recesses in the slide, and the barrel is stopped by making contact with the lower barrel lugs against the frame. As the slide continues rearward, a claw extractor pulls the spent casing from the firing chamber and an ejector strikes the rear of the case, pivoting the casing out and away from the pistol through the ejection port. The slide stops its rearward motion then, and is propelled forward again by the recoil spring to strip a fresh cartridge from the magazine and feed it into the firing chamber. At the forward end of its travel, the slide locks into the barrel and is ready to fire again. However, if the fired round was the last in the magazine, the slide will lock in the rearward position, which notifies the shooter to reload by ejecting the empty magazine and inserting a loaded magazine, and facilitates (by being rearwards) reloading the chamber, which is accomplished by either pulling the slide back slightly and releasing, or by pushing down on the slide stop, which releases the slide to move forward under spring pressure, strip a fresh cartridge from the magazine, and feed it into the firing chamber. Other than grip screws there are no fasteners of any type in the 1911 design. The main components of the gun are held in place by the force of the main spring. The pistol can be "field stripped" by partially retracting the slide, removing the slide stop, and removing the barrel bushing. Full disassembly (and subsequent reassembly) of the pistol to its component parts can be accomplished using several manually removed components as tools to complete the disassembly. The military mandated a grip safety and a manual safety. 
A grip safety, sear disconnect, slide stop, half cock position, and manual safety (located on the left rear of the frame) are on all standard M1911A1s. Several companies have developed a firing pin block safety. Colt's 80 series uses a trigger operated one and several other manufacturers, including Kimber and Smith & Wesson, use a Swartz firing-pin safety, which is operated by the grip safety. Language cautioning against pulling the trigger with the second finger was included in the initial M1911 manual and later manuals up to the 1940s. The same basic design has been offered commercially and has been used by other militaries. In addition to the .45 ACP (Automatic Colt Pistol), models chambered for .38 Super, 9×19mm Parabellum, 7.65mm Parabellum, 9mm Steyr, .400 Corbon, and other cartridges were offered. The M1911 was developed from earlier Colt semi-automatic designs, firing rounds such as .38 ACP. The design beat out many other contenders during the government's selection period, during the late 1890s and early 1900s, up to the pistol's adoption. The M1911 officially replaced a range of revolvers and pistols across branches of the U.S. armed forces, though a number of other designs have seen use in certain niches. Despite being challenged by newer and lighter weight pistol designs in .45 caliber, such as the Glock 21, the SIG Sauer P220, the Springfield XD and the Heckler & Koch USP, the M1911 shows no signs of decreasing popularity and continues to be widely present in various competitive matches such as those of USPSA, IDPA, IPSC, and Bullseye. Versions M45 MEU(SOC) In 1986, the USMC Precision Weapon Section (PWS) at Marine Corps Base Quantico began customizing M1911A1s for reconnaissance units. The units served in a new Marine Corps program Marine expeditionary unit (special operations capable) (MEU(SOC)). The pistol was designated the M45 MEU(SOC). Hand-selected Colt M1911A1 frames were gutted, deburred and were then assembled with after-market grip safeties, ambidextrous thumb safeties, triggers, improved high-visibility sights, accurized barrels, grips, and improved Wilson magazines. These hand-made pistols were tuned to specifications and preferences of end users. In the late 1980s, the Marines laid out a series of specifications and improvements to make Browning's design ready for 21st-century combat, many of which have been included in MEU(SOC) pistol designs, but design and supply time was limited. Discovering that the Los Angeles Police Department was pleased with their special Kimber M1911 pistols, a single source request was issued to Kimber for just such a pistol despite the imminent release of their TLE/RLII models. Kimber shortly began producing a limited number of what would be later termed the Interim Close Quarters Battle pistol (ICQB). Maintaining the simple recoil assembly, 5-inch barrel (though using a stainless steel match grade barrel), and internal extractor, the ICQB is not much different from Browning's original design. M45A1 In July 2012, the USMC awarded Colt a $22.5 million contract for up to 12,000 M45A1 pistols with an initial order of 4036 pistols to replace the M45 MEU(SOC) pistol. The Marine Corps issued the M45A1 to Force Reconnaissance companies, Marine Corps Special Operations Command (MARSOC) and Special Reaction Teams from the Provost Marshal’s Office. The new 1911 was designated M45A1 or "Close Quarters Battle Pistol" CQBP. The M45A1 features a dual recoil spring assembly, Picatinny rails and is cerakoted tan in color. 
In 2019, the USMC selected the SIG Sauer M18 to replace the M45A1. The Marines began the roll out of the M18 in 2020. The replacement was completed by October 2022. Civilian models Colt Commander: In 1949 Colt began production of the Colt Commander, an aluminum-framed 1911 with a inch barrel and a rounded hammer. It was developed in response to an Army requirement issued in 1949, for a lighter replacement for the M1911 pistol, for issue to officers. In 1970, Colt introduced the all-steel "Colt Combat Commander", with an optional model in satin nickel. To differentiate between the two models, the aluminum-framed model was renamed the "Lightweight Commander". Colt Government Mk. IV Series 70 (1970–1983): Introduced the accurized Split Barrel Bushing (collet bushing). The first 1000 prototypes in the serial number range 35800NM–37025NM were marked BB on the barrel and the slide. Commander-sized pistols retained the solid bushing. Colt Government Mk. IV Series 80 (1983–present): Introduced an internal firing pin safety and a new half-cock notch on the sear; pulling the trigger on these models while at half-cock will cause the hammer to drop. Models after 1988 returned to the solid barrel bushing due to concerns about breakages of collet bushings. Colt Gold Cup National Match 1911/Mk. IV Series 70/Mk. IV Series 80 MKIV/Series 70 Gold Cup 75th Anniversary National Match/Camp Perry 1978. Limited to 200 pistols. (1983–1996) Gold Cup MKIV Series 80 National Match: .45 ACP, Colt-Elliason adjustable rear sight, fully adjustable Bomar-Style rear sight, target post front sight, spur hammer, wide target trigger, lowered and flared ejection port, National Match barrel, beveled top slide, wrap-around rubber stocks with nickel medallion. Colt 1991 Series (1991–2001 ORM; 2001–present NRM): A hybrid of the M1911A1 military model redesigned to use the slide of the Mk. IV Series 80; these models aimed at providing a more "mil-spec" pistol to be sold at a lower price than Colt's other 1911 models in order to compete with imported pistols from manufacturers such as Springfield Armory and Norinco. The 1991–2001 model used a large "M1991A1" roll mark engraved on the slide. The 2001 model introduced a new "Colt's Government Model" roll mark engraving. The 1991 series incorporates full-sized blued and stainless models in either .45 ACP or .38 Super, as well as blued and stainless Commander models in .45 ACP. Custom models Since its inception, the M1911 has lent itself to easy customization. Replacement sights, grips, and other aftermarket accessories are the most commonly offered parts. Since the 1950s and the rise of competitive pistol shooting, many companies have been offering the M1911 as a base model for major customization. These modifications can range from changing the external finish, checkering the frame, to hand fitting custom hammers, triggers, and sears. Some modifications include installing compensators and the addition of accessories such as tactical lights and even scopes. A common modification of John Browning's design is to use a full-length guide rod that runs the full length of the recoil spring. This adds weight to the front of the pistol, but does not increase accuracy, and does make the pistol slightly more difficult to disassemble. As of 2002, custom guns could cost over and are built from scratch or on existing base models. Some notable companies offering custom M1911s include Dan Wesson Firearms, Les Baer, Nighthawk Custom, Springfield Custom Shop, and Wilson Combat. 
IPSC models are offered by BUL Armory, Strayer Voigt Inc (Infinity Firearms). Users Current users in the U.S. Many military and law enforcement organizations in the U.S. and other countries continue to use (often modified) M1911A1 pistols including Los Angeles Police Department SWAT and S.I.S., the FBI Hostage Rescue Team, FBI regional SWAT teams, and 1st Special Forces Operational Detachment—Delta (Delta Force). The M1911A1 is popular among the general public in the U.S. for practical and recreational purposes. The pistol is commonly used for concealed carry thanks in part to a single-stack magazine (which makes for a thinner pistol that is, therefore, easier to conceal), personal defense, target shooting, and competition as well as collections. Numerous aftermarket accessories allow users to customize the pistol to their liking. There are a growing number of manufacturers of M1911-style pistols and the model continues to be quite popular for its reliability, simplicity, and patriotic appeal. Various tactical, target and compact models are available. Price ranges from a low end of around $400 for basic models imported from Turkey (TİSAŞ and GİRSAN) and the Philippines (Armscor, Metro Arms, and SAM Inc.) to more than $4,000 for the best competition or tactical versions (Wilson Combat, Ed Brown, Les Baer, Nighthawk Custom, and Staccato). Due to an increased demand for M1911 pistols among Army Special Operations units, who are known to field a variety of M1911 pistols, the U.S. Army Marksmanship Unit began looking to develop a new generation of M1911s and launched the M1911-A2 project in late 2004. The goal was to produce a minimum of seven variants with various sights, internal and external extractors, flat and arched mainspring housings, integral and add-on magazine wells, a variety of finishes and other options, with the idea of providing the end-user a selection from which to select the features that best fit their missions. The AMU performed a well-received demonstration of the first group of pistols to the Marine Corps at Quantico and various Special Operations units at Ft. Bragg and other locations. The project provided a feasibility study with insight into future projects. Models were loaned to various Special Operations units, the results of which are classified. An RFP was issued for a Joint Combat Pistol but it was ultimately canceled. Currently, units are experimenting with an M1911 pistol in .40 S&W, which will incorporate lessons learned from the A2 project. Ultimately, the M1911A2 project provided a testbed for improving existing M1911s. An improved M1911 variant becoming available in the future is a possibility. The Springfield Custom Professional Model 1911A1 pistol is produced under contract by Springfield Armory for the FBI regional SWAT teams and the Hostage Rescue Team. This pistol is made in batches on a regular basis by the Springfield Custom Shop, and a few examples from most runs are made available for sale to the general public at a selling price of approximately US$2,700 each. International users The Brazilian company IMBEL (Indústria de Material Bélico do Brasil) still produces the pistol in several variants for civilian, military and law enforcement uses in .45 ACP, .40 S&W, .380 ACP and 9 mm calibers. IMBEL also produces for US civilian market as the supplier to Springfield Armory. The Chinese Arms manufacturer, Norinco, exports a clone of the M1911A1 for civilian purchase as the M1911A1 and the high-capacity NP-30, as well 9mm variants the NP-28 and NP-29. 
China has also manufactured conversion kits to chamber the 7.62×25mm Tokarev round following the Korean War. Importation of Norinco-made M1911 pistols into the United States was blocked by trade rules in 1993, but Norinco still manages to import the weapon into Canada, where it has been successfully adopted by IPSC shooters, gunsmiths, and firearms enthusiasts because it is cheaper than other M1911s. The German Volkssturm used captured M1911s at the end of World War II under the weapon code P.660(a), in which the letter 'a' refers to "Amerika", the weapon's country of origin. Norway used the Kongsberg Colt, which was a license-produced variant and is identified by the unique slide catch. Many Spanish firearms manufacturers produced pistols derived from the 1911, such as the STAR Model B, the ASTRA 1911PL, and the Llama Model IX, to name just a few. The Argentine Navy received 1,721 M1911s between 1914 and 1919, and 21,616 were received for the Argentine Armed Forces between 1914 and 1941. Later, some ex-US Navy Colts were transferred with ex-US ships. Argentina produced under license some 102,494 M1911A1s as the Model 1927 Sistema Colt, which eventually led to production of the cheaper Ballester–Molina, which resembles the 1911. The Armed Forces of the Philippines issues mil-spec M1911A1 pistols as a sidearm to the special forces, military police, and officers. These pistols are mostly produced by Colt, though some of them are produced locally by Armscor, a Philippine company specialized in making 1911-style pistols. The Indonesian Army issued a locally produced version of the Colt M1911A1, chambered in .45 ACP, along with the Pindad P1, the locally manufactured Browning Hi-Power pistol, as the standard-issue sidearm. In the 1950s, the Republic of China Army (Taiwan) used original M1911A1s, and some of these batches are still used by some forces. In 1962, Taiwan copied the M1911A1 as the T51 pistol, and it saw limited use in the Army. After that, the T51 was improved and introduced for export as the T51K1. The pistols in service have now been replaced by locally made Beretta 92 pistols, the T75 pistol. The Royal Thai Army and Royal Thai Police use the Type 86, the Thai copy of the M1911 chambered in the .45 ACP round. The Turkish Land Forces use the "MC 1911", a Girsan-made copy of the M1911. Numbers of Colt M1911s were used by the Royal Navy as sidearms during World War I in .455 Webley Automatic caliber. The pistols were then transferred to the Royal Air Force, where they saw use in limited numbers up until the end of World War II as sidearms for aircrew in the event of bailing out in enemy territory. The weapon also found use among the British airborne, commandos, Special Air Service, and Special Operations Executive. Some units of the South Korean Air Force still use these original batches as officers' sidearms (along with the Daewoo K5).

Current

Brazil: 16,880 pistols received, mostly from 1937 to 1941. The Brazilian Army uses a version of the M1911 developed by IMBEL chambered in 9×19mm Parabellum and designated M973.
Chile: Used by the Chilean Marine Corps in security tasks.
Egypt: Used by both Sa'ka Forces and Unit 777.
: Used by Police Special Forces.
Lithuania: Lithuanian Armed Forces.
Malaysia: In service with PGK special forces of the Royal Malaysian Police.
: 5,400 M1911s and M1911A1s were acquired from 1922 to 1941.
North Korea: Local copies used by North Korean special forces and the Presidential Guard.
Philippines: Armed Forces of the Philippines. Standard-issue sidearm for regular infantry units. Being refurbished by the Government Arsenal, with key parts being replaced.
South Korea: The Armed Forces was equipped with 4,603 M1911A1s before the Korean War, and 6,604 were in service with the Army by the end of the war. Around 500 of a clone variant, the Type Independence (aka Busanjin Colt), were also manufactured from 1950 to 1951 at Busanjin Ironworks. Currently mostly used by the Navy, while a limited number is used by the Special Warfare Command.
Thailand: Made under license. Known as the "Type 86" pistol.
United States: Former standard-issue service pistol of the U.S. Armed Forces, still in use by some U.S. Special Operations troops. The pistol is in service with various law enforcement agencies across the U.S.
Vietnam: Local copies chambered in 7.62×25mm Tokarev, and captured US M1911A1s in .45 ACP, were used by the Viet Cong and the North Vietnamese Army during the Vietnam War.

Former

Argentina: Manufactured M1911 pistols under license from 1945 to 1966 by Dirección General de Fabricaciones Militares.
Canada: In both World Wars, Canadian officers had the option of privately purchasing their own sidearm, and the M1911/M1911A1 was a popular choice. The joint Canadian-US First Special Service Force (aka "The Devil's Brigade") also used American infantry weapons, including the M1911A1.
: Some use indigenously-made copies.
: Replaced by USP pistols.
Ethiopia: Used by the Kagnew Battalion.
Finland: About 51,000 were bought by the Russian military from the United States in the years 1915–1917, but only a relatively small number of these captured pistols ended up in the hands of the authorities after the Finnish Civil War. The Finnish military had about 120 pistols during World War 2, most of them issued to the field army.
France: 5,500 M1911s received during World War I, especially for tank units, officers, and trench raiders. Free French Forces received 19,325 Colts. Known in French service as the Pistolet automatique 11 mm 4 (C.45) (Automatic pistol 11.4mm (calibre .45)). Both M1911 and M1911A1 pistols were used.
Laos: Received M1911A1s from the US during the Laotian Civil War (1955–1975).
: In service with the 1st Artillery Battalion 1963–1967.
: Used captured pistols during World War II.
: Used during WWII.
Japan: After World War II, the Japan Self-Defense Forces and police were provided 101,700 M1911A1s from the US. These were used until the 1980s.
: 50 received during World War I.
Norway: 700 received during World War I. Produced under license as the Kongsberg Colt.
Panama: Used by the Panama Defense Forces.
Poland: Polish Armed Forces in the West used the pistols during World War II.
Russian Empire: 51,000 purchased between February 1916 and January 1917.
Shanghai International Settlement: Colt M1911s and M1911A1s were used by non-Chinese members of the Shanghai Municipal Police from 1926.
Soviet Union: Some M1911 pistols were captured during the Allied intervention in the Russian Civil War and used in the Red Army. An extra 12,977 pistols were received as Lend-Lease during World War II.
United Kingdom: Some M1911s chambered for .455 Webley Automatic were supplied to the Royal Flying Corps during WWI. Saw service among elite and special forces during WWII in .45 and .455. Possibly still in use by UKSF.
United States: Vernon Police Department, California.
Viet Cong: Crude clones used by VC guerrillas, with some captured in the Vietnam War.

State firearm

On March 18, 2011, the U.S. state of Utah, as a way of honoring M1911 designer John Browning, who was born and raised in the state, adopted the Browning M1911 as the "official firearm of Utah".
Similar pistols

AMT Hardballer
Ballester–Molina
Browning Hi-Power
Kimber Custom
Kongsberg Colt
M15 pistol
Obregón pistol
FB Vis
FN Model 1903
Rock Island Armory 1911
Ruger SR1911
SIG Sauer 1911
Smith & Wesson SW1911
Springfield Armory 911
Springfield Armory EMP
Star Model BM
TT pistol
https://en.wikipedia.org/wiki/February
February
February is the second month of the year in the Julian and Gregorian calendars. The month has 28 days in common years and 29 in leap years, with the 29th day being called the leap day. February is the third and last month of meteorological winter in the Northern Hemisphere. In the Southern Hemisphere, February is the third and last month of meteorological summer (being the seasonal equivalent of what is August in the Northern Hemisphere).

Pronunciation

"February" can be pronounced in several different ways. The beginning of the word is commonly pronounced either as /ˈfɛbju-/ or /ˈfɛbru-/; many people drop the first "r", replacing it with /j/, as if it were spelled "Febuary". This comes about by analogy with "January", as well as by a dissimilation effect whereby having two "r"s close to each other causes one to change. The ending of the word is pronounced /-ɛri/ in the US and /-əri/ in the UK.

History

The Roman month was named after the Latin term februum, which means "purification", via the purification ritual held on February 15 (full moon) in the old lunar Roman calendar. January and February were the last two months to be added to the Roman calendar, since the Romans originally considered winter a monthless period of the year. They were added by Numa Pompilius about 713 BC. February remained the last month of the calendar year until the time of the decemvirs (c. 450 BC), when it became the second month. At certain times February was truncated to 23 or 24 days, and a 27-day intercalary month, Intercalaris, was occasionally inserted immediately after February to realign the year with the seasons. February observances in Ancient Rome included Amburbium (precise date unknown), Sementivae (February 2), Februa (February 13–15), Lupercalia (February 13–15), Parentalia (February 13–22), Quirinalia (February 17), Feralia (February 21), Caristia (February 22), Terminalia (February 23), Regifugium (February 24), and Agonium Martiale (February 27). These days do not correspond to the modern Gregorian calendar. Under the reforms that instituted the Julian calendar, Intercalaris was abolished, leap years occurred regularly every fourth year, and in leap years February gained a 29th day. Thereafter, it remained the second month of the calendar year, meaning the order that months are displayed (January, February, March, ..., December) within a year-at-a-glance calendar. Even during the Middle Ages, when the numbered Anno Domini year began on March 25 or December 25, the second month was February whenever all twelve months were displayed in order. The Gregorian calendar reforms made slight changes to the system for determining which years were leap years, but likewise contained a 29-day February. Historical names for February include the Old English terms Solmonath (mud month) and Kale-monath (named for cabbage) as well as Charlemagne's designation Hornung. In Finnish, the month is called helmikuu, meaning "month of the pearl"; when snow melts on tree branches, it forms droplets, and as these freeze again, they are like pearls of ice. In Polish and Ukrainian, respectively, the month is called luty or лютий (liutyi), meaning the month of ice or hard frost. In Macedonian the month is сечко (sečko), meaning month of cutting (wood). In Czech, it is called únor, meaning month of submerging (of river ice). In Slovene, February is traditionally called svečan, related to icicles or Candlemas. The name was recorded, in an older spelling, in the New Carniolan Almanac from 1775 and changed to its final form by Franc Metelko in his New Almanac from 1824.
The name was also spelled sečan, meaning "the month of cutting down of trees". In 1848, a proposal was put forward in Kmetijske in rokodelske novice by the Slovene Society of Ljubljana to call this month talnik (related to ice melting), but it did not stick. The idea was proposed by a priest, Blaž Potočnik. Another name of February in Slovene was vesnar, after the mythological character Vesna.

Patterns

Having only 28 days in common years, February is the only month of the year that can pass without a single full moon. Using Coordinated Universal Time as the basis for determining the date and time of a full moon, this last happened in 2018 and will next happen in 2037. The same is true regarding a new moon: again using Coordinated Universal Time as the basis, this last happened in 2014 and will next happen in 2033. February is also the only month of the calendar that, at intervals alternating between one of six years and two of eleven years, has exactly four full 7-day weeks. In countries that start their week on a Monday, it occurs as part of a common year starting on Friday, in which February 1st is a Monday and the 28th is a Sunday; the most recent occurrence was 2021, and the next one will be 2027. In countries that start their week on a Sunday, it occurs in a common year starting on Thursday; the most recent occurrence was 2015 and the next occurrence will be 2026. The pattern is broken by a skipped leap year, but no leap year has been skipped since 1900 and no others will be skipped until 2100.

Astronomy

February meteor showers include the Alpha Centaurids (appearing in early February), the March Virginids (lasting from February 14 to April 25, peaking around March 20), the Delta Cancrids (appearing December 14 to February 14, peaking on January 17), the Omicron Centaurids (late January through February, peaking in mid-February), the Theta Centaurids (January 23 – March 12, only visible in the southern hemisphere), the Eta Virginids (February 24 to March 27, peaking around March 18), and the Pi Virginids (February 13 to April 8, peaking between March 3 and March 9).

Symbols

The zodiac signs of February are Aquarius (until February 18) and Pisces (February 19 onward). Its birth flowers are the violet (Viola), the common primrose (Primula vulgaris), and the iris. Its birthstone is the amethyst, which symbolizes piety, humility, spiritual wisdom, and sincerity.

Observances

This list does not necessarily imply either official status or general observance.

Month-long

In Catholic tradition, February is the Month of the Purification of the Blessed Virgin Mary.
American Heart Month (United States)
Black History Month (United States, Canada)
National Bird-Feeding Month (United States)
National Children's Dental Health Month (United States)
Season for Nonviolence: January 30 – April 4 (International observance)
Turner Syndrome Awareness Month (United States)
LGBT History Month (United Kingdom, Ireland)

Non-Gregorian

(All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
List of observances set by the Bahá'í calendar
List of observances set by the Chinese calendar
List of observances set by the Hebrew calendar
List of observances set by the Islamic calendar
List of observances set by the Solar Hijri calendar

Movable

Food Freedom Day (Canada): date changes each year
Safer Internet Day: first day of second week
National Day of the Sun (Argentina): date varies based on province
First Saturday: Ice Cream for Breakfast Day
First Sunday: Mother's Day (Kosovo)
First Week of February (first Monday, ending on Sunday): World Interfaith Harmony Week
First Monday: Constitution Day (Mexico); National Frozen Yogurt Day (United States)
First Friday: National Wear Red Day (United States)
Second Saturday: International Purple Hijab Day
Second Sunday: Autism Sunday (United Kingdom); Children's Day (Cook Islands, Nauru, Niue, Tokelau, Cayman Islands); Mother's Day (Norway); Super Bowl Sunday (United States); World Marriage Day
Second Monday: Meal Monday (Scotland)
Second Tuesday: National Sports Day (Qatar)
Week of February 22: National Engineers Week (U.S.)
Third Monday: Family Day (Canada: provinces of British Columbia, Alberta, Saskatchewan, Manitoba, Ontario, New Brunswick and Prince Edward Island); President's Day/Washington's Birthday (United States)
Third Thursday: Global Information Governance Day
Third Friday: Yukon Heritage Day (Canada)
Last Friday: International Stand Up to Bullying Day
Last Saturday: Open That Bottle Night
Last day of February: Rare Disease Day

Fixed

February 1
Abolition of Slavery Day (Mauritius)
Air Force Day (Nicaragua)
Federal Territory Day (Kuala Lumpur, Labuan and Putrajaya, Malaysia)
Heroes' Day (Rwanda)
Imbolc (Ireland, Scotland, Isle of Man, and some Neopagan groups in the Northern hemisphere)
Lammas (some Neopagan groups in the Southern hemisphere)
Memorial Day of the Republic (Hungary)
National Freedom Day (United States)

February 2
Anniversary of the Treaty of Tartu (Estonia)
Constitution Day (Philippines)
Day of Youth (Azerbaijan)
Feast of the Presentation of Jesus at the Temple (or Candlemas) (Western Christianity), and its related observances: a quarter day in the Christian liturgical calendar (due to Candlemas) (Scotland); Celebration of Yemanja (Candomblé); Groundhog Day (United States and Canada); Marmot Day (Alaska, United States)
Inventor's Day (Thailand)
National Tater Tot Day (United States)
World Wetlands Day

February 3
Anniversary of The Day the Music Died (United States)
Communist Party of Vietnam Foundation Anniversary (Vietnam)
Day of the Virgin of Suyapa (Honduras)
Heroes' Day (Mozambique)
Martyrs' Day (São Tomé and Príncipe)
Setsubun (Japan)
Veterans' Day (Thailand)

February 4
Day of the Armed Struggle (Angola)
Independence Day (Sri Lanka)
Rosa Parks Day (California and Missouri, United States)
World Cancer Day

February 5
Crown Princess Mary's birthday (Denmark)
Kashmir Solidarity Day (Pakistan)
Liberation Day (San Marino)
National Weatherperson's Day (United States)
Runeberg's Birthday (Finland)
Unity Day (Burundi)

February 6
International Day of Zero Tolerance to Female Genital Mutilation
Ronald Reagan Day (California, United States)
Sami National Day (Russia, Finland, Norway and Sweden)
Waitangi Day (New Zealand)

February 7
Independence Day (Grenada)

February 8
Parinirvana Day (some Mahayana Buddhist traditions; most celebrate on February 15)
Prešeren Day (Slovenia)
Propose Day

February 9
National Pizza Day (United States)
St. Maroun's Day (Maronite Church, Eastern Orthodox Church, public holiday in Lebanon)

February 10
Feast of St. Paul's Shipwreck (public holiday in Malta)
Fenkil Day (Eritrea)
National Memorial Day of the Exiles and Foibe (Italy)

February 11
112 Day (European Union)
Armed Forces Day (Liberia)
Day of Revenue Service (Azerbaijan)
Evelio Javier Day (Panay Island, the Philippines)
Feast day of Our Lady of Lourdes (Catholic Church), and its related observance: World Day of the Sick (Roman Catholic Church)
Inventors' Day (United States)
National Foundation Day (Japan)
Youth Day (Cameroon)

February 12
Darwin Day (International)
Georgia Day (Georgia (U.S. state))
International Day of Women's Health
Lincoln's Birthday (United States)
National Freedom to Marry Day (United States)
Red Hand Day (United Nations)
Sexual and Reproductive Health Awareness Day (Canada)
Union Day (Myanmar)
Youth Day (Venezuela)

February 13
Black Love Day (United States)
Children's Day (Myanmar)
World Radio Day

February 14
Statehood Day (Arizona, United States)
Statehood Day (Oregon, United States)
Presentation of Jesus at the Temple (Armenian Apostolic Church)
V-Day (movement) (International)
Valentine's Day (International)
Singles Awareness Day

February 15
Candlemas (Eastern Orthodox Church)
International Duties Memorial Day (Russia, regional)
John Frum Day (Vanuatu)
Liberation Day (Afghanistan)
National Flag of Canada Day (Canada)
National I Want Butterscotch Day (United States)
Parinirvana Day (most Mahayana Buddhist traditions; some celebrate on February 8)
Serbia's National Day
Statehood Day (Serbia)
Susan B. Anthony Day (United States)
The ENIAC Day (Philadelphia, United States)
Total Defence Day (Singapore)

February 16
Day of the Shining Star (North Korea)
Restoration of Lithuania's Statehood Day (Lithuania)

February 17
Independence Day (Kosovo)
Random Acts of Kindness Day (United States)
Revolution Day (Libya)

February 18
National Democracy Day (Nepal)
Dialect Day (Amami Islands, Japan)
Independence Day (Gambia)
Kurdish Students Union Day (Iraqi Kurdistan)
Wife's Day (Iceland)

February 19
Armed Forces Day (Mexico)
Brâncuși Day (Romania)
Commemoration of Vasil Levski (Bulgaria)
Flag Day (Turkmenistan)
Shivaji Jayanti (Maharashtra, India)

February 20
Day of Heavenly Hundred Heroes (Ukraine)
World Day of Social Justice

February 21
International Mother Language Day
Language Movement Day (Bangladesh)

February 22
Feast of the Chair of Saint Peter (Roman Catholic Church)
Independence Day (Saint Lucia)
Founder's Day (Saudi Arabia)
Founder's Day or "B.-P. day" (World Organization of the Scout Movement)
day" (World Organization of the Scout Movement) National Margarita Day (United States) World Thinking Day (World Association of Girl Guides and Girl Scouts) February 23 Mashramani-Republic Day (Guyana) Meteņi (Latvia) National Banana Bread Day (United States) National Day (Brunei) Red Army Day or Day of Soviet Army and Navy in the former Soviet Union, also held in various former Soviet republics: Defender of the Fatherland Day (Russia) Defender of the Fatherland and Armed Forces day (Belarus) Emperor's Birthday (Japan) February 24 Dragobete (Romania) Engineer's Day (Iran) Flag Day in Mexico Independence Day (Estonia) National Artist Day (Thailand) Sepandārmazgān or "Women's Day" (Zoroastrian, Iran) February 25 Armed Forces Day (Dominican Republic) Kitano Baika-sai or "Plum Blossom Festival" (Kitano Tenman-gū Shrine, Kyoto, Japan) Meher Baba's birthday (followers of Meher Baba) Memorial Day for the Victims of the Communist Dictatorships (Hungary) National Day (Kuwait) People Power Day (Philippines) Revolution Day (Suriname) Soviet Occupation Day (Georgia) February 26 Liberation Day (Kuwait) Day of Remembrance for Victims of Khojaly massacre (Azerbaijan) National Wear Red Day (United Kingdom) Saviours' Day (Nation of Islam) February 27 Anosmia Awareness Day (International observance) Doctors' Day (Vietnam) International Polar Bear Day Majuba Day (some Afrikaners in South Africa) Marathi Language Day (Maharashtra, India) Independence Day (Dominican Republic) Anti-Bullying Day (Canada) February 28 Day of Remembrance for Victims of Massacres in Armenia (Armenia) Andalusia Day (Andalusia, Spain) Kalevala Day (Finland) National Science Day (India) Peace Memorial Day (Taiwan) Teachers' Day (Arab states) February 29 Bachelor's Day (Ireland, United Kingdom) National Frog Legs Day (United States)
https://en.wikipedia.org/wiki/Stage%20%28stratigraphy%29
Stage (stratigraphy)
In chronostratigraphy, a stage is a succession of rock strata laid down in a single age on the geologic timescale, which usually represents millions of years of deposition. A given stage of rock and the corresponding age of time will by convention have the same name, and the same boundaries. Rock series are divided into stages, just as geological epochs are divided into ages. Stages are divided into smaller stratigraphic units called chronozones or substages, and grouped together into superstages. The term faunal stage is sometimes used, referring to the fact that the same fauna (animals) are found throughout the layer (by definition).

Definition

Stages are primarily defined by a consistent set of fossils (biostratigraphy) or a consistent magnetic polarity (see paleomagnetism) in the rock. Usually one or more index fossils that are common, found worldwide, easily recognized, and limited to a single, or at most a few, stages are used to define the stage's bottom. Thus, for example, in the local North American subdivision, a paleontologist finding fragments of the trilobite Olenellus would identify the beds as being from the Waucoban Stage, whereas fragments of a later trilobite such as Elrathia would identify the stage as Albertan. Stages were important in the 19th and early 20th centuries, as they were the major tool available for dating and correlating rock units prior to the development of seismology and radioactive dating in the second half of the 20th century. Microscopic analysis of the rock (petrology) is also sometimes useful in confirming that a given segment of rock is from a particular age. Originally, faunal stages were only defined regionally. As additional stratigraphic and geochronologic tools were developed, they were defined over ever broader areas. More recently, the adjective "faunal" has been dropped as regional and global correlations of rock sequences have become relatively certain and there is less need for faunal labels to define the age of formations. A tendency developed to use European and, to a lesser extent, Asian stage names for the same time period worldwide, even though the faunas in other regions often had little in common with the stage as originally defined.

International standardization

Boundaries and names are established by the International Commission on Stratigraphy (ICS) of the International Union of Geological Sciences. As of 2008, the ICS is nearly finished with a task begun in 1974: subdividing the Phanerozoic eonothem into internationally accepted stages using two types of benchmark. For younger stages, a Global Boundary Stratotype Section and Point (GSSP) is used: a physical outcrop that clearly demonstrates the boundary. For older stages, a Global Standard Stratigraphic Age (GSSA) is used: an absolute date. The benchmarks give much greater certainty that results can be compared with confidence in the date determinations, and such results have wider scope than any evaluation based solely on local knowledge and conditions. In many regions local subdivisions and classification criteria are still used along with the newer internationally coordinated uniform system, but once the research establishes a more complete international system, it is expected that local systems will be abandoned.

Stages and lithostratigraphy

Stages can include many lithostratigraphic units (for example formations, beds, members, etc.) of differing rock types that were being laid down in different environments at the same time.
In the same way, a lithostratigraphic unit can include a number of stages or parts of them.
https://en.wikipedia.org/wiki/Fields%20Medal
Fields Medal
The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields. The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics, although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria. According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide, and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most prestigious international award in mathematics. The prize includes a monetary award which, since 2006, has been CA$15,000. Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge. The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medalist. In total, 64 people have been awarded the Fields Medal. The most recent group of Fields Medalists received their awards on 5 July 2022 in an online event which was live-streamed from Helsinki, Finland. It was originally meant to be held in Saint Petersburg, Russia, but was moved following the 2022 Russian invasion of Ukraine.

Conditions of the award

The Fields Medal has for a long time been regarded as the most prestigious award in the field of mathematics and is often described as the Nobel Prize of Mathematics. Unlike the Nobel Prize, the Fields Medal is only awarded every four years. The Fields Medal also has an age limit: a recipient must be under age 40 on 1 January of the year in which the medal is awarded. The under-40 rule is based on Fields's desire that "while it was in recognition of work already done, it was at the same time intended to be an encouragement for further achievement on the part of the recipients and a stimulus to renewed effort on the part of others." Moreover, an individual can only be awarded one Fields Medal; winners are ineligible to be awarded future medals. The medal was first awarded in 1936, and 64 people have won it as of 2022. With the exception of two PhD holders in physics (Edward Witten and Martin Hairer), only people with a PhD in mathematics have won the medal.

List of Fields medalists

In certain years, the Fields medalists have been officially cited for particular mathematical achievements, while in other years such specificities have not been given. However, in every year that the medal has been awarded, noted mathematicians have lectured at the International Congress of Mathematicians on each medalist's body of work. In the following table, official citations are quoted when possible (namely for the years 1958, 1998, and every year since 2006). For the other years through 1986, summaries of the ICM lectures, as written by Donald Albers, Gerald L. Alexanderson, and Constance Reid, are quoted.
In the remaining years (1990, 1994, and 2002), part of the text of the ICM lecture itself has been quoted. The next awarding of the Fields Medal, at the 2026 International Congress of the International Mathematical Union, is planned to take place in Philadelphia. Landmarks The medal was first awarded in 1936 to the Finnish mathematician Lars Ahlfors and the American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 1954, Jean-Pierre Serre became the youngest winner of the Fields Medal, at 27. He retains this distinction. In 1966, Alexander Grothendieck boycotted the ICM, held in Moscow, to protest Soviet military actions taking place in Eastern Europe. Léon Motchane, founder and director of the Institut des Hautes Études Scientifiques, attended and accepted Grothendieck's Fields Medal on his behalf. In 1970, Sergei Novikov, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Nice to receive his medal. In 1978, Grigory Margulis, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Helsinki to receive his medal. The award was accepted on his behalf by Jacques Tits, who said in his address: "I cannot but express my deep disappointment—no doubt shared by many people here—in the absence of Margulis from this ceremony. In view of the symbolic meaning of this city of Helsinki, I had indeed grounds to hope that I would have a chance at last to meet a mathematician whom I know only through his work and for whom I have the greatest respect and admiration." In 1982, the congress was due to be held in Warsaw but had to be rescheduled to the next year, because of martial law introduced in Poland on 13 December 1981. The awards were announced at the ninth General Assembly of the IMU earlier in the year and awarded at the 1983 Warsaw congress. In 1990, Edward Witten became the first physicist to win the award. In 1998, at the ICM, Andrew Wiles was presented by the chair of the Fields Medal Committee, Yuri I. Manin, with the first-ever IMU silver plaque in recognition of his proof of Fermat's Last Theorem. Don Zagier referred to the plaque as a "quantized Fields Medal". Accounts of this award frequently note that, at the time, Wiles was over the age limit for the Fields Medal. Although Wiles was slightly over the age limit in 1994, he was thought to be a favorite to win the medal; however, a gap (later resolved by Taylor and Wiles) in the proof was found in 1993. In 2006, Grigori Perelman, who proved the Poincaré conjecture, refused his Fields Medal and did not attend the congress. In 2014, Maryam Mirzakhani became the first Iranian as well as the first woman to win the Fields Medal, and Artur Avila became the first South American and Manjul Bhargava became the first person of Indian origin to do so. In 2022, Maryna Viazovska became the first Ukrainian to win the Fields Medal, and June Huh became the first person of Korean ancestry to do so. Medal The medal was designed by Canadian sculptor R. Tait McKenzie. It is made of 14 kt gold, has a diameter of 63.5 mm, and weighs 169 g. On the obverse is Archimedes and a quote attributed to the 1st-century AD poet Manilius, which reads in Latin: ("To surpass one's understanding and master the world").
The year number 1933 is written in Roman numerals and contains an error (MCNXXXIII rather than MCMXXXIII). In capital Greek letters the word Ἀρχιμηδους, or "of Archimedes," is inscribed. On the reverse is the inscription: Translation: "Mathematicians gathered from the entire world have awarded [understood but not written: 'this prize'] for outstanding writings." In the background, there is the representation of Archimedes' tomb, with the carving illustrating his theorem On the Sphere and Cylinder, behind an olive branch. (This is the mathematical result of which Archimedes was reportedly most proud: given a sphere and a circumscribed cylinder of the same height and diameter, the ratio between their volumes is equal to 2/3.) The rim bears the name of the prizewinner. Female recipients The Fields Medal has had two female recipients, Maryam Mirzakhani from Iran in 2014, and Maryna Viazovska from Ukraine in 2022. In popular culture The Fields Medal gained some recognition in popular culture due to references in the 1997 film Good Will Hunting. In the movie, Gerald Lambeau (Stellan Skarsgård) is an MIT professor who won the award prior to the events of the story. Throughout the film, references made to the award are meant to convey its prestige in the field.
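As a purely illustrative aside (not part of the article's text), the 2/3 ratio accompanying Archimedes' sphere-and-cylinder theorem follows from the elementary volume formulas, taking a sphere of radius r and its circumscribed cylinder of radius r and height 2r:

```latex
V_{\text{cyl}} = \pi r^2 \cdot 2r = 2\pi r^3,
\qquad
V_{\text{sph}} = \tfrac{4}{3}\pi r^3,
\qquad
\frac{V_{\text{sph}}}{V_{\text{cyl}}}
  = \frac{\tfrac{4}{3}\pi r^3}{2\pi r^3}
  = \frac{2}{3}.
```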
Mathematics
Basics
null
10890
https://en.wikipedia.org/wiki/Fundamental%20interaction
Fundamental interaction
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity, electromagnetism, the weak interaction, and the strong interaction. The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large-scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything. History Classical theory In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Observing that objects bearing mass approach one another at a constant rate but collide with impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum.
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) Standard Model The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is made of atoms, composed of three fermion types: the up quarks and down quarks that constitute the atom's nucleus, and the electrons that orbit it. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by modelling the behaviour of its hypothetical force carrier, the graviton, to achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE).
The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. Overview of the fundamental interactions In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1/2 (intrinsic angular momentum ±ħ/2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1/2 to −1/2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Electric and magnetic force into electromagnetism; The electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. Interactions Gravity Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions.
Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces, which can be attractive or repulsive; all objects having mass are subject to the gravitational force. Therefore, only gravitation matters on the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes, and, being only attractive, it retards the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground; and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances: neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravitational force is so weak.
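To put the weakness of gravity at the particle scale in concrete terms, the sketch below (an added illustration, not from the article; the constants are standard CODATA values) compares the electrostatic and gravitational attractions between a proton and an electron. Both forces fall off as 1/r², so the separation cancels and the ratio is a pure number:

```python
# Illustrative comparison: electrostatic vs. gravitational attraction
# between a proton and an electron (both forces scale as 1/r^2, so the
# ratio is independent of the separation).
k_e = 8.9875517873e9     # Coulomb constant, N·m²/C²
G   = 6.67430e-11        # Newtonian gravitational constant, N·m²/kg²
q_e = 1.602176634e-19    # elementary charge, C
m_p = 1.67262192369e-27  # proton mass, kg
m_e = 9.1093837015e-31   # electron mass, kg

ratio = (k_e * q_e**2) / (G * m_p * m_e)
print(f"F_electric / F_gravity ≈ {ratio:.2e}")  # ≈ 2.3e39
```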
Electroweak interaction Electromagnetism and the weak interaction appear to be very different at everyday low energies, and they can be modeled using two different theories. However, above the unification energy, on the order of 100 GeV, they merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly for how the universe evolved: shortly after the Big Bang, when the temperature was still above approximately 10¹⁵ K, the electromagnetic force and the weak force were still merged as a combined electroweak force. For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. Electromagnetism Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four-kilogram (~1 gallon) jug of water, there is about 2 × 10⁸ C of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of roughly 4 × 10²⁶ N. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism, on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light is transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under classical electromagnetic theory; this behavior is necessary for everyday electronic devices such as transistors to function.
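The jug-of-water figures above can be reproduced with a short back-of-the-envelope calculation. The sketch below is an added illustration, assuming 4 kg of pure water (molar mass about 18.015 g/mol, 10 electrons per molecule) and textbook constants:

```python
# Illustrative check of the water-jug example: total electron charge in
# 4 kg of water, and the Coulomb force between two such charges 1 m apart.
N_A = 6.02214076e23    # Avogadro constant, 1/mol
q_e = 1.602176634e-19  # elementary charge, C
k_e = 8.9875517873e9   # Coulomb constant, N·m²/C²

moles_h2o = 4000.0 / 18.015       # moles of H2O in 4 kg
electrons = moles_h2o * N_A * 10  # each H2O molecule has 10 electrons
Q = electrons * q_e               # total electron charge per jug, C

r = 1.0                           # separation between the jugs, m
F = k_e * Q**2 / r**2             # repulsive Coulomb force, N

print(f"total electron charge ≈ {Q:.1e} C")  # ≈ 2.1e8 C
print(f"repulsive force       ≈ {F:.1e} N")  # ≈ 4.1e26 N
# For comparison, the Earth's mass weighed in its own surface gravity
# comes to roughly 5.9e25 N, several times smaller than the force above.
```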
Weak interaction The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. Strong interaction The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10⁻¹⁵ metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10⁻¹⁵ m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to the 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: the pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force-carrying particles for approximate symmetries of isospin and hypercharge; and Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann, along with George Zweig, first proposed fractionally charged quarks in 1964. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks.
A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD, since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. Higgs interaction Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although it is neither a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form $(m_i/v)\,\bar{\psi}_i \psi_i h$, with Yukawa coupling $\lambda_i = m_i/v$, particle mass $m_i$, and Higgs vacuum expectation value $v \approx 246\ \mathrm{GeV}$. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the Yukawa form $V(r) = -\frac{m_i m_j}{4\pi v^2} \frac{e^{-r/\lambda_H}}{r}$ (in natural units), with Higgs mass $m_H \approx 125\ \mathrm{GeV}$. Because the reduced Compton wavelength of the Higgs boson, $\lambda_H = \hbar/(m_H c) \approx 1.6 \times 10^{-18}\ \mathrm{m}$, is so small (comparable to those of the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10¹¹ times weaker than the weak interaction, and grows exponentially weaker at non-zero distances.
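The attometer range quoted above is set by the Higgs boson's reduced Compton wavelength. The sketch below is an added illustration (it assumes the measured Higgs mass of roughly 125 GeV/c²) evaluating that length and the exponential suppression factor e^(−r/λ) at a few separations:

```python
import math

# Illustrative scales of the Higgs-mediated Yukawa potential,
# V(r) ∝ -exp(-r/λ)/r, where λ = ħ/(m_H·c) is the reduced Compton
# wavelength of the Higgs boson.
hbar_c = 197.3269804  # ħc in MeV·fm
m_H = 125_250.0       # Higgs mass in MeV/c², ≈ 125.25 GeV/c²

lam_fm = hbar_c / m_H                       # reduced Compton wavelength, fm
print(f"range λ ≈ {lam_fm * 1e-15:.2e} m")  # ≈ 1.6e-18 m, a few attometers

# Exponential suppression of the potential with separation:
for r_fm in (1e-3, 1e-2, 1e-1):  # 1 am, 10 am, 100 am
    print(f"r = {r_fm * 1e-15:.0e} m -> exp(-r/λ) ≈ {math.exp(-r_fm / lam_fm):.3g}")
```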
Beyond the Standard Model The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in Physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier, because no quantum gravity theory (candidates include string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry-breaking effects, and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violation, dark matter, and dark flow.
Physical sciences
Physics basics: General
Physics
10891
https://en.wikipedia.org/wiki/Floppy%20disk
Floppy disk
[Image: A 3½-inch floppy disk removed from its housing]
A floppy disk or floppy diskette (casually referred to as a floppy, a diskette, or a disk) is a type of disk storage composed of a thin and flexible disk of a magnetic storage medium in a square or nearly square plastic enclosure lined with a fabric that removes dust particles from the spinning disk. The three most popular (and commercially available) floppy disk formats are the 8-inch, 5¼-inch, and 3½-inch floppy disks. Floppy disks store digital data which can be read and written when the disk is inserted into a floppy disk drive (FDD) connected to or inside a computer or other device. The first floppy disks, invented and made by IBM in 1971, had a disk diameter of 8 inches (203 mm). Subsequently, the 5¼-inch (133.35 mm) and then the 3½-inch (88.9 mm) became a ubiquitous form of data storage and transfer into the first years of the 21st century. 3½-inch floppy disks can still be used with an external USB floppy disk drive. USB drives for 5¼-inch, 8-inch, and other-size floppy disks are rare to non-existent. Some individuals and organizations continue to use older equipment to read or transfer data from floppy disks. Floppy disks were so common in late 20th-century culture that many electronic devices and software programs continue to use save icons that look like floppy disks well into the 21st century, as a form of skeuomorphic design. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have been superseded by data storage methods with much greater data storage capacity and data transfer speed, such as USB flash drives, memory cards, optical discs, and storage available through local computer networks and cloud storage. History The first commercial floppy disks, developed in the late 1960s, were 8 inches (203 mm) in diameter; they became commercially available in 1971 as a component of IBM products, and both drives and disks were then sold separately starting in 1972 by Memorex and others. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the Type 1 Diskette in 1973, the industry continued to use the terms "floppy disk" or "floppy". In 1976, Shugart Associates introduced the 5¼-inch floppy disk drive. By 1978, there were more than ten manufacturers producing such drives. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as differential Manchester encoding (DM), modified frequency modulation (MFM), M2FM and group coded recording (GCR). The 5¼-inch format displaced the 8-inch one for most uses, and the hard-sectored disk format disappeared. The most common capacity of the 5¼-inch format in DOS-based PCs was 360 KB (368,640 bytes) for the Double-Sided Double-Density (DSDD) format using MFM encoding. In 1984, IBM introduced with its PC/AT the 1.2 MB (1,228,800 bytes) dual-sided 5¼-inch floppy disk, but it never became very popular. IBM started using the 720 KB double-density 3½-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB (1,474,560 bytes) high-density version with the IBM Personal System/2 (PS/2) line in 1987. These disk drives could be added to older PC models.
In 1988, Y-E Data introduced a drive for 2.88 MB Double-Sided Extended-Density (DSED) diskettes, which was used by IBM in its top-of-the-line PS/2 and some RS/6000 models and in the second-generation NeXTcube and NeXTstation; however, this format had limited market success due to a lack of standards and the movement to 1.44 MB drives. Throughout the early 1980s, the limits of the 5¼-inch format became clear. Originally designed to be more practical than the 8-inch format, it came to be considered too large; as the quality of recording media grew, data could be stored in a smaller area. Several solutions were developed, with drives at 2-, 2½-, 3-, 3¼-, 3½- and 4-inches (and Sony's disk) offered by various companies. They all had several advantages over the old format, including a rigid case with a sliding metal (or later, sometimes plastic) shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write-protection tab, which was far more convenient than the adhesive tabs used with earlier disks. The established market for the 5¼-inch format made it difficult for these mutually incompatible new formats to gain significant market share. A variant on the Sony design, introduced in 1983 by many manufacturers, was then rapidly adopted. By 1988, the 3½-inch was outselling the 5¼-inch. Generally, the term floppy disk persisted, even though later-style floppy disks have a rigid case around the internal flexible disk. By the end of the 1980s, 5¼-inch disks had been superseded by 3½-inch disks. During this time, PCs frequently came equipped with drives of both sizes. By the mid-1990s, 5¼-inch drives had virtually disappeared, as the 3½-inch disk became the predominant floppy disk. The advantages of the 3½-inch disk were its higher capacity, its smaller physical size, and its rigid case, which provided better protection from dirt and other environmental risks. Prevalence Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data, and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system (OS). Most home computers from that time have an elementary OS and BASIC stored in read-only memory (ROM), with the option of loading a more advanced OS from a floppy disk. By the early 1990s, increasing software size meant large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. An attempt to enhance the existing 3½-inch design was the SuperDisk in the late 1990s, using very narrow data tracks and a high-precision head-guidance mechanism with a capacity of 120 MB and backward compatibility with standard 3½-inch floppies; a format war briefly occurred between SuperDisk and other high-density floppy-disk products, although ultimately recordable CDs/DVDs, solid-state flash storage, and eventually cloud-based online storage would render all these removable disk formats obsolete. External USB-based floppy disk drives are still available, and many modern systems provide firmware support for booting from such drives. Gradual transition to other formats In the mid-1990s, mechanically incompatible higher-density floppy disks were introduced, like the Iomega Zip disk. Adoption was limited by the competition between proprietary formats and the need to buy expensive drives for computers where the disks would be used.
In some cases, failure in market penetration was exacerbated by the release of higher-capacity versions of the drive and media that were not backward-compatible with the original drives, dividing users between new and old adopters. Consumers were wary of making costly investments in unproven and rapidly changing technologies, so none of the technologies became the established standard. Apple introduced the iMac G3 in 1998 with a CD-ROM drive but no floppy drive; this made USB-connected floppy drives popular accessories, as the iMac came without any writable removable-media device. Recordable CDs were touted as an alternative because of their greater capacity, compatibility with existing CD-ROM drives, and—with the advent of re-writeable CDs and packet writing—a reusability similar to that of floppy disks. However, CD-R/RWs remained mostly an archival medium, not a medium for exchanging data or editing files on the medium itself, because there was no common standard for packet writing which allowed for small updates. Other formats, such as magneto-optical discs, had the flexibility of floppy disks combined with greater capacity but remained niche due to costs. High-capacity backward-compatible floppy technologies became popular for a while and were sold as an option or even included in standard PCs, but in the long run, their use was limited to professionals and enthusiasts. Flash-based USB thumb drives finally provided a practical and popular replacement that supported traditional file systems and all common usage scenarios of floppy disks. Unlike other solutions, they required no new drive type or special software that might impede adoption: all that was necessary was an already common USB port. Usage in the 21st century In 2002, most manufacturers still provided floppy disk drives as standard equipment to meet user demand for file transfer and an emergency boot device, as well as for the general secure feeling of having the familiar device. By this time, the retail cost of a floppy drive had fallen to around $20, so there was little financial incentive to omit the device from a system. Subsequently, enabled by the widespread support for USB flash drives and BIOS boot, manufacturers and retailers progressively reduced the availability of floppy disk drives as standard equipment. In February 2003, Dell, one of the leading personal computer vendors, announced that floppy drives would no longer be pre-installed on Dell Dimension home computers, although they were still available as a selectable option and purchasable as an aftermarket OEM add-on. By January 2007, only 2% of computers sold in stores contained built-in floppy disk drives. Floppy disks are used for emergency boots in aging systems lacking support for other bootable media and for BIOS updates, since most BIOS and firmware programs can still be executed from bootable floppy disks. If BIOS updates fail or become corrupt, floppy drives can sometimes be used to perform a recovery. The music and theatre industries still use equipment requiring standard floppy disks (e.g. synthesizers, samplers, drum machines, sequencers, and lighting consoles). Industrial automation equipment such as programmable machinery and industrial robots may not have a USB interface; data and programs are then loaded from floppy disks, which are easily damaged in industrial environments.
This equipment may not be replaced due to cost or a requirement for continuous availability; existing software emulation and virtualization do not solve this problem, because a customized operating system is used that has no drivers for USB devices. Hardware floppy disk emulators can be made to interface floppy-disk controllers to a USB port that can be used for flash drives. In May 2016, the United States Government Accountability Office released a report that covered the need to upgrade or replace legacy computer systems within federal agencies. According to this document, old IBM Series/1 minicomputers running on 8-inch floppy disks are still used to coordinate "the operational functions of the United States' nuclear forces". The government planned to update some of the technology by the end of the 2017 fiscal year. The Japanese government's use of floppy disks ended in 2024. Windows 10 and Windows 11 no longer come with drivers for floppy disk drives (either internal or external); however, they will still support them with a separate device driver provided by Microsoft. The British Airways Boeing 747-400 fleet, up to its retirement in 2020, used 3½-inch floppy disks to load avionics software. Sony, which had been in the floppy disk business since 1983, ended domestic sales of all six of its 3½-inch floppy disk models as of March 2011. This has been viewed by some as the end of the floppy disk. While production of new floppy disk media has ceased, sales and use of this media from inventories are expected to continue until at least 2026. Legacy For more than two decades, the floppy disk was the primary external writable storage device used. Most computing environments before the 1990s were non-networked, and floppy disks were the primary means to transfer data between computers, a method known informally as sneakernet. Unlike hard disks, floppy disks were handled and seen; even a novice user could identify a floppy disk. Because of these factors, a picture of a 3½-inch floppy disk became an interface metaphor for saving data. The floppy disk symbol is still used by software on user-interface elements related to saving files, even though physical floppy disks are largely obsolete. Examples of such software include LibreOffice, Microsoft Paint, and WordPad. Design Structure 8-inch and 5¼-inch disks The 8-inch and 5¼-inch floppy disks contain a magnetically coated round plastic medium with a large circular hole in the center for a drive's spindle. The medium is contained in a square plastic cover that has a small oblong opening in both sides to allow the drive's heads to read and write data, and a large hole in the center to allow the magnetic medium to spin by rotating it from its middle hole. Inside the cover are two layers of fabric with the magnetic medium sandwiched in the middle. The fabric is designed to reduce friction between the medium and the outer cover, and to catch particles of debris abraded off the disk to keep them from accumulating on the heads. The cover is usually a one-part sheet, double-folded with flaps glued or spot-welded together. A small notch on the side of the disk identifies whether it is writable, as detected by a mechanical switch or photoelectric sensor. In the 8-inch disk, the notch being covered or not present enables writing, while in the 5¼-inch disk, the notch being present and uncovered enables writing. Tape may be used over the notch to change the mode of the disk.
Punch devices were sold to convert read-only 5¼" disks to writable ones, and also to enable writing on the unused side of single-sided disks for computers with single-sided drives. The latter worked because single- and double-sided disks typically contained essentially identical magnetic media, for manufacturing efficiency. Disks whose obverse and reverse sides were thus used separately in single-sided drives were known as flippy disks. Notching 5¼" floppies for PCs was generally only required when users wanted to overwrite original 5¼" disks of store-bought software, which commonly shipped with no notch present. Another LED/photo-transistor pair located near the center of the disk detects the index hole once per rotation of the magnetic disk. Detection occurs whenever the drive's sensor, the holes in the correctly inserted floppy's plastic envelope, and a single hole in the rotating floppy disk medium line up. This mechanism is used to detect the angular start of each track and whether or not the disk is rotating at the correct speed. Early 8‑inch and 5¼‑inch disks also had holes for each sector in the enclosed magnetic medium, in addition to the index hole, at the same radial distance from the center, for alignment with the same envelope hole. These were termed hard-sectored disks. Later soft-sectored disks have only one index hole in the medium, and sector position is determined by the disk controller or low-level software from patterns marking the start of a sector. Generally, the same drives are used to read and write both types of disks, with only the disks and controllers differing. Some operating systems using soft sectors, such as Apple DOS, do not use the index hole, and the drives designed for such systems often lack the corresponding sensor; this was mainly a hardware cost-saving measure. 3½-inch disk The core of the 3½-inch disk is the same as in the other two disks, but the front has only a label and a small opening for reading and writing data, protected by the shutter—a spring-loaded metal or plastic cover, pushed to the side on entry into the drive. Rather than having a hole in the center, it has a metal hub which mates to the spindle of the drive. Typical 3½-inch disk magnetic coating materials are: DD, 2 μm magnetic iron oxide; HD, 1.2 μm cobalt-doped iron oxide; and ED, 3 μm barium ferrite. Two holes at the bottom left and right indicate whether the disk is write-protected and whether it is high-density; these holes are spaced as far apart as the holes in punched A4 paper, allowing write-protected high-density floppy disks to be clipped into international standard (ISO 838) ring binders. The dimensions of the disk shell are not quite square: its width is slightly less than its depth, so that it is impossible to insert the disk into a drive slot sideways (i.e. rotated 90 degrees from the correct shutter-first orientation). A diagonal notch at top right ensures that the disk is inserted into the drive in the correct orientation—not upside down or label-end first—and an arrow at top left indicates the direction of insertion. The drive usually has a button that, when pressed, ejects the disk with varying degrees of force, the discrepancy being due to the ejection force provided by the spring of the shutter. In IBM PC compatibles, Commodores, Apple II/IIIs, and other non-Apple-Macintosh machines with standard floppy disk drives, a disk may be ejected manually at any time. The drive has a disk-change switch that detects when a disk is ejected or inserted.
Failure of this mechanical switch is a common source of disk corruption if a disk is changed and the drive (and hence the operating system) fails to notice. One of the chief usability problems of the floppy disk is its vulnerability; even inside a closed plastic housing, the disk medium is highly sensitive to dust, condensation and temperature extremes. As with all magnetic storage, it is vulnerable to magnetic fields. Blank disks have been distributed with an extensive set of warnings, cautioning the user not to expose them to dangerous conditions. Rough treatment, or removing the disk from the drive while the magnetic medium is still spinning, is likely to cause damage to the disk, drive head, or stored data. On the other hand, the 3½‑inch floppy disk has been lauded for its mechanical usability by human–computer interaction expert Donald Norman. Operation A spindle motor in the drive rotates the magnetic medium at a certain speed, while a stepper-motor-operated mechanism moves the magnetic read/write heads radially along the surface of the disk. Both read and write operations require the media to be rotating and the head to contact the disk media, an action originally accomplished by a disk-load solenoid. Later drives held the heads out of contact until a front-panel lever was rotated (5¼-inch) or disk insertion was complete (3½-inch). To write data, current is sent through a coil in the head as the media rotates. The head's magnetic field aligns the magnetization of the particles directly below the head on the media. When the current is reversed, the magnetization aligns in the opposite direction, encoding one bit of data. To read data, the magnetization of the particles in the media induces a tiny voltage in the head coil as they pass under it. This small signal is amplified and sent to the floppy disk controller, which converts the stream of pulses from the media into data, checks it for errors, and sends it to the host computer system. Formatting A blank unformatted diskette has a coating of magnetic oxide with no magnetic order to the particles. During formatting, the magnetizations of the particles are aligned, forming tracks, each broken up into sectors, enabling the controller to properly read and write data. The tracks are concentric rings around the center, with spaces between tracks where no data is written; gaps with padding bytes are provided between the sectors and at the end of the track to allow for slight speed variations in the disk drive, and to permit better interoperability with disk drives connected to other similar systems. Each sector of data has a header that identifies the sector's location on the disk. A cyclic redundancy check (CRC) is written into the sector headers and at the end of the user data so that the disk controller can detect potential errors. Some errors are soft and can be resolved by automatically retrying the read operation; other errors are permanent, and the disk controller will signal a failure to the operating system if multiple attempts to read the data still fail.
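As an illustration of the sector-level check just described: controllers for IBM-format diskettes use a 16-bit CRC with the polynomial x¹⁶ + x¹² + x⁵ + 1 (0x1021) and initial value 0xFFFF. The sketch below is a minimal, unoptimized rendering of that computation; the A1/FB prefix models the MFM sync and data-address-mark bytes conventionally included in the checked record:

```python
# Illustrative CRC-16 (polynomial 0x1021, initial value 0xFFFF), bitwise
# and non-reflected, as used in IBM-format floppy sector records.
def crc16(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The controller writes the CRC after the record; on read, recomputing
# over the record plus its stored CRC yields zero when no error is found.
record = b"\xA1\xA1\xA1\xFB" + bytes(512)  # sync marks + blank 512-byte sector
stored = crc16(record)
print(hex(stored))
print(crc16(record + stored.to_bytes(2, "big")) == 0)  # True
```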
Insertion and ejection After a disk is inserted, a catch or lever at the front of the drive is manually lowered to prevent the disk from accidentally emerging, engage the spindle clamping hub, and, in two-sided drives, engage the second read/write head with the media. In some 5¼-inch drives, insertion of the disk compresses and locks an ejection spring which partially ejects the disk upon opening the catch or lever. This enables a smaller concave area for the thumb and fingers to grasp the disk during removal. Newer 5¼-inch drives and all 3½-inch drives automatically engage the spindle and heads when a disk is inserted, doing the opposite with the press of the eject button. On Apple Macintosh computers with built-in 3½-inch disk drives, the ejection button is replaced by software controlling an ejection motor, which ejects the disk only when the operating system no longer needs to access the drive. The user could drag the image of the floppy drive to the trash can on the desktop to eject the disk. In the case of a power failure or drive malfunction, a loaded disk can be removed manually by inserting a straightened paper clip into a small hole in the drive's front panel, just as one would do with a CD-ROM drive in a similar situation. The X68000 has soft-eject 5¼-inch drives. Some late-generation IBM PS/2 machines had soft-eject 3½-inch disk drives as well, for which some versions of DOS (i.e. PC DOS 5.02 and higher) offered an EJECT command. Finding track zero Before a disk can be accessed, the drive needs to synchronize its head position with the disk tracks. In some drives, this is accomplished with a Track Zero Sensor, while for others it involves the drive head striking an immobile reference surface. In either case, the head is moved so that it is approaching the track zero position of the disk. When a drive with the sensor has reached track zero, the head stops moving immediately and is correctly aligned. For a drive without the sensor, the mechanism attempts to move the head the maximum possible number of positions needed to reach track zero, knowing that once this motion is complete, the head will be positioned over track zero. Some drive mechanisms without a track zero sensor, such as the Apple II 5¼-inch drive, produce characteristic mechanical noises when trying to move the heads past the reference surface. This physical striking is responsible for the 5¼-inch drive clicking during the boot of an Apple II, and for the loud rattles of its DOS and ProDOS when disk errors occurred and track zero synchronization was attempted. Finding sectors All 8-inch and some 5¼-inch drives used a mechanical method to locate sectors, known as either hard sectors or soft sectors; this is the purpose of the small hole in the jacket, off to the side of the spindle hole. A light-beam sensor detects when a punched hole in the disk is visible through the hole in the jacket. For a soft-sectored disk, there is only a single hole, which is used to locate the first sector of each track. Clock timing is then used to find the other sectors behind it, which requires precise speed regulation of the drive motor. For a hard-sectored disk, there are many holes, one for each sector row, plus an additional hole in a half-sector position that is used to indicate sector zero. The Apple II computer system is notable in that it did not have an index hole sensor and ignored the presence of hard or soft sectoring. Instead, it used special repeating data-synchronization patterns, written to the disk between each sector, to assist the computer in finding and synchronizing with the data in each track. The later 3½-inch drives of the mid-1980s did not use sector index holes, but instead also used synchronization patterns. Most 3½-inch drives used a constant-speed drive motor and contain the same number of sectors across all tracks. This is sometimes referred to as Constant Angular Velocity (CAV).
In order to fit more data onto a disk, some 3½-inch drives (notably the Macintosh External 400K and 800K drives) instead use Constant Linear Velocity (CLV), which uses a variable-speed drive motor that spins more slowly as the head moves away from the center of the disk, maintaining the same speed of the head(s) relative to the surface(s) of the disk. This allows more sectors to be written to the longer middle and outer tracks as the track length increases. Sizes While the original IBM 8-inch disk was actually so defined, the other sizes are defined in the metric system, their usual names being but rough approximations. Different sizes of floppy disks are mechanically incompatible, and disks can fit only one size of drive. Drive assemblies with both 3½-inch and 5¼-inch slots were available during the transition period between the sizes, but they contained two separate drive mechanisms. In addition, there are many subtle, usually software-driven incompatibilities between the two. 5¼-inch disks formatted for use with Apple II computers would be unreadable and treated as unformatted on a Commodore. As computer platforms began to form, attempts were made at interchangeability. For example, the "SuperDrive" included from the Macintosh SE to the Power Macintosh G3 could read, write and format IBM PC-format 3½-inch disks, but few IBM-compatible computers had drives that did the reverse. 8-inch, 5¼-inch and 3½-inch drives were manufactured in a variety of sizes, most to fit standardized drive bays. Alongside the common disk sizes were non-classical sizes for specialized systems. 8-inch floppy disk Floppy disks of the first standard are 8 inches in diameter, protected by a flexible plastic jacket; the first such disk was a read-only device used by IBM as a way of loading microcode. Read/write floppy disks and their drives became available in 1972, but it was IBM's 1973 introduction of the 3740 data-entry system that began the establishment of floppy disks, called by IBM the Diskette 1, as an industry standard for information interchange. Diskettes formatted for this system stored 242,944 bytes. Early microcomputers used for engineering, business, or word processing often used one or more 8-inch disk drives for removable storage; the CP/M operating system was developed for microcomputers with 8-inch drives. The family of 8-inch disks and drives increased over time, and later versions could store up to 1.2 MB; many microcomputer applications did not need that much capacity on one disk, so a smaller-size disk with lower-cost media and drives was feasible. The 5¼-inch drive succeeded the 8-inch size in many applications, and developed to about the same storage capacity as the original 8-inch size, using higher-density media and recording techniques. 5¼-inch floppy disk The head gap of an 80‑track high-density (1.2 MB in the MFM format) 5¼‑inch drive (a.k.a. Mini diskette, Mini disk, or Minifloppy) is smaller than that of a 40‑track double-density (360 KB if double-sided) drive, but it can also format, read and write 40‑track disks, provided the controller supports double stepping or has a switch to do so. 5¼-inch 80-track drives were also called hyper drives. A blank 40‑track disk formatted and written on an 80‑track drive can be taken to its native drive without problems, and a disk formatted on a 40‑track drive can be used on an 80‑track drive. Disks written on a 40‑track drive and then updated on an 80‑track drive become unreadable on any 40‑track drives due to track-width incompatibility.
Single-sided disks were coated on both sides, despite the availability of more expensive double-sided disks. The reason usually given for the higher price was that double-sided disks were certified error-free on both sides of the media. Double-sided disks could be used in some drives for single-sided disks, as long as an index signal was not needed. This was done one side at a time, by turning them over (flippy disks); more expensive dual-head drives which could read both sides without turning the disk over were later produced, and eventually came into universal use. 3½-inch floppy disk In the early 1980s, many manufacturers introduced smaller floppy drives and media in various formats. A consortium of 21 companies eventually settled on a 3½-inch design known as the Micro diskette, Micro disk, or Micro floppy, similar to a Sony design but improved to support both single-sided and double-sided media, with formatted capacities generally of 360 KB and 720 KB respectively. Single-sided drives of the consortium design first shipped in 1983, and double-sided in 1984. The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB or 1.47 MB) disk drive, which would become the most popular, first shipped in 1986. The first Macintosh computers used single-sided 3½-inch floppy disks with 400 KB formatted capacity. These were followed in 1986 by double-sided 800 KB floppies. The higher capacity was achieved at the same recording density by varying the disk-rotation speed with head position so that the linear speed of the disk was closer to constant. Later Macs could also read and write 1.44 MB HD disks in PC format with fixed rotation speed. Higher capacities were similarly achieved by Acorn's RISC OS (800 KB for DD, 1,600 KB for HD) and AmigaOS (880 KB for DD, 1,760 KB for HD). All 3½-inch disks have a rectangular hole in one corner which, if obstructed, write-enables the disk. A sliding detented piece can be moved to block or reveal the part of the rectangular hole that is sensed by the drive. The HD 1.44 MB disks have a second, unobstructed hole in the opposite corner that identifies them as being of that capacity. In IBM-compatible PCs, the three densities of 3½-inch floppy disks are backwards-compatible; higher-density drives can read, write and format lower-density media. It is also possible to format a disk at a lower density than that for which it was intended, but only if the disk is first thoroughly demagnetized with a bulk eraser, as the high-density format is magnetically stronger and will prevent the disk from working in lower-density modes. Writing at densities other than those for which disks were intended, sometimes by altering or drilling holes, was possible but not supported by manufacturers. A hole on one side of a 3½-inch disk can be altered so as to make some disk drives and operating systems treat the disk as one of higher or lower density, for bidirectional compatibility or economic reasons. Some computers, such as the PS/2 and Acorn Archimedes, ignored these holes altogether. Other sizes Other smaller floppy sizes were proposed, especially for portable or pocket-sized devices that needed a smaller storage device. 3¼-inch floppies otherwise similar to 5¼-inch floppies were proposed by Tabor and Dysan. Three-inch disks similar in construction to 3½-inch were manufactured and used for a time, particularly by Amstrad computers and word processors. A two-inch nominal size known as the Video Floppy was introduced by Sony for use with its Mavica still video camera. 
An incompatible two-inch floppy produced by Fujifilm called the LT-1 was used in the Zenith Minisport portable computer. None of these sizes achieved much market success. Sizes, performance and capacity Floppy disk size is often referred to in inches, even in countries using the metric system, and even though the sizes are defined in metric units. The ANSI specification of 3½-inch disks is entitled in part "90 mm (3.5-inch)" though 90 mm is closer to 3.54 inches. Formatted capacities are generally set in terms of kilobytes and megabytes. Data is generally written to floppy disks in sectors (angular blocks) and tracks (concentric rings at a constant radius). For example, the HD format of 3½-inch floppy disks uses 512 bytes per sector, 18 sectors per track, 80 tracks per side and two sides, for a total of 1,474,560 bytes per disk. Some disk controllers can vary these parameters at the user's request, increasing storage on the disk, although the resulting disks may not be readable on machines with other controllers. For example, Microsoft applications were often distributed on 3½-inch 1.68 MB DMF disks formatted with 21 sectors instead of 18; they could still be recognized by a standard controller. On the IBM PC, MSX and most other microcomputer platforms, disks were written using a constant angular velocity (CAV) format, with the disk spinning at a constant speed and the sectors holding the same amount of information on each track regardless of radial location. Because the sectors have constant angular size, the 512 bytes in each sector are compressed more near the disk's center. A more space-efficient technique would be to increase the number of sectors per track toward the outer edge of the disk, from 18 to 30 for instance, thereby keeping nearly constant the amount of physical disk space used for storing each sector; an example is zone bit recording. Apple implemented this in early Macintosh computers by spinning the disk more slowly when the head was at the edge, while maintaining the data rate, allowing 400 KB of storage per side and an extra 80 KB on a double-sided disk. This higher capacity came with a disadvantage: the format used a unique drive mechanism and control circuitry, meaning that Mac disks could not be read on other computers. Apple eventually reverted to constant angular velocity on HD floppy disks with their later machines, which remained unique to Apple in that they also supported the older variable-speed formats. Disk formatting is usually done by a utility program supplied by the computer OS manufacturer; generally, it sets up a file storage directory system on the disk, and initializes its sectors and tracks. Areas of the disk unusable for storage due to flaws can be locked (marked as "bad sectors") so that the operating system does not attempt to use them. This was time-consuming, so many environments offered quick formatting, which skipped the error-checking process. While floppy disks were in common use, disks pre-formatted for popular computers were sold. The unformatted capacity of a floppy disk does not include the sector and track headings of a formatted disk; the difference in storage between them depends on the drive's application. Floppy disk drive and media manufacturers specify the unformatted capacity (for example, 2 MB for a standard 3½-inch HD floppy). The implication is that this figure should not be exceeded, since doing so will most likely result in performance problems. DMF was introduced permitting 1.68 MB to fit onto an otherwise standard 3½-inch disk; utilities then appeared allowing disks to be formatted as such. 
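The arithmetic behind these formatted capacities is simple multiplication of the geometry parameters. As an illustrative sketch (not any vendor's formatter code), the standard and DMF figures quoted above can be reproduced as follows:

```python
def formatted_capacity(bytes_per_sector, sectors_per_track, tracks_per_side, sides):
    """Total formatted capacity of a disk laid out with constant geometry."""
    return bytes_per_sector * sectors_per_track * tracks_per_side * sides

# Standard 3.5-inch HD format: 512 bytes x 18 sectors x 80 tracks x 2 sides
print(formatted_capacity(512, 18, 80, 2))  # 1474560 bytes

# Microsoft DMF: same geometry, but 21 sectors per track
print(formatted_capacity(512, 21, 80, 2))  # 1720320 bytes = 1680 KiB, marketed as "1.68 MB"
```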
Mixtures of decimal prefixes and binary sector sizes require care to properly calculate total capacity. Whereas semiconductor memory naturally favors powers of two (size doubles each time an address pin is added to the integrated circuit), the capacity of a disk drive is the product of sector size, sectors per track, tracks per side and sides (which in hard disk drives with multiple platters can be greater than 2). Although other sector sizes have been known in the past, formatted sector sizes are now almost always set to powers of two (256 bytes, 512 bytes, etc.), and, in some cases, disk capacity is calculated as multiples of the sector size rather than only in bytes, leading to a combination of decimal multiples of sectors and binary sector sizes. For example, 1.44 MB 3½-inch HD disks have the "M" prefix peculiar to their context, coming from their capacity of 2,880 512-byte sectors (1,440 KiB), consistent with neither a decimal megabyte nor a binary mebibyte (MiB). Hence, these disks hold 1.47 MB or 1.41 MiB. Usable data capacity is a function of the disk format used, which in turn is determined by the FDD controller and its settings. Differences between such formats can result in capacities ranging from approximately 1,300 to 1,760 KiB (1.80 MB) on a standard 3½-inch high-density floppy (and up to nearly 2 MB with utilities such as 2M/2MGUI). The highest-capacity techniques require much tighter matching of drive head geometry between drives, which is not always possible and makes such formats unreliable. For example, the LS-240 drive supports a 32 MB capacity on standard 3½-inch HD disks, but this is a write-once technique and requires its own drive. The raw maximum transfer rate of 3½-inch ED floppy drives (2.88 MB) is nominally 1,000 kilobits/s, or approximately 83% that of single-speed CD‑ROM (71% of audio CD). This represents the speed of raw data bits moving under the read head; however, the effective speed is somewhat less due to space used for headers, gaps and other format fields, and can be even further reduced by delays to seek between tracks.
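To make the mixed-prefix convention concrete, a short calculation (illustrative only) shows how one and the same disk yields three different "capacities" depending on the divisor used:

```python
total_bytes = 2880 * 512          # 2,880 sectors of 512 bytes each
print(total_bytes)                # 1474560
print(total_bytes / 1000**2)      # 1.47456  -> "1.47 MB" (decimal megabytes)
print(total_bytes / 1024**2)      # 1.40625  -> "1.41 MiB" (binary mebibytes)
print(total_bytes / (1024 * 1000))  # 1.44   -> the marketing figure "1.44 MB"
```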
Force
A force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol \(\mathbf{F}\). Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium, these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium, they can cause deformation of solid materials or flow in fluids. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. Development of the concept Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved upon for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are the strong, electromagnetic, weak, and gravitational interactions. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Pre-Newtonian concepts Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. 
Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). Newtonian mechanics Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. First law Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. 
An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so. Second law According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object. A modern statement of Newton's second law is the vector equation \(\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}\), where \(\mathbf{p}\) is the momentum of the system and \(\mathbf{F}\) is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, \(\mathbf{p} = m\mathbf{v}\), where \(m\) is the mass and \(\mathbf{v}\) is the velocity. If Newton's second law is applied to a system of constant mass, \(m\) may be moved outside the derivative operator. The equation then becomes \(\mathbf{F} = m\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}\). By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: \(\mathbf{F} = m\mathbf{a}\). Third law Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if \(\mathbf{F}_{1,2}\) is the force of body 1 on body 2 and \(\mathbf{F}_{2,1}\) that of body 2 on body 1, then \(\mathbf{F}_{1,2} = -\mathbf{F}_{2,1}\). This law is sometimes referred to as the action-reaction law, with \(\mathbf{F}_{1,2}\) called the action and \(\mathbf{F}_{2,1}\) the reaction. Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: \(\mathbf{F}_{1,2} + \mathbf{F}_{2,1} = 0\). More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system. Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if \(\mathbf{p}_1\) is the momentum of object 1 and \(\mathbf{p}_2\) the momentum of object 2, then \(\frac{\mathrm{d}\mathbf{p}_1}{\mathrm{d}t} + \frac{\mathrm{d}\mathbf{p}_2}{\mathrm{d}t} = \frac{\mathrm{d}\left(\mathbf{p}_1 + \mathbf{p}_2\right)}{\mathrm{d}t} = 0\). Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained. Defining "force" Some textbooks use Newton's second law as a definition of force. 
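As a numerical illustration of the constant-mass form (the values here are arbitrary, not from the source), the net force can be recovered from a momentum history by differentiation:

```python
# Finite-difference check that F = dp/dt reduces to F = m*a for constant mass.
m = 2.0                    # mass in kg (illustrative value)
dt = 1e-4                  # time step in s

def velocity(t):
    return 3.0 * t         # uniform acceleration of 3 m/s^2

p1 = m * velocity(0.0)
p2 = m * velocity(dt)
force = (p2 - p1) / dt     # approximates dp/dt
print(force)               # ~6.0 N, matching F = m*a = 2.0 * 3.0
```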
However, for the equation \(\mathbf{F} = m\mathbf{a}\) for a constant mass to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. Combining forces Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous. Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action. Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two. 
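A minimal sketch of the parallelogram rule in component form (the two 10 N forces are illustrative values):

```python
import math

# Two forces acting on the same point, each given as (magnitude, angle from east).
forces = [(10.0, math.radians(0)), (10.0, math.radians(90))]

# Resolve each force into orthogonal components, then add component-wise.
fx = sum(mag * math.cos(ang) for mag, ang in forces)
fy = sum(mag * math.sin(ang) for mag, ang in forces)

resultant = math.hypot(fx, fy)                  # magnitude of the net force
direction = math.degrees(math.atan2(fy, fx))    # direction of the net force
print(resultant, direction)                     # ~14.14 N at 45 degrees
```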
Equilibrium When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa. Static Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force, resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion. Dynamic Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. 
When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion. Examples of forces in classical mechanics Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body. Gravitational force or Gravity What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as \(\mathbf{g}\) and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of \(m\) will experience a force \(\mathbf{F} = m\mathbf{g}\). For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward. Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. 
Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass (\(M_\oplus\)) and the radius (\(R_\oplus\)) of the Earth to the gravitational acceleration: \(\mathbf{g} = -\frac{G M_\oplus}{R_\oplus^2}\hat{r}\), where the vector direction is given by \(\hat{r}\), the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant \(G\) is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of \(G\) using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth, since knowing \(G\) could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass \(m_1\) due to the gravitational pull of mass \(m_2\) is \(\mathbf{F} = -\frac{G m_1 m_2}{r^2}\hat{r}\), where \(r\) is the distance between the two objects' centers of mass and \(\hat{r}\) is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed. Electromagnetic The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as \(\mathbf{E} = \frac{\mathbf{F}}{q}\), where \(q\) is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge \(q\) due to electric and magnetic fields: \(\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)\), where \(\mathbf{F}\) is the electromagnetic force, \(\mathbf{E}\) is the electric field at the body's location, \(\mathbf{B}\) is the magnetic field, and \(\mathbf{v}\) is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field. The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. 
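Plugging standard values into the relation above recovers the familiar surface acceleration. A sketch (constants rounded to four significant figures):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(g)            # ~9.82 m/s^2, close to the quoted 9.81
```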
These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. Normal When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface. Friction Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction. The static friction force (\(F_{\mathrm{sf}}\)) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction (\(\mu_{\mathrm{sf}}\)) multiplied by the normal force (\(F_{\mathrm{N}}\)). In other words, the magnitude of the static friction force satisfies the inequality \(0 \le F_{\mathrm{sf}} \le \mu_{\mathrm{sf}} F_{\mathrm{N}}\). The kinetic friction force (\(F_{\mathrm{kf}}\)) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals \(F_{\mathrm{kf}} = \mu_{\mathrm{kf}} F_{\mathrm{N}}\), where \(\mu_{\mathrm{kf}}\) is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction. Tension Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy, since the work done on the load is the same no matter how complicated the machine. Spring A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. 
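The static/kinetic distinction can be captured in a few lines. This is a schematic model with made-up coefficients, not a physical simulation:

```python
def friction_force(applied, normal, mu_static=0.6, mu_kinetic=0.4, moving=False):
    """Magnitude of the friction force opposing an applied force.

    While the object is static, friction matches the applied force up to
    mu_static * normal; once the object is moving, friction is simply
    mu_kinetic * normal.
    """
    if moving:
        return mu_kinetic * normal
    return min(applied, mu_static * normal)

print(friction_force(3.0, 10.0))               # 3.0 N: static friction balances the push
print(friction_force(9.0, 10.0))               # 6.0 N: the static limit has been reached
print(friction_force(9.0, 10.0, moving=True))  # 4.0 N: kinetic friction while sliding
```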
If \(\Delta x\) is the displacement, the force exerted by an ideal spring equals \(\mathbf{F} = -k \Delta \mathbf{x}\), where \(k\) is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load. Centripetal For an object in uniform circular motion, the net force acting on the object equals \(\mathbf{F} = -\frac{m v^2}{r}\hat{r}\), where \(m\) is the mass of the object, \(v\) is the velocity of the object, \(r\) is the distance to the center of the circular path, and \(\hat{r}\) is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction. Continuum mechanics Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: \(\frac{\mathrm{d}\mathbf{F}}{\mathrm{d}V} = -\nabla P\), where \(V\) is the volume of the object in the fluid and \(P\) is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight. A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: \(\mathbf{F}_{\mathrm{d}} = -b\mathbf{v}\), where \(b\) is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and \(\mathbf{v}\) is the velocity of the object. More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as \(\sigma = \frac{F}{A}\), where \(A\) is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions. Fictitious There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. 
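Both the spring and centripetal force laws reduce to one-line formulas; a quick sketch with illustrative values:

```python
# Hooke's law: restoring force of an ideal spring.
k = 50.0                   # spring constant, N/m (illustrative)
dx = 0.1                   # extension from equilibrium, m
print(-k * dx)             # -5.0 N, opposing the displacement

# Centripetal force magnitude for uniform circular motion.
m, v, r = 1.5, 4.0, 2.0    # kg, m/s, m (illustrative)
print(m * v**2 / r)        # 12.0 N, directed toward the center
```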
These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as "pseudo forces". In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. Concepts derived from force Rotation and torque Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force is defined relative to an arbitrary reference point as the cross product \(\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}\), where \(\mathbf{r}\) is the position vector of the force application point relative to the reference point. Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: \(\boldsymbol{\tau} = I\boldsymbol{\alpha}\), where \(I\) is the moment of inertia of the body and \(\boldsymbol{\alpha}\) is the angular acceleration of the body. This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation. Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque: \(\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}\), where \(\mathbf{L}\) is the angular momentum of the particle. Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. Yank The yank is defined as the rate of change of force: \(\mathbf{Y} = \frac{\mathrm{d}\mathbf{F}}{\mathrm{d}t}\). The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used. Kinematic integrals Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse, \(\mathbf{J} = \int_{t_1}^{t_2} \mathbf{F} \, \mathrm{d}t\), which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force, \(W = \int \mathbf{F} \cdot \mathrm{d}\mathbf{x}\), which is equivalent to changes in kinetic energy (yielding the work-energy theorem). Power \(P\) is the rate of change \(\mathrm{d}W/\mathrm{d}t\) of the work \(W\), as the trajectory is extended by a position change \(\mathrm{d}\mathbf{x}\) in a time interval \(\mathrm{d}t\): \(\mathrm{d}W = \mathbf{F} \cdot \mathrm{d}\mathbf{x}\), so \(P = \mathbf{F} \cdot \mathbf{v}\), with \(\mathbf{v}\) the velocity. Potential energy Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field \(U(\mathbf{r})\) is defined as that field whose gradient is equal and opposite to the force produced at every point: \(\mathbf{F} = -\nabla U\). Forces can be classified as conservative or nonconservative. 
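Torque and the kinematic integrals are direct to compute. The sketch below (illustrative values) evaluates a torque with a cross product and an impulse by numerical integration:

```python
import numpy as np

# Torque: tau = r x F
r = np.array([0.5, 0.0, 0.0])   # lever arm, m
F = np.array([0.0, 20.0, 0.0])  # applied force, N
print(np.cross(r, F))           # [0, 0, 10]: 10 N*m about the z-axis

# Impulse: J = integral of F dt, here for F(t) = 6t newtons over 0..2 s
t = np.linspace(0.0, 2.0, 10001)
J = np.trapz(6.0 * t, t)        # trapezoidal rule
print(J)                        # ~12.0 N*s, the change in momentum
```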
Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not. Conservation A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area. Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector \(\mathbf{r}\) emanating from spherically symmetric potentials. Examples of this follow: For gravity: \(\mathbf{F}_g = -\frac{G m_1 m_2}{r^2}\hat{r}\), where \(G\) is the gravitational constant and \(m_n\) is the mass of object \(n\). For electrostatic forces: \(\mathbf{F}_e = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\hat{r}\), where \(\varepsilon_0\) is the electric permittivity of free space and \(q_n\) is the electric charge of object \(n\). For spring forces: \(\mathbf{F} = -k \Delta \mathbf{x}\), where \(k\) is the spring constant. For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones, since each of these macroscopic forces is the net result of the gradients of microscopic potentials. The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases. Units The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond), is the force exerted by standard gravity on one kilogram of mass. 
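The gradient relationship can be checked numerically: differentiate a potential and compare against the corresponding force law. A sketch using the spring potential with illustrative values:

```python
# For a spring, U(x) = 0.5*k*x^2 and the force law predicts F = -k*x.
k = 50.0     # spring constant, N/m (illustrative)
x = 0.2      # displacement, m
h = 1e-6     # finite-difference step

def U(x):
    return 0.5 * k * x**2

F_from_gradient = -(U(x + h) - U(x - h)) / (2 * h)  # central difference of -dU/dx
print(F_from_gradient)  # ~-10.0 N
print(-k * x)           # -10.0 N, in agreement with the force law
```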
The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, though it is still sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.
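The conversions among these units follow directly from their definitions. A small conversion table can be generated as a sketch:

```python
# Force units expressed in newtons, derived from the definitions given above.
units_in_newtons = {
    "newton (N)":        1.0,
    "dyne":              1e-5,                      # g*cm/s^2
    "pound-force (lbf)": 0.45359237 * 9.80665,      # lb mass x standard gravity
    "kilogram-force":    9.80665,                   # kg mass x standard gravity
    "poundal":           0.45359237 * 0.3048,       # lb mass x 1 ft/s^2
}

for name, newtons in units_in_newtons.items():
    print(f"1 {name} = {newtons:.6g} N")
```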
Functional group
In organic chemistry, a functional group is a substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and of the behavior of chemical compounds, as well as the design of chemical syntheses. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis. A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (−COO−), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (−OH) and hydroxyls interact strongly with each other. Moreover, when functional groups are more electronegative than the atoms they attach to, they become polar, and the otherwise nonpolar molecules containing them become polar and therefore soluble in some aqueous environments. Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, the beta carbon; the third, the gamma carbon; etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups that aryl has. Table of common functional groups The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms. Hydrocarbons Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity. 
There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. There are several functional groups that contain an alkene, such as the vinyl group, allyl group, or acrylic group. Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions. Carbocations often carry names ending in -ium. Examples are the tropylium and triphenylmethyl cations and the cyclopentadienyl anion. Groups containing halogen Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity. Groups containing oxygen Compounds that contain C-O bonds each possess differing reactivity based upon the location and hybridization of the C-O bond, owing to the electron-withdrawing effect of sp2-hybridized oxygen (carbonyl groups) and the donating effects of sp3-hybridized oxygen (alcohol groups). Groups containing nitrogen Compounds that contain nitrogen in this category may also contain C-O bonds, as in the case of amides. Groups containing sulfur Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones. Groups containing phosphorus Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table. Groups containing boron Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids. Groups containing metals Fluorine is too electronegative to be bonded to magnesium; it becomes an ionic salt instead. Names of radicals or moieties These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules. When the parent hydrocarbon is saturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); otherwise, the suffix replaces only the final "-e" (e.g. "ethyne" becomes "ethynyl"). When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylidene group (methylidene) has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds). There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl.
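In cheminformatics practice, the "same group, same reactivity" idea is operationalized by substructure searching. A minimal sketch using the open-source RDKit toolkit, assuming it is installed; the SMARTS patterns here are common illustrations, not an exhaustive or authoritative set:

```python
from rdkit import Chem

# SMARTS patterns for a few functional groups (illustrative choices).
patterns = {
    "hydroxyl":        Chem.MolFromSmarts("[OX2H]"),
    "carboxylic acid": Chem.MolFromSmarts("C(=O)[OX2H1]"),
    "primary amine":   Chem.MolFromSmarts("[NX3;H2]"),
}

# Ethanol, acetic acid, and gamma-aminobutyric acid (GABA) as test molecules.
for smiles in ("CCO", "CC(=O)O", "NCCCC(=O)O"):
    mol = Chem.MolFromSmiles(smiles)
    found = [name for name, patt in patterns.items() if mol.HasSubstructMatch(patt)]
    print(smiles, "->", found)
```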
Fractal
In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory. One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension). Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line. Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 20th century. There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. 
Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction). Etymology The term "fractal" was coined by the mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus, meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature. Introduction The word "fractal" often has different connotations for mathematicians and the general public, where the public is more likely to be familiar with fractal art than the mathematical concept. The mathematical concept is difficult to define formally, even for mathematicians, but key features can be understood with a little mathematical background. The feature of "self-similarity", for instance, is easily understood by analogy to zooming in with a lens or other device that magnifies digital images to uncover finer, previously invisible, new structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel mirrors or the homunculus, the little man inside the head of the little man inside the head ...). The difference for fractals is that the pattern reproduced must be detailed. This idea of being detailed relates to another feature that can be understood without much mathematical background: Having a fractal dimension greater than its topological dimension, for instance, refers to how a fractal scales compared to how geometric shapes are usually perceived. A straight line, for instance, is conventionally understood to be one-dimensional; if such a figure is rep-tiled into pieces each 1/3 the length of the original, then there are always three equal pieces. A solid square is understood to be two-dimensional; if such a figure is rep-tiled into pieces each scaled down by a factor of 1/3 in both dimensions, there are a total of 3^2 = 9 pieces. We see that for ordinary self-similar objects, being n-dimensional means that when it is rep-tiled into pieces each scaled down by a scale-factor of 1/r, there are a total of r^n pieces. Now, consider the Koch curve. It can be rep-tiled into four sub-copies, each scaled down by a scale-factor of 1/3. So, strictly by analogy, we can consider the "dimension" of the Koch curve as being the unique real number D that satisfies 3^D = 4. This number is called the fractal dimension of the Koch curve; it is not the conventionally perceived dimension of a curve. In general, a key property of fractals is that the fractal dimension differs from the conventionally understood dimension (formally called the topological dimension). This also leads to understanding a third feature, that fractals as mathematical equations are "nowhere differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways.
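For the Koch curve specifically, the condition stated above can be solved in closed form (a short worked step, using only the relation just given):

\[
3^D = 4 \quad\Longleftrightarrow\quad D = \frac{\log 4}{\log 3} \approx 1.2619,
\]

so the Koch curve's fractal dimension lies strictly between 1 (its topological dimension) and 2 (the dimension of the plane containing it).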
To elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring an infinitely "wiggly" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always re-appear, at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one would need an infinitely long tape to cover the entire curve perfectly, i.e. the snowflake has an infinite perimeter. History The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way. A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". Thus, it was not until two centuries had passed that on July 18, 1872 Karl Weierstrass presented, at the Royal Prussian Academy of Sciences, the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable. In addition, the difference quotient becomes arbitrarily large as the summation index increases. Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass, published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals. Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called "self-inverse" fractals. One of the next milestones came in 1904, when Helge von Koch, extending ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic definition, gave a more geometric definition including hand-drawn images of a similar function, which is now called the Koch snowflake. Another milestone came a decade later in 1915, when Wacław Sierpiński constructed his famous triangle and then, one year later, his carpet.
By 1918, two French mathematicians, Pierre Fatou and Gaston Julia, though working independently, arrived essentially simultaneously at results describing what is now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals. Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of "dimension", significantly for the evolution of the definition of fractals, to allow for sets to have non-integer dimensions. The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole, described a new fractal curve, the Lévy C curve. Different researchers have postulated that without the aid of modern computer graphics, early investigators were limited to what they could depict in manual drawings, so lacked the means to visualize the beauty and appreciate some of the implications of many of the patterns they had discovered (the Julia set, for instance, could only be visualized through a few iterations as very simple drawings). That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension, which built on earlier work by Lewis Fry Richardson. In 1975, Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word "fractal" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term "fractal". In 1980, Loren Carpenter gave a presentation at the SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes. Definition and characteristics One often cited description that Mandelbrot published to describe geometric fractals is "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole"; this is generally helpful but limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and the unusual relationship fractals have with the space they are embedded in. One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. In 1975 when Mandelbrot coined the word "fractal", he did so to denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension. However, this requirement is not met by space-filling curves such as the Hilbert curve. Because of the trouble involved in finding one definition for fractals, some argue that fractals should not be strictly defined at all. 
According to Falconer, fractals should be only generally characterized by a gestalt of the following features: Self-similarity, which may include: Exact self-similarity: identical at all scales, such as the Koch snowflake Quasi self-similarity: approximates the same pattern at different scales; may contain small copies of the entire fractal in distorted and degenerate forms; e.g., the Mandelbrot set's satellites are approximations of the entire set, but not exact copies. Statistical self-similarity: repeats a pattern stochastically so numerical or statistical measures are preserved across scales; e.g., randomly generated fractals like the well-known example of the coastline of Britain for which one would not expect to find a segment scaled and repeated as neatly as the repeated unit that defines fractals like the Koch snowflake. Qualitative self-similarity: as in a time series Multifractal scaling: characterized by more than one fractal dimension or scaling rule Fine or detailed structure at arbitrarily small scales. A consequence of this structure is that fractals may have emergent properties (related to the next criterion in this list). Irregularity locally and globally that cannot easily be described in the language of traditional Euclidean geometry other than as the limit of a recursively defined sequence of stages. For images of fractal patterns, this has been expressed by phrases such as "smoothly piling up surfaces" and "swirls upon swirls"; see Common techniques for generating fractals. As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar without having other typically fractal features. A straight line, for instance, is self-similar but not fractal because it lacks detail, and is easily described in Euclidean language without a need for recursion. Common techniques for generating fractals Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome. Iterated function systems (IFS) – use fixed geometric replacement rules; may be stochastic or deterministic; e.g., Koch snowflake, Cantor set, Haferman carpet, Sierpinski carpet, Sierpinski gasket, Peano curve, Harter-Heighway dragon curve, T-square, Menger sponge Strange attractors – use iterations of a map or solutions of a system of initial-value differential or difference equations that exhibit chaos (e.g., see multifractal image, or the logistic map) L-systems – use string rewriting; may resemble branching patterns, such as in plants, biological cells (e.g., neurons and immune system cells), blood vessels, pulmonary structure, etc. or turtle graphics patterns such as space-filling curves and tilings Escape-time fractals – use a formula or recurrence relation at each point in a space (such as the complex plane); usually quasi-self-similar; also known as "orbit" fractals; e.g., the Mandelbrot set, Julia set, Burning Ship fractal, Nova fractal and Lyapunov fractal. The 2d vector fields that are generated by one or two iterations of escape-time formulae also give rise to a fractal form when points (or pixel data) are passed through this field repeatedly. Random fractals – use stochastic rules; e.g., Lévy flight, percolation clusters, self-avoiding walks, fractal landscapes, trajectories of Brownian motion and the Brownian tree (i.e., dendritic fractals generated by modeling diffusion-limited aggregation or reaction-limited aggregation clusters).
Finite subdivision rules – use a recursive topological algorithm for refining tilings and they are similar to the process of cell division. The iterative processes used in creating the Cantor set and the Sierpinski carpet are examples of finite subdivision rules, as is barycentric subdivision. Applications Simulated fractals Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being "fractals" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals. Modeled fractals may be sounds, digital images, electrochemical patterns, circadian rhythms, etc. Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called "in silico" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques. The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects, such as coastlines and mountains. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms. Natural phenomena with fractal features Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The connection between fractals and leaves, for instance, is currently being used to determine how much carbon is contained in trees. Phenomena known to have fractal features include: Actin cytoskeleton Algae Animal coloration patterns Blood vessels and pulmonary vessels Brownian motion (generated by a one-dimensional Wiener process). Clouds and rainfall areas Coastlines Craters Crystals DNA Dust grains Earthquakes Fault lines Geometrical optics Heart rates Heart sounds Lake shorelines and areas Lightning bolts Mountain-goat horns Neurons Polymers Percolation Mountain ranges Ocean waves Pineapple Proteins Psychedelic Experience Purkinje cells Rings of Saturn River networks Romanesco broccoli Snowflakes Soil pores Surfaces in turbulent flows Trees Fractals in cell biology Fractals often appear in the realm of living organisms where they arise through branching processes and other complex pattern formation. Ian Wong and co-workers have shown that migrating cells can form fractals by clustering and branching. Nerve cells function through processes at the cell surface, with phenomena that are enhanced by largely increasing the surface to volume ratio. 
As a consequence nerve cells often are found to form into fractal patterns. These processes are crucial in cell physiology and different pathologies. Multiple subcellular structures also are found to assemble into fractals. Diego Krapf has shown that through branching processes the actin filaments in human cells assemble into fractal patterns. Similarly Matthias Weiss showed that the endoplasmic reticulum displays fractal features. The current understanding is that fractals are ubiquitous in cell biology, from proteins, to organelles, to whole cells. In creative works Since 1999 numerous scientific groups have performed fractal analysis on over 50 paintings created by Jackson Pollock by pouring paint directly onto horizontal canvasses. Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks. Cognitive neuroscientists have shown that Pollock's fractals induce the same stress-reduction in observers as computer-generated fractals and Nature's fractals. Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns. It involves pressing paint between two surfaces and pulling them apart. Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art, games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and even cornrow hairstyles. Hokky Situngkir also suggested the similar properties in Indonesian traditional art, batik, and ornaments found in traditional houses. Ethnomathematician Ron Eglash has discussed the planned layout of Benin city using fractals as the basis, not only in the city itself and the villages but even in the rooms of houses. He commented that "When Europeans first came to Africa, they considered the architecture very disorganised and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet." In a 1996 interview with Michael Silverblatt, David Foster Wallace explained that the structure of the first draft of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle (a.k.a. Sierpinski gasket), but that the edited novel is "more like a lopsided Sierpinsky Gasket". Some works by the Dutch artist M. C. Escher, such as Circle Limit III, contain shapes repeated to infinity that become smaller and smaller as they get near to the edges, in a pattern that would always look the same if zoomed in. Aesthetics and Psychological Effects of Fractal Based Design: Highly prevalent in nature, fractal patterns possess self-similar components that repeat at varying size scales. The perceptual experience of human-made environments can be impacted with inclusion of these natural patterns. Previous work has demonstrated consistent trends in preference for and complexity estimates of fractal patterns. However, limited information has been gathered on the impact of other visual judgments. Here we examine the aesthetic and perceptual experience of fractal ‘global-forest’ designs already installed in humanmade spaces and demonstrate how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant well-being. 
These designs are composite fractal patterns consisting of individual fractal ‘tree-seeds’ which combine to create a ‘global fractal forest.’ The local ‘tree-seed’ patterns, global configuration of tree-seed locations, and overall resulting ‘global-forest’ patterns have fractal qualities. These designs span multiple mediums yet are all intended to lower occupant stress without detracting from the function and overall design of the space. In this series of studies, we first establish divergent relationships between various visual attributes, with pattern complexity, preference, and engagement ratings increasing with fractal complexity compared to ratings of refreshment and relaxation which stay the same or decrease with complexity. Subsequently, we determine that the local constituent fractal (‘tree-seed’) patterns contribute to the perception of the overall fractal design, and address how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. This set of studies demonstrates that fractal preference is driven by a balance between increased arousal (desire for engagement and complexity) and decreased tension (desire for relaxation or refreshment). Installations of these composite mid-high complexity ‘global-forest’ patterns consisting of ‘tree-seed’ components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant well-being. Physiological responses Humans appear to be especially well-adapted to processing fractal patterns with fractal dimension between 1.3 and 1.5. When humans view fractal patterns with fractal dimension between 1.3 and 1.5, this tends to reduce physiological stress. Applications in technology Fractal antennas Fractal transistor Fractal heat exchangers Digital imaging Architecture Urban growth Classification of histopathology slides Fractal landscape or Coastline complexity Detecting 'life as we don't know it' by fractal analysis Enzymes (Michaelis–Menten kinetics) Generation of new music Signal and image compression Creation of digital photographic enlargements Fractal in soil mechanics Computer and video game design Computer Graphics Organic environments Procedural generation Fractography and fracture mechanics Small angle scattering theory of fractally rough systems T-shirts and other fashion Generation of patterns for camouflage, such as MARPAT Digital sundial Technical analysis of price series Fractals in networks Medicine Neuroscience Diagnostic Imaging Pathology Geology Geography Archaeology Soil mechanics Seismology Search and rescue Morton order space filling curves for GPU cache coherency in texture mapping, rasterisation and indexing of turbulence data.
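Several of the listed applications (classification of histopathology slides, coastline complexity, fractal analysis of paintings) reduce in practice to estimating a fractal dimension from pixel data. A minimal box-counting sketch in Python, assuming the input is a 2-D binary NumPy array at least as large as the biggest box size; the function name and the grid of box sizes are illustrative choices, not taken from any source cited here:

```python
import numpy as np

def box_counting_dimension(image, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary array.

    For each box size s, count the number N(s) of s-by-s boxes containing
    at least one set pixel, then fit log N(s) against log(1/s); the slope
    approximates the fractal dimension. Assumes image is at least 32x32.
    """
    counts = []
    for s in sizes:
        # Trim so the array tiles exactly into s-by-s boxes.
        h = (image.shape[0] // s) * s
        w = (image.shape[1] // s) * s
        trimmed = image[:h, :w]
        # Collapse each s-by-s box to one flag: does it contain any set pixel?
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(int(boxes.sum()))
    slope, _intercept = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                   np.log(counts), 1)
    return slope
```

For a filled Sierpinski triangle rendered as a binary image, the fitted slope comes out near log 3 / log 2 ≈ 1.585, consistent with the theory sketched earlier in this article.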
Mathematics
Geometry
null
10915
https://en.wikipedia.org/wiki/Fluid
Fluid
In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them. Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can have both fluid and solid properties. Non-Newtonian fluids like Silly Putty appear to behave similarly to a solid when a sudden force is applied. Substances with a very high viscosity such as pitch appear to behave like a solid (see pitch drop experiment) as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases. A fluid in medicine or biology refers to any liquid constituent of the body (body fluid), whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids (e.g. "drink plenty of fluids"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils. Physics Fluids display properties such as: lack of resistance to permanent deformation, resisting only relative rates of deformation in a dissipative, frictional manner, and the ability to flow (also described as the ability to take on the shape of the container). These properties are typically a function of their inability to support a shear stress in static equilibrium. By contrast, solids respond to shear either with a spring-like restoring force—meaning that deformations are reversible—or they require a certain initial stress before they deform (see plasticity). Solids respond with restoring forces to both shear stresses and to normal stresses, both compressive and tensile. By contrast, ideal fluids only respond with restoring forces to normal stresses, called pressure: fluids can be subjected both to compressive stress—corresponding to positive pressure—and to tensile stress, corresponding to negative pressure. Solids and liquids both have tensile strengths, which, when exceeded, create irreversible deformation and fracture in solids and cause the onset of cavitation in liquids. Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. In response to surface tension, the ability of liquids to flow results in behaviour differing from that of solids, though at equilibrium both tend to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases, lacking free surfaces, freely diffuse. Modelling In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law which describes the role of pressure in characterizing a fluid's state. The behavior of fluids can be described by the Navier–Stokes equations—a set of partial differential equations which are based on: continuity (conservation of mass), conservation of linear momentum, conservation of angular momentum, conservation of energy. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics depending on whether the fluid is in motion.
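In symbols, for simple shear (a standard textbook constitutive sketch, not a formula taken from this article's references):

\[
\tau_{\text{solid}} = G\,\gamma,
\qquad
\tau_{\text{fluid}} = \mu\,\frac{\mathrm{d}\gamma}{\mathrm{d}t},
\]

where \(\tau\) is the shear stress, \(\gamma\) the shear strain, \(G\) the shear modulus, and \(\mu\) the dynamic viscosity; the second relation is Newton's law of viscosity, on which the classification below builds.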
Classification of fluids Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following: Newtonian fluids: where stress is directly proportional to rate of strain Non-Newtonian fluids: where stress is not directly proportional to rate of strain, but may depend on its higher powers and derivatives. Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids. Fluids may also be classified by their compressibility: Compressible fluid: A fluid whose volume is reduced or density changed when pressure is applied to the fluid or when the fluid becomes supersonic. Incompressible fluid: A fluid that does not vary in volume with changes in pressure or flow velocity (i.e., ρ = constant), such as water or oil. Truly Newtonian and truly incompressible fluids do not actually exist; they are idealizations assumed for theoretical treatment. Idealized fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids.
Physical sciences
Fluid mechanics
Physics
10918
https://en.wikipedia.org/wiki/Fibonacci%20sequence
Fibonacci sequence
In mathematics, the Fibonacci sequence is a sequence in which each element is the sum of the two elements that precede it. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted F(n). Many writers begin the sequence with 0 and 1, although some authors start it from 1 and 1 and some (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the sequence begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book Liber Abaci. Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species. Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the n-th Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences. Definition The Fibonacci numbers may be defined by the recurrence relation F(0) = 0, F(1) = 1, and F(n) = F(n−1) + F(n−2) for n > 1. Under some older definitions, the value F(0) = 0 is omitted, so that the sequence starts with F(1) = F(2) = 1 and the recurrence is valid for n > 2. The first 20 Fibonacci numbers, F(0) through F(19), are: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181. History India The Fibonacci sequence appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is F(m+1). Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula misrau cha ("the two are mixed") and scholars who interpret it in context as saying that the number of patterns for m beats, F(m+1), is obtained by adding one [S] to the F(m) cases and one [L] to the F(m−1) cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra (c. 100 BC–c. 350 AD). However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135): Variations of two earlier meters [is the variation] ... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens.
[works out examples 8, 13, 21] ... In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations]. Hemachandra (c. 1150) is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number ... of the next mātrā-vṛtta." Europe The Fibonacci sequence first appears in the book (The Book of Calculation, 1202) by Fibonacci where it is used to calculate the growth of rabbit populations. Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that: a newly born breeding pair of rabbits are put in a field; each breeding pair mates at the age of one month, and at the end of their second month they always produce another pair of rabbits; and rabbits never die, but continue breeding forever. Fibonacci posed the rabbit math problem: how many pairs will there be in one year? At the end of the first month, they mate, but there is still only 1 pair. At the end of the second month they produce a new pair, so there are 2 pairs in the field. At the end of the third month, the original pair produce a second pair, but the second pair only mate to gestate for a month, so there are 3 pairs in all. At the end of the fourth month, the original pair has produced yet another new pair, and the pair born two months ago also produces their first pair, making 5 pairs. At the end of the -th month, the number of pairs of rabbits is equal to the number of mature pairs (that is, the number of pairs in month ) plus the number of pairs alive last month (month ). The number in the -th month is the -th Fibonacci number. The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas. Relation to the golden ratio Closed-form expression Like every sequence defined by a homogeneous linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form expression. It has become known as Binet's formula, named after French mathematician Jacques Philippe Marie Binet, though it was already known by Abraham de Moivre and Daniel Bernoulli: where is the golden ratio, and is its conjugate: Since , this formula can also be written as To see the relation between the sequence and these constants, note that and are both solutions of the equation and thus so the powers of and satisfy the Fibonacci recursion. In other words, It follows that for any values and , the sequence defined by satisfies the same recurrence, If and are chosen so that and then the resulting sequence must be the Fibonacci sequence. This is the same as requiring and satisfy the system of equations: which has solution producing the required formula. Taking the starting values and to be arbitrary constants, a more general solution is: where Computation by rounding Since for all , the number is the closest integer to . Therefore, it can be found by rounding, using the nearest integer function: In fact, the rounding error quickly becomes very small as grows, being less than 0.1 for , and less than 0.01 for . This formula is easily inverted to find an index of a Fibonacci number : Instead using the floor function gives the largest index of a Fibonacci number that is not greater than : where , , and . Magnitude Since Fn is asymptotic to , the number of digits in is asymptotic to . As a consequence, for every integer there are either 4 or 5 Fibonacci numbers with decimal digits. 
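The computation-by-rounding rule above translates into a few lines of code; a minimal Python sketch (the function name is illustrative, and the method is only trustworthy while double-precision arithmetic can resolve φ^n/√5, roughly n ≤ 70):

```python
import math

def fib_round(n):
    """n-th Fibonacci number as the nearest integer to phi**n / sqrt(5).

    Valid only while the floating-point error stays below 0.5,
    which holds for roughly n <= 70 in double precision.
    """
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

print(fib_round(10), fib_round(20))  # 55 6765
```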
More generally, in the base representation, the number of digits in is asymptotic to Limit of consecutive quotients Johannes Kepler observed that the ratio of consecutive Fibonacci numbers converges. He wrote that "as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost", and concluded that these ratios approach the golden ratio This convergence holds regardless of the starting values and , unless . This can be verified using Binet's formula. For example, the initial values 3 and 2 generate the sequence 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, ... . The ratio of consecutive elements in this sequence shows the same convergence towards the golden ratio. In general, , because the ratios between consecutive Fibonacci numbers approaches . Decomposition of powers Since the golden ratio satisfies the equation this expression can be used to decompose higher powers as a linear function of lower powers, which in turn can be decomposed all the way down to a linear combination of and 1. The resulting recurrence relationships yield Fibonacci numbers as the linear coefficients: This equation can be proved by induction on : For , it is also the case that and it is also the case that These expressions are also true for if the Fibonacci sequence Fn is extended to negative integers using the Fibonacci rule Identification Binet's formula provides a proof that a positive integer is a Fibonacci number if and only if at least one of or is a perfect square. This is because Binet's formula, which can be written as , can be multiplied by and solved as a quadratic equation in via the quadratic formula: Comparing this to , it follows that In particular, the left-hand side is a perfect square. Matrix form A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is alternatively denoted which yields . The eigenvalues of the matrix are and corresponding to the respective eigenvectors As the initial value is it follows that the th element is From this, the th element in the Fibonacci series may be read off directly as a closed-form expression: Equivalently, the same computation may be performed by diagonalization of through use of its eigendecomposition: where The closed-form expression for the th element in the Fibonacci series is therefore given by which again yields The matrix has a determinant of −1, and thus it is a 2 × 2 unimodular matrix. This property can be understood in terms of the continued fraction representation for the golden ratio : The convergents of the continued fraction for are ratios of successive Fibonacci numbers: is the -th convergent, and the -st convergent can be found from the recurrence relation . The matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1. The matrix representation gives the following closed-form expression for the Fibonacci numbers: For a given , this matrix can be computed in arithmetic operations, using the exponentiation by squaring method. Taking the determinant of both sides of this equation yields Cassini's identity, Moreover, since for any square matrix , the following identities can be derived (they are obtained from two different coefficients of the matrix product, and one may easily deduce the second one from the first one by changing into ), In particular, with , These last two identities provide a way to compute Fibonacci numbers recursively in arithmetic operations. 
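Such identities are commonly packaged as the "fast doubling" method. A minimal sketch, assuming the standard doubling formulas F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which are equivalent to identities derived from the matrix form:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling, in O(log n) arithmetic steps.

    Uses F(2k)   = F(k) * (2*F(k+1) - F(k))
         F(2k+1) = F(k)**2 + F(k+1)**2.
    """
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

print(fib_pair(10)[0])  # 55
```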
This matches the time for computing the -th Fibonacci number from the closed-form matrix formula, but with fewer redundant steps if one avoids recomputing an already computed Fibonacci number (recursion with memoization). Combinatorial identities Combinatorial proofs Most identities involving Fibonacci numbers can be proved using combinatorial arguments using the fact that can be interpreted as the number of (possibly empty) sequences of 1s and 2s whose sum is . This can be taken as the definition of with the conventions , meaning no such sequence exists whose sum is −1, and , meaning the empty sequence "adds up" to 0. In the following, is the cardinality of a set: In this manner the recurrence relation may be understood by dividing the sequences into two non-overlapping sets where all sequences either begin with 1 or 2: Excluding the first element, the remaining terms in each sequence sum to or and the cardinality of each set is or giving a total of sequences, showing this is equal to . In a similar manner it may be shown that the sum of the first Fibonacci numbers up to the -th is equal to the -th Fibonacci number minus 1. In symbols: This may be seen by dividing all sequences summing to based on the location of the first 2. Specifically, each set consists of those sequences that start until the last two sets each with cardinality 1. Following the same logic as before, by summing the cardinality of each set we see that ... where the last two terms have the value . From this it follows that . A similar argument, grouping the sums by the position of the first 1 rather than the first 2 gives two more identities: and In words, the sum of the first Fibonacci numbers with odd index up to is the -th Fibonacci number, and the sum of the first Fibonacci numbers with even index up to is the -th Fibonacci number minus 1. A different trick may be used to prove or in words, the sum of the squares of the first Fibonacci numbers up to is the product of the -th and -th Fibonacci numbers. To see this, begin with a Fibonacci rectangle of size and decompose it into squares of size ; from this the identity follows by comparing areas: Symbolic method The sequence is also considered using the symbolic method. More precisely, this sequence corresponds to a specifiable combinatorial class. The specification of this sequence is . Indeed, as stated above, the -th Fibonacci number equals the number of combinatorial compositions (ordered partitions) of using terms 1 and 2. It follows that the ordinary generating function of the Fibonacci sequence, , is the rational function Induction proofs Fibonacci identities often can be easily proved using mathematical induction. For example, reconsider Adding to both sides gives and so we have the formula for Similarly, add to both sides of to give Binet formula proofs The Binet formula is This can be used to prove Fibonacci identities. For example, to prove that note that the left hand side multiplied by becomes as required, using the facts and to simplify the equations. Other identities Numerous other identities can be derived using various methods. Here are some of them: Cassini's and Catalan's identities Cassini's identity states that Catalan's identity is a generalization: d'Ocagne's identity where is the -th Lucas number. The last is an identity for doubling ; other identities of this type are by Cassini's identity. These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number. 
More generally, or alternatively, putting in this formula, one gets again the formulas at the end of the section Matrix form above. Generating function The generating function of the Fibonacci sequence is the power series This series is convergent for any complex number satisfying and its sum has a simple closed form: This can be proved by multiplying by : where all terms involving for cancel out because of the defining Fibonacci recurrence relation. The partial fraction decomposition is given by where is the golden ratio and is its conjugate. The related function is the generating function for the negafibonacci numbers, and satisfies the functional equation Using equal to any of 0.01, 0.001, 0.0001, etc. lays out the first Fibonacci numbers in the decimal expansion of . For example, Reciprocal sums Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions. For example, the sum of every odd-indexed reciprocal Fibonacci number can be written as and the sum of squared reciprocal Fibonacci numbers as If we add 1 to each Fibonacci number in the first sum, there is also the closed form and there is a nested sum of squared Fibonacci numbers giving the reciprocal of the golden ratio, The sum of all even-indexed reciprocal Fibonacci numbers is with the Lambert series since So the reciprocal Fibonacci constant is Moreover, this number has been proved irrational by Richard André-Jeannin. Millin's series gives the identity which follows from the closed form for its partial sums as tends to infinity: Primes and divisibility Divisibility properties Every third number of the sequence is even (a multiple of F(3) = 2) and, more generally, every k-th number of the sequence is a multiple of F(k). Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property gcd(F(m), F(n)) = F(gcd(m, n)), where gcd is the greatest common divisor function. (This relation is different if a different indexing convention is used, such as the one that starts the sequence with and .) In particular, any three consecutive Fibonacci numbers are pairwise coprime because both F(1) = 1 and F(2) = 1. That is, gcd(F(n), F(n+1)) = gcd(F(n), F(n+2)) = 1 for every n. Every prime number p divides a Fibonacci number that can be determined by the value of p modulo 5. If p is congruent to 1 or 4 modulo 5, then p divides F(p−1), and if p is congruent to 2 or 3 modulo 5, then p divides F(p+1). The remaining case is that p = 5, and in this case p divides F(p). These cases can be combined into a single, non-piecewise formula, using the Legendre symbol: Primality testing The above formula can be used as a primality test in the sense that if where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that n is a prime, and if it fails to hold, then n is definitely not a prime. If n is composite and satisfies the formula, then n is a Fibonacci pseudoprime. When n is large, say a 500-bit number, then we can calculate F(n) (mod n) efficiently using the matrix form; the matrix power is calculated using modular exponentiation, which can be adapted to matrices. Fibonacci primes A Fibonacci prime is a Fibonacci number that is prime. The first few are: 2, 3, 5, 13, 89, 233, 1597, 28657, 514229, ... Fibonacci primes with thousands of digits have been found, but it is not known whether there are infinitely many. F(kn) is divisible by F(n), so, apart from F(4) = 3, any Fibonacci prime must have a prime index. As there are arbitrarily long runs of composite numbers, there are therefore also arbitrarily long runs of composite Fibonacci numbers.
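For the primality test just described, F(n) mod n can be computed without ever forming the huge integer F(n), by squaring the Fibonacci matrix modulo n. A minimal sketch (fib_mod is an illustrative name, not from a cited source):

```python
def fib_mod(n, m):
    """F(n) modulo m, via exponentiation by squaring of [[1, 1], [1, 0]].

    [[1, 1], [1, 0]]**n = [[F(n+1), F(n)], [F(n), F(n-1)]], so the
    off-diagonal entry of the n-th matrix power is F(n).
    """
    def mat_mul(x, y):
        return [[(x[0][0] * y[0][0] + x[0][1] * y[1][0]) % m,
                 (x[0][0] * y[0][1] + x[0][1] * y[1][1]) % m],
                [(x[1][0] * y[0][0] + x[1][1] * y[1][0]) % m,
                 (x[1][0] * y[0][1] + x[1][1] * y[1][1]) % m]]
    result = [[1, 0], [0, 1]]          # identity matrix
    base = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]

print(fib_mod(10, 1000))  # 55
```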
No Fibonacci number greater than is one greater or one less than a prime number. The only nontrivial square Fibonacci number is 144. Attila Pethő proved in 2001 that there is only a finite number of perfect power Fibonacci numbers. In 2006, Y. Bugeaud, M. Mignotte, and S. Siksek proved that 8 and 144 are the only such non-trivial perfect powers. 1, 3, 21, and 55 are the only triangular Fibonacci numbers, which was conjectured by Vern Hoggatt and proved by Luo Ming. No Fibonacci number can be a perfect number. More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect. Prime divisors With the exceptions of 1, 8 and 144 (, and ) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 ( and ) are the only Fibonacci numbers that are the product of other Fibonacci numbers. The divisibility of Fibonacci numbers by a prime is related to the Legendre symbol which is evaluated as follows: If is a prime number then For example, It is not known whether there exists a prime such that Such primes (if there are any) would be called Wall–Sun–Sun primes. Also, if is an odd prime number then: Example 1. , in this case and we have: Example 2. , in this case and we have: Example 3. , in this case and we have: Example 4. , in this case and we have: For odd , all odd prime divisors of are congruent to 1 modulo 4, implying that all odd divisors of (as the products of odd prime divisors) are congruent to 1 modulo 4. For example, All known factors of Fibonacci numbers for all are collected at the relevant repositories. Periodicity modulo n If the members of the Fibonacci sequence are taken mod , the resulting sequence is periodic with period at most . The lengths of the periods for various form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular , the Pisano period may be found as an instance of cycle detection. Generalizations The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients. Some specific examples that are close, in some sense, to the Fibonacci sequence include: Generalizing the index to negative integers to produce the negafibonacci numbers. Generalizing the index to real numbers using a modification of Binet's formula. Starting with other integers. Lucas numbers have , , and . Primefree sequences use the Fibonacci recursion with other starting points to generate sequences in which all numbers are composite. Letting a number be a linear function (other than the sum) of the 2 preceding numbers. The Pell numbers have . If the coefficient of the preceding value is assigned a variable value , the result is the sequence of Fibonacci polynomials. Not adding the immediately preceding numbers. The Padovan sequence and Perrin numbers have . Generating the next number by adding 3 numbers (tribonacci numbers), 4 numbers (tetranacci numbers), or more. 
The resulting sequences are known as n-Step Fibonacci numbers. Applications Mathematics The Fibonacci numbers occur as the sums of binomial coefficients in the "shallow" diagonals of Pascal's triangle: This can be proved by expanding the generating function and collecting like terms of . To see how the formula is used, we can arrange the sums by the number of terms present: {| | | |- | | | | | |- | | | | |} which is , where we are choosing the positions of twos from terms. These numbers also give the solution to certain enumerative problems, the most common of which is that of counting the number of ways of writing a given number as an ordered sum of 1s and 2s (called compositions); there are ways to do this (equivalently, it's also the number of domino tilings of the rectangle). For example, there are ways one can climb a staircase of 5 steps, taking one or two steps at a time: {| | | | | | | |- | | | | |} The figure shows that 8 can be decomposed into 5 (the number of ways to climb 4 steps, followed by a single-step) plus 3 (the number of ways to climb 3 steps, followed by a double-step). The same reasoning is applied recursively until a single step, of which there is only one way to climb. The Fibonacci numbers can be found in different ways among the set of binary strings, or equivalently, among the subsets of a given set. The number of binary strings of length without consecutive s is the Fibonacci number . For example, out of the 16 binary strings of length 4, there are without consecutive s—they are 0000, 0001, 0010, 0100, 0101, 1000, 1001, and 1010. Such strings are the binary representations of Fibbinary numbers. Equivalently, is the number of subsets of without consecutive integers, that is, those for which for every . A bijection with the sums to is to replace 1 with 0 and 2 with 10, and drop the last zero. The number of binary strings of length without an odd number of consecutive s is the Fibonacci number . For example, out of the 16 binary strings of length 4, there are without an odd number of consecutive s—they are 0000, 0011, 0110, 1100, 1111. Equivalently, the number of subsets of without an odd number of consecutive integers is . A bijection with the sums to is to replace 1 with 0 and 2 with 11. The number of binary strings of length without an even number of consecutive s or s is . For example, out of the 16 binary strings of length 4, there are without an even number of consecutive s or s—they are 0001, 0111, 0101, 1000, 1010, 1110. There is an equivalent statement about subsets. Yuri Matiyasevich was able to show that the Fibonacci numbers can be defined by a Diophantine equation, which led to his solving Hilbert's tenth problem. The Fibonacci numbers are also an example of a complete sequence. This means that every positive integer can be written as a sum of Fibonacci numbers, where any one number is used once at most. Moreover, every positive integer can be written in a unique way as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. This is known as Zeckendorf's theorem, and a sum of Fibonacci numbers that satisfies these conditions is called a Zeckendorf representation. The Zeckendorf representation of a number can be used to derive its Fibonacci coding. 
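The Zeckendorf representation can be found by a simple greedy pass from the largest Fibonacci number downward; taking the largest Fibonacci number not exceeding what remains automatically skips the adjacent index, which guarantees the no-two-consecutive condition. A minimal sketch (the function name is illustrative):

```python
def zeckendorf(n):
    """Distinct, non-consecutive Fibonacci numbers summing to n (n >= 1)."""
    fibs = [1, 2]                  # start at F(2), F(3) to avoid the duplicate 1
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f                 # remainder is smaller than the next Fibonacci number down
    return parts

print(zeckendorf(100))  # [89, 8, 3]
```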
Starting with 5, every second Fibonacci number is the length of the hypotenuse of a right triangle with integer sides, or in other words, the largest number in a Pythagorean triple, obtained from the formula The sequence of Pythagorean triangles obtained from this formula has sides of lengths (3,4,5), (5,12,13), (16,30,34), (39,80,89), ... . The middle side of each of these triangles is the sum of the three sides of the preceding triangle. The Fibonacci cube is an undirected graph with a Fibonacci number of nodes that has been proposed as a network topology for parallel computing. Fibonacci numbers appear in the ring lemma, used to prove connections between the circle packing theorem and conformal maps. Computer science The Fibonacci numbers are important in computational run-time analysis of Euclid's algorithm to determine the greatest common divisor of two integers: the worst case input for this algorithm is a pair of consecutive Fibonacci numbers. Fibonacci numbers are used in a polyphase version of the merge sort algorithm in which an unsorted list is divided into two lists whose lengths correspond to sequential Fibonacci numbers—by dividing the list so that the two parts have lengths in the approximate proportion . A tape-drive implementation of the polyphase merge sort was described in The Art of Computer Programming. A Fibonacci tree is a binary tree whose child trees (recursively) differ in height by exactly 1. So it is an AVL tree, and one with the fewest nodes for a given height—the "thinnest" AVL tree. These trees have a number of vertices that is a Fibonacci number minus one, an important fact in the analysis of AVL trees. Fibonacci numbers are used by some pseudorandom number generators. Fibonacci numbers arise in the analysis of the Fibonacci heap data structure. A one-dimensional optimization method, called the Fibonacci search technique, uses Fibonacci numbers. The Fibonacci number series is used for optional lossy compression in the IFF 8SVX audio file format used on Amiga computers. The number series compands the original audio wave similar to logarithmic methods such as μ-law. Some Agile teams use a modified series called the "Modified Fibonacci Series" in planning poker, as an estimation tool. Planning Poker is a formal part of the Scaled Agile Framework. Fibonacci coding Negafibonacci coding Nature Fibonacci sequences appear in biological settings, such as branching in trees, arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of artichoke, the arrangement of a pine cone, and the family tree of honeybees. Kepler pointed out the presence of the Fibonacci sequence in nature, using it to explain the (golden ratio-related) pentagonal form of some flowers. Field daisies most often have petals in counts of Fibonacci numbers. In 1830, Karl Friedrich Schimper and Alexander Braun discovered that the parastichies (spiral phyllotaxis) of plants were frequently expressed as fractions involving Fibonacci numbers. Przemysław Prusinkiewicz advanced the idea that real instances can in part be understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars. A model for the pattern of florets in the head of a sunflower was proposed by in 1979. This has the form where is the index number of the floret and is a constant scaling factor; the florets thus lie on Fermat's spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio. 
Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form , the nearest neighbors of floret number are those at for some index , which depends on , the distance from the center. Sunflowers and similar flowers most commonly have spirals of florets in clockwise and counter-clockwise directions in the amount of adjacent Fibonacci numbers, typically counted by the outermost range of radii. Fibonacci numbers also appear in the ancestral pedigrees of bees (which are haplodiploids), according to the following rules: If an egg is laid but not fertilized, it produces a male (or drone bee in honeybees). If, however, an egg is fertilized, it produces a female. Thus, a male bee always has one parent, and a female bee has two. If one traces the pedigree of any male bee (1 bee), he has 1 parent (1 bee), 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, and so on. This sequence of numbers of parents is the Fibonacci sequence. The number of ancestors at each level, , is the number of female ancestors, which is , plus the number of male ancestors, which is . This is under the unrealistic assumption that the ancestors at each level are otherwise unrelated. It has similarly been noticed that the number of possible ancestors on the human X chromosome inheritance line at a given ancestral generation also follows the Fibonacci sequence. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the "origin" of his own X chromosome (), and at his parents' generation, his X chromosome came from a single parent . The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome . The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome . Five great-great-grandparents contributed to the male descendant's X chromosome , etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.) Other In optics, when a beam of light shines at an angle through two stacked transparent plates of different materials of different refractive indexes, it may reflect off three surfaces: the top, middle, and bottom surfaces of the two plates. The number of different beam paths that have reflections, for , is the -th Fibonacci number. (However, when , there are three reflection paths, not two, one for each of the three surfaces.) Fibonacci retracement levels are widely used in technical analysis for financial market trading. Since the conversion factor 1.609344 for miles to kilometers is close to the golden ratio, the decomposition of distance in miles into a sum of Fibonacci numbers becomes nearly the kilometer sum when the Fibonacci numbers are replaced by their successors. This method amounts to a radix 2 number register in golden ratio base being shifted. To convert from kilometers to miles, shift the register down the Fibonacci sequence instead. 
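As a small worked instance of this register-shift trick: 100 miles decomposes as 89 + 8 + 3, and shifting each term to its Fibonacci successor gives 144 + 13 + 5 = 162, close to the true value of about 160.9 km. A sketch reusing the greedy decomposition above (illustrative only; the residual error comes from φ ≈ 1.618 standing in for the conversion factor 1.609):

```python
def miles_to_km_fib(miles):
    """Approximate a whole-number miles value in km by shifting its Zeckendorf terms."""
    fibs = [1, 2]
    while fibs[-1] < miles:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.append(fibs[-1] + fibs[-2])    # one spare term so every index has a successor
    total, i = 0, len(fibs) - 2
    while miles > 0 and i >= 0:
        if fibs[i] <= miles:
            miles -= fibs[i]
            total += fibs[i + 1]        # replace each Fibonacci term by its successor
        i -= 1
    return total

print(miles_to_km_fib(100))  # 162 (actual: about 161 km)
```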
The measured values of voltages and currents in the infinite resistor chain circuit (also called the resistor ladder or infinite series-parallel circuit) follow the Fibonacci sequence: the intermediate results of adding the alternating series and parallel resistances yield fractions composed of consecutive Fibonacci numbers, and the equivalent resistance of the entire circuit equals the golden ratio (a numerical sketch of this convergence appears at the end of this section). Brasch et al. (2012) show how a generalized Fibonacci sequence can also be connected to the field of economics. In particular, they show how a generalized Fibonacci sequence enters the control function of finite-horizon dynamic optimisation problems with one state and one control variable. The procedure is illustrated in an example often referred to as the Brock–Mirman economic growth model. Mario Merz included the Fibonacci sequence in some of his artworks beginning in 1970. Joseph Schillinger (1895–1943) developed a system of composition which uses Fibonacci intervals in some of its melodies; he viewed these as the musical counterpart to the elaborate harmony evident within nature.
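As referenced in the resistor-chain paragraph above, the convergence to the golden ratio is easy to verify numerically. The following Python sketch is our own illustration (it assumes a ladder of unit resistors and uses exact rational arithmetic), folding the ladder one series/parallel stage at a time:

```python
from fractions import Fraction

def ladder_resistance(stages, r=Fraction(1)):
    """Equivalent resistance of a resistor ladder built from unit resistors:
    each added stage contributes one series resistor and one parallel resistor."""
    req = r  # a single series resistor
    for _ in range(stages - 1):
        req = r + (r * req) / (r + req)  # add r in series, then r in parallel with the rest
    return req

for n in range(1, 6):
    print(n, ladder_resistance(n))
# 1 1, 2 3/2, 3 8/5, 4 21/13, 5 55/34 -- ratios of consecutive Fibonacci
# numbers, tending to the golden ratio (1 + sqrt(5)) / 2 ~ 1.618
```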
Fighter aircraft
Fighter aircraft (early on also pursuit aircraft) are military aircraft designed primarily for air-to-air combat. In military conflict, the role of fighter aircraft is to establish air superiority of the battlespace. Domination of the airspace above a battlefield permits bombers and attack aircraft to engage in tactical and strategic bombing of enemy targets, and helps prevent the enemy from doing the same. The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft. The success or failure of a combatant's efforts to gain air superiority hinges on several factors, including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters. Many modern fighter aircraft also have secondary capabilities such as ground attack, and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role; these include the interceptor and, historically, the heavy fighter and night fighter.
History
Since World War I, achieving and maintaining air superiority has been considered essential for victory in conventional warfare. Fighters continued to be developed throughout World War I to deny enemy aircraft and dirigibles the ability to gather information by reconnaissance over the battlefield. Early fighters were very small and lightly armed by later standards; most were biplanes built with a wooden frame covered with fabric, with a maximum airspeed of about 100 mph (160 km/h). A successful German biplane, the Albatros, however, was built with a plywood shell rather than fabric, which created a stronger, faster airplane. As control of the airspace over armies became increasingly important, all of the major powers developed fighters to support their military operations. Between the wars, wood was replaced in part or in whole by metal tubing, and finally aluminum stressed-skin structures (monocoque) began to predominate. By World War II, most fighters were all-metal monoplanes armed with batteries of machine guns or cannons, and some were capable of speeds approaching 400 mph (640 km/h). Most fighters up to this point had one engine, but a number of twin-engine fighters were built; however, they were found to be outmatched against single-engine fighters and were relegated to other tasks, such as serving as radar-equipped night fighters. By the end of the war, turbojet engines were replacing piston engines as the means of propulsion, further increasing aircraft speed. Since the weight of the turbojet engine was far less than that of a piston engine, having two engines was no longer a handicap, and one or two were used, depending on requirements. This in turn required the development of ejection seats so the pilot could escape, and G-suits to counter the much greater forces being applied to the pilot during maneuvers. In the 1950s, radar was fitted to day fighters, since, with ever-increasing air-to-air weapon ranges, pilots could no longer see far enough ahead to prepare for the opposition. Subsequently, radar capabilities grew enormously and are now the primary method of target acquisition. Wings were made thinner and swept back to reduce transonic drag, which required new manufacturing methods to obtain sufficient strength. Skins were no longer sheet metal riveted to a structure, but milled from large slabs of alloy.
The sound barrier was broken, and after a few false starts due to required changes in controls, speeds quickly reached Mach 2, past which aircraft cannot maneuver sufficiently to avoid attack. Air-to-air missiles largely replaced guns and rockets in the early 1960s, since both were believed unusable at the speeds being attained; however, the Vietnam War showed that guns still had a role to play, and most fighters built since then are fitted with cannon (typically between 20 and 30 mm in caliber) in addition to missiles. Most modern combat aircraft can carry at least a pair of air-to-air missiles. In the 1970s, turbofans replaced turbojets, improving fuel economy enough that the last piston-engine support aircraft could be replaced with jets, making multi-role combat aircraft possible. Honeycomb structures began to replace milled structures, and the first composite parts began to appear on components subjected to little stress. With the steady improvements in computers, defensive systems have become increasingly efficient. To counter this, stealth technologies have been pursued by the United States, Russia, India and China. The first step was to find ways to reduce the aircraft's reflectivity to radar waves by burying the engines, eliminating sharp corners and diverting any reflections away from the radar sets of opposing forces. Various materials were found to absorb the energy from radar waves, and were incorporated into special finishes that have since found widespread application. Composite structures have become widespread, including major structural components, and have helped to counterbalance the steady increases in aircraft weight—most modern fighters are larger and heavier than World War II medium bombers. Because of the importance of air superiority, since the early days of aerial combat armed forces have constantly competed to develop technologically superior fighters and to deploy these fighters in greater numbers, and fielding a viable fighter fleet consumes a substantial proportion of the defense budgets of modern armed forces. The global combat aircraft market was worth $45.75 billion in 2017 and is projected by Frost & Sullivan to reach $47.2 billion in 2026: 35% from modernization programs and 65% from aircraft purchases, dominated by the Lockheed Martin F-35 with 3,000 deliveries over 20 years.
Classification
A fighter aircraft is primarily designed for air-to-air combat. A given type may be designed for specific combat conditions, and in some cases for additional roles such as air-to-ground fighting. Historically, the British Royal Flying Corps and Royal Air Force referred to them as "scouts" until the early 1920s, while the U.S. Army called them "pursuit" aircraft until the late 1940s (using the designation P, as in Curtiss P-40 Warhawk, Republic P-47 Thunderbolt and Bell P-63 Kingcobra). The UK changed to calling them fighters in the 1920s, while the US Army did so in the 1940s. A short-range fighter designed to defend against incoming enemy aircraft is known as an interceptor. Recognized classes of fighter include:
Air superiority fighter
Fighter-bomber
Heavy fighter
Interceptor
Light fighter
All-weather fighter (including the night fighter)
Reconnaissance fighter
Strategic fighter (including the escort fighter and strike fighter)
Of these, the fighter-bomber, reconnaissance fighter and strike fighter classes are dual-role, possessing qualities of the fighter alongside some other battlefield role.
Some fighter designs may be developed in variants performing other roles entirely, such as ground attack or unarmed reconnaissance. This may be for political or national security reasons, for advertising purposes, or for other reasons. The Sopwith Camel and other "fighting scouts" of World War I performed a great deal of ground-attack work. In World War II, the USAAF and RAF often favored fighters over dedicated light bombers or dive bombers, and types such as the Republic P-47 Thunderbolt and Hawker Hurricane that were no longer competitive as aerial combat fighters were relegated to ground attack. Several aircraft, such as the F-111 and F-117, have received fighter designations despite having no fighter capability, for political or other reasons. The F-111B variant was originally intended for a fighter role with the U.S. Navy, but it was canceled. This blurring follows the use of fighters from their earliest days for "attack" or "strike" operations against ground targets by means of strafing or dropping small bombs and incendiaries. Versatile multirole fighter-bombers such as the McDonnell Douglas F/A-18 Hornet are a less expensive option than having a range of specialized aircraft types. Some of the most expensive fighters, such as the US Grumman F-14 Tomcat, McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor and Russian Sukhoi Su-27, were employed as all-weather interceptors as well as air superiority fighter aircraft, while commonly developing air-to-ground roles late in their careers. An interceptor is generally an aircraft intended to target (or intercept) bombers and so often trades maneuverability for climb rate. As a part of military nomenclature, a letter is often assigned to various types of aircraft to indicate their use, along with a number to indicate the specific aircraft. The letters used to designate a fighter differ in various countries. In the English-speaking world, "F" is often now used to indicate a fighter (e.g. Lockheed Martin F-35 Lightning II or Supermarine Spitfire F.22), though "P" used to be used in the US for pursuit (e.g. Curtiss P-40 Warhawk), a translation of the French "C" (Dewoitine D.520 C.1) for Chasseur, while in Russia "I" was used for istrebitel, or exterminator (Polikarpov I-16).
Air superiority fighter
As fighter types have proliferated, the air superiority fighter emerged as a specific role at the pinnacle of speed, maneuverability, and air-to-air weapon systems – able to hold its own against all other fighters and establish its dominance in the skies above the battlefield.
Interceptor
The interceptor is a fighter designed specifically to intercept and engage approaching enemy aircraft. There are two general classes of interceptor: relatively lightweight aircraft in the point-defence role, built for fast reaction, high performance and short range, and heavier aircraft with more comprehensive avionics, designed to fly at night or in all weathers and to operate over longer ranges. The class originated during World War I, and by 1929 had become known as the interceptor.
Night and all-weather fighters
The equipment necessary for daytime flight is inadequate when flying at night or in poor visibility. The night fighter was developed during World War I with additional equipment to aid the pilot in flying straight, navigating and finding the target. From modified variants of the Royal Aircraft Factory B.E.2c in 1915, the night fighter has evolved into the highly capable all-weather fighter.
Strategic fighters
The strategic fighter is a fast, heavily armed and long-range type, able to act as an escort fighter protecting bombers, to carry out offensive sorties of its own as a penetration fighter, and to maintain standing patrols at significant distance from its home base. Bombers are vulnerable due to their low speed, large size and poor maneuverability. The escort fighter was developed during World War II to come between the bombers and enemy attackers as a protective shield. The primary requirement was long range, and several heavy fighters were given the role. However, they too proved unwieldy and vulnerable, so as the war progressed techniques such as drop tanks were developed to extend the range of more nimble conventional fighters. The penetration fighter is typically also fitted for the ground-attack role, and so is able to defend itself while conducting attack sorties.
Piston engine fighters
1914–1918: World War I
The word "fighter" was first used to describe a two-seat aircraft carrying a machine gun (mounted on a pedestal) and its operator as well as the pilot. Although the term was coined in the United Kingdom, the first examples were the French Voisin pushers beginning in 1910, and a Voisin III would be the first to shoot down another aircraft, on 5 October 1914. However, at the outbreak of World War I, front-line aircraft were mostly unarmed and used almost exclusively for reconnaissance. On 15 August 1914, Miodrag Tomić encountered an enemy airplane while on a reconnaissance flight over Austria-Hungary; its pilot fired at his aircraft with a revolver, and Tomić fired back. It was believed to be the first exchange of fire between aircraft. Within weeks, all Serbian and Austro-Hungarian aircraft were armed. Another type of military aircraft formed the basis for an effective "fighter" in the modern sense of the word. It was based on the small, fast aircraft developed before the war for air races such as the Gordon Bennett Cup and Schneider Trophy. The military scout airplane was not expected to carry serious armament, but rather to rely on speed to "scout" a location and return quickly to report—in effect an aerial equivalent of the cavalry scout. British scout aircraft, in this sense, included the Sopwith Tabloid and Bristol Scout. The French and the Germans had no direct equivalent, as they used two-seaters for reconnaissance, such as the Morane-Saulnier L, but they would later modify pre-war racing aircraft into armed single-seaters. It was quickly found that scouts were of little use for reconnaissance, since the pilot could not record what he saw while also flying, and military leaders usually ignored what the pilots reported. Soon after the commencement of the war, pilots armed themselves with handheld weapons—pistols, carbines, rifles, grenades, even light machine guns, and an assortment of improvised weapons—but these proved ineffective and cumbersome, as the pilot had to fly his airplane while attempting to aim a handheld weapon and make a difficult deflection shot. The next advance came with the fixed forward-firing machine gun, with which the pilot pointed the entire aircraft at the target and fired the gun, instead of relying on a second gunner. Roland Garros bolted metal deflector plates to the propeller so that bullets striking the blades would be deflected rather than destroy the propeller, and a number of Morane-Saulnier Ns were modified in this way. The technique proved effective; however, the deflected bullets were still highly dangerous.
The first step in finding a real solution was to mount the weapon on the aircraft, but the propeller remained a problem, since the best direction to shoot is straight ahead. Numerous solutions were tried. A second crew member behind the pilot could aim and fire a swivel-mounted machine gun at enemy airplanes; however, this limited the area of coverage chiefly to the rear hemisphere, and effective coordination of the pilot's maneuvering with the gunner's aiming was difficult. This option was chiefly employed as a defensive measure on two-seater reconnaissance aircraft from 1915 on. Both the SPAD S.A and the Royal Aircraft Factory B.E.9 added a second crewman ahead of the engine in a pod, but this was both hazardous to the second crewman and limited performance. The Sopwith L.R.T.Tr. similarly added a pod on the top wing, with no better luck. An alternative was to build a "pusher" scout such as the Airco DH.2, with the propeller mounted behind the pilot. The main drawback was that the high drag of a pusher type's tail structure made it slower than a similar "tractor" aircraft. A better solution for a single-seat scout was to mount the machine gun (rifles and pistols having been dispensed with) to fire forwards but outside the propeller arc. Wing guns were tried, but the unreliable weapons available required frequent clearing of jammed rounds and misfires, and they remained impractical until after the war. Mounting the machine gun over the top wing worked well and continued in use long after the ideal solution was found. The Nieuport 11 of 1916 used this system with considerable success; although the placement made aiming and reloading difficult, it would continue in use throughout the war, as the weapons suited to it were lighter and had a higher rate of fire than synchronized weapons. The British Foster mounting and several French mountings were specifically designed for this kind of application, fitted with either the Hotchkiss or Lewis machine gun, which due to their design were unsuitable for synchronizing. The need to arm a tractor scout with a forward-firing gun whose bullets passed through the propeller arc was evident even before the outbreak of war, and inventors in both France and Germany devised mechanisms that could time the firing of the individual rounds to avoid hitting the propeller blades. Franz Schneider, a Swiss engineer, had patented such a device in Germany in 1913, but his original work was not followed up. French aircraft designer Raymond Saulnier patented a practical device in April 1914, but trials were unsuccessful because of the propensity of the machine gun employed to hang fire due to unreliable ammunition. In December 1914, French aviator Roland Garros asked Saulnier to install his synchronization gear on Garros' Morane-Saulnier Type L parasol monoplane. Unfortunately, the gas-operated Hotchkiss machine gun he was provided had an erratic rate of fire, and it proved impossible to synchronize it with the propeller. As an interim measure, the propeller blades were fitted with metal wedges to protect them from ricochets. Garros' modified monoplane first flew in March 1915 and he began combat operations soon after. Garros scored three victories in three weeks before he himself was downed on 18 April, and his airplane, along with its synchronization gear and propeller, was captured by the Germans. Meanwhile, the synchronization gear (called the Stangensteuerung in German, for "pushrod control system") devised by the engineers of Anthony Fokker's firm was the first such system to enter service.
It would usher in what the British called the "Fokker scourge" and a period of air superiority for the German forces, making the Fokker Eindecker monoplane a feared name over the Western Front, despite its being an adaptation of an obsolete pre-war French Morane-Saulnier racing airplane with poor flight characteristics and, by then, mediocre performance. The first Eindecker victory came on 1 July 1915, when Leutnant Kurt Wintgens, of Feldflieger Abteilung 6 on the Western Front, downed a Morane-Saulnier Type L. His aircraft was one of five Fokker M.5K/MG prototypes for the Eindecker, and was armed with a synchronized aviation version of the Parabellum MG 14 machine gun. The success of the Eindecker kicked off a competitive cycle of improvement among the combatants, both sides striving to build ever more capable single-seat fighters. The Albatros D.I and Sopwith Pup of 1916 set the classic pattern followed by fighters for about twenty years. Most were biplanes, and only rarely monoplanes or triplanes. The strong box structure of the biplane provided a rigid wing that allowed the accurate control essential for dogfighting. They had a single operator, who flew the aircraft and also controlled its armament. They were armed with one or two Maxim or Vickers machine guns, which were easier to synchronize than other types, firing through the propeller arc. Gun breeches were directly in front of the pilot, with obvious implications in case of accidents, but jams could be cleared in flight, while aiming was simplified. The use of metal aircraft structures was pioneered before World War I by Breguet but would find its biggest proponent in Anthony Fokker, who used chrome-molybdenum steel tubing for the fuselage structure of all his fighter designs, while the innovative German engineer Hugo Junkers developed two all-metal, single-seat fighter monoplane designs with cantilever wings: the strictly experimental Junkers J 2 private-venture aircraft, made with steel, and some forty examples of the Junkers D.I, made with corrugated duralumin, all based on his experience in creating the pioneering Junkers J 1 all-metal airframe technology demonstration aircraft of late 1915. While Fokker would pursue steel-tube fuselages with wooden wings until the late 1930s, and Junkers would focus on corrugated sheet metal, Dornier was the first to build a fighter (the Dornier-Zeppelin D.I) made with pre-stressed sheet aluminum and having cantilevered wings, a form that would replace all others in the 1930s. As collective combat experience grew, the more successful pilots such as Oswald Boelcke, Max Immelmann, and Edward Mannock developed innovative tactical formations and maneuvers to enhance their air units' combat effectiveness. Allied and – before 1918 – German pilots of World War I were not equipped with parachutes, so in-flight fires or structural failures were often fatal. Parachutes were well developed by 1918, having previously been used by balloonists, and were adopted by the German flying services during the course of that year. The well-known Manfred von Richthofen, the "Red Baron", was wearing one when he was killed, but the Allied command continued to oppose their use on various grounds. In April 1917, during a brief period of German aerial supremacy, a British pilot's life expectancy was calculated at 93 flying hours, or about three weeks of active service. More than 50,000 airmen from both sides died during the war.
1919–1938: Inter-war period
Fighter development stagnated between the wars, especially in the United States and the United Kingdom, where budgets were small. In France, Italy and Russia, where large budgets continued to allow major development, both monoplanes and all-metal structures were common. By the end of the 1920s, however, those countries had overspent themselves and were overtaken in the 1930s by those powers that had not been spending heavily, namely the British, the Americans, the Spanish (in the Spanish Civil War) and the Germans. Given limited budgets, air forces were conservative in aircraft design; biplanes remained popular with pilots for their agility and remained in service long after they had ceased to be competitive. Designs such as the Gloster Gladiator, Fiat CR.42 Falco, and Polikarpov I-15 were common even in the late 1930s, and many were still in service as late as 1942. Up until the mid-1930s, the majority of fighters in the US, the UK, Italy and Russia remained fabric-covered biplanes. Fighter armament eventually began to be mounted inside the wings, outside the arc of the propeller, though most designs retained two synchronized machine guns directly ahead of the pilot, where they were more accurate (that being the strongest part of the structure, reducing the vibration to which the guns were subjected). Shooting with this traditional arrangement was also easier because the guns shot directly ahead in the direction of the aircraft's flight, up to the limit of the guns' range; wing-mounted guns, by contrast, had to be harmonised to be effective—that is, preset by ground crews to shoot at an angle so that their bullets would converge on a target area a set distance ahead of the fighter. Rifle-caliber guns remained the norm, with larger weapons either being too heavy and cumbersome or deemed unnecessary against such lightly built aircraft. It was not considered unreasonable to use World War I-style armament to counter enemy fighters, as there was insufficient air-to-air combat during most of the period to disprove this notion. The rotary engine, popular during World War I, quickly disappeared, its development having reached the point where rotational forces prevented more fuel and air from being delivered to the cylinders, which limited horsepower. It was replaced chiefly by the stationary radial engine, though major advances led to inline engines gaining ground with several exceptional engines—including the V-12 Curtiss D-12. Aircraft engines increased in power several-fold over the period, going from a typical 180 hp (130 kW) in the Fokker D.VII of 1918 to 900 hp (670 kW) in the Curtiss P-36 of 1936. The debate between the sleek inline engines and the more reliable radial models continued, with naval air forces preferring radial engines and land-based forces often choosing inlines. Radial designs did not require a separate (and vulnerable) radiator, but had increased drag. Inline engines often had a better power-to-weight ratio. Some air forces experimented with "heavy fighters" (called "destroyers" by the Germans). These were larger, usually twin-engined aircraft, sometimes adaptations of light or medium bomber types. Such designs typically had greater internal fuel capacity (thus longer range) and heavier armament than their single-engine counterparts. In combat, they proved vulnerable to more agile single-engine fighters.
The primary driver of fighter innovation, right up to the period of rapid re-armament in the late 1930s, was not military budgets but civilian aircraft racing. Aircraft designed for these races introduced innovations, like streamlining and more powerful engines, that would find their way into the fighters of World War II. The most significant was the Schneider Trophy races, where competition grew so fierce that only national governments could afford to enter. At the very end of the inter-war period in Europe came the Spanish Civil War. This was just the opportunity the German Luftwaffe, Italian Regia Aeronautica, and the Soviet Union's Voenno-Vozdushnye Sily needed to test their latest aircraft. Each party sent numerous aircraft types to support their sides in the conflict. In the dogfights over Spain, the latest Messerschmitt Bf 109 fighters did well, as did the Soviet Polikarpov I-16. The German design, however, was earlier in its design cycle and had more room for development, and the lessons learned led to greatly improved models in World War II. The Soviets failed to keep up: despite newer models coming into service, the I-16 remained the most common Soviet front-line fighter into 1942, by which time it was outclassed by the improved Bf 109s of World War II. For their part, the Italians developed several monoplanes such as the Fiat G.50 Freccia but, being short on funds, were forced to continue operating obsolete Fiat CR.42 Falco biplanes. From the early 1930s the Japanese were at war against both the Chinese Nationalists and the Russians in China, and used the experience to improve both training and aircraft, replacing biplanes with modern cantilever monoplanes and creating a cadre of exceptional pilots. In the United Kingdom, at the behest of Neville Chamberlain (more famous for his "peace for our time" speech), the entire British aviation industry was retooled, allowing it to change quickly from fabric-covered, metal-framed biplanes to cantilever stressed-skin monoplanes in time for the war with Germany, a process that France attempted to emulate, but too late to counter the German invasion. The period of improving the same biplane design over and over was now coming to an end, and the Hawker Hurricane and Supermarine Spitfire started to supplant the Gloster Gladiator and Hawker Fury biplanes, though many biplanes remained in front-line service well past the start of World War II. While not a combatant in Spain, the British too absorbed many of the lessons of that conflict in time to use them. The Spanish Civil War also provided an opportunity for updating fighter tactics. One of the innovations was the development of the "finger-four" formation by the German pilot Werner Mölders. Each fighter squadron (German: Staffel) was divided into several flights (Schwärme) of four aircraft. Each Schwarm was divided into two Rotten, each a pair of aircraft, and each Rotte was composed of a leader and a wingman. This flexible formation allowed the pilots to maintain greater situational awareness, and the two Rotten could split up at any time and attack on their own. The finger-four would be widely adopted as the fundamental tactical formation during World War II, including by the British and later the Americans.
1939–1945: World War II
World War II featured fighter combat on a larger scale than any other conflict to date. German Field Marshal Erwin Rommel noted the effect of airpower: "Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage..."
Throughout the war, fighters performed their conventional role in establishing air superiority through combat with other fighters and through bomber interception, and also often performed roles such as tactical air support and reconnaissance. Fighter design varied widely among combatants. The Japanese and Italians favored lightly armed and armored but highly maneuverable designs such as the Japanese Nakajima Ki-27, Nakajima Ki-43 and Mitsubishi A6M Zero and the Italian Fiat G.50 Freccia and Macchi MC.200. In contrast, designers in the United Kingdom, Germany, the Soviet Union, and the United States believed that the increased speed of fighter aircraft would create g-forces unbearable to pilots who attempted the maneuvering dogfights typical of the First World War, and their fighters were instead optimized for speed and firepower. In practice, while light, highly maneuverable aircraft did possess some advantages in fighter-versus-fighter combat, those could usually be overcome by sound tactical doctrine, and the design approach of the Italians and Japanese made their fighters ill-suited as interceptors or attack aircraft.
European theater
During the invasion of Poland and the Battle of France, Luftwaffe fighters—primarily the Messerschmitt Bf 109—held air superiority, and the Luftwaffe played a major role in German victories in these campaigns. During the Battle of Britain, however, British Hurricanes and Spitfires proved roughly equal to Luftwaffe fighters. Additionally, Britain's radar-based Dowding system, directing fighters onto German attacks, and the advantages of fighting above Britain's home territory allowed the RAF to deny Germany air superiority, saving the UK from possible German invasion and dealing the Axis a major defeat early in the Second World War. On the Eastern Front, Soviet fighter forces were overwhelmed during the opening phases of Operation Barbarossa. This was a result of the tactical surprise at the outset of the campaign, the leadership vacuum within the Soviet military left by the Great Purge, and the general inferiority of Soviet designs at the time, such as the obsolescent Polikarpov I-15 biplane and the I-16. More modern Soviet designs, including the Mikoyan-Gurevich MiG-3, LaGG-3 and Yakovlev Yak-1, had not yet arrived in numbers and in any case were still inferior to the Messerschmitt Bf 109. As a result, during the early months of these campaigns, Axis air forces destroyed large numbers of Red Air Force aircraft on the ground and in one-sided dogfights. In the later stages on the Eastern Front, Soviet training and leadership improved, as did their equipment. By 1942 Soviet designs such as the Yakovlev Yak-9 and Lavochkin La-5 had performance comparable to the German Bf 109 and Focke-Wulf Fw 190. Also, significant numbers of British, and later U.S., fighter aircraft were supplied to aid the Soviet war effort as part of Lend-Lease, with the Bell P-39 Airacobra proving particularly effective in the lower-altitude combat typical of the Eastern Front. The Soviets were also helped indirectly by the American and British bombing campaigns, which forced the Luftwaffe to shift many of its fighters away from the Eastern Front in defense against these raids. The Soviets were increasingly able to challenge the Luftwaffe, and while the Luftwaffe maintained a qualitative edge over the Red Air Force for much of the war, the increasing numbers and efficacy of the Soviet Air Force were critical to the Red Army's efforts at turning back and eventually annihilating the Wehrmacht.
Meanwhile, air combat on the Western Front had a much different character. Much of this combat focused on the strategic bombing campaigns of the RAF and the USAAF against German industry, intended to wear down the Luftwaffe. Axis fighter aircraft focused on defending against Allied bombers, while Allied fighters' main role was as bomber escorts. The RAF raided German cities at night, and both sides developed radar-equipped night fighters for these battles. The Americans, in contrast, flew daylight bombing raids into Germany, delivering the Combined Bomber Offensive. Unescorted Consolidated B-24 Liberator and Boeing B-17 Flying Fortress bombers, however, proved unable to fend off German interceptors (primarily Bf 109s and Fw 190s). With the later arrival of long-range fighters, particularly the North American P-51 Mustang, American fighters were able to escort the bombers far into Germany on daylight raids and, by ranging ahead, wore down the Luftwaffe to establish control of the skies over Western Europe. By the time of Operation Overlord in June 1944, the Allies had gained near-complete air superiority over the Western Front. This cleared the way both for intensified strategic bombing of German cities and industries, and for the tactical bombing of battlefield targets. With the Luftwaffe largely cleared from the skies, Allied fighters increasingly served as ground attack aircraft. Allied fighters, by gaining air superiority over the European battlefield, played a crucial role in the eventual defeat of the Axis, which Reichsmarschall Hermann Göring, commander of the German Luftwaffe, summed up when he said: "When I saw Mustangs over Berlin, I knew the jig was up."
Pacific theater
Major air combat during the war in the Pacific began with the entry of the Western Allies following Japan's attack against Pearl Harbor. The Imperial Japanese Navy Air Service primarily operated the Mitsubishi A6M Zero, and the Imperial Japanese Army Air Service flew the Nakajima Ki-27 and the Nakajima Ki-43, initially enjoying great success, as these fighters generally had better range, maneuverability, speed and climb rates than their Allied counterparts. Additionally, Japanese pilots were well trained and many were combat veterans from Japan's campaigns in China. They quickly gained air superiority over the Allies, who at this stage of the war were often disorganized, under-trained and poorly equipped, and Japanese air power contributed significantly to their successes in the Philippines, Malaya and Singapore, the Dutch East Indies and Burma. By mid-1942, the Allies began to regroup, and while some Allied aircraft such as the Brewster Buffalo and the P-39 Airacobra were hopelessly outclassed by fighters like Japan's Mitsubishi A6M Zero, others such as the Army's Curtiss P-40 Warhawk and the Navy's Grumman F4F Wildcat possessed attributes such as superior firepower, ruggedness and dive speed, and the Allies soon developed tactics (such as the Thach Weave) to take advantage of these strengths. These changes soon paid dividends, as the Allied ability to deny Japan air superiority was critical to their victories at Coral Sea, Midway, Guadalcanal and New Guinea. In China, the Flying Tigers also used the same tactics with some success, although they were unable to stem the tide of Japanese advances there. By 1943, the Allies began to gain the upper hand in the Pacific air campaigns. Several factors contributed to this shift.
First, the Lockheed P-38 Lightning and second-generation Allied fighters such as the Grumman F6F Hellcat and later the Vought F4U Corsair, the Republic P-47 Thunderbolt and the North American P-51 Mustang began arriving in numbers. These fighters outperformed Japanese fighters in all respects except maneuverability. Other problems with Japan's fighter aircraft also became apparent as the war progressed, such as their lack of armor and light armament, which had been typical of all pre-war fighters worldwide but was particularly difficult to rectify on the Japanese designs. This made them inadequate as either bomber-interceptors or ground-attack aircraft, roles Allied fighters were still able to fill. Most importantly, Japan's training program failed to provide enough well-trained pilots to replace losses. In contrast, the Allies improved both the quantity and quality of pilots graduating from their training programs. By mid-1944, Allied fighters had gained air superiority throughout the theater, which would not be contested again during the war. The extent of Allied quantitative and qualitative superiority by this point in the war was demonstrated during the Battle of the Philippine Sea, a lopsided Allied victory in which Japanese fliers were shot down in such numbers and with such ease that American fighter pilots likened it to a great "turkey shoot". Late in the war, Japan began to produce new fighters such as the Nakajima Ki-84 and the Kawanishi N1K to replace the Zero, but only in small numbers, and by then Japan lacked the trained pilots and sufficient fuel to mount an effective challenge to Allied attacks. During the closing stages of the war, Japan's fighter arm could not seriously challenge raids over Japan by American Boeing B-29 Superfortresses, and was largely reduced to kamikaze attacks.
Technological innovations
Fighter technology advanced rapidly during the Second World War. Piston engines, which powered the vast majority of World War II fighters, grew more powerful: at the beginning of the war fighters typically had engines producing between 1,000 hp (750 kW) and 1,400 hp (1,000 kW), while by the end of the war many could produce over 2,000 hp (1,500 kW). For example, the Spitfire, one of the few fighters in continuous production throughout the war, was in 1939 powered by a 1,030 hp (770 kW) Merlin II, while variants produced in 1945 were equipped with the 2,035 hp (1,517 kW) Rolls-Royce Griffon 61. Nevertheless, these fighters could only achieve modest increases in top speed due to problems of compressibility created as aircraft and their propellers approached the sound barrier, and it was apparent that propeller-driven aircraft were approaching the limits of their performance. German jet and rocket-powered fighters entered combat in 1944, too late to impact the war's outcome. The same year the Allies' only operational jet fighter, the Gloster Meteor, also entered service. World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar-flow wings, which improved high-speed performance, came into use on fighters such as the P-51 Mustang, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds. Armament also advanced during the war. The rifle-caliber machine guns that were common on prewar fighters could not easily down the more rugged warplanes of the era.
Air forces began to replace or supplement them with cannons, which fired explosive shells that could blast a hole in an enemy aircraft – rather than relying on the kinetic energy of a solid bullet striking a critical component of the aircraft, such as a fuel line or control cable, or the pilot. Cannons could bring down even heavy bombers with just a few hits, but their slower rate of fire made it difficult to hit fast-moving fighters in a dogfight. Eventually, most fighters mounted cannons, sometimes in combination with machine guns. The British epitomized this shift: their standard early-war fighters mounted eight .303 in (7.7 mm) caliber machine guns, but by mid-war they often featured a combination of machine guns and cannons, and late in the war often only cannons. The Americans, in contrast, had problems producing a workable cannon design, so instead placed multiple heavy machine guns on their fighters. Fighters were also increasingly fitted with bomb racks and air-to-surface ordnance such as bombs or rockets beneath their wings, and pressed into close air support roles as fighter-bombers. Although they carried less ordnance than light and medium bombers, and generally had a shorter range, they were cheaper to produce and maintain, and their maneuverability made it easier for them to hit moving targets such as motorized vehicles. Moreover, if they encountered enemy fighters, their ordnance (which reduced lift and increased drag and therefore decreased performance) could be jettisoned so that they could engage the enemy fighters, eliminating the need for the fighter escorts that bombers required. Heavily armed fighters such as Germany's Focke-Wulf Fw 190, Britain's Hawker Typhoon and Hawker Tempest, and America's Curtiss P-40, F4U Corsair, P-47 Thunderbolt and P-38 Lightning all excelled as fighter-bombers, and since the Second World War ground attack has become an important secondary capability of many fighters. World War II also saw the first use of airborne radar on fighters. The primary purpose of these radars was to help night fighters locate enemy bombers and fighters. Because of the bulkiness of these radar sets, they could not be carried on conventional single-engined fighters and instead were typically retrofitted to larger heavy fighters or light bombers such as Germany's Messerschmitt Bf 110 and Junkers Ju 88, Britain's de Havilland Mosquito and Bristol Beaufighter, and America's Douglas A-20, which then served as night fighters. The Northrop P-61 Black Widow, a purpose-built night fighter, was the only fighter of the war that incorporated radar into its original design. Britain and America cooperated closely in the development of airborne radar, and Germany's radar technology generally lagged slightly behind Anglo-American efforts, while other combatants developed few radar-equipped fighters. The Schräge Musik system emerged in Germany in 1943 as a response to the increasing threat posed by Allied heavy bombers, particularly at night. It involved mounting upward-firing cannon, typically twin 20 mm or 30 mm guns, in the belly of German night fighters such as the Messerschmitt Bf 110 and later versions of the Junkers Ju 88. These guns were angled upwards to target the vulnerable underside of enemy bombers.
1946–present: Post–World War II period
Several prototype fighter programs begun early in 1945 continued on after the war and led to advanced piston-engine fighters that entered production and operational service in 1946.
A typical example is the Lavochkin La-9 'Fritz', which was an evolution of the successful wartime Lavochkin La-7 'Fin'. Working through a series of prototypes, the La-120, La-126 and La-130, the Lavochkin design bureau sought to replace the La-7's wooden airframe with a metal one, fit a laminar-flow wing to improve maneuver performance, and increase the armament. The La-9 entered service in August 1946 and was produced until 1948; it also served as the basis for the development of a long-range escort fighter, the La-11 'Fang', of which nearly 1,200 were produced from 1947 to 1951. Over the course of the Korean War, however, it became obvious that the day of the piston-engined fighter was coming to a close and that the future would lie with the jet fighter. This period also witnessed experimentation with jet-assisted piston-engine aircraft. La-9 derivatives included examples fitted with two underwing auxiliary pulsejet engines (the La-9RD) and a similarly mounted pair of auxiliary ramjet engines (the La-138); however, neither of these entered service. One that did enter service – with the U.S. Navy in March 1945 – was the Ryan FR-1 Fireball; production was halted with the war's end on VJ Day, with only 66 having been delivered, and the type was withdrawn from service in 1947. The USAAF had ordered its first 13 mixed turboprop-and-turbojet-powered pre-production prototypes of the Consolidated Vultee XP-81 fighter, but this program was also canceled by VJ Day, with 80% of the engineering work completed.
Rocket-powered fighters
The first rocket-powered aircraft was the Lippisch Ente, which made a successful maiden flight in March 1928. The only pure rocket aircraft ever mass-produced was the Messerschmitt Me 163B Komet in 1944, one of several German World War II projects aimed at developing high-speed, point-defense aircraft. Later variants of the Me 262 (C-1a and C-2b) were also fitted with "mixed-power" jet/rocket powerplants, while earlier models were fitted with rocket boosters, but were not mass-produced with these modifications. The USSR experimented with a rocket-powered interceptor in the years immediately following World War II, the Mikoyan-Gurevich I-270, but only two were built. In the 1950s, the British developed mixed-power jet designs employing both rocket and jet engines to cover the performance gap that existed in turbojet designs. The rocket was the main engine for delivering the speed and height required for high-speed interception of high-level bombers, while the turbojet gave increased fuel economy in other parts of flight, most notably to ensure the aircraft was able to make a powered landing rather than risking an unpredictable gliding return. The Saunders-Roe SR.53 was a successful design and was planned for production when economics forced the British to curtail most aircraft programs in the late 1950s. Furthermore, rapid advancements in jet engine technology rendered mixed-power aircraft designs like the Saunders-Roe SR.53 (and the following SR.177) obsolete. The American Republic XF-91 Thunderceptor – the first U.S. fighter to exceed Mach 1 in level flight – met a similar fate for the same reason, and no hybrid rocket-and-jet-engine fighter design has ever been placed into service.
The only operational implementation of mixed propulsion was Rocket-Assisted Take Off (RATO), a system rarely used in fighters. It featured in the zero-length launch scheme, in which fighters took off under rocket boost from special launch platforms; tested by both the United States and the Soviet Union, the concept was made obsolete by advancements in surface-to-air missile technology.
Jet-powered fighters
It has become common in the aviation community to classify jet fighters by "generations" for historical purposes. No official definitions of these generations exist; rather, they represent the notion of stages in the development of fighter-design approaches, performance capabilities, and technological evolution. Different authors have packed jet fighters into different generations. For example, Richard P. Hallion of the Secretary of the Air Force's Action Group classified the F-16 as a sixth-generation jet fighter. The timeframes associated with each generation remain inexact and are only indicative of the period during which their design philosophies and technology employment enjoyed a prevailing influence on fighter design and development. These timeframes also encompass the peak period of service entry for such aircraft.
1940s–1950s: First-generation
The first generation of jet fighters comprised the initial, subsonic jet-fighter designs introduced late in World War II (1939–1945) and in the early post-war period. They differed little from their piston-engined counterparts in appearance, and many employed unswept wings. Guns and cannon remained the principal armament. The need to obtain a decisive advantage in maximum speed pushed the development of turbojet-powered aircraft forward. Top speeds for fighters rose steadily throughout World War II as more powerful piston engines were developed, and approached transonic flight speeds, where the efficiency of propellers drops off, making further speed increases nearly impossible. The first jets were developed during World War II and saw combat in its last two years. Messerschmitt developed the first operational jet fighter, the Me 262A, which primarily served with the Luftwaffe's JG 7, the world's first jet-fighter wing. It was considerably faster than contemporary piston-driven aircraft, and in the hands of a competent pilot proved quite difficult for Allied pilots to defeat. The Luftwaffe never deployed the design in numbers sufficient to stop the Allied air campaign, and a combination of fuel shortages, pilot losses, and technical difficulties with the engines kept the number of sorties low. Nevertheless, the Me 262 indicated the obsolescence of piston-driven aircraft. Spurred by reports of the German jets, Britain's Gloster Meteor entered production soon after, and the two types entered service around the same time in 1944. Meteors commonly served to intercept the V-1 flying bomb, as they were faster than available piston-engined fighters at the low altitudes used by the flying bombs. Near the end of World War II came the first military jet-powered light-fighter design, the Heinkel He 162A Spatz (sparrow), which the Luftwaffe intended as a simple jet fighter for German home defense; a few examples saw squadron service with JG 1 by April 1945. By the end of the war almost all work on piston-powered fighters had ended. A few designs combining piston and jet engines for propulsion – such as the Ryan FR Fireball – saw brief use, but by the end of the 1940s virtually all new fighters were jet-powered.
Despite their advantages, the early jet fighters were far from perfect. The operational lifespan of early turbine engines was very short and the engines were temperamental, while power could be adjusted only slowly and acceleration was poor (even if top speed was higher) compared to the final generation of piston fighters. Many squadrons of piston-engined fighters remained in service until the early to mid-1950s, even in the air forces of the major powers (though the types retained were the best of the World War II designs). Innovations including ejection seats, air brakes and all-moving tailplanes became widespread in this period. The Americans began using jet fighters operationally after World War II, the wartime Bell P-59 having proven a failure. The Lockheed P-80 Shooting Star (soon re-designated F-80) was more prone to wave drag than the swept-wing Me 262, but had a cruise speed of 660 km/h (410 mph), as high as the maximum speed attainable by many piston-engined fighters. The British designed several new jets, including the distinctive single-engined, twin-boom de Havilland Vampire, which Britain sold to the air forces of many nations. The British transferred the technology of the Rolls-Royce Nene jet engine to the Soviets, who soon put it to use in their advanced Mikoyan-Gurevich MiG-15 fighter, which used fully swept wings that allowed flying closer to the speed of sound than straight-winged designs such as the F-80. The MiG-15's top speed of 1,075 km/h (668 mph) proved quite a shock to the American F-80 pilots who encountered them in the Korean War, along with their armament of two 23 mm cannons and a single 37 mm cannon. Nevertheless, in the first jet-versus-jet dogfight, which occurred during the Korean War on 8 November 1950, an F-80 shot down two North Korean MiG-15s. The Americans responded by rushing their own swept-wing fighter – the North American F-86 Sabre – into battle against the MiGs, which had similar transonic performance. The two aircraft had different strengths and weaknesses, but were similar enough that victory could go either way. While the Sabres focused primarily on downing MiGs and scored favorably against those flown by the poorly trained North Koreans, the MiGs in turn decimated US bomber formations and forced the withdrawal of numerous American types from operational service. The world's navies also transitioned to jets during this period, despite the need for catapult launching of the new aircraft. The U.S. Navy adopted the Grumman F9F Panther as its primary jet fighter in the Korean War period, and it was one of the first jet fighters to employ an afterburner. The de Havilland Sea Vampire became the Royal Navy's first jet fighter. Radar was used on specialized night fighters such as the Douglas F3D Skyknight, which also downed MiGs over Korea, and later fitted to the McDonnell F2H Banshee and the swept-wing Vought F7U Cutlass and McDonnell F3H Demon as all-weather/night fighters. Early versions of infrared (IR) air-to-air missiles (AAMs), such as the AIM-9 Sidewinder, and radar-guided missiles, such as the AIM-7 Sparrow, whose descendants remain in use today, were first introduced on the swept-wing subsonic Demon and Cutlass naval fighters.
1950s–1960s: Second-generation
Technological breakthroughs, lessons learned from the aerial battles of the Korean War, and a focus on conducting operations in a nuclear warfare environment shaped the development of second-generation fighters.
Technological advances in aerodynamics, propulsion and aerospace building-materials (primarily aluminum alloys) permitted designers to experiment with aeronautical innovations such as swept wings, delta wings, and area-ruled fuselages. Widespread use of afterburning turbojet engines made these the first production aircraft to break the sound barrier, and the ability to sustain supersonic speeds in level flight became a common capability amongst fighters of this generation. Fighter designs also took advantage of new electronics technologies that made effective radars small enough to carry aboard smaller aircraft. Onboard radars permitted detection of enemy aircraft beyond visual range, thereby improving the handoff of targets by longer-ranged ground-based warning- and tracking-radars. Similarly, advances in guided-missile development allowed air-to-air missiles to begin supplementing the gun as the primary offensive weapon for the first time in fighter history. During this period, passive-homing infrared-guided (IR) missiles became commonplace, but early IR missile sensors had poor sensitivity and a very narrow field of view (typically no more than 30°), which limited their effective use to only close-range, tail-chase engagements. Radar-guided (RF) missiles were introduced as well, but early examples proved unreliable. These semi-active radar homing (SARH) missiles could track and intercept an enemy aircraft "painted" by the launching aircraft's onboard radar. Medium- and long-range RF air-to-air missiles promised to open up a new dimension of "beyond-visual-range" (BVR) combat, and much effort concentrated on further development of this technology. The prospect of a potential third world war featuring large mechanized armies and nuclear-weapon strikes led to a degree of specialization along two design approaches: interceptors, such as the English Electric Lightning and Mikoyan-Gurevich MiG-21F; and fighter-bombers, such as the Republic F-105 Thunderchief and the Sukhoi Su-7B. Dogfighting, per se, became de-emphasized in both cases. The interceptor was an outgrowth of the vision that guided missiles would completely replace guns and combat would take place at beyond-visual ranges. As a result, strategists designed interceptors with a large missile-payload and a powerful radar, sacrificing agility in favor of high speed, altitude ceiling and rate of climb. With a primary air-defense role, emphasis was placed on the ability to intercept strategic bombers flying at high altitudes. Specialized point-defense interceptors often had limited range and few, if any, ground-attack capabilities. Fighter-bombers could swing between air-superiority and ground-attack roles, and were often designed for a high-speed, low-altitude dash to deliver their ordnance. Television- and IR-guided air-to-surface missiles were introduced to augment traditional gravity bombs, and some were also equipped to deliver a nuclear bomb.
1960s–1970s: Third-generation jet fighters
The third generation witnessed continued maturation of second-generation innovations, but it is most marked by renewed emphases on maneuverability and on traditional ground-attack capabilities. Over the course of the 1960s, increasing combat experience with guided missiles demonstrated that combat would devolve into close-in dogfights. Analog avionics began to appear, replacing older "steam-gauge" cockpit instrumentation.
Enhancements to the aerodynamic performance of third-generation fighters included flight control surfaces such as canards, powered slats, and blown flaps. A number of technologies were tried for vertical/short takeoff and landing, but only thrust vectoring, as used on the Harrier, would prove successful. Growth in air-combat capability focused on the introduction of improved air-to-air missiles, radar systems, and other avionics. While guns remained standard equipment (early models of the F-4 being a notable exception), air-to-air missiles became the primary weapons for air-superiority fighters, which employed more sophisticated radars and medium-range RF AAMs to achieve greater "stand-off" ranges. Kill probabilities, however, proved unexpectedly low for RF missiles, due to poor reliability and improved electronic countermeasures (ECM) for spoofing radar seekers. Infrared-homing AAMs saw their fields of view expand to 45°, which strengthened their tactical usability. Nevertheless, the low dogfight loss-exchange ratios experienced by American fighters in the skies over Vietnam led the U.S. Navy to establish its famous "TOPGUN" fighter-weapons school, which provided a graduate-level curriculum to train fleet fighter pilots in advanced Air Combat Maneuvering (ACM) and Dissimilar Air Combat Training (DACT) tactics and techniques. This era also saw an expansion in ground-attack capabilities, principally in guided missiles, and witnessed the introduction of the first truly effective avionics for enhanced ground attack, including terrain-avoidance systems. Air-to-surface missiles (ASM) equipped with electro-optical (E-O) contrast seekers – such as the initial model of the widely used AGM-65 Maverick – became standard weapons, and laser-guided bombs (LGBs) became widespread in an effort to improve precision-attack capabilities. Guidance for such precision-guided munitions (PGM) was provided by externally mounted targeting pods, which were introduced in the mid-1960s. The third generation also led to the development of new automatic-fire weapons, primarily rotary cannons that use an electric motor to drive the mechanism. This allowed a plane to carry a single multi-barrel weapon (such as the Vulcan), and provided greater accuracy and rates of fire. Powerplant reliability increased, and jet engines became "smokeless" to make it harder to sight aircraft at long distances. Dedicated ground-attack aircraft (like the Grumman A-6 Intruder, SEPECAT Jaguar and LTV A-7 Corsair II) offered longer range, more sophisticated night-attack systems or lower cost than supersonic fighters. With variable-geometry wings, the supersonic F-111 introduced the Pratt & Whitney TF30, the first turbofan equipped with an afterburner. The ambitious project sought to create a versatile common fighter for many roles and services. It would serve well as an all-weather bomber, but lacked the performance to defeat other fighters. The McDonnell F-4 Phantom was designed to capitalize on radar and missile technology as an all-weather interceptor, but emerged as a versatile strike-bomber nimble enough to prevail in air combat, adopted by the U.S. Navy, Air Force and Marine Corps. Despite numerous shortcomings that would not be fully addressed until newer fighters, the Phantom claimed 280 aerial kills (more than any other U.S. fighter) over Vietnam. With range and payload capabilities that rivaled those of World War II bombers such as the B-24 Liberator, the Phantom would become a highly successful multirole aircraft.
1970s–2000s: Fourth-generation

Fourth-generation fighters continued the trend towards multirole configurations, and were equipped with increasingly sophisticated avionics and weapon systems. Fighter designs were significantly influenced by the Energy-Maneuverability (E-M) theory developed by Colonel John Boyd and mathematician Thomas Christie, based upon Boyd's combat experience in the Korean War and as a fighter-tactics instructor during the 1960s. E-M theory emphasized the value of aircraft specific energy maintenance as an advantage in fighter combat. Boyd perceived maneuverability as the primary means of getting "inside" an adversary's decision-making cycle, a process Boyd called the "OODA loop" (for "Observation-Orientation-Decision-Action"). This approach emphasized aircraft designs capable of performing "fast transients" – quick changes in speed, altitude, and direction – as opposed to relying chiefly on high speed alone. E-M characteristics were first applied to the McDonnell Douglas F-15 Eagle, but Boyd and his supporters believed these performance parameters called for a small, lightweight aircraft with a larger, higher-lift wing. The small size would minimize drag and increase the thrust-to-weight ratio, while the larger wing would minimize wing loading; although the reduced wing loading tends to lower top speed and can cut range, it increases payload capacity, and the range reduction can be compensated for by increased fuel in the larger wing. The efforts of Boyd's "Fighter Mafia" would result in the General Dynamics F-16 Fighting Falcon (now Lockheed Martin's). The F-16's maneuverability was further enhanced by its slight aerodynamic instability. This technique, called "relaxed static stability" (RSS), was made possible by the introduction of the "fly-by-wire" (FBW) flight-control system (FLCS), which in turn was enabled by advances in computers and in system-integration techniques. Analog avionics, required to enable FBW operations, became a fundamental requirement, but began to be replaced by digital flight-control systems in the latter half of the 1980s. Likewise, Full Authority Digital Engine Control (FADEC), which electronically manages powerplant performance, was introduced with the Pratt & Whitney F100 turbofan. The F-16's sole reliance on electronics and wires to relay flight commands, instead of the usual cables and mechanical linkage controls, earned it the sobriquet of "the electric jet". Electronic FLCS and FADEC quickly became essential components of all subsequent fighter designs. Other innovative technologies introduced in fourth-generation fighters included pulse-Doppler fire-control radars (providing a "look-down/shoot-down" capability), head-up displays (HUD), "hands on throttle-and-stick" (HOTAS) controls, and multi-function displays (MFD), all of which became essential equipment. Aircraft designers began to incorporate composite materials in the form of bonded-aluminum honeycomb structural elements and graphite epoxy laminate skins to reduce weight. Infrared search-and-track (IRST) sensors became widespread for air-to-ground weapons delivery, and appeared for air-to-air combat as well. "All-aspect" IR AAMs became standard air-superiority weapons, permitting engagement of enemy aircraft from any angle (although the field of view remained relatively limited). The first long-range active-radar-homing RF AAM entered service with the AIM-54 Phoenix, which solely equipped the Grumman F-14 Tomcat, one of the few variable-sweep-wing fighter designs to enter production.
Even with the tremendous advancement of air-to-air missiles in this era, internal guns remained standard equipment. Another revolution came in the form of a stronger reliance on ease of maintenance, which led to standardization of parts, reductions in the number of access panels and lubrication points, and overall parts reduction in more complicated equipment like the engines. Some early jet fighters required 50 man-hours of work by a ground crew for every hour the aircraft was in the air; later models substantially reduced this, allowing faster turn-around times and more sorties in a day. Some modern military aircraft require only 10 man-hours of work per hour of flight time, and others are even more efficient. Aerodynamic innovations included variable-camber wings and exploitation of the vortex lift effect to achieve higher angles of attack through the addition of leading-edge extension devices such as strakes. Unlike the interceptors of previous eras, most fourth-generation air-superiority fighters were designed to be agile dogfighters (although the Mikoyan MiG-31 and Panavia Tornado ADV are notable exceptions). The continually rising cost of fighters, however, continued to emphasize the value of multirole fighters. The need for both types of fighters led to the "high/low mix" concept, which envisioned a high-capability and high-cost core of dedicated air-superiority fighters (like the F-15 and Su-27) supplemented by a larger contingent of lower-cost multirole fighters (such as the F-16 and MiG-29). Most fourth-generation fighters, such as the McDonnell Douglas F/A-18 Hornet, HAL Tejas, JF-17 and Dassault Mirage 2000, are true multirole warplanes, designed as such from the start. This was facilitated by multimode avionics that could switch seamlessly between air and ground modes. The earlier approaches of adding on strike capabilities or designing separate models specialized for different roles generally became passé (with the Panavia Tornado being an exception in this regard). Attack roles were generally assigned to dedicated ground-attack aircraft such as the Sukhoi Su-25 and the A-10 Thunderbolt II. A typical US Air Force fighter wing of the period might contain a mix of one air-superiority squadron (F-15C), one strike-fighter squadron (F-15E), and two multirole fighter squadrons (F-16C). Perhaps the most novel technology introduced for combat aircraft was stealth, which involves the use of special "low-observable" (L-O) materials and design techniques to reduce the susceptibility of an aircraft to detection by the enemy's sensor systems, particularly radars. The first stealth aircraft were the Lockheed F-117 Nighthawk attack aircraft (introduced in 1983) and the Northrop Grumman B-2 Spirit bomber (which first flew in 1989). Although no stealthy fighters per se appeared among the fourth generation, some radar-absorbent coatings and other L-O treatments developed for these programs are reported to have been subsequently applied to fourth-generation fighters.

1990s–2000s: 4.5-generation

The end of the Cold War in 1991 led many governments to significantly decrease military spending as a "peace dividend". Air force inventories were cut. Research and development programs working on "fifth-generation" fighters took serious hits. Many programs were canceled during the first half of the 1990s, and those that survived were "stretched out".
While the practice of slowing the pace of development reduces annual investment expenses, it comes at the penalty of increased overall program and unit costs over the long term. In this instance, however, it also permitted designers to make use of the tremendous achievements being made in the fields of computers, avionics and other flight electronics, which had become possible largely due to the advances made in microchip and semiconductor technologies in the 1980s and 1990s. This opportunity enabled designers to develop fourth-generation designs – or redesigns – with significantly enhanced capabilities. These improved designs have become known as "Generation 4.5" fighters, recognizing their intermediate nature between the 4th and 5th generations, and their contribution in furthering development of individual fifth-generation technologies. The primary characteristics of this sub-generation are the application of advanced digital avionics and aerospace materials, modest signature reduction (primarily RF "stealth"), and highly integrated systems and weapons. These fighters have been designed to operate in a "network-centric" battlefield environment and are principally multirole aircraft. Key weapons technologies introduced include beyond-visual-range (BVR) AAMs; Global Positioning System (GPS)–guided weapons; solid-state phased-array radars; helmet-mounted sights; and improved secure, jamming-resistant datalinks. Thrust vectoring to further improve transient maneuvering capabilities has also been adopted by many 4.5th-generation fighters, and uprated powerplants have enabled some designs to achieve a degree of "supercruise" ability. Stealth characteristics focus primarily on frontal-aspect radar cross section (RCS) signature-reduction techniques, including radar-absorbent materials (RAM), L-O coatings and limited shaping techniques. "Half-generation" designs are based either on existing airframes or on new airframes following design theory similar to that of previous iterations; however, these modifications have introduced the structural use of composite materials to reduce weight, greater fuel fractions to increase range, and signature-reduction treatments to achieve lower RCS compared to their predecessors. Prime examples of such aircraft, which are based on new airframe designs making extensive use of carbon-fiber composites, include the Eurofighter Typhoon, Dassault Rafale, Saab JAS 39 Gripen, JF-17 Thunder, and HAL Tejas Mark 1A. Most other 4.5-generation aircraft are modified variants of existing airframes from the earlier fourth generation. Such aircraft are generally heavier; examples include the Boeing F/A-18E/F Super Hornet, an evolution of the F/A-18 Hornet; the F-15E Strike Eagle, a ground-attack/multirole variant of the F-15 Eagle; the Su-30SM and Su-35S, modified variants of the Sukhoi Su-27; and the MiG-35, an upgraded version of the Mikoyan MiG-29. The Su-30SM/Su-35S and MiG-35 feature thrust-vectoring engine nozzles to enhance maneuvering. Upgraded versions of the F-16 are also considered members of the 4.5 generation. Generation 4.5 fighters first entered service in the early 1990s, and most of them are still being produced and evolved.
They may well continue in production alongside fifth-generation fighters due to the expense of developing the advanced level of stealth technology needed to achieve aircraft designs featuring very low observables (VLO), one of the defining features of fifth-generation fighters. Of the 4.5th-generation designs, the Strike Eagle, Super Hornet, Typhoon, Gripen, and Rafale have been used in combat. The U.S. government has defined 4.5-generation fighter aircraft as those that "(1) have advanced capabilities, including— (A) AESA radar; (B) high capacity data-link; and (C) enhanced avionics; and (2) have the ability to deploy current and reasonably foreseeable advanced armaments."

2000s–2020s: Fifth-generation

Currently the cutting edge of fighter design, fifth-generation fighters are characterized by being designed from the start to operate in a network-centric combat environment, and to feature extremely low, all-aspect, multi-spectral signatures employing advanced materials and shaping techniques. They have multifunction AESA radars with high-bandwidth, low-probability-of-intercept (LPI) data transmission capabilities. The infrared search-and-track sensors incorporated for air-to-air combat and air-to-ground weapons delivery in 4.5th-generation fighters are now fused with other sensors for situational awareness IRST (SAIRST), which constantly tracks all targets of interest around the aircraft so the pilot need not guess where to look. These sensors, along with advanced avionics, glass cockpits, helmet-mounted sights (not currently on the F-22), and improved secure, jamming-resistant LPI datalinks, are highly integrated to provide multi-platform, multi-sensor data fusion for vastly improved situational awareness while easing the pilot's workload. Avionics suites rely on extensive use of very-high-speed integrated circuit (VHSIC) technology, common modules, and high-speed data buses. Overall, the integration of all these elements is claimed to provide fifth-generation fighters with a "first-look, first-shot, first-kill capability". A key attribute of fifth-generation fighters is a small radar cross-section. Great care has been taken in designing their layout and internal structure to minimize RCS over a broad bandwidth of detection and tracking radar frequencies; furthermore, to maintain their VLO signature during combat operations, primary weapons are carried in internal weapon bays that are only briefly opened to permit weapon launch. Stealth technology has also advanced to the point where it can be employed without a tradeoff in aerodynamic performance, in contrast to previous stealth efforts. Some attention has also been paid to reducing IR signatures, especially on the F-22. Detailed information on these signature-reduction techniques is classified, but in general includes special shaping approaches, thermoset and thermoplastic materials, extensive structural use of advanced composites, conformal sensors, heat-resistant coatings, low-observable wire meshes to cover intake and cooling vents, heat-ablating tiles on the exhaust troughs (seen on the Northrop YF-23), and coating internal and external metal areas with radar-absorbent materials and paint (RAM/RAP). The AESA radar offers unique capabilities for fighters (and it is also quickly becoming essential for Generation 4.5 aircraft designs, as well as being retrofitted onto some fourth-generation aircraft).
In addition to its high resistance to ECM and its LPI features, it enables the fighter to function as a sort of "mini-AWACS", providing high-gain electronic support measures (ESM) and electronic warfare (EW) jamming functions. Other technologies common to this latest generation of fighters include integrated electronic warfare system (INEWS) technology; integrated communications, navigation, and identification (CNI) avionics technology; centralized "vehicle health monitoring" systems for ease of maintenance; fiber-optic data transmission; stealth technology; and even hovering capabilities. Maneuver performance remains important and is enhanced by thrust vectoring, which also helps reduce takeoff and landing distances. Supercruise may or may not be featured; it permits flight at supersonic speeds without the use of the afterburner – a device that significantly increases IR signature when used in full military power. Such aircraft are sophisticated and expensive. The fifth generation was ushered in by the Lockheed Martin/Boeing F-22 Raptor in late 2005. The U.S. Air Force originally planned to acquire 650 F-22s, but only 187 were built. As a result, its unit flyaway cost (FAC) is around US$150 million. To spread the development costs – and production base – more broadly, the Joint Strike Fighter (JSF) program enrolled eight other countries as cost- and risk-sharing partners. Altogether, the nine partner nations anticipate procuring over 3,000 Lockheed Martin F-35 Lightning II fighters at an anticipated average FAC of $80–85 million. The F-35, however, is designed to be a family of three aircraft: a conventional take-off and landing (CTOL) fighter, a short take-off and vertical landing (STOVL) fighter, and a catapult-assisted take-off but arrested recovery (CATOBAR) fighter, each of which has a different unit price and slightly varying specifications in terms of fuel capacity (and therefore range), size and payload. Other countries have initiated fifth-generation fighter development projects. In December 2010, it was discovered that China was developing the fifth-generation Chengdu J-20, which took its maiden flight in January 2011. The Shenyang FC-31 took its maiden flight on 31 October 2012, and a carrier-based derivative, the J-35, has been developed for Chinese aircraft carriers. In Russia, the Sukhoi Su-57 became the first fifth-generation fighter to enter service with the Russian Aerospace Forces in 2020, and it has launched missiles in the Russo-Ukrainian War since 2022; United Aircraft Corporation has also pursued the Mikoyan LMFS and Sukhoi Su-75 Checkmate projects. Japan is exploring the technical feasibility of producing fifth-generation fighters. India is developing the Advanced Medium Combat Aircraft (AMCA), a medium-weight stealth fighter slated to enter serial production by the late 2030s. India had also initiated a joint fifth-generation heavy-fighter program with Russia called the FGFA; the project is suspected to have not yielded the desired progress or results for India and has been put on hold or dropped altogether. Other countries considering fielding an indigenous or semi-indigenous advanced fifth-generation aircraft include South Korea, Sweden, Turkey and Pakistan.

2020s–present: Sixth-generation

As of November 2018, France, Germany, China, Japan, Russia, the United Kingdom and the United States had announced the development of a sixth-generation aircraft program.
France and Germany will develop a joint sixth-generation fighter to replace their current fleets of Dassault Rafales, Eurofighter Typhoons, and Panavia Tornados by 2035. The overall development will be led by a collaboration of Dassault and Airbus, while the engines will reportedly be jointly developed by Safran and MTU Aero Engines. Thales and MBDA are also seeking a stake in the project. Spain officially joined the Franco-German project to develop a Next-Generation Fighter (NGF), which will form part of a broader Future Combat Air System (FCAS), with the signing of a letter of intent (LOI) on February 14, 2019. Currently at the concept stage, the first sixth-generation jet fighter is expected to enter service in the United States Navy in the 2025–30 period. The USAF seeks a new fighter for the 2030–50 period, named the "Next Generation Tactical Aircraft" ("Next Gen TACAIR"). The US Navy looks to replace its F/A-18E/F Super Hornets beginning in 2025 with the Next Generation Air Dominance air-superiority fighter. The United Kingdom's proposed stealth fighter is being developed by a European consortium called Team Tempest, consisting of BAE Systems, Rolls-Royce, Leonardo S.p.A. and MBDA. The aircraft is intended to enter service in 2035.

Weapons

Fighters were typically armed with guns only for air-to-air combat up through the late 1950s, though unguided rockets, intended mostly for air-to-ground use with limited air-to-air application, were deployed in WWII. From the late 1950s onward, guided missiles came into use for air-to-air combat. Throughout this history, fighters that attain a good firing position by surprise or maneuver have achieved the kill about one third to one half of the time, no matter what weapons were carried. The only major historical exception has been the low effectiveness of guided missiles in the first one to two decades of their existence. From WWI to the present, fighter aircraft have featured machine guns and automatic cannons as weapons, and these are still considered essential back-up weapons today. The power of air-to-air guns has increased greatly over time, keeping them relevant in the guided-missile era. In WWI, two rifle-caliber (approximately 0.30 caliber) machine guns were the typical armament, producing a weight of fire of about per second. In WWII, rifle-caliber machine guns also remained common, though usually in larger numbers or supplemented with much heavier 0.50 caliber machine guns or cannons. The standard WWII American fighter armament of six 0.50-cal (12.7 mm) machine guns fired a bullet weight of approximately 3.7 kg/sec (8.1 lbs/sec), at a muzzle velocity of 856 m/s (2,810 ft/s). British and German aircraft tended to use a mix of machine guns and autocannon, the latter firing explosive projectiles. Later British fighters were exclusively cannon-armed, while the US was not able to produce a reliable cannon in high numbers, and most American fighters remained equipped only with heavy machine guns despite the US Navy's pressing for a change to 20 mm. Post-war, 20–30 mm revolver cannons and rotary cannons were introduced. The modern 20 mm M61 Vulcan rotary cannon that is standard on current American fighters fires a projectile weight of about 10 kg/s (22 lb/s), nearly three times that of six 0.50-cal machine guns, with a higher muzzle velocity of 1,052 m/s (3,450 ft/s) supporting a flatter trajectory, and with exploding projectiles.
Modern fighter gun systems also feature ranging radar and lead-computing electronic gun sights, which ease the aiming problem by compensating for projectile drop and for target motion during the projectile's time of flight (target lead) in the complex three-dimensional maneuvering of air-to-air combat. However, getting into position to use the guns is still a challenge. The range of guns is longer than in the past but still quite limited compared to missiles, with modern gun systems having a maximum effective range of approximately 1,000 meters. A high probability of kill also typically requires firing from the rear hemisphere of the target. Despite these limits, when pilots are well trained in air-to-air gunnery and these conditions are satisfied, gun systems are tactically effective and highly cost-efficient. The cost of a gun firing pass is far less than that of firing a missile, and the projectiles are not subject to the thermal and electronic countermeasures that can sometimes defeat missiles. When the enemy can be approached to within gun range, the lethality of guns is approximately a 25% to 50% chance of "kill per firing pass". The range limitations of guns, and the desire to overcome large variations in fighter-pilot skill and thus achieve higher force effectiveness, led to the development of the guided air-to-air missile. There are two main variations: heat-seeking (infrared homing) and radar-guided. Radar missiles are typically several times heavier and more expensive than heat-seekers, but have longer range, greater destructive power, and the ability to track through clouds. The highly successful AIM-9 Sidewinder heat-seeking (infrared homing) short-range missile was developed by the United States Navy in the 1950s. These small missiles are easily carried by lighter fighters, and provide effective ranges of approximately . Beginning with the AIM-9L in 1977, subsequent versions of the Sidewinder have added all-aspect capability: the ability to use the lower heat of air-to-skin friction on the target aircraft to track it from the front and sides. The latest AIM-9X, which entered service in 2003, also features "off-boresight" and "lock-on after launch" capabilities, which allow the pilot to make a quick launch of a missile to track a target anywhere within the pilot's vision. The AIM-9X development cost was US$3 billion in mid-to-late-1990s dollars, and its 2015 per-unit procurement cost was $0.6 million. The missile weighs 85.3 kg (188 lbs), and has a maximum range of 35 km (22 miles) at higher altitudes. Like most air-to-air missiles, its range at lower altitude can be limited to as little as about one third of the maximum, due to higher drag and less ability to coast downward. The effectiveness of infrared-homing missiles was only 7% early in the Vietnam War, but improved to approximately 15%–40% over the course of the war. The AIM-4 Falcon used by the USAF had kill rates of approximately 7% and was considered a failure. The AIM-9B Sidewinder, introduced later, achieved 15% kill rates, and the further improved AIM-9D and J models reached 19%. The AIM-9G, used in the last year of the Vietnam air war, achieved 40%. Israel relied almost entirely on guns in the 1967 Six-Day War, achieving 60 kills against 10 losses. However, Israel made much greater use of steadily improving heat-seeking missiles in the 1973 Yom Kippur War. In that extensive conflict, Israel scored 171 of 261 total kills with heat-seeking missiles (65.5%), 5 kills with radar-guided missiles (1.9%), and 85 kills with guns (32.6%).
The AIM-9L Sidewinder scored 19 kills out of 26 fired missiles (73%) in the 1982 Falklands War. But in a conflict against opponents using thermal countermeasures, the United States scored only 11 kills out of 48 fired (Pk = 23%) with the follow-on AIM-9M in the 1991 Gulf War. Radar-guided missiles fall into two main guidance types. In the historically more common case, semi-active radar homing, the missile homes in on radar signals transmitted from the launching aircraft and reflected from the target. This has the disadvantage that the firing aircraft must maintain radar lock on the target, and is thus less free to maneuver and more vulnerable to attack. A widely deployed missile of this type was the AIM-7 Sparrow, which entered service in 1954 and was produced in improving versions until 1997. In the more advanced active radar homing, the missile is guided to the vicinity of the target by internal data on its projected position, and then "goes active" with an internally carried small radar system to conduct terminal guidance to the target. This eliminates the requirement for the firing aircraft to maintain radar lock, and thus greatly reduces risk. A prominent example is the AIM-120 AMRAAM, which was first fielded in 1991 as the AIM-7 replacement, and which has no firm retirement date. The current AIM-120D version has a maximum high-altitude range of greater than , and costs approximately $2.4 million each (2016). As is typical of most other missiles, range at lower altitude may be as little as one third that at high altitude. In the Vietnam air war, radar-missile kill reliability was approximately 10% at shorter ranges, and even worse at longer ranges, due to reduced radar return and greater time for the target aircraft to detect the incoming missile and take evasive action. At one point in the Vietnam War, the U.S. Navy fired 50 AIM-7 Sparrow radar-guided missiles in a row without a hit. Between 1958 and 1982, across five wars, there were 2,014 combined heat-seeking and radar-guided missile firings by fighter pilots engaged in air-to-air combat, achieving 528 kills, of which 76 were radar-missile kills, for a combined effectiveness of 26%. However, only 4 of the 76 radar-missile kills were in the beyond-visual-range mode intended to be the strength of radar-guided missiles. The United States invested over $10 billion in air-to-air radar-missile technology from the 1950s to the early 1970s. Amortized over the actual kills achieved by the U.S. and its allies, each radar-guided missile kill thus cost over $130 million. The defeated enemy aircraft were for the most part older MiG-17s, −19s, and −21s, with new costs of $0.3 million to $3 million each. Thus, the radar-missile investment over that period far exceeded the value of the enemy aircraft destroyed, and furthermore had very little of the intended BVR effectiveness. However, continuing heavy development investment and rapidly advancing electronic technology led to significant improvement in radar-missile reliability from the late 1970s onward. Radar-guided missiles achieved a 75% Pk (9 kills out of 12 shots) in operations in the Gulf War in 1991. The percentage of kills achieved by radar-guided missiles also surpassed 50% of total kills for the first time by 1991. Since 1991, 20 of 61 kills worldwide have been made beyond visual range using radar missiles. Discounting an accidental friendly-fire kill, in operational use the AIM-120D (the current main American radar-guided missile) has achieved 9 kills out of 16 shots, for a 56% Pk.
Six of these kills were BVR, out of 13 shots, for a 46% BVR Pk. Though all these kills were against less capable opponents who were not equipped with operating radar, electronic countermeasures, or a comparable weapon themselves, the BVR Pk was a significant improvement from earlier eras. However, a current concern is electronic countermeasures to radar missiles, which are thought to be reducing the effectiveness of the AIM-120D. Some experts believe that the European Meteor missile, the Russian R-37M, and the Chinese PL-15 are more resistant to countermeasures and more effective than the AIM-120D. Now that higher reliabilities have been achieved, both types of missiles allow the fighter pilot to often avoid the risk of the short-range dogfight, where only the more experienced and skilled fighter pilots tend to prevail, and where even the finest fighter pilot can simply get unlucky. Taking maximum advantage of complicated missile parameters in both attack and defense against competent opponents does take considerable experience and skill, but against surprised opponents lacking comparable capability and countermeasures, air-to-air missile warfare is relatively simple. By partially automating air-to-air combat and reducing reliance on gun kills mostly achieved by only a small expert fraction of fighter pilots, air-to-air missiles now serve as highly effective force multipliers.
Finite-state machine
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types: deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of a sequence of numbers in the proper order. The finite-state machine has less computational power than some other models of computation, such as the Turing machine. The computational-power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.

Example: coin-operated turnstile

An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. A customer pushing through the arms gives a push input and resets the state to Locked. The turnstile state machine can be represented by a state-transition table, showing for each possible state the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:

Current State | Input | Next State | Output
Locked        | coin  | Unlocked   | Unlocks the turnstile so that the customer can push through.
Locked        | push  | Locked     | None
Unlocked      | coin  | Unlocked   | None
Unlocked      | push  | Locked     | When the customer has pushed through, locks the turnstile.
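This transition table translates directly into code. The following is a minimal sketch in Python; the dictionary layout and function names are illustrative, not taken from any particular library:

```python
# Sketch of the coin-operated turnstile as a table-driven state machine.
# Keys are (current state, input) pairs; values are (next state, output).
TRANSITIONS = {
    ("Locked",   "coin"): ("Unlocked", "Unlocks the turnstile"),
    ("Locked",   "push"): ("Locked",   None),
    ("Unlocked", "coin"): ("Unlocked", None),
    ("Unlocked", "push"): ("Locked",   "Locks the turnstile"),
}

def run(inputs, state="Locked"):
    """Feed a sequence of 'coin'/'push' inputs to the turnstile."""
    for symbol in inputs:
        state, output = TRANSITIONS[(state, symbol)]
        if output:
            print(output)
    return state

# Pushing while locked does nothing; a coin unlocks; a push relocks.
assert run(["push", "coin", "coin", "push"]) == "Locked"
```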
rowspan="2"|Unlocked | coin || Unlocked || None |- | push || Locked || When the customer has pushed through, locks the turnstile. |} The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state. Concepts and terminology A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received. For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state. In some finite-state machine representations, it is also possible to associate actions with a state: an entry action: performed when entering the state, and an exit action: performed when exiting the state. Representations State/Event table Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete action's information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action's information is possible using state tables (see also virtual finite-state machine). UML state machines The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines. SDL state machines The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition: send an event receive an event start a timer cancel a timer start another concurrent state machine decision SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite-state machine executable. Other state diagrams There are a large number of variants to represent an FSM such as the one in figure 3. Usage In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation. 
In computer science, finite-state machines are widely used in modeling of application behavior (control theory), design of hardware digital systems, software engineering, compilers, network protocols, and computational linguistics.

Classification

Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.

Acceptors

Acceptors (also called detectors or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either accepting or non-accepting. Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string "nice". In this acceptor, the only accepting state is state 7. A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not. An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is accepted by the acceptor. By definition, the languages accepted by acceptors are the regular languages. The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring. An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s. S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.

Classifiers

Classifiers are a generalization of acceptors that produce n-ary output where n is strictly greater than two.

Transducers

Transducers produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics. In control applications, two types are distinguished:

Moore machine

The FSM uses only entry actions, i.e., output depends only on state. The advantage of the Moore model is a simplification of the behaviour. Consider an elevator door. The state machine recognizes two commands, "command_open" and "command_close", which trigger state changes. The entry action (E:) in state "Opening" starts a motor opening the door, and the entry action in state "Closing" starts a motor in the other direction closing the door. States "Opened" and "Closed" stop the motor when the door is fully opened or closed. They signal to the outside world (e.g., to other state machines) the situation: "door is open" or "door is closed".

Mealy machine

The FSM also uses input actions, i.e., output depends on input and state. The use of a Mealy FSM often leads to a reduction in the number of states.
The example in figure 7 shows a Mealy FSM implementing the same behaviour as in the Moore example (the behaviour depends on the implemented FSM execution model and will work, e.g., for a virtual FSM but not for an event-driven FSM). There are two input actions (I:): "start motor to close the door if command_close arrives" and "start motor in the other direction to open the door if command_open arrives". The "opening" and "closing" intermediate states are not shown.

Sequencers

Sequencers (also called generators) are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.

Determinism

A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality. A finite-state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.

Alternative semantics

There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.

Mathematical model

In accordance with the general classification, the following formal definitions are found. A deterministic finite-state machine or deterministic finite-state acceptor is a quintuple (Σ, S, s₀, δ, F), where: Σ is the input alphabet (a finite non-empty set of symbols); S is a finite non-empty set of states; s₀ ∈ S is an initial state; δ : S × Σ → S is the state-transition function (in a nondeterministic finite automaton it would be δ : S × Σ → P(S), i.e. δ would return a set of states); and F ⊆ S is the set of final states, a (possibly empty) subset of S. For both deterministic and non-deterministic FSMs, it is conventional to allow δ to be a partial function, i.e. δ(s, x) does not have to be defined for every combination of s ∈ S and x ∈ Σ. If an FSM M is in a state s, the next symbol is x, and δ(s, x) is not defined, then M can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.
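The quintuple maps directly onto code. The following is a small sketch in Python of the even-number-of-0s acceptor described above (cf. Fig. 5); the variable names mirror the formal components and are otherwise illustrative:

```python
# Deterministic finite-state acceptor (Σ, S, s0, δ, F) for binary
# strings containing an even number of 0s (cf. Fig. 5).
SIGMA = {"0", "1"}            # input alphabet Σ
STATES = {"S1", "S2"}         # set of states S
START = "S1"                  # initial state s0: an even count of 0s seen so far
DELTA = {                     # transition function δ : S × Σ → S
    ("S1", "0"): "S2",
    ("S1", "1"): "S1",
    ("S2", "0"): "S1",
    ("S2", "1"): "S2",
}
FINAL = {"S1"}                # set of final (accepting) states F ⊆ S

def accepts(word):
    state = START
    for symbol in word:
        # δ is total here; a missing (state, symbol) pair would raise
        # KeyError, corresponding to a partial δ rejecting the input.
        state = DELTA[(state, symbol)]
    return state in FINAL

assert accepts("")        # zero 0s is an even number of 0s
assert accepts("10110")   # two 0s
assert not accepts("0")   # one 0
```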
A finite-state transducer is a sextuple (Σ, Γ, S, s₀, δ, ω), where: Σ is the input alphabet (a finite non-empty set of symbols); Γ is the output alphabet (a finite non-empty set of symbols); S is a finite non-empty set of states; s₀ ∈ S is the initial state; δ : S × Σ → S is the state-transition function; and ω is the output function. If the output function depends on the state and the input symbol (ω : S × Σ → Γ), that definition corresponds to the Mealy model, and can be modelled as a Mealy machine. If the output function depends only on the state (ω : S → Γ), that definition corresponds to the Moore model, and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system. If we disregard the first output symbol of a Moore machine, ω(s₀), then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward, because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol.

Optimization

Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.

Implementation

Hardware applications

In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip-flops, or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of the FSM. One of the classic hardware implementations is the Richards controller. In a Medvedev machine, the output is directly connected to the state flip-flops, minimizing the time delay between flip-flops and output. Through state encoding for low power, state machines may be optimized to minimize power consumption.

Software applications

The following concepts are commonly used to build software applications with finite-state machines: automata-based programming, the event-driven finite-state machine, the virtual finite-state machine, and the state design pattern.

Finite-state machines and compilers

Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser. Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.
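To illustrate the lexical-analysis role, a hand-written scanner is often just a state machine over character classes. The following is a toy sketch in Python; the states, token names, and the simplifying assumption that tokens are separated by spaces are all inventions of this example:

```python
# Toy FSM-based lexical analyzer producing NAME and NUMBER tokens.
# States: "start" (between tokens), "name", "number".
def tokenize(text):
    tokens, state, lexeme = [], "start", ""
    for ch in text + " ":          # trailing space flushes the final token
        if state == "start":
            if ch.isdigit():
                state, lexeme = "number", ch
            elif ch.isalpha():
                state, lexeme = "name", ch
        elif state == "number":
            if ch.isdigit():
                lexeme += ch
            else:                  # token ends; this toy expects a space here
                tokens.append(("NUMBER", lexeme))
                state, lexeme = "start", ""
        elif state == "name":
            if ch.isalnum():
                lexeme += ch
            else:
                tokens.append(("NAME", lexeme))
                state, lexeme = "start", ""
    return tokens

assert tokenize("x1 42") == [("NAME", "x1"), ("NUMBER", "42")]
```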
Functional programming
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).

History

The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT).
Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of a generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language. APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene recursion equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1970s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types.
This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990. More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings.

Concepts

A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.

First-class and higher-order functions

Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f. Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.

Pure functions

Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code. If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence); i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel, and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation).
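A brief sketch in Python of these two ideas (the helper names are invented for the example): partial application builds the successor function from addition, and purity is what makes memoization safe:

```python
from functools import lru_cache, partial

# Partial application: applying two-argument addition to one argument
# yields a new one-argument function, the successor function.
def add(x, y):
    return x + y

successor = partial(add, 1)
assert successor(41) == 42

# Because a pure function always returns the same result for the same
# arguments, caching (memoization) cannot change the program's meaning.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040  # each distinct argument is computed only once
```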
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics.

Recursion

Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space proportional to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail-recursion optimization can be implemented by transforming the program into continuation-passing style during compilation, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken Scheme intentionally maintains a stack and lets the stack overflow; when this happens, its garbage collector reclaims space, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages. Most general-purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special-purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.
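The following sketch in Python contrasts the two shapes of recursion described above. Python itself does not perform tail-call optimization, so the accumulator version is purely illustrative of the form that a compiler with proper tail calls can run in constant stack space:

```python
# Plain recursion: the multiplication happens *after* the recursive call
# returns, so every call's stack frame must be kept alive.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# Tail-recursive form: the recursive call is the very last action, with
# the running result carried in an accumulator. A compiler supporting
# proper tail calls can reuse one stack frame, like an imperative loop.
# (Python does not perform this optimization; the shape is illustrative.)
def factorial_tail(n, acc=1):
    return acc if n == 0 else factorial_tail(n - 1, acc * n)

assert factorial(10) == factorial_tail(10) == 3628800
```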
Strict versus non-strict evaluation

Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression

print length([2+1, 3*2, 1/0, 5-4])

fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. (A runnable illustration of this example appears at the end of this section.)

The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.

Type systems

Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (rejecting some valid programs), as opposed to the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time, at the risk of false negative errors (invalid programs are rejected only at runtime, when there is enough information to distinguish them from valid programs). The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.

Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified.
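Returning to the strictness example promised above: in Haskell, which is lazy by default, the corresponding program really does print 4, because length needs only the spine of the list, never its elements. A minimal sketch:

main :: IO ()
main = do
  let xs = [2 + 1, 3 * 2, 1 `div` 0, 5 - 4]
  print (length xs)   -- prints 4; the division by zero is never evaluated
  -- print (sum xs)   -- by contrast, this would raise "divide by zero",
                      -- since sum forces every element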
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.

Referential transparency

Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent. Consider the C assignment statement x = x * 10: this changes the value assigned to the variable x. Say the initial value of x was 1; then two consecutive evaluations of the variable x yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives the program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. By contrast, a function such as int plusone(int x) {return x + 1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.

Data structures

Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating: calling the insert method results in only some of the tree's nodes being created, with the rest shared with the previous version.

Comparison to imperative programming

Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency.

Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item.

Imperative vs. functional programming

The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result.
Traditional imperative loop:

const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let result = 0;
for (let i = 0; i < numList.length; i++) {
  if (numList[i] % 2 === 0) {
    result += numList[i] * 10;
  }
}

Functional programming with higher-order functions:

const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  .filter(n => n % 2 === 0)
  .map(a => a * 10)
  .reduce((a, b) => a + b, 0);

Sometimes the abstractions offered by functional programming might lead to the development of more robust code that avoids certain issues that can arise when building upon large amounts of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).

Simulating state

There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).

Functional languages also simulate state by passing around immutable state values. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged; a sketch of this style appears below. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.

Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.

Efficiency issues

Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
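The state-passing style promised under "Simulating state" above can be sketched in a few lines of Haskell. The account functions here are hypothetical illustrations, not a library API: each operation takes the current state and returns a result together with a new state, never modifying the old one.

type Balance = Integer

deposit :: Integer -> Balance -> (Integer, Balance)
deposit amount bal = (amount, bal + amount)

withdraw :: Integer -> Balance -> (Integer, Balance)
withdraw amount bal
  | amount <= bal = (amount, bal - amount)
  | otherwise     = (0, bal)        -- refuse overdraft; state unchanged

main :: IO ()
main = do
  let bal0        = 100
      (_,   bal1) = deposit 50 bal0
      (got, bal2) = withdraw 30 bal1
  print (got, bal2)                 -- (30,120); bal0 and bal1 remain available, unchanged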
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Even though the copying involved in updating persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes.

Immutable data with separation of identity and state, together with shared-nothing schemes, can also be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this can eliminate the need for locks. This is how, for example, some java.util.concurrent classes are implemented: they are immutable variants of corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message-passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir and Akka.

Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).

Abstraction cost

Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:

(even? 5)
(.equals (mod 5 2) 0)

When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:

(defn even?
  "Returns true if n is even, throws an exception if n is not an integer"
  {:added "1.0"
   :static true}
  [n]
  (if (integer? n)
    (zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
    (throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))

has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of the difference can be attributed to the type checking and exception handling involved in the implementation of even?. Such abstractions are not always costly, however: the lo library for Go, for instance, implements various higher-order functions common in functional programming languages using generics, and in a benchmark provided by the library's author, calling map is only 4% slower than an equivalent for loop, with the same allocation profile, which can be attributed to various compiler optimizations, such as inlining.
One distinguishing feature of Rust is its zero-cost abstractions, meaning that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into standalone assembly instructions, without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime.

Functional programming in non-functional languages

It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first-class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.

In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements for closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.

Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.

Comparison to logic programming

Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:

mother(charles, elizabeth).
mother(harry, diana).

The program can be queried, like a functional program, to generate mothers from children:

?- mother(harry, X).
X = diana.

?- mother(charles, X).
X = elizabeth.

But it can also be queried backwards, to generate children:

?- mother(X, elizabeth).
X = charles.

?- mother(X, diana).
X = harry.

It can even be used to generate all instances of the mother relation:

?- mother(X, Y).
X = charles, Y = elizabeth.
X = harry, Y = diana.
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:

maternal_grandmother(X) = mother(mother(X)).

The same definition in relational notation needs to be written in the unnested form:

maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).

Here :- means if and the comma means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:

grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).

mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.

?- grandparent(X,Y).
X = harry, Y = elizabeth.
X = harry, Y = phillip.

Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.

Applications

Text editors

Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages. Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.

Spreadsheets

Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far they remain primarily academic in nature.

Microservices

Due to their composability, functional programming paradigms can be suitable for microservices-based architectures.

Academia

Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.

Industry

Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure.
Scala has been widely used in data science, while ClojureScript, Elm and PureScript are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro, one of the biggest e-commerce platforms in Poland.

Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.

Education

Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years.
https://en.wikipedia.org/wiki/Formal%20language
Formal language
In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar. The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.

In computer science, formal languages are used, among other things, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.

The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.

History

In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes. Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure thought."

In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and later devised the canonical system for the creation of formal languages.

In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.

Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus–Naur form to describe the syntax of a high-level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL 60 Report, in which he used Backus–Naur form to describe the formal part of ALGOL 60.
Words over an alphabet

An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.

A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.

In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.

Definition

A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'.

In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant. While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it.

Examples

The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:

Every nonempty string that does not contain "+" or "=" and does not start with "0" is in L.
The string "0" is in L.
A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L.
A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L.
No string is in L other than those implied by the previous rules.

Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc.

Constructions

For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).
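Both ideas above can be made executable. Below is a small Haskell sketch — with assumed names, not a standard library — that (1) lazily enumerates Σ* for Σ = {a, b} in length order, and (2) decides membership in the arithmetic language L defined under Examples:

-- (1) All words over Σ = {a, b}, in length order. The corecursive
-- definition relies on lazy evaluation; every finite word eventually appears.
allWords :: [String]
allWords = "" : [ w ++ [c] | w <- allWords, c <- "ab" ]

-- (2) Membership test for the example language L over {0..9, +, =}.
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitOn c rest

isNumber :: String -> Bool     -- rules 1 and 2: nonempty, digits only,
isNumber "0" = True            -- no leading zero (except "0" itself)
isNumber s   = not (null s) && all (`elem` "0123456789") s && head s /= '0'

isSum :: String -> Bool        -- rule 4: every '+' separates members of L
isSum = all isNumber . splitOn '+'

inL :: String -> Bool          -- rule 3: at most one '=' between two members
inL s = case splitOn '=' s of
  [t]    -> isSum t
  [t, u] -> isSum t && isSum u
  _      -> False

main :: IO ()
main = do
  print (take 8 allWords)               -- ["","a","b","aa","ab","ba","bb","aaa"]
  print (inL "23+4=555", inL "=234=+")  -- (True,False), matching the text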
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages:

L = Σ*, the set of all words over Σ;
L = {a}* = {aⁿ}, where n ranges over the natural numbers and "aⁿ" means "a" repeated n times (this is the set of words consisting only of the symbol "a");
the set of syntactically correct programs in a given programming language (the syntax of which is usually defined by a context-free grammar);
the set of inputs upon which a certain Turing machine halts; or
the set of maximal strings of alphanumeric ASCII characters on this line, i.e., the set {the, set, of, maximal, strings, alphanumeric, ASCII, characters, on, this, line, i, e}.

Language-specification formalisms

Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as

those strings generated by some formal grammar;
those strings described or matched by a particular regular expression;
those strings accepted by some automaton, such as a Turing machine or finite-state automaton;
those strings for which some decision procedure (an algorithm that asks a sequence of related YES/NO questions) produces the answer YES.

Typical questions asked about such formalisms include:

What is their expressive power? (Can formalism X describe every language that formalism Y can describe? Can it describe other languages?)
What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described by formalism X?)
What is their comparability? (How difficult is it to decide whether two languages, one described in formalism X and one in formalism Y, or in X again, are actually the same language?)

Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.

Operations on languages

Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations. Examples: suppose L₁ and L₂ are languages over some common alphabet Σ.

The concatenation L₁L₂ consists of all strings of the form vw where v is a string from L₁ and w is a string from L₂.
The intersection L₁ ∩ L₂ of L₁ and L₂ consists of all strings that are contained in both languages.
The complement ¬L of a language L with respect to Σ consists of all strings over Σ that are not in L.
The Kleene star: the language L* consisting of all words that are concatenations of zero or more words in the original language;
Reversal: Let ε be the empty word, then ε^R = ε, and for each non-empty word w = σ₁…σₙ (where σ₁, …, σₙ are elements of some alphabet), let w^R = σₙ…σ₁; then for a formal language L, L^R = {w^R | w ∈ L}.
String homomorphism.

Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.

[Table: Closure properties of language families (whether L₁ Op L₂ is in the family, where both L₁ and L₂ are in the language family given by the column), after Hopcroft and Ullman, for the families regular, DCFL, CFL, IND, CSL, recursive and RE, under the operations union, intersection, complement, concatenation, Kleene star, (ε-free) string homomorphism, substitution, inverse homomorphism, reverse, and intersection with a regular language.]

Applications

Programming languages

A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.

Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.

Formal theories, systems, and proofs

In mathematical logic, a formal theory is a set of sentences expressed in a formal language. A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not the other, for instance).
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.

Interpretations and models

Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.

The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true.
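This syntax/semantics split can be sketched in a few lines of Haskell (the datatype and names are illustrative): a tiny formal language of propositional formulas, together with an interpretation mapping each variable to a truth value.

-- Syntax: the formal language of propositional formulas over named variables.
data Prop = Var String | Not Prop | And Prop Prop | Or Prop Prop

-- Semantics: an interpretation assigns a truth value to every variable;
-- the compositional rules below determine the value of any formula.
eval :: (String -> Bool) -> Prop -> Bool
eval i (Var x)   = i x
eval i (Not p)   = not (eval i p)
eval i (And p q) = eval i p && eval i q
eval i (Or  p q) = eval i p || eval i q

main :: IO ()
main = do
  let interp x = x == "p"   -- an interpretation in which "p" is true, all else false
      phi = Or (Var "p") (And (Var "q") (Not (Var "p")))
  print (eval interp phi)   -- True: interp is a model of phi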
https://en.wikipedia.org/wiki/Four%20color%20theorem
Four color theorem
In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary of non-zero length (i.e., not merely a corner where three or more regions meet). It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubts remain.

The theorem is a stronger version of the five color theorem, which can be shown using a significantly simpler argument. Although the weaker five color theorem was proven already in the 1800s, the four color theorem resisted until 1976, when it was proven by Kenneth Appel and Wolfgang Haken. This came after many false proofs and mistaken counterexamples in the preceding decades. The Appel–Haken proof proceeds by analyzing a very large number of reducible configurations. This was improved upon in 1997 by Robertson, Sanders, Seymour, and Thomas, who managed to decrease the number of such configurations to 633 – still an extremely long case analysis. In 2005, the theorem was verified by Georges Gonthier using general-purpose theorem-proving software.

Formulation

In graph-theoretic terms, the theorem states that for a loopless planar graph G, its chromatic number satisfies χ(G) ≤ 4. The intuitive statement of the four color theorem – "given any separation of a plane into contiguous regions, the regions can be colored using at most four colors so that no two adjacent regions have the same color" – needs to be interpreted appropriately to be correct.

First, regions are adjacent if they share a boundary segment; two regions that share only isolated boundary points are not considered adjacent. (Otherwise, a map in the shape of a pie chart would make an arbitrarily large number of regions 'adjacent' to each other at a common corner, and would require an arbitrarily large number of colors as a result.) Second, bizarre regions, such as those with finite area but infinitely long perimeter, are not allowed; maps with such regions can require more than four colors. (To be safe, we can restrict to regions whose boundaries consist of finitely many straight line segments. It is allowed that a region has enclaves, that is, it entirely surrounds one or more other regions.) Note that the notion of "contiguous region" (technically: connected open subset of the plane) is not the same as that of a "country" on regular maps, since countries need not be contiguous (they may have exclaves; e.g., the Cabinda Province as part of Angola, Nakhchivan as part of Azerbaijan, Kaliningrad as part of Russia, France with its overseas territories, and Alaska as part of the United States are not contiguous). If we required the entire territory of a country to receive the same color, then four colors are not always sufficient. For instance, consider a simplified map in which two regions labeled A belong to the same country. If we wanted those regions to receive the same color, then five colors would be required, since the two A regions together are adjacent to four other regions, each of which is adjacent to all the others.

A simpler statement of the theorem uses graph theory.
The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves without crossings that lead from one region's vertex, across a shared boundary segment, to an adjacent region's vertex. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short: every planar graph is four-colorable.

History

Early proof attempts

As far as is known, the conjecture was first proposed on October 23, 1852, when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother, Frederick, was a student of Augustus De Morgan (the former advisor of Francis) at University College London. Francis asked Frederick about it, who then took it to De Morgan. (Francis Guthrie graduated later in 1852, and later became a professor of mathematics in South Africa.) According to De Morgan:

A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact—and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary line are differently colored—four colors may be wanted but not more—the following is his case in which four colors are wanted. Query cannot a necessity for five or more be invented...

"F.G.", perhaps one of the two Guthries, published the question in The Athenaeum in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference, by Arthur Cayley in 1879, in turn credits the conjecture to De Morgan.

There were several early failed attempts at proving the theorem. De Morgan believed that it followed from a simple fact about four regions, though he didn't believe that fact could be derived from more elementary facts.

This arises in the following way. We never need four colours in a neighborhood unless there be four counties, each of which has boundary lines in common with each of the other three. Such a thing cannot happen with four areas unless one or more of them be inclosed by the rest; and the colour used for the inclosed county is thus set free to go on with. Now this principle, that four areas cannot each have common boundary with all the other three without inclosure, is not, we fully believe, capable of demonstration upon anything more evident and more elementary; it must stand as a postulate.

One proposed proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and in 1891, Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years. In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to surfaces of arbitrary genus.
Tait, in 1880, showed that the four color theorem is equivalent to the statement that a certain type of graph (called a snark in modern terminology) must be non-planar. In 1943, Hugo Hadwiger formulated the Hadwiger conjecture, a far-reaching generalization of the four-color problem that still remains unsolved.

Proof by computer

During the 1960s and 1970s, German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel–Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work. Others took up his methods, including his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proved the theorem. They were assisted in some algorithmic work by John A. Koch.

If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts:

An unavoidable set is a set of configurations such that every map that satisfies some necessary conditions for being a minimal non-4-colorable triangulation (such as having minimum degree 5) must have at least one configuration from this set.
A reducible configuration is an arrangement of countries that cannot occur in a minimal counterexample. If a map contains a reducible configuration, the map can be reduced to a smaller map. This smaller map has the condition that if it can be colored with four colors, this also applies to the original map. This implies that if the original map cannot be colored with four colors the smaller map cannot either and so the original map is not minimal.

Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,834 reducible configurations (later reduced to 1,482) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand with the assistance of Haken's daughter Dorothea Blostein.

Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating "Four colors suffice." At the same time the unusual nature of the proof—it was the first major theorem to be proved with extensive computer assistance—and the complexity of the human-verifiable portion aroused considerable controversy.

In the early 1980s, rumors spread of a flaw in the Appel–Haken proof. Ulrich Schmidt at RWTH Aachen had examined Appel and Haken's proof for his master's thesis that was published in 1981.
He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure. In 1986, Appel and Haken were asked by the editor of Mathematical Intelligencer to write an article addressing the rumors of flaws in their proof. They replied that the rumors were due to a "misinterpretation of [Schmidt's] results" and obliged with a detailed article. Their magnum opus, Every Planar Map is Four-Colorable, a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989; it explained and corrected the error discovered by Schmidt as well as several further errors found by others.

Simplification and verification

Since the proving of the theorem, a new approach has led to both a shorter proof and a more efficient algorithm for 4-coloring maps. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic-time algorithm (requiring only O(n²) time, where n is the number of vertices), improving on a quartic-time algorithm based on Appel and Haken's proof. The new proof, based on the same ideas, is similar to Appel and Haken's but more efficient because it reduces the complexity of the problem and requires checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by a computer and are impractical to check by hand. In 2001, the same authors announced an alternative proof, by proving the snark conjecture. This proof remains unpublished, however. In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel.

Summary of proof ideas

The following discussion is a summary based on the introduction to Every Planar Map is Four Colorable. Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above.

Kempe's argument goes as follows. First, if planar regions separated by the graph are not triangulated (i.e., do not have exactly three edges in their boundaries), we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated.

Suppose v, e, and f are the number of vertices, edges, and regions (faces). Since each region is triangular and each edge is shared by two regions, we have that 2e = 3f. This together with Euler's formula, v − e + f = 2, can be used to show that 6v − 2e = 12. Now, the degree of a vertex is the number of edges abutting it. If vₙ is the number of vertices of degree n and D is the maximum degree of any vertex, then

∑ (6 − n)vₙ = 6v − 2e = 12,

where the sum runs over n from 1 to D. But since 12 > 0 and 6 − n ≤ 0 for all n ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less.

If there is a graph requiring 5 colors, then there is a minimal such graph, where removing any vertex makes it four-colorable. Call this graph G.
Then G cannot have a vertex of degree 3 or less, because if d(v) ≤ 3, we can remove v from G, four-color the smaller graph, then add back v and extend the four-coloring to it by choosing a color different from its neighbors.

Kempe also showed correctly that G can have no vertex of degree 4. As before, we remove the vertex v and four-color the remaining vertices. If all four neighbors of v are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and v can now be added back and colored red.

This leaves only the case where G has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one was satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem.

In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather, the form of the argument is generalized to considering configurations, which are connected subgraphs of G with the degree of each vertex (in G) specified. For example, the case described in the degree 4 vertex situation is the configuration consisting of a single vertex labelled as having degree 4 in G. As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a reducible configuration. If at least one of a set of configurations must occur somewhere in G, that set is called unavoidable. The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem.

Because G is triangular, and because the degree of each vertex in a configuration and all edges internal to the configuration are known, the number of vertices in G adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the ring of the configuration; a configuration with k vertices in its ring is a k-ring configuration, and the configuration together with its ring is called the ringed configuration. As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called initially good. For example, the single-vertex configurations above with 3 or fewer neighbors were initially good.
In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance.

Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially positive and negative "electrical charge" is distributed amongst the vertices so that the total is positive. Recall the formula above: each vertex is assigned an initial charge of 6 − deg(v), so that the total charge is 12. Then one "flows" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the discharging procedure. Since charge is preserved, some vertices still have positive charge. The rules restrict the possibilities for configurations of positively charged vertices, so enumerating all such possible configurations gives an unavoidable set.

As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years. A technical detail not discussed here but required to complete the proof is immersion reducibility.

False disproofs

The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused, as a matter of policy, to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it. Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were refuted. But many more, authored by amateurs, were never published at all.

Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors. This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid.

Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors.
Other false disproofs violate the assumptions of the theorem, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.

Three-coloring
While every planar map can be colored with four colors, deciding whether an arbitrary planar map can be colored with just three colors is NP-complete. A cubic map can be colored with only three colors if and only if each interior region has an even number of neighboring regions. In the US states map example, landlocked Missouri (MO) has eight neighbors (an even number): it must be colored differently from all of them, but the neighbors can alternate colors, so this part of the map needs only three colors. However, landlocked Nevada (NV) has five neighbors (an odd number): these neighbors require three colors, and it must be colored differently from them, so four colors are needed here.

Generalizations
Infinite graphs
The four color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is k-colorable, then the whole graph is also k-colorable. This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for first-order logic, simply by expressing the colorability of an infinite graph with a set of logical formulae.

Higher surfaces
One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula p = ⌊(7 + √(49 − 24χ))/2⌋, where the outermost brackets denote the floor function. Alternatively, for an orientable surface the formula can be given in terms of the genus of the surface, g: p = ⌊(7 + √(1 + 48g))/2⌋. (A small computational sketch of this bound appears at the end of this article.) This formula, the Heawood conjecture, was proposed by P. J. Heawood in 1890 and, after contributions by several people, proved by Gerhard Ringel and J. W. T. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) but requires only 6 colors, as shown by Philip Franklin in 1934. For example, the torus has Euler characteristic χ = 0 (and genus g = 1) and thus p = 7, so no more than 7 colors are required to color any map on a torus. This upper bound of 7 is sharp: certain toroidal polyhedra such as the Szilassi polyhedron require seven colors. A Möbius strip requires six colors, as do 1-planar graphs (graphs drawn with at most one simple crossing per edge). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, faces, or vertex-face pair have the same color, then again at most six colors are needed. For graphs whose vertices are represented as pairs of points on two distinct surfaces, with edges drawn as non-crossing curves on one of the two surfaces, the chromatic number can be at least 9 and is at most 12, but more precise bounds are not known; this is Gerhard Ringel's Earth–Moon problem.

Solid regions
There is no obvious extension of the coloring result to three-dimensional solid regions.
By using a set of n flexible rods, one can arrange that every rod touches every other rod. The set would then require n colors, or n + 1 including the empty space that also touches every rod. The number n can be taken to be any integer, as large as desired. Such examples were known to Frederick Guthrie in 1880. Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area), an unbounded number of colors may be necessary.

Relation to other areas of mathematics
Dror Bar-Natan gave a statement concerning Lie algebras and Vassiliev invariants which is equivalent to the four color theorem.

Use outside of mathematics
Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to cartographers. According to an article by the math historian Kenneth May, "Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property". The theorem also does not guarantee the usual cartographic requirement that non-contiguous regions of the same country (such as the exclave Alaska and the rest of the United States) be colored identically. Because the four color theorem does not apply when the regions on a map are not contiguous, it also does not apply to the world map: there, the ocean, Belgium, Germany, the Netherlands, and France all border one another, because the Netherlands borders France on the island of Saint Martin. This appears to be the only such configuration on the current world map.
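To close, here is a small computational companion to the Heawood bound discussed under Higher surfaces above. This is an illustrative Python sketch, not code from any cited work; the function name is an assumption.

    # Heawood bound: maximum number of colors needed on a closed surface
    # with Euler characteristic chi. The Klein bottle (chi = 0) is the
    # lone exception, needing only 6 colors (Franklin, 1934).
    from math import floor, sqrt

    def heawood_number(chi, klein_bottle=False):
        if klein_bottle:
            return 6
        return floor((7 + sqrt(49 - 24 * chi)) / 2)

    print(heawood_number(2))  # sphere: 4, matching the four color theorem
    print(heawood_number(0))  # torus: 7, attained by the Szilassi polyhedron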
Fossil
A fossil (from Classical Latin fossilis, 'obtained by digging') is any preserved remains, impression, or trace of any once-living thing from a past geological age. Examples include bones, shells, exoskeletons, stone imprints of animals or microbes, objects preserved in amber, hair, petrified wood and DNA remnants. The totality of fossils is known as the fossil record. Though the fossil record is incomplete, numerous studies have demonstrated that there is enough information available to give a good understanding of the pattern of diversification of life on Earth. In addition, the record can predict and fill gaps, such as the discovery of Tiktaalik in the Arctic of Canada. Paleontology includes the study of fossils: their age, method of formation, and evolutionary significance. Specimens are sometimes considered to be fossils if they are over 10,000 years old. The oldest fossils are around 3.48 billion to 4.1 billion years old. The observation in the 19th century that certain fossils were associated with certain rock strata led to the recognition of a geological timescale and the relative ages of different fossils. The development of radiometric dating techniques in the early 20th century allowed scientists to quantitatively measure the absolute ages of rocks and the fossils they host.

There are many processes that lead to fossilization, including permineralization, casts and molds, authigenic mineralization, replacement and recrystallization, adpression, carbonization, and bioimmuration. Fossils vary in size from one-micrometre (1 μm) bacteria to dinosaurs and trees many meters long and weighing many tons. A fossil normally preserves only a portion of the deceased organism, usually that portion that was partially mineralized during life, such as the bones and teeth of vertebrates, or the chitinous or calcareous exoskeletons of invertebrates. Fossils may also consist of the marks left behind by the organism while it was alive, such as animal tracks or feces (coprolites). These types of fossil are called trace fossils or ichnofossils, as opposed to body fossils. Some fossils are biochemical and are called chemofossils or biosignatures.

History of study
Gathering fossils dates at least to the beginning of recorded history. The fossils themselves are referred to as the fossil record. The fossil record was one of the early sources of data underlying the study of evolution and continues to be relevant to the history of life on Earth. Paleontologists examine the fossil record to understand the process of evolution and the way particular species have evolved.

Ancient civilizations
Fossils have been visible and common throughout most of natural history, and so documented human interaction with them goes back as far as recorded history, or earlier. There are many examples of paleolithic stone knives in Europe with fossil echinoderms set precisely at the hand grip, dating back to Homo heidelbergensis and Neanderthals. These ancient peoples also drilled holes through the center of those round fossil shells, apparently using them as beads for necklaces. The ancient Egyptians gathered fossils of species that resembled the bones of modern species they worshipped. The god Set was associated with the hippopotamus; therefore fossilized bones of hippo-like species were kept in that deity's temples. Five-rayed fossil sea urchin shells were associated with the deity Sopdu, the Morning Star, equivalent of Venus in Roman mythology.
Fossils appear to have directly contributed to the mythology of many civilizations, including the ancient Greeks. The classical Greek historian Herodotus wrote of an area near Hyperborea where gryphons protected golden treasure. There was indeed gold mining in that approximate region, where beaked Protoceratops skulls were common as fossils. A later Greek scholar, Aristotle, eventually realized that fossil seashells from rocks were similar to those found on the beach, indicating the fossils were once living animals. He had previously explained them in terms of vaporous exhalations, which the Persian polymath Avicenna modified into the theory of petrifying fluids. Recognition of fossil seashells as originating in the sea was built upon in the 14th century by Albert of Saxony, and accepted in some form by most naturalists by the 16th century.

The Roman naturalist Pliny the Elder wrote of "tongue stones", which he called glossopetra. These were fossil shark teeth, thought by some classical cultures to look like the tongues of people or snakes. He also wrote about the horns of Ammon, which are fossil ammonites, whence the group of shelled octopus-cousins ultimately draws its modern name. Pliny also makes one of the earlier known references to toadstones, thought until the 18th century to be a magical cure for poison originating in the heads of toads, but which are fossil teeth from Lepidotes, a Cretaceous ray-finned fish.

The Plains tribes of North America are thought to have similarly associated fossils, such as the many intact pterosaur fossils naturally exposed in the region, with their own mythology of the thunderbird. There is no such direct mythological connection known from prehistoric Africa, but there is considerable evidence of tribes there excavating and moving fossils to ceremonial sites, apparently treating them with some reverence. In Japan, fossil shark teeth were associated with the mythical tengu, thought to be the razor-sharp claws of the creature, documented some time after the 8th century AD.

In medieval China, the fossil bones of ancient mammals, including those of Homo erectus, were often mistaken for "dragon bones" and used as medicine and aphrodisiacs. In addition, some of these fossil bones were collected as "art" by scholars, who left inscriptions on various artifacts indicating when they were added to a collection. One good example is the famous scholar Huang Tingjian of the Song dynasty, who during the 11th century kept a specific seashell fossil with his own poem engraved on it. In his Dream Pool Essays, published in 1088, the Song dynasty Chinese scholar-official Shen Kuo hypothesized that marine fossils found in a geological stratum of mountains located hundreds of miles from the Pacific Ocean were evidence that a prehistoric seashore had once existed there and shifted over centuries of time. His observation of petrified bamboos in the dry northern climate zone of what is now Yan'an, Shaanxi province, China, led him to advance early ideas of gradual climate change, since bamboo naturally grows in wetter climates.

In medieval Christendom, fossilized sea creatures on mountainsides were seen as proof of the biblical deluge of Noah's Ark. After observing the existence of seashells in mountains, the ancient Greek philosopher Xenophanes (c. 570 – 478 BC) speculated that the world was once inundated in a great flood that buried living creatures in drying mud.
In 1027, the Persian polymath Avicenna offered an explanation of the stoniness of fossils in The Book of Healing. From the 13th century to the present day, scholars have pointed out that the fossil skulls of Deinotherium giganteum, found in Crete and Greece, might have been interpreted as the skulls of the Cyclopes of Greek mythology, and are possibly the origin of that Greek myth. Their skulls appear to have a single eye-hole in the front, just like their modern elephant cousins, though it is actually the opening for their trunk.

In Norse mythology, echinoderm shells (the round five-part button left over from a sea urchin) were associated with the god Thor, not only being incorporated in thunderstones, representations of Thor's hammer and subsequent hammer-shaped crosses as Christianity was adopted, but also kept in houses to garner Thor's protection. These grew into the shepherd's crowns of English folklore, used for decoration and as good luck charms, placed by the doorway of homes and churches. In Suffolk, a different species was used as a good-luck charm by bakers, who referred to them as fairy loaves, associating them with the similarly shaped loaves of bread they baked.

Early modern explanations
More scientific views of fossils emerged during the Renaissance. Leonardo da Vinci concurred with Aristotle's view that fossils were the remains of ancient life; for example, he noticed discrepancies in the biblical flood narrative as an explanation for fossil origins. In 1666, Nicholas Steno examined a shark, and made the association of its teeth with the "tongue stones" of ancient Greco-Roman mythology, concluding that those were not in fact the tongues of venomous snakes, but the teeth of some long-extinct species of shark.

Robert Hooke (1635–1703) included micrographs of fossils in his Micrographia and was among the first to observe fossil forams. His observations on fossils, which he stated to be the petrified remains of creatures some of which no longer existed, were published posthumously in 1705.

William Smith (1769–1839), an English canal engineer, observed that rocks of different ages (based on the law of superposition) preserved different assemblages of fossils, and that these assemblages succeeded one another in a regular and determinable order. He observed that rocks from distant locations could be correlated based on the fossils they contained. He termed this the principle of faunal succession. This principle became one of Darwin's chief pieces of evidence that biological evolution was real.

Georges Cuvier came to believe that most if not all the animal fossils he examined were remains of extinct species. This led Cuvier to become an active proponent of the geological school of thought called catastrophism, a case he made near the end of his 1796 paper on living and fossil elephants.

Interest in fossils, and geology more generally, expanded during the early nineteenth century. In Britain, Mary Anning's discoveries of fossils, including the first complete ichthyosaur and a complete plesiosaurus skeleton, sparked both public and scholarly interest.

Linnaeus and Darwin
Early naturalists well understood the similarities and differences of living species, leading Linnaeus to develop a hierarchical classification system still in use today. Darwin and his contemporaries first linked the hierarchical structure of the tree of life with the then very sparse fossil record.
Darwin eloquently described a process of descent with modification, or evolution, whereby organisms either adapt to natural and changing environmental pressures, or they perish. When Darwin wrote On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, the oldest animal fossils were those from the Cambrian Period, now known to be about 540 million years old. He worried about the absence of older fossils because of the implications for the validity of his theories, but he expressed hope that such fossils would be found, noting that "only a small portion of the world is known with accuracy." Darwin also pondered the sudden appearance of many groups (i.e. phyla) in the oldest known Cambrian fossiliferous strata.

After Darwin
Since Darwin's time, the fossil record has been extended back to between 2.3 and 3.5 billion years ago. Most of these Precambrian fossils are microscopic bacteria or microfossils. However, macroscopic fossils are now known from the late Proterozoic. The Ediacara biota (also called the Vendian biota), dating from 575 million years ago, collectively constitutes a richly diverse assembly of early multicellular eukaryotes.

The fossil record and faunal succession form the basis of the science of biostratigraphy, or determining the age of rocks based on embedded fossils. For the first 150 years of geology, biostratigraphy and superposition were the only means for determining the relative age of rocks. The geologic time scale was developed based on the relative ages of rock strata as determined by the early paleontologists and stratigraphers. Since the early years of the twentieth century, absolute dating methods, such as radiometric dating (including potassium/argon, argon/argon, uranium series, and, for very recent fossils, radiocarbon dating) have been used to verify the relative ages obtained by fossils and to provide absolute ages for many fossils. Radiometric dating has shown that the earliest known stromatolites are over 3.4 billion years old.

Modern era
Paleontology has joined with evolutionary biology to share the interdisciplinary task of outlining the tree of life, which inevitably leads backwards in time to Precambrian microscopic life when cell structure and functions evolved. Earth's deep time in the Proterozoic and deeper still in the Archean is only "recounted by microscopic fossils and subtle chemical signals." Molecular biologists, using phylogenetics, can compare protein amino acid or nucleotide sequence homology (i.e., similarity) to evaluate taxonomy and evolutionary distances among organisms, with limited statistical confidence. The study of fossils, on the other hand, can more specifically pinpoint when and in what organism a mutation first appeared. Phylogenetics and paleontology work together in the clarification of science's still dim view of the appearance of life and its evolution.

Niles Eldredge's study of the Phacops trilobite genus supported the hypothesis that modifications to the arrangement of the trilobite's eye lenses proceeded by fits and starts over millions of years during the Devonian. Eldredge's interpretation of the Phacops fossil record was that the aftermaths of the lens changes, but not the rapidly occurring evolutionary process, were fossilized. This and other data led Stephen Jay Gould and Niles Eldredge to publish their seminal paper on punctuated equilibrium in 1972.
Synchrotron X-ray tomographic analysis of early Cambrian bilaterian embryonic microfossils has yielded new insights into metazoan evolution at its earliest stages. The tomography technique provides previously unattainable three-dimensional resolution at the limits of fossilization. Fossils of two enigmatic bilaterians, the worm-like Markuelia and a putative, primitive protostome, Pseudooides, provide a peek at germ layer embryonic development. These 543-million-year-old embryos support the emergence of some aspects of arthropod development earlier than previously thought, in the late Proterozoic. The preserved embryos from China and Siberia underwent rapid diagenetic phosphatization, resulting in exquisite preservation, including cell structures. This research is a notable example of how knowledge encoded by the fossil record continues to contribute otherwise unattainable information on the emergence and development of life on Earth. For example, the research suggests Markuelia has closest affinity to priapulid worms, and is adjacent to the evolutionary branching of Priapulida, Nematoda and Arthropoda.

Despite significant advances in uncovering and identifying paleontological specimens, it is generally accepted that the fossil record is vastly incomplete. Approaches for measuring the completeness of the fossil record have been developed for numerous subsets of species, including those grouped taxonomically, temporally, environmentally/geographically, or in sum. This encompasses the subfield of taphonomy and the study of biases in the paleontological record.

Dating/Age
Stratigraphy and estimations
Paleontology seeks to map out how life evolved across geologic time. A substantial hurdle is the difficulty of working out fossil ages. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to its decay products shows how long ago the radioactive element was incorporated into the rock. In symbols, a sample that now contains P atoms of the parent isotope and D atoms of its daughter product has age t = ln(1 + D/P)/λ, where λ is the parent's decay constant. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are volcanic ash layers, which may provide termini for the intervening sediments.

Consequently, paleontologists rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age must lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly adjacent. However, fossils of species that survived for a relatively short time can be used to match isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age have traces of E. pseudoplanus, they have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and occupy a short time range to be useful.
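Both dating ideas above reduce to short calculations. The following Python sketch is illustrative only; the species names, time ranges, and function names are invented for the example. The first function applies the radiometric relation t = ln(1 + D/P)/λ; the second intersects the known time ranges of index fossils found together in one bed.

    from math import log

    def radiometric_age(parent_atoms, daughter_atoms, half_life_myr):
        """Age in million years from t = ln(1 + D/P) / lambda."""
        decay_constant = log(2) / half_life_myr
        return log(1 + daughter_atoms / parent_atoms) / decay_constant

    # Hypothetical index-fossil ranges in millions of years ago (older, younger).
    ranges = {"species_A": (470, 455), "species_B": (465, 440)}

    def biostratigraphic_age(found_species):
        """Intersect the known ranges of the index fossils found in a bed."""
        oldest = min(ranges[s][0] for s in found_species)
        youngest = max(ranges[s][1] for s in found_species)
        return oldest, youngest  # the bed's age lies within this interval

    print(radiometric_age(1000.0, 1000.0, 704.0))  # one half-life: ~704 Myr
    print(biostratigraphic_age(["species_A", "species_B"]))  # (465, 455)

The narrower the overlap of the ranges, the more precise the biostratigraphic date, which is why short-lived, widespread species make the best index fossils.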
Misleading results are produced if the index fossils are incorrectly dated. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching rocks of the same age across continents. Family-tree relationships also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved more than X million years ago. It is also possible to estimate how long ago two living clades diverged, in other words approximately how long ago their last common ancestor must have lived, by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two.

Limitations
Organisms are only rarely preserved as fossils, even in the best of circumstances, and only a fraction of such fossils have been discovered. This is illustrated by the fact that the number of species known through the fossil record is less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, only a small percentage of life-forms can be expected to be represented in discoveries, and each discovery represents only a snapshot of the process of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which will never demonstrate an exact half-way point. The fossil record is strongly biased toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. It is replete with the mollusks, the vertebrates, the echinoderms, the brachiopods and some groups of arthropods.

Sites
Lagerstätten
Fossil sites with exceptional preservation—sometimes including preserved soft tissues—are known as Lagerstätten, German for "storage places". These formations may have resulted from carcass burial in an anoxic environment with minimal bacteria, thus slowing decomposition. Lagerstätten span geological time from the Cambrian period to the present. Worldwide, some of the best examples of near-perfect fossilization are the Cambrian Maotianshan Shales and Burgess Shale, the Devonian Hunsrück Slates, the Jurassic Solnhofen Limestone, and the Carboniferous Mazon Creek localities.

Fossilization processes
Recrystallization
A fossil is said to be recrystallized when the original skeletal compounds are still present but in a different crystal form, such as from aragonite to calcite.

Replacement
Replacement occurs when the shell, bone, or other tissue is replaced with another mineral. In some cases mineral replacement of the original shell occurs so gradually and at such fine scales that microstructural features are preserved despite the total loss of original material. Scientists can use such fossils when researching the anatomical structure of ancient species. Several species of saurids have been identified from mineralized dinosaur fossils.
Permineralization
Permineralization is a process of fossilization that occurs when an organism is buried. The empty spaces within an organism (spaces filled with liquid or gas during life) become filled with mineral-rich groundwater. Minerals precipitate from the groundwater, occupying the empty spaces. This process can occur in very small spaces, such as within the cell wall of a plant cell. Small-scale permineralization can produce very detailed fossils. For permineralization to occur, the organism must become covered by sediment soon after death, otherwise the remains are destroyed by scavengers or decomposition. The degree to which the remains are decayed when covered determines the later details of the fossil. Some fossils consist only of skeletal remains or teeth; other fossils contain traces of skin, feathers or even soft tissues. This is a form of diagenesis.

Phosphatization
Phosphatization is a process of fossilization in which organic matter is replaced by abundant calcium-phosphate minerals. The resulting fossils tend to be particularly dense and have a dark coloration that ranges from dark orange to black.

Pyritization
This form of fossil preservation involves the elements sulfur and iron. Organisms may become pyritized when they are in marine sediments saturated with iron sulfides. As organic matter decays it releases sulfide, which reacts with dissolved iron in the surrounding waters to form pyrite. Pyrite replaces carbonate shell material due to an undersaturation of carbonate in the surrounding waters. Some plants become pyritized when they are in a clay terrain, but to a lesser extent than in a marine environment. Some pyritized fossils include Precambrian microfossils, marine arthropods and plants.

Silicification
In silicification, the precipitation of silica from saturated water bodies is crucial for fossil preservation. The mineral-laden water permeates the pores and cells of a dead organism, where it becomes a gel. Over time, the gel dehydrates, forming a silica-rich crystal structure, which can be expressed in the form of quartz, chalcedony, agate or opal, among others, with the shape of the original remains.

Casts and molds
In some cases, the original remains of the organism completely dissolve or are otherwise destroyed. The remaining organism-shaped hole in the rock is called an external mold. If this void is later filled with sediment, the resulting cast resembles what the organism looked like. An endocast, or internal mold, is the result of sediments filling an organism's interior, such as the inside of a bivalve or snail or the hollow of a skull. Endocasts are sometimes termed Steinkerns, especially when bivalves are preserved this way.

Authigenic mineralization
This is a special form of cast and mold formation. If the chemistry is right, the organism (or a fragment of the organism) can act as a nucleus for the precipitation of minerals such as siderite, resulting in a nodule forming around it. If this happens rapidly, before significant decay of the organic tissue, very fine three-dimensional morphological detail can be preserved. Nodules from the Carboniferous Mazon Creek fossil beds of Illinois, US, are among the best documented examples of such mineralization.

Adpression (compression-impression)
Compression fossils, such as those of fossil ferns, are the result of chemical reduction of the complex organic molecules composing the organism's tissues. In this case the fossil consists of original material, albeit in a geochemically altered state.
This chemical change is an expression of diagenesis. Often what remains is a carbonaceous film known as a phytoleim, in which case the fossil is known as a compression. Often, however, the phytoleim is lost and all that remains is an impression of the organism in the rock—an impression fossil. In many cases, however, compressions and impressions occur together. For instance, when the rock is broken open, the phytoleim will often be attached to one part (compression), whereas the counterpart will just be an impression. For this reason, one term covers the two modes of preservation: adpression.

Carbonization and coalification
Fossils that are carbonized or coalified consist of the organic remains which have been reduced primarily to the chemical element carbon. Carbonized fossils consist of a thin film which forms a silhouette of the original organism, and the original organic remains were typically soft tissues. Coalified fossils consist primarily of coal, and the original organic remains were typically woody in composition.

Soft tissue, cell and molecular preservation
Given the antiquity of such specimens, an unexpected exception to the usual alteration of an organism's tissues by chemical reduction of complex organic molecules during fossilization has been the discovery of soft tissue in dinosaur fossils, including blood vessels, and the isolation of proteins and evidence for DNA fragments. In 2014, Mary Schweitzer and her colleagues reported the presence of iron particles (goethite, α-FeO(OH)) associated with soft tissues recovered from dinosaur fossils. Based on various experiments that studied the interaction of iron in haemoglobin with blood vessel tissue, they proposed that solution hypoxia coupled with iron chelation enhances the stability and preservation of soft tissue, and provides the basis for an explanation of the unforeseen preservation of fossil soft tissues. However, a slightly older study based on eight taxa ranging in time from the Devonian to the Jurassic found that reasonably well-preserved fibrils that probably represent collagen were preserved in all these fossils, and that the quality of preservation depended mostly on the arrangement of the collagen fibers, with tight packing favoring good preservation. There seemed to be no correlation between geological age and quality of preservation within that timeframe.

Bioimmuration
Bioimmuration occurs when a skeletal organism overgrows or otherwise subsumes another organism, preserving the latter, or an impression of it, within the skeleton. Usually it is a sessile skeletal organism, such as a bryozoan or an oyster, which grows along a substrate, covering other sessile sclerobionts. Sometimes the bioimmured organism is soft-bodied and is then preserved in negative relief as a kind of external mold. There are also cases where an organism settles on top of a living skeletal organism that grows upwards, preserving the settler in its skeleton. Bioimmuration is known in the fossil record from the Ordovician to the Recent.

Types
Index
Index fossils (also known as guide fossils, indicator fossils or zone fossils) are fossils used to define and identify geologic periods (or faunal stages). They work on the premise that, although different sediments may look different depending on the conditions under which they were deposited, they may include the remains of the same species of fossil. The shorter the species' time range, the more precisely different sediments can be correlated, and so rapidly evolving species' fossils are particularly valuable.
The best index fossils are common, easy to identify at species level and have a broad distribution—otherwise the likelihood of finding and recognizing one in the two sediments is poor.

Trace
Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilized hard parts, and they reflect animal behaviours. Many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).

Coprolites are classified as trace fossils as opposed to body fossils, as they give evidence for the animal's behaviour (in this case, diet) rather than morphology. They were first described by William Buckland in 1829. Prior to this they were known as "fossil fir cones" and "bezoar stones." They serve a valuable purpose in paleontology because they provide direct evidence of the predation and diet of extinct organisms. Coprolites may range in size from a few millimetres to over 60 centimetres.

Transitional
A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation.

Microfossils
Microfossil is a descriptive term applied to fossilized plants and animals whose size is just at or below the level at which the fossil can be analyzed by the naked eye. A commonly applied cutoff point between "micro" and "macro" fossils is 1 mm. Microfossils may either be complete (or near-complete) organisms in themselves (such as the marine plankters foraminifera and coccolithophores) or component parts (such as small teeth or spores) of larger animals or plants. Microfossils are of critical importance as a reservoir of paleoclimate information, and are also commonly used by biostratigraphers to assist in the correlation of rock units.

Resin
Fossil resin (colloquially called amber) is a natural polymer found in many types of strata throughout the world, even the Arctic. The oldest fossil resin dates to the Triassic, though most dates to the Cenozoic. The excretion of the resin by certain plants is thought to be an evolutionary adaptation for protection from insects and to seal wounds. Fossil resin often contains other fossils called inclusions that were captured by the sticky resin. These include bacteria, fungi, other plants, and animals. Animal inclusions are usually small invertebrates, predominantly arthropods such as insects and spiders, and only extremely rarely a vertebrate such as a small lizard. Preservation of inclusions can be exquisite, including small fragments of DNA.

Derived or reworked
A derived, reworked or remanié fossil is a fossil found in rock that accumulated significantly later than when the fossilized animal or plant died.
Reworked fossils are created by erosion exhuming (freeing) fossils from the rock formation in which they were originally deposited and their redeposition in a younger sedimentary deposit.

Wood
Fossil wood is wood that is preserved in the fossil record. Wood is usually the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified. The fossil wood may be the only part of the plant that has been preserved; therefore such wood may get a special kind of botanical name. This will usually include "xylon" and a term indicating its presumed affinity, such as Araucarioxylon (wood of Araucaria or some related genus), Palmoxylon (wood of an indeterminate palm), or Castanoxylon (wood of an indeterminate chinkapin).

Subfossil
The term subfossil can be used to refer to remains, such as bones, nests, or fecal deposits, whose fossilization process is not complete, either because the length of time since the animal involved was living is too short or because the conditions in which the remains were buried were not optimal for fossilization. Subfossils are often found in caves or other shelters where they can be preserved for thousands of years. The main importance of subfossil vs. fossil remains is that the former contain organic material, which can be used for radiocarbon dating or extraction and sequencing of DNA, protein, or other biomolecules. Additionally, isotope ratios can provide much information about the ecological conditions under which extinct animals lived. Subfossils are useful for studying the evolutionary history of an environment and can be important to studies in paleoclimatology.

Subfossils are often found in depositional environments, such as lake sediments, oceanic sediments, and soils. Once deposited, physical and chemical weathering can alter the state of preservation, and small subfossils can also be ingested by living organisms. Subfossil remains that date from the Mesozoic are exceptionally rare, are usually in an advanced state of decay, and are consequently much disputed. The vast bulk of subfossil material comes from Quaternary sediments, including many subfossilized chironomid head capsules, ostracod carapaces, diatoms, and foraminifera. For remains such as molluscan seashells, which frequently do not change their chemical composition over geological time, and may occasionally even retain such features as the original color markings for millions of years, the label 'subfossil' is applied to shells that are understood to be thousands of years old, but are of Holocene age, and therefore are not old enough to be from the Pleistocene epoch.

Chemical fossils
Chemical fossils, or chemofossils, are chemicals found in rocks and fossil fuels (petroleum, coal, and natural gas) that provide an organic signature for ancient life. Molecular fossils and isotope ratios represent two types of chemical fossils. The oldest traces of life on Earth are fossils of this type, including carbon isotope anomalies found in zircons that imply the existence of life as early as 4.1 billion years ago.

Stromatolites
Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Stromatolites provide some of the most ancient fossil records of life on Earth, dating back more than 3.5 billion years. Stromatolites were much more abundant in Precambrian times.
While older, Archean fossil remains are presumed to be colonies of cyanobacteria, younger (that is, Proterozoic) fossils may be primordial forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia. The earliest stromatolite of confirmed microbial origin dates to 2.724 billion years ago. A 2009 discovery provides strong evidence of microbial stromatolites extending as far back as 3.45 billion years ago.

Stromatolites are a major constituent of the fossil record for life's first 3.5 billion years, peaking about 1.25 billion years ago. They subsequently declined in abundance and diversity, which by the start of the Cambrian had fallen to 20% of their peak. The most widely supported explanation is that stromatolite builders fell victim to grazing creatures (the Cambrian substrate revolution), implying that sufficiently complex organisms were common over 1 billion years ago. The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the end-Ordovician and end-Permian extinctions decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes.

While prokaryotic cyanobacteria themselves reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. Cyanobacteria (as well as extremophile Gammaproteobacteria) are thought to be largely responsible for increasing the amount of oxygen in the primeval Earth's atmosphere through their continuing photosynthesis. Cyanobacteria use water, carbon dioxide and sunlight to create their food. A layer of mucus often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the mucus, which can be cemented by calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures, formed by cyanobacteria, that are common in the fossil record and in modern sediments.

The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides an extremely well exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx.

Pseudofossils
Pseudofossils are visual patterns in rocks that are produced by geologic processes rather than biologic processes. They can easily be mistaken for real fossils. Some pseudofossils, such as geological dendrite crystals, are formed by naturally occurring fissures in the rock that get filled up by percolating minerals.
Other types of pseudofossils are kidney ore (round shapes in iron ore) and moss agates, which look like moss or plant leaves. Concretions, spherical or ovoid-shaped nodules found in some sedimentary strata, were once thought to be dinosaur eggs, and are often mistaken for fossils as well.

Astrobiology
It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions. On 24 January 2014, NASA reported that studies by the Curiosity and Opportunity rovers on Mars would search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.

Art
According to one hypothesis, a Corinthian vase from the 6th century BC is the oldest artistic record of a vertebrate fossil, perhaps a Miocene giraffe combined with elements from other species. However, a subsequent study using artificial intelligence and expert evaluations rejected this idea, because mammals do not have the eye bones shown in the painted monster. Morphologically, the vase painting corresponds to a carnivorous reptile of the Varanidae family that still lives in regions once occupied by the ancient Greeks.

Trading and collecting
Fossil trading is the practice of buying and selling fossils. This is often done illegally with specimens stolen from research sites, costing science many important specimens each year. The problem is quite pronounced in China, where many specimens have been stolen. Fossil collecting (sometimes, in a non-scientific sense, fossil hunting) is the collection of fossils for scientific study, hobby, or profit. Fossil collecting, as practiced by amateurs, is the predecessor of modern paleontology, and many people still collect and study fossils as amateurs. Professionals and amateurs alike collect fossils for their scientific value.

As medicine
The use of fossils to address health issues is rooted in traditional medicine and includes the use of fossils as talismans. The specific fossil used to alleviate or cure an illness is often based on its resemblance to the symptoms or affected organ. The usefulness of fossils as medicine is almost entirely a placebo effect, though fossil material might conceivably have some antacid activity or supply some essential minerals. The use of dinosaur bones as "dragon bones" has persisted in Traditional Chinese medicine into modern times, with mid-Cretaceous dinosaur bones being used for the purpose in Ruyang County during the early 21st century.
Field-programmable gate array
A field-programmable gate array (FPGA) is a type of configurable integrated circuit that can be repeatedly programmed after manufacturing. FPGAs are a subset of logic devices referred to as programmable logic devices (PLDs). They consist of an array of programmable logic blocks with a connecting grid that can be configured "in the field" to interconnect with other logic blocks to perform various digital functions. FPGAs are often used in limited (low) quantity production of custom-made products, and in research and development, where the higher cost of individual FPGAs is not as important, and where creating and manufacturing a custom circuit would not be feasible. Other applications for FPGAs include the telecommunications, automotive, aerospace, and industrial sectors, which benefit from their flexibility, high signal processing speed, and parallel processing abilities.

An FPGA configuration is generally written using a hardware description language (HDL), e.g. VHDL, similar to the ones used for application-specific integrated circuits (ASICs). Circuit diagrams were formerly used to write the configuration. The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more sophisticated blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software. FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture. FPGAs are also commonly used during the development of ASICs to speed up the simulation process.

History
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultraviolet lamp on the die to erase the EPROM cells that held the device configuration. Xilinx produced the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs).

In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (later Microsemi, now Microchip) was serving about 18 percent of the market. The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking.
By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications. By 2013, Altera (31 percent), Xilinx (36 percent) and Actel (10 percent) together represented approximately 77 percent of the FPGA market. Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.

Growth
The following timelines indicate progress in different aspects of FPGA design.

Gates
1987: 9,000 gates (Xilinx)
1992: 600,000 (Naval Surface Warfare Center)
Early 2000s: millions
2013: 50 million (Xilinx)

Market size
1985: first commercial FPGA, the Xilinx XC2064
1987: $14 million
: >$385 million
2005: $1.9 billion
2010 estimate: $2.75 billion
2013: $5.4 billion
2020 estimate: $9.8 billion
2030 estimate: $23.34 billion

Design starts
A design start is a new custom design for implementation on an FPGA.
2005: 80,000
2008: 90,000

Design
Contemporary FPGAs have ample logic gates and RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning helps resource allocation within FPGAs to meet these timing constraints.

Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin. This allows the user to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillator driver circuitry, on-chip RC oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few mixed-signal FPGAs have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system on a chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.

Logic blocks
The most common FPGA architecture consists of an array of logic blocks called configurable logic blocks (CLBs) or logic array blocks (LABs) (depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array.
"An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs." In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, the entire adder or parts of it are stored as functions into the LUTs in order to save space. Hard blocks Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high-speed I/O logic and embedded memories. Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI or PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high-performance signal conditioning circuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA. Soft core An alternate approach to using hard macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at run time, which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip. 
Integration

In 2012 the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete system on a programmable chip. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters in their flash memory-based FPGA fabric.

Clocking

Most of the logic inside an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as an H tree, so that these signals can be delivered with minimal skew. FPGAs may contain analog phase-locked loop or delay-locked loop components to synthesize new clock frequencies and manage jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming a separate clock domain. These clock signals can be generated locally by an oscillator or they can be recovered from a data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. Some FPGAs contain dual-port RAM blocks that are capable of working with different clocks, aiding in the construction of FIFOs and dual-port buffers that bridge clock domains.

3D architectures

To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines would be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies. Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA. Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other dies and technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.

Programming

To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is better suited to working with large structures because it makes it possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules. Using an electronic design automation tool, a technology-mapped netlist is generated.
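Concretely, a technology-mapped netlist can be pictured as a graph of primitive cells connected by named nets. The Python sketch below is a toy illustration of that idea; the gate set, the net names, and the evaluate function are my own inventions for the example, not any tool's actual format.

# Toy view of a technology-mapped netlist: every logic function has been
# mapped onto a primitive (here, named 2-input gates), and nets connect them.

NETLIST = {
    # net name: (primitive, input nets)
    'n1': ('AND', ['a', 'b']),
    'n2': ('XOR', ['n1', 'c']),
    'out': ('OR', ['n2', 'a']),
}

PRIMITIVES = {
    'AND': lambda x, y: x & y,
    'XOR': lambda x, y: x ^ y,
    'OR':  lambda x, y: x | y,
}

def evaluate(netlist, inputs):
    """Resolve each net recursively; memoize so shared nets are computed once."""
    values = dict(inputs)
    def net(name):
        if name not in values:
            prim, ins = netlist[name]
            values[name] = PRIMITIVES[prim](*[net(i) for i in ins])
        return values[name]
    return {name: net(name) for name in netlist}

print(evaluate(NETLIST, {'a': 1, 'b': 0, 'c': 1}))
# {'n1': 0, 'n2': 1, 'out': 1}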
The netlist can then be fit to the actual FPGA architecture using a process called place and route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the results using timing analysis, simulation, and other verification and validation techniques. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA via a serial interface (JTAG) or to an external memory device such as an EEPROM. The most common HDLs are VHDL and Verilog. National Instruments' LabVIEW graphical programming language (sometimes referred to as G) has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process and make HDL more robust and flexible. Verilog has a C-like syntax, unlike VHDL. To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and are typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license). Such designs are known as open-source hardware. In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description, where simulation is repeated to confirm the synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delay values can be back-annotated onto the netlist, and the simulation can be run again with these values. More recently, OpenCL (Open Computing Language) has been used by programmers to take advantage of the performance and power efficiency that FPGAs provide. OpenCL allows programmers to develop code in the C programming language. For further information, see high-level synthesis and C to HDL. Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example, flash memory or EEPROM devices may load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS. Rarer alternatives to the SRAM approach include:

Fuse: one-time programmable. Bipolar. Obsolete.
Antifuse: one-time programmable. CMOS. Examples: Actel SX and Axcelerator families; QuickLogic Eclipse II family.
PROM: programmable read-only memory technology. One-time programmable because of plastic packaging. Obsolete.
EPROM: erasable programmable read-only memory technology. One-time programmable, but with a window; can be erased with ultraviolet (UV) light. CMOS. Obsolete.
EEPROM: electrically erasable programmable read-only memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
Flash: flash-erase EPROM technology. Can be erased, even in plastic packages.
Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is, therefore, less expensive to manufacture. CMOS. Example: Actel ProASIC family.

Manufacturers

In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now part of Intel) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market. Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus), which enables engineers to design, analyze, simulate, and synthesize (compile) their designs. In March 2010, Tabula announced their FPGA technology that uses time-multiplexed logic and interconnect, claiming potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down. On June 1, 2015, Intel announced it would acquire Altera for approximately US$16.7 billion and completed the acquisition on December 30, 2015. On October 27, 2020, AMD announced it would acquire Xilinx, and completed the acquisition, valued at about US$50 billion, in February 2022. In February 2024, Altera became independent of Intel again. Other manufacturers include:

Achronix, manufacturing SRAM-based FPGAs with 1.5 GHz fabric speed
Altium, which provides a system-on-FPGA hardware-software design environment
Cologne Chip, German government-backed designer and producer of FPGAs
Efinix, which offers small to medium-sized FPGAs that combine logic and routing interconnects into a configurable XLR cell
GOWIN Semiconductors, manufacturing small and medium-sized SRAM- and flash-based FPGAs; they also offer pin-compatible replacements for a few Xilinx, Altera and Lattice products
Lattice Semiconductor, which manufactures low-power SRAM-based FPGAs featuring integrated configuration flash, instant-on and live reconfiguration
SiliconBlue Technologies, which provided extremely low-power SRAM-based FPGAs with optional integrated nonvolatile configuration memory; acquired by Lattice in 2011
Microchip:
Microsemi (previously Actel), producing antifuse, flash-based, mixed-signal FPGAs; acquired by Microchip in 2018
Atmel, a second source of some Altera-compatible devices; also the FPSLIC mentioned above; acquired by Microchip in 2016
QuickLogic, which manufactures ultra-low-power sensor hubs and extremely low-power, low-density SRAM-based FPGAs, with display bridges, MIPI and RGB inputs, and MIPI, RGB and LVDS outputs

Applications

An FPGA can be used to solve any problem which is computable. FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. But their advantage lies in that they are significantly faster for some applications because of their parallel nature and optimality in terms of the number of gates used for certain processes. FPGAs were originally introduced as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve of digital signal processors (DSPs) began to use FPGAs instead.
The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems. The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging. Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a general-purpose processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. FPGAs are seeing increased use as AI accelerators, including in Microsoft's Project Catapult, and for accelerating artificial neural networks for machine learning applications. Originally, FPGAs were reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to quickly bring a product to market. By 2017, new cost and performance dynamics had broadened the range of viable applications. Other uses for FPGAs include:

Space (with radiation hardening)
Hardware security modules
High-speed financial transactions
Retrocomputing (e.g. the MARS and MiSTer FPGA projects)

Usage by United States military

FPGAs play a crucial role in modern military communications, especially in systems like the Joint Tactical Radio System (JTRS) and in devices from companies such as Thales and Harris Corporation. Their flexibility and programmability make them ideal for military communications, offering customizable and secure signal processing. In the JTRS, used by the US military, FPGAs provide adaptability and real-time processing, crucial for meeting various communication standards and encryption methods. Thales uses FPGA technology in designing communication devices that fulfill the rigorous demands of military use, including rapid reconfiguration and robust security. Similarly, Harris Corporation, now part of L3Harris Technologies, incorporates FPGAs in its defense and commercial communication solutions, enhancing signal processing and system security.

L3Harris
Rapidly adaptable standards-compliant radio (RASOR): a modular open system approach (MOSA) solution supporting over 50 data links and waveforms.
ASPEN technology platform: consists of proven hardware modules with programmable software and FPGA options for advanced, configurable data links.
AN/PRC-117F(C) radios: supported the U.S. Air Force Electronic Systems Command, strengthening Harris' role as a full-spectrum communications system supplier.

Thales
SYNAPS radio family: utilizes software-defined radio (SDR) technology, typically involving FPGAs for enhanced flexibility and performance.
AN/PRC-148 (multiband inter/intra team radio, MBITR): a small-form-factor, multiband, multi-mode SDR used in Afghanistan and Iraq.
JTRS Cluster 2 handheld radio: currently in development; recently completed a successful early operational assessment.
Security

FPGAs have both advantages and disadvantages compared to ASICs or secure microprocessors where hardware security is concerned. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loaded it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers, such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory. Physical unclonable functions (PUFs) are integrated circuits that have their own unique signatures, due to process variation, and can also be used to secure FPGAs while taking up very little hardware space. FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProASIC 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi. With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks. In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProASIC 3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data. In 2020, a critical vulnerability (named "Starbleed") was discovered in all Xilinx 7-series FPGAs that rendered bitstream encryption useless. There is no workaround, and Xilinx did not produce a hardware revision. UltraScale and later devices, already on the market at the time, were not affected.

Similar technologies

Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fix bugs, and often shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacturing their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running. The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers.
As a result, CPLDs are less flexible, but have the advantage of more predictable timing delays. FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration, while FPGAs usually (but not always) require external non-volatile memory. When a design requires simple instant-on (logic is already configured at power-up), CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controlling the reset and boot sequence of the complete circuit board. Therefore, depending on the application, it may be judicious to use both FPGAs and CPLDs in a single design.
https://en.wikipedia.org/wiki/Fatty%20acid
Fatty acid
In chemistry, particularly in biochemistry, a fatty acid is a carboxylic acid with an aliphatic chain, which is either saturated or unsaturated. Most naturally occurring fatty acids have an unbranched chain of an even number of carbon atoms, from 4 to 28. Fatty acids are a major component of the lipids (up to 70% by weight) in some species such as microalgae, but in some other organisms are not found in their standalone form, instead existing as three main classes of esters: triglycerides, phospholipids, and cholesteryl esters. In any of these forms, fatty acids are both important dietary sources of fuel for animals and important structural components for cells.

History

The concept of fatty acid (acide gras) was introduced in 1813 by Michel Eugène Chevreul, though he initially used some variant terms: graisse acide and acide huileux ("acid fat" and "oily acid").

Types of fatty acids

Fatty acids are classified in many ways: by length, by saturation vs unsaturation, by even vs odd carbon content, and by linear vs branched.

Length of fatty acids

Short-chain fatty acids (SCFAs) are fatty acids with aliphatic tails of five or fewer carbons (e.g. butyric acid).
Medium-chain fatty acids (MCFAs) are fatty acids with aliphatic tails of 6 to 12 carbons, which can form medium-chain triglycerides.
Long-chain fatty acids (LCFAs) are fatty acids with aliphatic tails of 13 to 21 carbons.
Very long chain fatty acids (VLCFAs) are fatty acids with aliphatic tails of 22 or more carbons.

Saturated fatty acids

Saturated fatty acids have no C=C double bonds. They have the formula CH3(CH2)nCOOH, where n is some positive integer. An important saturated fatty acid is stearic acid (n = 16), which when neutralized with sodium hydroxide is the most common form of soap.

Unsaturated fatty acids

Unsaturated fatty acids have one or more C=C double bonds. The C=C double bonds can give either cis or trans isomers.

cis
A cis configuration means that the two hydrogen atoms adjacent to the double bond stick out on the same side of the chain. The rigidity of the double bond freezes its conformation and, in the case of the cis isomer, causes the chain to bend and restricts the conformational freedom of the fatty acid. The more double bonds the chain has in the cis configuration, the less flexibility it has. When a chain has many cis bonds, it becomes quite curved in its most accessible conformations. For example, oleic acid, with one double bond, has a "kink" in it, whereas linoleic acid, with two double bonds, has a more pronounced bend. α-Linolenic acid, with three double bonds, favors a hooked shape. The effect of this is that, in restricted environments, such as when fatty acids are part of a phospholipid in a lipid bilayer or triglycerides in lipid droplets, cis bonds limit the ability of fatty acids to be closely packed, and therefore can affect the melting temperature of the membrane or of the fat. Cis unsaturated fatty acids, however, increase cellular membrane fluidity, whereas trans unsaturated fatty acids do not.

trans
A trans configuration, by contrast, means that the adjacent two hydrogen atoms lie on opposite sides of the chain. As a result, they do not cause the chain to bend much, and their shape is similar to straight saturated fatty acids. In most naturally occurring unsaturated fatty acids, each double bond has three (n−3), six (n−6), or nine (n−9) carbon atoms after it, and all double bonds have a cis configuration.
Most fatty acids in the trans configuration (trans fats) are not found in nature and are the result of human processing (e.g., hydrogenation). Some trans fatty acids also occur naturally in the milk and meat of ruminants (such as cattle and sheep). They are produced, by fermentation, in the rumen of these animals. They are also found in dairy products from the milk of ruminants, and may also be found in the breast milk of women who obtained them from their diet. The geometric differences between the various types of unsaturated fatty acids, as well as between saturated and unsaturated fatty acids, play an important role in biological processes, and in the construction of biological structures (such as cell membranes).

Even- vs odd-chained fatty acids

Most fatty acids are even-chained, e.g. stearic (C18) and oleic (C18), meaning they are composed of an even number of carbon atoms. Some fatty acids have odd numbers of carbon atoms; they are referred to as odd-chained fatty acids (OCFA). The most common OCFA are the saturated C15 and C17 derivatives, pentadecanoic acid and heptadecanoic acid respectively, which are found in dairy products. On a molecular level, OCFAs are biosynthesized and metabolized slightly differently from their even-chained relatives.

Branching

Most common fatty acids are straight-chain compounds, with no additional carbon atoms bonded as side groups to the main hydrocarbon chain. Branched-chain fatty acids contain one or more methyl groups bonded to the hydrocarbon chain.

Nomenclature

Carbon atom numbering

Most naturally occurring fatty acids have an unbranched chain of carbon atoms, with a carboxyl group (–COOH) at one end, and a methyl group (–CH3) at the other end. The position of each carbon atom in the backbone of a fatty acid is usually indicated by counting from 1 at the –COOH end. Carbon number x is often abbreviated C-x (or sometimes Cx), with x = 1, 2, 3, etc. This is the numbering scheme recommended by the IUPAC. Another convention uses letters of the Greek alphabet in sequence, starting with the first carbon after the carboxyl group. Thus carbon α (alpha) is C-2, carbon β (beta) is C-3, and so forth. Although fatty acids can be of diverse lengths, in this second convention the last carbon in the chain is always labelled as ω (omega), which is the last letter in the Greek alphabet. A third numbering convention counts the carbons from that end, using the labels "ω", "ω−1", "ω−2". Alternatively, the label "ω−x" is written "n−x", where the "n" is meant to represent the number of carbons in the chain. In either numbering scheme, the position of a double bond in a fatty acid chain is always specified by giving the label of the carbon closest to the carboxyl end. Thus, in an 18-carbon fatty acid, a double bond between C-12 (or ω−6) and C-13 (or ω−5) is said to be "at" position C-12 or ω−6. The IUPAC naming of the acid, such as "octadec-12-enoic acid" (or the more pronounceable variant "12-octadecenoic acid"), is always based on the "C" numbering. The notation Δx,y,... is traditionally used to specify a fatty acid with double bonds at positions x,y,.... (The capital Greek letter "Δ" (delta) corresponds to Roman "D", for Double bond.) Thus, for example, the 20-carbon arachidonic acid is Δ5,8,11,14, meaning that it has double bonds between carbons 5 and 6, 8 and 9, 11 and 12, and 14 and 15.
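The relationship between the two numbering schemes is simple arithmetic: in a chain of N carbons, a double bond "at" Δx sits N − x carbons from the ω end. A short Python sketch makes the conversion concrete (the function name is my own, chosen for illustration):

# Convert double-bond positions between the Delta (carboxyl-end) and
# omega/n-x (methyl-end) numbering conventions for an N-carbon fatty acid.

def delta_to_omega(n_carbons, delta_positions):
    """For each double bond at Delta-x, return its omega-(N - x) label."""
    return [n_carbons - x for x in delta_positions]

# Arachidonic acid: 20 carbons, Delta-5,8,11,14.
print(delta_to_omega(20, [5, 8, 11, 14]))  # [15, 12, 9, 6]
# The double bond closest to the methyl end is at omega-6, so arachidonic
# acid is classified as an omega-6 fatty acid.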
In the context of human diet and fat metabolism, unsaturated fatty acids are often classified by the position of the double bond closest to the ω carbon (only), even in the case of multiple double bonds such as the essential fatty acids. Thus linoleic acid (18 carbons, Δ9,12), γ-linolenic acid (18 carbons, Δ6,9,12), and arachidonic acid (20 carbons, Δ5,8,11,14) are all classified as "ω−6" fatty acids, meaning that their formula ends with –CH=CH–CH2–CH2–CH2–CH2–CH3. Fatty acids with an odd number of carbon atoms are called odd-chain fatty acids, whereas the rest are even-chain fatty acids. The difference is relevant to gluconeogenesis.

Naming of fatty acids

The following table describes the most common systems of naming fatty acids.

Free fatty acids

When circulating in the plasma (plasma fatty acids), not in their ester form, fatty acids are known as non-esterified fatty acids (NEFAs) or free fatty acids (FFAs). FFAs are always bound to a transport protein, such as albumin. FFAs also form from triglyceride food oils and fats by hydrolysis, contributing to the characteristic rancid odor. An analogous process happens in biodiesel, with risk of part corrosion.

Production

Industrial

Fatty acids are usually produced industrially by the hydrolysis of triglycerides, with the removal of glycerol (see oleochemicals). Phospholipids represent another source. Some fatty acids are produced synthetically by hydrocarboxylation of alkenes.

By animals

In animals, fatty acids are formed from carbohydrates predominantly in the liver, adipose tissue, and the mammary glands during lactation. Carbohydrates are converted into pyruvate by glycolysis as the first important step in the conversion of carbohydrates into fatty acids. Pyruvate is then decarboxylated to form acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate. The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids. Malonyl-CoA is then involved in a repeating series of reactions that lengthens the growing fatty acid chain by two carbons at a time. Almost all natural fatty acids, therefore, have even numbers of carbon atoms. When synthesis is complete, the free fatty acids are nearly always combined with glycerol (three fatty acids to one glycerol molecule) to form triglycerides, the main storage form of fatty acids, and thus of energy in animals. However, fatty acids are also important components of the phospholipids that form the phospholipid bilayers out of which all the membranes of the cell are constructed (the plasma membrane, and the membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus). The "uncombined fatty acids" or "free fatty acids" found in the circulation of animals come from the breakdown (or lipolysis) of stored triglycerides. Because they are insoluble in water, these fatty acids are transported bound to plasma albumin. The levels of "free fatty acids" in the blood are limited by the availability of albumin binding sites.
They can be taken up from the blood by all cells that have mitochondria (with the exception of the cells of the central nervous system). Fatty acids can only be broken down in mitochondria, by means of beta-oxidation followed by further combustion in the citric acid cycle to CO2 and water. Cells in the central nervous system, although they possess mitochondria, cannot take free fatty acids up from the blood, as the blood–brain barrier is impervious to most free fatty acids, excluding short-chain fatty acids and medium-chain fatty acids. These cells have to manufacture their own fatty acids from carbohydrates, as described above, in order to produce and maintain the phospholipids of their cell membranes, and those of their organelles.

Variation between animal species

Studies on the cell membranes of mammals and reptiles discovered that mammalian cell membranes are composed of a higher proportion of polyunsaturated fatty acids (DHA, an omega−3 fatty acid) than reptilian cell membranes. Studies on bird fatty acid composition have noted similar proportions to mammals, but with one-third less omega−3 fatty acid relative to omega−6 for a given body size. This fatty acid composition results in a more fluid cell membrane, but also one that is permeable to various ions, resulting in cell membranes that are more costly to maintain. This maintenance cost has been argued to be one of the key causes of the high metabolic rates and concomitant warm-bloodedness of mammals and birds. However, polyunsaturation of cell membranes may also occur in response to chronic cold temperatures. In fish, increasingly cold environments lead to increasingly high cell membrane content of both monounsaturated and polyunsaturated fatty acids, to maintain greater membrane fluidity (and functionality) at the lower temperatures.

Fatty acids in dietary fats

The following table gives the fatty acid, vitamin E and cholesterol composition of some common dietary fats.

Reactions of fatty acids

Fatty acids exhibit reactions like other carboxylic acids, i.e. they undergo esterification and acid-base reactions.

Transesterification

All fatty acids transesterify. Typically, transesterification is practiced in the conversion of fats to fatty acid methyl esters. These esters are used for biodiesel. They are also hydrogenated to give fatty alcohols. Even vinyl esters can be made by transesterification using vinyl acetate.

Acid-base reactions

Fatty acids do not show a great variation in their acidities, as indicated by their respective pKa values. Nonanoic acid, for example, has a pKa of 4.96, being only slightly weaker than acetic acid (4.76). As the chain length increases, the solubility of the fatty acids in water decreases, so that the longer-chain fatty acids have minimal effect on the pH of an aqueous solution. Near neutral pH, fatty acids exist as their conjugate bases, i.e. oleate, etc. Solutions of fatty acids in ethanol can be titrated with sodium hydroxide solution using phenolphthalein as an indicator. This analysis is used to determine the free fatty acid content of fats, i.e., the proportion of the triglycerides that have been hydrolyzed. Neutralization of fatty acids, like saponification, is a widely practiced route to metallic soaps.

Hydrogenation and hardening

Hydrogenation of unsaturated fatty acids is widely practiced. Typical conditions involve 2.0–3.0 MPa of H2 pressure, 150 °C, and nickel supported on silica as a catalyst. This treatment affords saturated fatty acids. The extent of hydrogenation is indicated by the iodine number.
Hydrogenated fatty acids are less prone to rancidification. Since the saturated fatty acids are higher melting than the unsaturated precursors, the process is called hardening. Related technology is used to convert vegetable oils into margarine. The hydrogenation of triglycerides (vs fatty acids) is advantageous because the carboxylic acids degrade the nickel catalysts, affording nickel soaps. During partial hydrogenation, unsaturated fatty acids can be isomerized from the cis to the trans configuration. More forcing hydrogenation, i.e. using higher pressures of H2 and higher temperatures, converts fatty acids into fatty alcohols. Fatty alcohols are, however, more easily produced from simpler fatty acid esters, like the fatty acid methyl esters ("FAME"s).

Chemistry of saturated vs unsaturated acids

The reactivity of saturated fatty acids is usually associated with the carboxylic acid group or the adjacent methylene group. By conversion to their acid chlorides, they can be converted to the symmetrical fatty ketone laurone. Treatment with sulfur trioxide gives the α-sulfonic acids. The reactivity of unsaturated fatty acids is often dominated by the site of unsaturation. These reactions are the basis of ozonolysis, hydrogenation, and the iodine number. Ozonolysis (degradation by ozone) is practiced in the production of azelaic acid ((CH2)7(CO2H)2) from oleic acid.

Circulation

Digestion and intake

Short- and medium-chain fatty acids are absorbed directly into the blood via intestinal capillaries and travel through the portal vein just as other absorbed nutrients do. However, long-chain fatty acids are not directly released into the intestinal capillaries. Instead they are absorbed into the fatty walls of the intestinal villi and reassembled into triglycerides. The triglycerides are coated with cholesterol and protein (protein coat) into a compound called a chylomicron. From within the cell, the chylomicron is released into a lymphatic capillary called a lacteal, which merges into larger lymphatic vessels. It is transported via the lymphatic system and the thoracic duct up to a location near the heart (where the arteries and veins are larger). The thoracic duct empties the chylomicrons into the bloodstream via the left subclavian vein. At this point the chylomicrons can transport the triglycerides to tissues where they are stored or metabolized for energy.

Metabolism

Fatty acids are broken down to CO2 and water by the intra-cellular mitochondria through beta-oxidation and the citric acid cycle. In the final step (oxidative phosphorylation), reactions with oxygen release a lot of energy, captured in the form of large quantities of ATP. Many cell types can use either glucose or fatty acids for this purpose, but fatty acids release more energy per gram. Fatty acids (provided either by ingestion or by drawing on triglycerides stored in fatty tissues) are distributed to cells to serve as a fuel for muscular contraction and general metabolism.

Essential fatty acids

Fatty acids that are required for good health but cannot be made in sufficient quantity from other substrates, and therefore must be obtained from food, are called essential fatty acids. There are two series of essential fatty acids: one has a double bond three carbon atoms away from the methyl end; the other has a double bond six carbon atoms away from the methyl end. Humans lack the ability to introduce double bonds in fatty acids beyond carbons 9 and 10, as counted from the carboxylic acid side.
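That Δ9/Δ10 limit makes "essentiality" a matter of simple arithmetic, as the toy Python check below illustrates. The function name and the deliberately simplified rule (which ignores elongation and further desaturation of dietary precursors) are my own, for illustration only:

# Humans cannot introduce C=C double bonds beyond carbons 9 and 10
# (counting from the carboxyl end). A fatty acid whose structure requires
# a double bond past that point cannot be synthesized de novo.

def requires_dietary_source(delta_positions):
    """True if any double bond lies beyond the Delta-9 position."""
    return any(x > 9 for x in delta_positions)

print(requires_dietary_source([9, 12]))      # linoleic acid -> True
print(requires_dietary_source([9, 12, 15]))  # alpha-linolenic acid -> True
print(requires_dietary_source([9]))          # oleic acid -> False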
Two essential fatty acids are linoleic acid (LA) and alpha-linolenic acid (ALA). These fatty acids are widely distributed in plant oils. The human body has a limited ability to convert ALA into the longer-chain omega−3 fatty acids, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which can also be obtained from fish. Omega−3 and omega−6 fatty acids are biosynthetic precursors to endocannabinoids with antinociceptive, anxiolytic, and neurogenic properties.

Distribution

Blood fatty acids adopt distinct forms in different stages in the blood circulation. They are taken in through the intestine in chylomicrons, but also exist in very low density lipoproteins (VLDL) and low density lipoproteins (LDL) after processing in the liver. In addition, when released from adipocytes, fatty acids exist in the blood as free fatty acids. It is proposed that the blend of fatty acids exuded by mammalian skin, together with lactic acid and pyruvic acid, is distinctive and enables animals with a keen sense of smell to differentiate individuals.

Skin

The stratum corneum, the outermost layer of the epidermis, is composed of terminally differentiated and enucleated corneocytes within a lipid matrix. Together with cholesterol and ceramides, free fatty acids form a water-impermeable barrier that prevents evaporative water loss. Generally, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (about 50% by weight), cholesterol (25%), and free fatty acids (15%). Saturated fatty acids 16 and 18 carbons in length are the dominant types in the epidermis, while unsaturated fatty acids and saturated fatty acids of various other lengths are also present. The relative abundance of the different fatty acids in the epidermis is dependent on the body site the skin is covering. There are also characteristic epidermal fatty acid alterations that occur in psoriasis, atopic dermatitis, and other inflammatory conditions.

Analysis

The chemical analysis of fatty acids in lipids typically begins with an interesterification step that breaks down their original esters (triglycerides, waxes, phospholipids, etc.) and converts them to methyl esters, which are then separated by gas chromatography or analyzed by gas chromatography and mid-infrared spectroscopy. Separation of unsaturated isomers is possible by silver-ion-complexed thin-layer chromatography. Other separation techniques include high-performance liquid chromatography (with short columns packed with silica gel with bonded phenylsulfonic acid groups whose hydrogen atoms have been exchanged for silver ions). The role of silver lies in its ability to form complexes with unsaturated compounds.

Industrial uses

Fatty acids are mainly used in the production of soap, both for cosmetic purposes and, in the case of metallic soaps, as lubricants. Fatty acids are also converted, via their methyl esters, to fatty alcohols and fatty amines, which are precursors to surfactants, detergents, and lubricants. Other applications include their use as emulsifiers, texturizing agents, wetting agents, anti-foam agents, and stabilizing agents. Esters of fatty acids with simpler alcohols (such as methyl-, ethyl-, n-propyl-, isopropyl- and butyl esters) are used as emollients in cosmetics and other personal care products and as synthetic lubricants.
Esters of fatty acids with more complex alcohols, such as sorbitol, ethylene glycol, diethylene glycol, and polyethylene glycol, are consumed in food, used for personal care and water treatment, or used as synthetic lubricants or fluids for metalworking.
https://en.wikipedia.org/wiki/First-order%20logic
First-order logic
First-order logic—also called predicate logic, predicate calculus, or quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for all x, if x is a man, then x is mortal"; where "for all x" is a quantifier, x is a variable, and "... is a man" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted. In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound, i.e. all provable statements are true in all models, and complete, i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic. The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).

Introduction

While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q.
They are not viewed as an application of a predicate, such as "is a philosopher", to any particular objects in the domain of discourse; instead, they are viewed as purely an utterance which is either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form Phil(x) for some individual x; in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic. The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x. The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x. The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text. There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions.

Syntax

Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false.
The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language.

Alphabet

As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand.

Logical symbols

Logical symbols are a set of characters that vary by author, but usually include the following:

Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification
Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ||, +, or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. (The aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.)
Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context.
An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... .
An equality symbol (sometimes, identity symbol) = (see below).

Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following:

Truth constants: T, V, or ⊤ for "true" and F, O, or ⊥ for "false" (V and O are from Polish notation). Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers.
Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq.

Non-logical symbols

Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes:

For every integer n ≥ 0, there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them: Pn0, Pn1, Pn2, Pn3, ...
For every integer n ≥ 0, there are infinitely many n-ary function symbols: f n0, f n1, f n2, f n3, ...

When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted. In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.
Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types:

A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples: In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man". In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y". Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man".
A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "−x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y". Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set.

The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.

Formation rules

The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.

Terms

The set of terms is inductively defined by the following rules:

Variables. Any variable symbol is a term.
Functions. If f is an n-ary function symbol, and t1, ..., tn are terms, then f(t1, ..., tn) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms.

Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.

Formulas

The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules:

Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms, then P(t1, ..., tn) is a formula.
Equality. If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula.
Negation. If φ is a formula, then ¬φ is a formula.
Binary connectives. If φ and ψ are formulas, then (φ → ψ) is a formula. Similar rules apply to other binary logical connectives.
Quantifiers.
If φ is a formula and x is a variable, then ∀x φ (for all x, φ holds) and ∃x φ (there exists x such that φ) are formulas.

Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas. For example, ∀x∀y(P(f(x)) → ¬(P(x) → Q(f(y), x, z))) is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀x x → is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.

Notational conventions

For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:

¬ is evaluated first
∧ and ∨ are evaluated next
Quantifiers are evaluated next
→ is evaluated last.

Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula ¬∀x P(x) → ∃x ¬P(x) might be written as (¬[∀x P(x)]) → ∃x[¬P(x)]. In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation. The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes →, ∧, and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula ∀x∀y(P(f(x)) → ¬(P(x) → Q(f(y), x, z))) becomes ΠxΠyCPfxNCPxQfyxz.

Free and bound variables

In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears (p. 297). Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃x or ∀x. Finally, x is bound in φ if all occurrences of x in φ are bound (pp. 142–143). Intuitively, a variable symbol is free in a formula if at no point is it quantified (pp. 142–143): in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.

Atomic formulas
If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
Notational conventions

For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:

¬ is evaluated first;
∧ and ∨ are evaluated next;
quantifiers are evaluated next;
→ is evaluated last.

Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus, for example, a formula such as ¬∀x P(x) → ∃x ¬P(x) might be written as (¬[∀x P(x)]) → ∃x [¬P(x)].

In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation. The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes →, ∧, and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula ∀x∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) becomes ∀x∀y→Pfx¬→PxQfyxz.

Free and bound variables

In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears. Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃x or ∀x. Finally, x is bound in φ if all occurrences of x in φ are bound. Intuitively, a variable symbol is free in a formula if at no point is it quantified: in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.

Atomic formulas. If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
Negation. x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ.
Binary connectives. x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →.
Quantifiers. x occurs free in ∀y φ if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ if and only if x is y or x occurs bound in φ. The same rule holds with ∃ in place of ∀.

For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound. A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation.
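The inductive clauses above can be implemented as a straightforward recursion. A minimal sketch, reusing the illustrative syntax classes from the previous example:

```python
# Free-variable computation following the inductive clauses above;
# reuses Var/Func/Pred/Not/Implies/ForAll/Exists from the earlier sketch.

def free_vars(phi) -> set:
    if isinstance(phi, Var):
        return {phi.name}
    if isinstance(phi, (Func, Pred)):
        return set().union(*map(free_vars, phi.args))
    if isinstance(phi, Not):
        return free_vars(phi.sub)
    if isinstance(phi, Implies):
        return free_vars(phi.left) | free_vars(phi.right)
    if isinstance(phi, (ForAll, Exists)):
        return free_vars(phi.body) - {phi.var}   # the quantifier binds its variable
    raise TypeError(phi)

# In ∀y P(x, y), the variable x is free while y is bound:
assert free_vars(ForAll("y", Pred("P", (Var("x"), Var("y"))))) == {"x"}
```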
Example: ordered abelian groups

In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:

The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z.
The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y.
The expression ∀x ∀y ≤(+(x, y), z) is a formula, which is usually written as ∀x ∀y (x + y ≤ z). This formula has one free variable, z.

The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ∀x ∀y (x + y = y + x).

Semantics

An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)

First-order structures

The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants. The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃x P(x) states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers. Non-logical symbols are interpreted as follows:

The interpretation of an n-ary function symbol is a function from Dn to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function which, in this interpretation, is addition.
The interpretation of a constant symbol (a function symbol of arity 0) is a function from D0 (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign a particular integer to the constant symbol c.
The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from Dn to {true, false}.

Evaluation of truth values

A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x. The truth value of this formula changes depending on the values that x and y denote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:

Variables. Each variable x evaluates to μ(x).
Functions. Given terms t1, ..., tn that have been evaluated to elements d1, ..., dn of the domain of discourse, and an n-ary function symbol f, the term f(t1, ..., tn) evaluates to (I(f))(d1, ..., dn).

Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.

Atomic formulas (1). A formula P(t1, ..., tn) is associated the value true or false depending on whether ⟨v1, ..., vn⟩ ∈ I(P), where v1, ..., vn are the evaluations of the terms t1, ..., tn and I(P) is the interpretation of P, which by assumption is a subset of Dn.
Atomic formulas (2). A formula t1 = t2 is assigned true if t1 and t2 evaluate to the same object of the domain of discourse (see the section on equality below).
Logical connectives. A formula in the form ¬φ, φ → ψ, etc. is evaluated according to the truth table for the connective in question, as in propositional logic.
Existential quantifiers. A formula ∃x φ(x) is true according to M and μ if there exists an evaluation μ′ of the variables that differs from μ at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ′. This formal definition captures the idea that ∃x φ(x) is true if and only if there is a way to choose a value for x such that φ(x) is satisfied.
Universal quantifiers. A formula ∀x φ(x) is true according to M and μ if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ′ that differs from μ at most on the value of x. This captures the idea that ∀x φ(x) is true if every possible choice of a value for x causes φ(x) to be true.
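Over a finite domain of discourse, the T-schema can be read off directly as a recursive evaluator. The following sketch reuses the illustrative syntax classes above; encoding an interpretation as a Python dictionary from symbols to functions and sets of tuples is our own illustrative choice:

```python
# A minimal evaluator implementing the T-schema over a finite domain.
# interp maps function symbols to Python functions and predicate symbols
# to sets of tuples; mu is a variable assignment.

def eval_term(t, interp, mu):
    if isinstance(t, Var):
        return mu[t.name]
    return interp[t.symbol](*(eval_term(a, interp, mu) for a in t.args))

def holds(phi, domain, interp, mu):
    if isinstance(phi, Pred):
        return tuple(eval_term(a, interp, mu) for a in phi.args) in interp[phi.symbol]
    if isinstance(phi, Not):
        return not holds(phi.sub, domain, interp, mu)
    if isinstance(phi, Implies):
        return (not holds(phi.left, domain, interp, mu)) or holds(phi.right, domain, interp, mu)
    if isinstance(phi, ForAll):   # true under every x-variant of mu
        return all(holds(phi.body, domain, interp, {**mu, phi.var: d}) for d in domain)
    if isinstance(phi, Exists):   # true under some x-variant of mu
        return any(holds(phi.body, domain, interp, {**mu, phi.var: d}) for d in domain)
    raise TypeError(phi)

# Example: with domain {0, 1, 2} and P interpreted as "strictly less than",
# the sentence ∀x ∃y P(x, y) is false, because 2 has no strict successor here.
domain = {0, 1, 2}
interp = {"P": {(a, b) for a in domain for b in domain if a < b}}
sentence = ForAll("x", Exists("y", Pred("P", (Var("x"), Var("y")))))
print(holds(sentence, domain, interp, {}))   # False
```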
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′. There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:

Existential quantifiers (alternate). A formula ∃x φ is true according to M if there is some d in the domain of discourse such that φ(cd) holds. Here φ(cd) is the result of substituting cd for every free occurrence of x in φ.
Universal quantifiers (alternate). A formula ∀x φ is true according to M if, for every d in the domain of discourse, φ(cd) is true according to M.

This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.

Validity, satisfiability, and logical consequence

If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ from model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M's domain to variable symbols of φ". Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x1, ..., xn is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables x1, ..., xn. This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀x1 ... ∀xn φ is satisfied.

A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.

Algebraizations

An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators:

Cylindric algebra, by Alfred Tarski and colleagues;
Polyadic algebra, by Paul Halmos;
Predicate functor logic, mainly due to Willard Quine.

These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.
This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.

First-order theories, models, and elementary classes

A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory, and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.

Empty domains

The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however:

Many common rules of inference are valid only when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃x ψ implies ∃x (φ ∨ ψ) when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted.
The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains.

Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
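To make the notions of theory, model, and elementary class concrete, consider (as an illustrative example of our own choosing) the first-order theory of partial orders in the signature {≤}, whose axioms are the three sentences

```latex
\[
\forall x \; (x \le x) \qquad
\forall x \, \forall y \; \big((x \le y \wedge y \le x) \to x = y\big) \qquad
\forall x \, \forall y \, \forall z \; \big((x \le y \wedge y \le z) \to x \le z\big)
\]
```

Every partially ordered set is a model of this theory, and the class of all posets is the elementary class it axiomatizes. The theory is effective, since it has finitely many axioms, but it is not complete: neither ∀x ∀y (x ≤ y ∨ y ≤ x) nor its negation is a logical consequence of the axioms, since linearly ordered sets and suitable non-linear posets are both models.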
Deductive systems

A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but are completely formalized, unlike natural-language mathematical proofs. A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.

Rules of inference

A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
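The renaming procedure described above is entirely mechanical. The following sketch (reusing the illustrative syntax classes and the free_vars function from earlier) implements φ[t/x] with the capture check:

```python
import itertools

def substitute(phi, x, t):
    """Replace free occurrences of variable x in phi by term t,
    renaming a bound variable whenever a free variable of t would be captured."""
    if isinstance(phi, Var):
        return t if phi.name == x else phi
    if isinstance(phi, (Func, Pred)):
        return type(phi)(phi.symbol, tuple(substitute(a, x, t) for a in phi.args))
    if isinstance(phi, Not):
        return Not(substitute(phi.sub, x, t))
    if isinstance(phi, Implies):
        return Implies(substitute(phi.left, x, t), substitute(phi.right, x, t))
    if isinstance(phi, (ForAll, Exists)):
        if phi.var == x:                      # x is bound here: nothing to substitute
            return phi
        if phi.var in free_vars(t):           # capture: rename the bound variable first
            taken = free_vars(phi.body) | free_vars(t) | {x}
            fresh = next(v for v in (f"v{i}" for i in itertools.count()) if v not in taken)
            phi = type(phi)(fresh, substitute(phi.body, phi.var, Var(fresh)))
        return type(phi)(phi.var, substitute(phi.body, x, t))
    raise TypeError(phi)

# Substituting x + 1 for y in ∃x (x = y) triggers the rename, yielding
# a formula of the shape ∃v0 (v0 = x + 1) rather than the unsound ∃x (x = x + 1).
```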
Hilbert-style systems and natural deduction

A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.

Sequent calculus

The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form A1, ..., An ⊢ B1, ..., Bk, where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A1 ∧ ... ∧ An) implies (B1 ∨ ... ∨ Bk).

Tableaux method

Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D.

Resolution

The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A1 ∨ ... ∨ Ak ∨ C and B1 ∨ ... ∨ Bl ∨ ¬C, the conclusion A1 ∨ ... ∨ Ak ∨ B1 ∨ ... ∨ Bl can be obtained.

Provable identities

Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include:

¬∀x P(x) ⇔ ∃x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
∀x ∀y P(x, y) ⇔ ∀y ∀x P(x, y)
∃x ∃y P(x, y) ⇔ ∃y ∃x P(x, y)
∀x P(x) ∧ ∀x Q(x) ⇔ ∀x (P(x) ∧ Q(x))
∃x P(x) ∨ ∃x Q(x) ⇔ ∃x (P(x) ∨ Q(x))
P ∧ ∃x Q(x) ⇔ ∃x (P ∧ Q(x)) (where x must not occur free in P)
P ∨ ∀x Q(x) ⇔ ∀x (P ∨ Q(x)) (where x must not occur free in P)
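Returning to the resolution rule described above: its propositional core is easy to state in code. In the following sketch, literals are strings with a leading "~" marking negation and clauses are frozensets of literals — an encoding of our own choosing; full first-order resolution additionally unifies the complementary literals rather than requiring them to match exactly:

```python
# Propositional core of resolution: from A1 ∨ ... ∨ Ak ∨ C and
# B1 ∨ ... ∨ Bl ∨ ¬C derive A1 ∨ ... ∨ Ak ∨ B1 ∨ ... ∨ Bl.

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1: frozenset, c2: frozenset) -> set:
    """All clauses obtainable from c1 and c2 by a single resolution step."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

# Example: resolving {P, Q} with {~Q, R} on Q yields {P, R}.
print(resolvents(frozenset({"P", "Q"}), frozenset({"~Q", "R"})))
```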
Equality and its axioms

There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:

Reflexivity. For each variable x, x = x.
Substitution for functions. For all variables x and y, and any function symbol f, x = y → f(..., x, ...) = f(..., y, ...).
Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z, then: x = y → (φ(x) → φ(y)).

These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula φ(z): f(..., x, ...) = f(..., z, ...). Then x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)). Since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...). Many other properties of equality are consequences of the axioms above, for example:

Symmetry. If x = y then y = x.
Transitivity. If x = y and y = z then x = z.
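For instance, here is a sketch of how symmetry follows from reflexivity and the third schema. Instantiating Leibniz's law with the formula φ(z) given by z = x yields

```latex
\[
x = y \;\to\; (x = x \to y = x),
\]
```

and since x = x holds by reflexivity, two applications of modus ponens give x = y → y = x. Transitivity is obtained similarly, instantiating φ(w) with x = w in y = z → (φ(y) → φ(z)).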
First-order logic without equality

An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.

Defining equality within a theory

If a theory has a binary formula A(x, y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument. Some theories allow other ad hoc definitions of equality:

In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.
In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (x ∈ s ↔ x ∈ t) ∧ ∀x (s ∈ x ↔ t ∈ x). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → x = y), with an alternative formulation ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z)), which says that if sets x and y have the same elements, then they also belong to the same sets.

Metalogical properties

One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.

Completeness and undecidability

Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ, ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics.

The Löwenheim–Skolem theorem

The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature.
That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox.

The compactness theorem

The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x, y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as Σ¹₁ also enjoys compactness.
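The standard argument for the claim above that arbitrarily large finite models force an infinite model is short. For each n, let λn be the sentence asserting that at least n distinct elements exist:

```latex
\[
\lambda_n \;:\equiv\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} \neg (x_i = x_j).
\]
```

If a theory T has arbitrarily large finite models, then every finite subset of T ∪ {λ1, λ2, ...} is satisfied by a sufficiently large finite model of T; by compactness, the whole set has a model, and any such model must be infinite.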
Lindström's theorem

Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:

A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.
A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.

Limitations

Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n.

Expressiveness

The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.

Formalizing natural languages

First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".

Restrictions, extensions, and variations

There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.

Restricted languages

First-order logic can be studied in languages with fewer logical symbols than were described above:

Because ∃x φ can be expressed as ¬∀x ¬φ, and ∀x φ can be expressed as ¬∃x ¬φ, either of the two quantifiers ∃ and ∀ can be dropped.
Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ), and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives. Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator.
It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 one may use a unary predicate 0(x) (interpreted as x = 0) and replace every predicate such as P(0, y) with ∀x (0(x) → P(x, y)). A function such as f(x1, ..., xn) will similarly be replaced by a predicate F(x1, ..., xn, y) interpreted as y = f(x1, ..., xn). This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics.

Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
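As an illustration of such reductions, the rewriting ∀x φ ↦ ¬∃x ¬φ mentioned above can be carried out as a recursive syntactic transformation. A sketch reusing the illustrative syntax classes from earlier:

```python
# Eliminate the universal quantifier using the equivalence ∀x φ ≡ ¬∃x ¬φ.

def drop_forall(phi):
    if isinstance(phi, (Var, Func, Pred)):
        return phi
    if isinstance(phi, Not):
        return Not(drop_forall(phi.sub))
    if isinstance(phi, Implies):
        return Implies(drop_forall(phi.left), drop_forall(phi.right))
    if isinstance(phi, Exists):
        return Exists(phi.var, drop_forall(phi.body))
    if isinstance(phi, ForAll):               # ∀x φ  ↦  ¬∃x ¬φ
        return Not(Exists(phi.var, Not(drop_forall(phi.body))))
    raise TypeError(phi)

# drop_forall applied to ∀x P(x) yields ¬∃x ¬P(x), which is true in
# exactly the same interpretations, but the result is longer — the
# trade-off described above.
```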
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.

Many-sorted logic

Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P1 and P2 and the axiom ∀x (P1(x) ∨ P2(x)) ∧ ¬∃x (P1(x) ∧ P2(x)). Then the elements satisfying P1 are thought of as elements of the first sort, and elements satisfying P2 as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes ∃x (P1(x) ∧ φ(x)).

Additional quantifiers

Additional quantifiers can be added to first-order logic. Sometimes it is useful to say that "φ(x) holds for exactly one x", which can be expressed as ∃!x φ(x). This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (φ(x) ∧ ∀y (φ(y) → (y = x))). First-order logic with extra quantifiers has new quantifiers Qx, ..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others. Bounded quantifiers are often used in the study of set theory or arithmetic.
Infinitary logics

Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω. In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as Lκω. For example, Lω1ω permits countable conjunctions and disjunctions. The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.

Non-classical and modal logics

Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ, and ¬∀x φ is in general not equivalent to ∃x ¬φ. First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example, "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all. First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus.

Fixpoint logic

Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.

Higher-order logics

The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus "∃a such that Phil(a)" is a legal first-order formula, but "∃Phil such that Phil(a)" is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits.
These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. Automated theorem proving and formal methods Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. 
Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the related problem of model checking, efficient algorithms are known that decide whether an input finite structure satisfies a first-order formula, and computational complexity bounds for this problem have been established.
Functor
In mathematics, specifically category theory, a functor is a mapping between categories. Functors were first considered in algebraic topology, where algebraic objects (such as the fundamental group) are associated to topological spaces, and maps between these algebraic objects are associated to continuous maps between spaces. Nowadays, functors are used throughout modern mathematics to relate various categories. Thus, functors are important in all areas within mathematics to which category theory is applied. The words category and functor were borrowed by mathematicians from the philosophers Aristotle and Rudolf Carnap, respectively. The latter used functor in a linguistic context; see function word.

Definition

Let C and D be categories. A functor F from C to D is a mapping that associates each object X in C to an object F(X) in D, and associates each morphism f : X → Y in C to a morphism F(f) : F(X) → F(Y) in D such that the following two conditions hold:

F(idX) = idF(X) for every object X in C,
F(g ∘ f) = F(g) ∘ F(f) for all morphisms f : X → Y and g : Y → Z in C.

That is, functors must preserve identity morphisms and composition of morphisms.

Covariance and contravariance

There are many constructions in mathematics that would be functors but for the fact that they "turn morphisms around" and "reverse composition". We then define a contravariant functor F from C to D as a mapping that associates each object X in C with an object F(X) in D, and associates each morphism f : X → Y in C with a morphism F(f) : F(Y) → F(X) in D such that the following two conditions hold:

F(idX) = idF(X) for every object X in C,
F(g ∘ f) = F(f) ∘ F(g) for all morphisms f : X → Y and g : Y → Z in C.

The composite of two functors of the same variance is covariant; the composite of two functors of opposite variance is contravariant. Note that contravariant functors reverse the direction of composition. Ordinary functors are also called covariant functors in order to distinguish them from contravariant ones. Note that one can also define a contravariant functor as a covariant functor on the opposite category Cop. Some authors prefer to write all expressions covariantly. That is, instead of saying F : C → D is a contravariant functor, they simply write F : Cop → D (or sometimes F : C → Dop) and call it a functor. Contravariant functors are also occasionally called cofunctors.

There is a convention which refers to "vectors"—i.e., vector fields, elements of the space of sections of a tangent bundle—as "contravariant" and to "covectors"—i.e., 1-forms, elements of the space of sections of a cotangent bundle—as "covariant". This terminology originates in physics, and its rationale has to do with the position of the indices ("upstairs" and "downstairs") in coordinate expressions, where vector coordinates carry upper indices and covector coordinates carry lower indices. In this formalism it is observed that the coordinate transformation symbol (the matrix of the change of basis) acts on the "covector coordinates" "in the same way" as on the basis vectors—whereas it acts "in the opposite way" on the "vector coordinates" (but "in the same way" as on the basis covectors). This terminology is contrary to the one used in category theory because it is the covectors that have pullbacks in general and are thus contravariant, whereas vectors in general are covariant since they can be pushed forward.
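The two functor laws are easy to check mechanically in small cases. A category with a single object is essentially a monoid — its morphisms are the monoid elements, the identity morphism is the unit, and composition is the monoid operation — so a functor between one-object categories is exactly a monoid homomorphism. The following toy sketch (in Python; the example F is our own) verifies both laws for the map F(n) = "a" repeated n times, from (ℕ, +, 0) to strings under concatenation:

```python
# F sends the one-object category (ℕ, +, 0) to the one-object category of
# strings over {"a"} under concatenation, with empty string as identity.

def F(n: int) -> str:
    return "a" * n

# Functor law 1: identities map to identities, F(id) = id.
assert F(0) == ""

# Functor law 2: composition is preserved, F(g ∘ f) = F(g) ∘ F(f);
# composition here is + on numbers and concatenation on strings.
for m in range(5):
    for n in range(5):
        assert F(m + n) == F(m) + F(n)

print("F preserves identity and composition: it is a functor.")
```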
Fundamental group
In the mathematical field of algebraic topology, the fundamental group of a topological space is the group of the equivalence classes under homotopy of the loops contained in the space. It records information about the basic shape, or holes, of the topological space. The fundamental group is the first and simplest homotopy group. The fundamental group is a homotopy invariant—topological spaces that are homotopy equivalent (or the stronger case of homeomorphic) have isomorphic fundamental groups. The fundamental group of a topological space X is denoted by π1(X).

Intuition

Start with a space (for example, a surface), and some point in it, and all the loops both starting and ending at this point—paths that start at this point, wander around and eventually return to the starting point. Two loops can be combined in an obvious way: travel along the first loop, then along the second. Two loops are considered equivalent if one can be deformed into the other without breaking. The set of all such loops with this method of combining and this equivalence between them is the fundamental group for that particular space.

History

Henri Poincaré defined the fundamental group in 1895 in his paper "Analysis situs". The concept emerged in the theory of Riemann surfaces, in the work of Bernhard Riemann, Poincaré, and Felix Klein. It describes the monodromy properties of complex-valued functions, as well as providing a complete topological classification of closed surfaces.

Definition

Throughout this article, X is a topological space. A typical example is a surface such as the one depicted at the right. Moreover, x0 is a point in X called the base-point. (As is explained below, its role is rather auxiliary.) The idea of the definition of the homotopy group is to measure how many (broadly speaking) curves on X can be deformed into each other. The precise definition depends on the notion of the homotopy of loops, which is explained first.

Homotopy of loops

Given a topological space X, a loop based at x0 is defined to be a continuous function (also known as a continuous map) γ : [0, 1] → X such that the starting point γ(0) and the end point γ(1) are both equal to x0. A homotopy is a continuous interpolation between two loops. More precisely, a homotopy between two loops γ and γ′ (based at the same point x0) is a continuous map h : [0, 1] × [0, 1] → X such that

h(0, t) = x0 for all t, that is, the starting point of the homotopy is x0 for all t (which is often thought of as a time parameter).
h(1, t) = x0 for all t, that is, similarly the end point stays at x0 for all t.
h(r, 0) = γ(r) and h(r, 1) = γ′(r) for all r.

If such a homotopy h exists, γ and γ′ are said to be homotopic. The relation "γ is homotopic to γ′" is an equivalence relation so that the set of equivalence classes can be considered: π1(X, x0) := {all loops γ based at x0} / homotopy. This set (with the group structure described below) is called the fundamental group of the topological space X at the base point x0. The purpose of considering the equivalence classes of loops up to homotopy, as opposed to the set of all loops (the so-called loop space of X) is that the latter, while being useful for various purposes, is a rather big and unwieldy object. By contrast the above quotient is, in many cases, more manageable and computable.

Group structure

By the above definition, π1(X, x0) is just a set. It becomes a group (and therefore deserves the name fundamental group) using the concatenation of loops. More precisely, given two loops γ0 and γ1, their product γ0 ⋅ γ1 is defined as the loop

(γ0 ⋅ γ1)(x) = γ0(2x) for 0 ≤ x ≤ 1/2, and (γ0 ⋅ γ1)(x) = γ1(2x − 1) for 1/2 ≤ x ≤ 1.

Thus the loop γ0 ⋅ γ1 first follows the loop γ0 with "twice the speed" and then follows γ1 with "twice the speed". The product of two homotopy classes of loops [γ0] and [γ1] is then defined as [γ0 ⋅ γ1].
It can be shown that this product does not depend on the choice of representatives and therefore gives a well-defined operation on the set π1(X, x0). This operation turns π1(X, x0) into a group. Its neutral element is the constant loop, which stays at x0 for all times t. The inverse of a (homotopy class of a) loop is the same loop, but traversed in the opposite direction. More formally, γ−1(x) := γ(1 − x). Given three based loops γ0, γ1, γ2, the product (γ0 ⋅ γ1) ⋅ γ2 is the concatenation of these loops, traversing γ0 and then γ1 with quadruple speed, and then γ2 with double speed. By comparison, γ0 ⋅ (γ1 ⋅ γ2) traverses the same paths (in the same order), but with γ0 with double speed, and γ1, γ2 with quadruple speed. Thus, because of the differing speeds, the two paths are not identical. The associativity axiom therefore crucially depends on the fact that paths are considered up to homotopy. Indeed, both above composites are homotopic, for example, to the loop that traverses all three loops with triple speed. The set of based loops up to homotopy, equipped with the above operation, therefore does turn into a group.

Dependence of the base point

Although the fundamental group in general depends on the choice of base point, it turns out that, up to isomorphism (actually, even up to inner isomorphism), this choice makes no difference as long as the space X is path-connected. For path-connected spaces, therefore, many authors write π1(X) instead of π1(X, x0).

Concrete examples

This section lists some basic examples of fundamental groups. To begin with, in Euclidean space (Rn) or any convex subset of Rn there is only one homotopy class of loops, and the fundamental group is therefore the trivial group with one element. More generally, any star domain – and yet more generally, any contractible space – has a trivial fundamental group. Thus, the fundamental group does not distinguish between such spaces.

The 2-sphere

A path-connected space whose fundamental group is trivial is called simply connected. For example, the 2-sphere depicted on the right, and also all the higher-dimensional spheres, are simply-connected. The figure illustrates a homotopy contracting one particular loop to the constant loop. This idea can be adapted to all loops γ such that there is a point that is not in the image of γ. However, since there are loops whose image is all of the sphere (constructed from the Peano curve, for example), a complete proof requires more careful analysis with tools from algebraic topology, such as the Seifert–van Kampen theorem or the cellular approximation theorem.

The circle

The circle (also known as the 1-sphere) is not simply connected. Instead, each homotopy class consists of all loops that wind around the circle a given number of times (which can be positive or negative, depending on the direction of winding). The product of a loop that winds around m times and another that winds around n times is a loop that winds around m + n times. Therefore, the fundamental group of the circle is isomorphic to the additive group of integers: π1(S1) ≅ Z. This fact can be used to give proofs of the Brouwer fixed point theorem and the Borsuk–Ulam theorem in dimension 2.
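The identification of a homotopy class of loops in the circle with its winding number can be illustrated numerically. The following sketch (an illustrative computation, not a proof; the encoding of loops as point samples is our own) accumulates angle increments along a sampled loop and recovers the winding number; concatenating loops adds winding numbers, matching the group law described above:

```python
import math

def winding_number(points):
    """points: a sampled loop on the unit circle, with first point == last point."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        # pick the increment representative in (-pi, pi]
        if d <= -math.pi:
            d += 2 * math.pi
        if d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

def circle_loop(n_turns, samples=1000):
    """The loop t ↦ (cos(2π n t), sin(2π n t)), sampled at samples+1 points."""
    return [(math.cos(2 * math.pi * n_turns * t / samples),
             math.sin(2 * math.pi * n_turns * t / samples))
            for t in range(samples + 1)]

assert winding_number(circle_loop(3)) == 3
# Concatenation of loops adds winding numbers: 2 + (-1) = 1.
assert winding_number(circle_loop(2)[:-1] + circle_loop(-1)) == 1
```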
The figure eight

The fundamental group of the figure eight is the free group on two letters. The idea to prove this is as follows: choosing the base point to be the point where the two circles meet (dotted in black in the picture at the right), any loop can be decomposed as a^n1 b^m1 ⋯ a^nk b^mk, where a and b are the two loops winding around each half of the figure as depicted, and the exponents n1, m1, ..., nk, mk are integers. Unlike π1(S1), the fundamental group of the figure eight is not abelian: the two ways of composing a and b are not homotopic to each other: [a ⋅ b] ≠ [b ⋅ a]. More generally, the fundamental group of a bouquet of r circles is the free group on r letters. The fundamental group of a wedge sum of two path connected spaces X and Y can be computed as the free product of the individual fundamental groups: π1(X ∨ Y) ≅ π1(X) ∗ π1(Y). This generalizes the above observations since the figure eight is the wedge sum of two circles. The fundamental group of the plane punctured at n points is also the free group with n generators. The i-th generator is the class of the loop that goes around the i-th puncture without going around any other punctures.

Graphs

The fundamental group can be defined for discrete structures too. In particular, consider a connected graph G = (V, E), with a designated vertex v0 in V. The loops in G are the cycles that start and end at v0. Let T be a spanning tree of G. Every simple loop in G contains exactly one edge in E \ T; every loop in G is a concatenation of such simple loops. Therefore, the fundamental group of a graph is a free group, in which the number of generators is exactly the number of edges in E \ T. This number equals |E| − |V| + 1. For example, suppose G has 16 vertices arranged in 4 rows of 4 vertices each, with edges connecting vertices that are adjacent horizontally or vertically. Then G has 24 edges overall, and the number of edges in each spanning tree is 16 − 1 = 15, so the fundamental group of G is the free group with 9 generators. Note that G has 9 "holes", similarly to a bouquet of 9 circles, which has the same fundamental group.
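The rank computation for graphs is simple enough to carry out in code. The following sketch reproduces the 4 × 4 grid example above (the function names are our own):

```python
# Rank |E| - |V| + 1 of the free fundamental group of a connected graph,
# computed for the 4x4 grid graph described in the text.

def grid_graph(rows, cols):
    vertices = [(r, c) for r in range(rows) for c in range(cols)]
    edges = [((r, c), (r, c + 1)) for r in range(rows) for c in range(cols - 1)]
    edges += [((r, c), (r + 1, c)) for r in range(rows - 1) for c in range(cols)]
    return vertices, edges

V, E = grid_graph(4, 4)
spanning_tree_edges = len(V) - 1        # every spanning tree has |V| - 1 edges
rank = len(E) - spanning_tree_edges     # number of free generators
print(len(V), len(E), rank)             # 16 24 9
```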
Functoriality

If $f : X \to Y$ is a continuous map, $x_0 \in X$ and $y_0 \in Y$ with $f(x_0) = y_0$, then every loop in $X$ with base point $x_0$ can be composed with $f$ to yield a loop in $Y$ with base point $y_0$. This operation is compatible with the homotopy equivalence relation and with composition of loops. The resulting group homomorphism, called the induced homomorphism, is written as $\pi(f)$ or, more commonly, $f_* : \pi_1(X, x_0) \to \pi_1(Y, y_0)$. This mapping from continuous maps to group homomorphisms is compatible with composition of maps and identity morphisms. In the parlance of category theory, the assignment to a topological space of its fundamental group is therefore a functor from the category of topological spaces together with a base point to the category of groups. It turns out that this functor does not distinguish maps that are homotopic relative to the base point: if $f, g : X \to Y$ are continuous maps with $f(x_0) = g(x_0) = y_0$, and $f$ and $g$ are homotopic relative to $\{x_0\}$, then $f_* = g_*$. As a consequence, two homotopy equivalent path-connected spaces have isomorphic fundamental groups. For example, the inclusion of the circle in the punctured plane, $S^1 \subset \mathbb{C} \setminus \{0\}$, is a homotopy equivalence and therefore yields an isomorphism of their fundamental groups. The fundamental group functor takes products to products and coproducts to coproducts. That is, if $X$ and $Y$ are path connected, then $\pi_1(X \times Y, (x_0, y_0)) \cong \pi_1(X, x_0) \times \pi_1(Y, y_0)$, and if they are also locally contractible, then $\pi_1(X \vee Y) \cong \pi_1(X) * \pi_1(Y)$. (In the latter formula, $\vee$ denotes the wedge sum of pointed topological spaces, and $*$ the free product of groups.) The latter formula is a special case of the Seifert–van Kampen theorem, which states that the fundamental group functor takes pushouts along inclusions to pushouts.

Abstract results

As was mentioned above, computing the fundamental group of even relatively simple topological spaces tends to be not entirely trivial, but requires some methods of algebraic topology.

Relationship to first homology group

The abelianization of the fundamental group can be identified with the first homology group of the space. A special case of the Hurewicz theorem asserts that the first singular homology group $H_1(X)$ is, colloquially speaking, the closest approximation to the fundamental group by means of an abelian group. In more detail, mapping the homotopy class of each loop to the homology class of the loop gives a group homomorphism $\pi_1(X) \to H_1(X)$ from the fundamental group of a topological space $X$ to its first singular homology group. This homomorphism is not in general an isomorphism, since the fundamental group may be non-abelian, but the homology group is, by definition, always abelian. This difference is, however, the only one: if $X$ is path-connected, this homomorphism is surjective and its kernel is the commutator subgroup of the fundamental group, so that $H_1(X)$ is isomorphic to the abelianization of the fundamental group.

Gluing topological spaces

Generalizing the statement above, for a family of path connected spaces $X_i$, the fundamental group $\pi_1(\bigvee_i X_i)$ is the free product of the fundamental groups of the $X_i$. This fact is a special case of the Seifert–van Kampen theorem, which allows one to compute, more generally, fundamental groups of spaces that are glued together from other spaces. For example, the 2-sphere $S^2$ can be obtained by gluing two copies of slightly overlapping half-spheres along a neighborhood of the equator. In this case the theorem yields that $\pi_1(S^2)$ is trivial, since the two half-spheres are contractible and therefore have trivial fundamental group. The fundamental groups of surfaces, as mentioned above, can also be computed using this theorem.
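In symbols, for a cover of $X$ by open, path-connected sets $U$ and $V$ with path-connected intersection, the statement used above reads as the following sketch:

% Seifert–van Kampen for X = U \cup V:
\pi_1(X) \;\cong\; \pi_1(U) *_{\pi_1(U \cap V)} \pi_1(V)
% For X = S^2, with U, V slightly enlarged half-spheres and U \cap V an
% equatorial annulus (homotopy equivalent to a circle):
\pi_1(S^2) \;\cong\; 1 *_{\mathbb{Z}} 1 \;=\; 1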
In the parlance of category theory, the theorem can be concisely stated by saying that the fundamental group functor takes pushouts (in the category of topological spaces) along inclusions to pushouts (in the category of groups).

Coverings

Given a topological space $B$, a continuous map $f : E \to B$ is called a covering, or $E$ is called a covering space of $B$, if every point $b$ in $B$ admits an open neighborhood $U$ such that there is a homeomorphism between the preimage of $U$ and a disjoint union of copies of $U$ (indexed by some set $I$), in such a way that $f$ corresponds to the standard projection map $U \times I \to U$.

Universal covering

A covering is called a universal covering if $E$ is, in addition to the preceding condition, simply connected. It is universal in the sense that all other coverings can be constructed by suitably identifying points in $E$. Knowing a universal covering $p : \widetilde{X} \to X$ of a topological space $X$ is helpful in understanding its fundamental group in several ways: first, $\pi_1(X)$ identifies with the group of deck transformations, i.e., the group of homeomorphisms $\varphi : \widetilde{X} \to \widetilde{X}$ that commute with the map to $X$, i.e., $p \circ \varphi = p$. Another relation to the fundamental group is that $\pi_1(X, x)$ can be identified with the fiber $p^{-1}(x)$. For example, the map $p : \mathbb{R} \to S^1$, $t \mapsto (\cos 2\pi t, \sin 2\pi t)$ (or, equivalently, $t \mapsto \exp(2\pi i t)$) is a universal covering. The deck transformations are the maps $t \mapsto t + n$ for $n \in \mathbb{Z}$. This is in line with the identification $p^{-1}(1) = \mathbb{Z}$; in particular this proves the above claim $\pi_1(S^1) \cong \mathbb{Z}$. Any path connected, locally path connected and locally simply connected topological space $X$ admits a universal covering. An abstract construction proceeds analogously to the fundamental group by taking pairs $(x, \gamma)$, where $x$ is a point in $X$ and $\gamma$ is a homotopy class of paths from $x_0$ to $x$. The passage from a topological space to its universal covering can be used in understanding the geometry of $X$. For example, the uniformization theorem shows that any simply connected Riemann surface is (isomorphic to) either $S^2$, $\mathbb{C}$, or the upper half plane. General Riemann surfaces then arise as quotients of group actions on these three surfaces. The quotient of a free action of a discrete group $G$ on a simply connected space $Y$ has fundamental group $\pi_1(Y/G) \cong G$. As an example, the $n$-dimensional real projective space $\mathbb{RP}^n$ is obtained as the quotient of the $n$-dimensional unit sphere $S^n$ by the antipodal action of the group $\mathbb{Z}/2$ sending $x$ to $-x$. As $S^n$ is simply connected for $n \geq 2$, it is a universal cover of $\mathbb{RP}^n$ in these cases, which implies $\pi_1(\mathbb{RP}^n) \cong \mathbb{Z}/2$ for $n \geq 2$.

Lie groups

Let $G$ be a connected, simply connected compact Lie group, for example, the special unitary group SU(n), and let $\Gamma$ be a finite subgroup of $G$. Then the homogeneous space $X = G/\Gamma$ has fundamental group $\Gamma$, which acts by right multiplication on the universal covering space $G$. Among the many variants of this construction, one of the most important is given by locally symmetric spaces $X = \Gamma \backslash G / K$, where $G$ is a non-compact simply connected, connected Lie group (often semisimple), $K$ is a maximal compact subgroup of $G$, and $\Gamma$ is a discrete countable torsion-free subgroup of $G$. In this case the fundamental group is $\Gamma$ and the universal covering space $G/K$ is actually contractible (by the Cartan decomposition for Lie groups). As an example take $G = \mathrm{SL}(2, \mathbb{R})$, $K = \mathrm{SO}(2)$ and $\Gamma$ any torsion-free congruence subgroup of the modular group $\mathrm{SL}(2, \mathbb{Z})$. From the explicit realization, it also follows that the universal covering space of a path connected topological group $H$ is again a path connected topological group $G$.
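A concrete instance, written out as a sketch: identifying the circle with the group $U(1)$ of unit complex numbers, its universal covering group is the additive group of real numbers,

% Covering homomorphism of topological groups:
p\colon (\mathbb{R}, +) \longrightarrow U(1), \qquad p(t) = e^{2\pi i t}
% a continuous, open, surjective homomorphism whose kernel
\ker p \;=\; \mathbb{Z} \;\cong\; \pi_1(U(1))
% is a closed discrete (central) subgroup of \mathbb{R}.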
Moreover, the covering map $p : G \to H$ is a continuous open homomorphism of $G$ onto $H$ with kernel $\Gamma$, a closed discrete normal subgroup of $G$, so that $H \cong G/\Gamma$. Since $G$ is a connected group with a continuous action by conjugation on a discrete group $\Gamma$, it must act trivially, so that $\Gamma$ has to be a subgroup of the center of $G$. In particular $\pi_1(H) = \Gamma$ is an abelian group; this can also easily be seen directly without using covering spaces. The group $G$ is called the universal covering group of $H$. As the universal covering group suggests, there is an analogy between the fundamental group of a topological group and the center of a group; this is elaborated at Lattice of covering groups.

Fibrations

Fibrations provide a very powerful means to compute homotopy groups. A fibration $f : E \to B$, with $E$ the so-called total space and $B$ the base space, has, in particular, the property that all its fibers $f^{-1}(b)$ are homotopy equivalent and therefore can not be distinguished using fundamental groups (and higher homotopy groups), provided that $B$ is path-connected. Therefore, the space $E$ can be regarded as a "twisted product" of the base space $B$ and the fiber $F = f^{-1}(b)$. The great importance of fibrations to the computation of homotopy groups stems from a long exact sequence $\pi_2(B) \to \pi_1(F) \to \pi_1(E) \to \pi_1(B) \to \pi_0(F) \to \pi_0(E)$, provided that $B$ is path-connected. The term $\pi_2(B)$ is the second homotopy group of $B$, which is defined to be the set of homotopy classes of maps from $S^2$ to $B$, in direct analogy with the definition of $\pi_1$. If $E$ happens to be path-connected and simply connected, this sequence reduces to an isomorphism $\pi_1(B) \cong \pi_0(F)$, which generalizes the above fact about the universal covering (which amounts to the case where the fiber $F$ is also discrete). If instead $F$ happens to be connected and simply connected, it reduces to an isomorphism $\pi_1(E) \cong \pi_1(B)$. What is more, the sequence can be continued at the left with the higher homotopy groups of the three spaces, which gives some access to computing such groups in the same vein.

Classical Lie groups

Such fiber sequences can be used to inductively compute fundamental groups of compact classical Lie groups such as the special unitary group $\mathrm{SU}(n)$, with $n \geq 2$. This group acts transitively on the unit sphere $S^{2n-1}$ inside $\mathbb{C}^n$. The stabilizer of a point in the sphere is isomorphic to $\mathrm{SU}(n-1)$. It then can be shown that this yields a fiber sequence $\mathrm{SU}(n-1) \to \mathrm{SU}(n) \to S^{2n-1}$. Since $n \geq 2$, the sphere $S^{2n-1}$ has dimension at least 3, which implies $\pi_1(S^{2n-1}) = \pi_2(S^{2n-1}) = 1$. The long exact sequence then shows an isomorphism $\pi_1(\mathrm{SU}(n)) \cong \pi_1(\mathrm{SU}(n-1))$. Since $\mathrm{SU}(1)$ is a single point, so that $\pi_1(\mathrm{SU}(1))$ is trivial, this shows that $\mathrm{SU}(n)$ is simply connected for all $n$. The fundamental group of noncompact Lie groups can be reduced to the compact case, since such a group is homotopic to its maximal compact subgroup. These methods give the following results: the compact groups $\mathrm{SU}(n)$ and $\mathrm{Sp}(n)$ are simply connected, while $\pi_1(\mathrm{U}(n)) \cong \mathbb{Z}$ and $\pi_1(\mathrm{SO}(n)) \cong \mathbb{Z}/2$ for $n \geq 3$. A second method of computing fundamental groups applies to all connected compact Lie groups and uses the machinery of the maximal torus and the associated root system. Specifically, let $T$ be a maximal torus in a connected compact Lie group $G$, and let $\mathfrak{t}$ be the Lie algebra of $T$. The exponential map $\exp : \mathfrak{t} \to T$ is a fibration and therefore its kernel $\Gamma \subset \mathfrak{t}$ identifies with $\pi_1(T)$. The map $\pi_1(T) \to \pi_1(G)$ can be shown to be surjective with kernel given by the set $I$ of integer linear combinations of coroots. This leads to the computation $\pi_1(G) \cong \Gamma / I$. This method shows, for example, that any connected compact Lie group for which the associated root system is of type $G_2$ is simply connected. Thus, there is (up to isomorphism) only one connected compact Lie group having Lie algebra of type $G_2$; this group is simply connected and has trivial center.
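The inductive computation for the special unitary groups described above can be summarized in displayed form:

% Fiber sequence from the transitive action of SU(n) on the unit sphere:
\mathrm{SU}(n-1) \longrightarrow \mathrm{SU}(n) \longrightarrow S^{2n-1}
% Relevant portion of the long exact sequence (n \ge 2, so 2n - 1 \ge 3):
\underbrace{\pi_2(S^{2n-1})}_{=1} \to \pi_1(\mathrm{SU}(n-1)) \to \pi_1(\mathrm{SU}(n)) \to \underbrace{\pi_1(S^{2n-1})}_{=1}
% Hence, inductively,
\pi_1(\mathrm{SU}(n)) \cong \pi_1(\mathrm{SU}(n-1)) \cong \cdots \cong \pi_1(\mathrm{SU}(1)) = 1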
Edge-path group of a simplicial complex

When the topological space is homeomorphic to a simplicial complex, its fundamental group can be described explicitly in terms of generators and relations. If $X$ is a connected simplicial complex, an edge-path in $X$ is defined to be a chain of vertices connected by edges in $X$. Two edge-paths are said to be edge-equivalent if one can be obtained from the other by successively switching between an edge and the two opposite edges of a triangle in $X$. If $v$ is a fixed vertex in $X$, an edge-loop at $v$ is an edge-path starting and ending at $v$. The edge-path group $E(X, v)$ is defined to be the set of edge-equivalence classes of edge-loops at $v$, with product and inverse defined by concatenation and reversal of edge-loops. The edge-path group is naturally isomorphic to $\pi_1(|X|, v)$, the fundamental group of the geometric realisation $|X|$ of $X$. Since it depends only on the 2-skeleton $X^2$ of $X$ (that is, the vertices, edges, and triangles of $X$), the groups $\pi_1(|X|, v)$ and $\pi_1(|X^2|, v)$ are isomorphic. The edge-path group can be described explicitly in terms of generators and relations. If $T$ is a maximal spanning tree in the 1-skeleton of $X$, then $E(X, v)$ is canonically isomorphic to the group with generators (the oriented edge-paths of $X$ not occurring in $T$) and relations (the edge-equivalences corresponding to triangles in $X$). A similar result holds if $T$ is replaced by any simply connected – in particular contractible – subcomplex of $X$. This often gives a practical way of computing fundamental groups and can be used to show that every finitely presented group arises as the fundamental group of a finite simplicial complex. It is also one of the classical methods used for topological surfaces, which are classified by their fundamental groups. The universal covering space of a finite connected simplicial complex $X$ can also be described directly as a simplicial complex using edge-paths. Its vertices are pairs $(w, \gamma)$ where $w$ is a vertex of $X$ and $\gamma$ is an edge-equivalence class of paths from $v$ to $w$. The $k$-simplices containing $(w, \gamma)$ correspond naturally to the $k$-simplices containing $w$. Each new vertex $u$ of the $k$-simplex gives an edge $wu$ and hence, by concatenation, a new path $\gamma u$ from $v$ to $u$. The points $(w, \gamma)$ and $(u, \gamma u)$ are the vertices of the "transported" simplex in the universal covering space. The edge-path group acts naturally by concatenation, preserving the simplicial structure, and the quotient space is just $X$. It is well known that this method can also be used to compute the fundamental group of an arbitrary topological space. This was doubtless known to Eduard Čech and Jean Leray and explicitly appeared as a remark in a paper by André Weil; various other authors such as Lorenzo Calabi, Wu Wen-tsün, and Nodar Berikashvili have also published proofs. In the simplest case of a compact space $X$ with a finite open covering in which all non-empty finite intersections of open sets in the covering are contractible, the fundamental group can be identified with the edge-path group of the simplicial complex corresponding to the nerve of the covering.
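As an illustration of the generators-and-relations description above, here is a toy Python sketch (all names are ad hoc, not from any library): it picks a spanning tree greedily, assigns one generator to each non-tree edge (oriented from smaller to larger vertex), and records one relation $g_{ab} g_{bc} g_{ac}^{-1}$ per triangle $\{a < b < c\}$, with tree edges acting as the identity:

def edge_path_presentation(vertices, edges, triangles):
    # Build a spanning tree greedily with union-find.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    generators = {}
    for e in sorted(tuple(sorted(e)) for e in edges):
        a, b = find(e[0]), find(e[1])
        if a != b:
            parent[a] = b  # tree edge: contributes no generator
        else:
            generators[e] = f"g{len(generators)}"
    relations = []
    for tri in triangles:
        a, b, c = sorted(tri)
        gab = generators.get((a, b), "1")  # tree edges act as the identity
        gbc = generators.get((b, c), "1")
        gac = generators.get((a, c), "1")
        relations.append(f"{gab}*{gbc}*({gac})^-1")
    return list(generators.values()), relations

# Hollow triangle (homotopy equivalent to a circle): free on one generator.
print(edge_path_presentation([0, 1, 2], [(0, 1), (1, 2), (0, 2)], []))
# Filled triangle (a disk): the single relation kills the generator.
print(edge_path_presentation([0, 1, 2], [(0, 1), (1, 2), (0, 2)], [(0, 1, 2)]))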
Realizability

Every group can be realized as the fundamental group of a connected CW-complex of dimension 2 (or higher). As noted above, though, only free groups can occur as fundamental groups of 1-dimensional CW-complexes (that is, graphs). Every finitely presented group can be realized as the fundamental group of a compact, connected, smooth manifold of dimension 4 (or higher). But there are severe restrictions on which groups occur as fundamental groups of low-dimensional manifolds. For example, no free abelian group of rank 4 or higher can be realized as the fundamental group of a manifold of dimension 3 or less. It can be proved that every group can be realized as the fundamental group of a compact Hausdorff space if and only if there is no measurable cardinal.

Related concepts

Higher homotopy groups

Roughly speaking, the fundamental group detects the 1-dimensional hole structure of a space, but not higher-dimensional holes such as for the 2-sphere. Such "higher-dimensional holes" can be detected using the higher homotopy groups $\pi_n(X)$, which are defined to consist of homotopy classes of (basepoint-preserving) maps from $S^n$ to $X$. For example, the Hurewicz theorem implies that for all $n \geq 1$ the $n$-th homotopy group of the $n$-sphere is $\pi_n(S^n) \cong \mathbb{Z}$. As was mentioned in the above computation of $\pi_1$ of classical Lie groups, higher homotopy groups can be relevant even for computing fundamental groups.

Loop space

The set of based loops (as is, i.e. not taken up to homotopy) in a pointed space $X$, endowed with the compact open topology, is known as the loop space, denoted $\Omega X$. The fundamental group of $X$ is in bijection with the set of path components of its loop space: $\pi_1(X) \cong \pi_0(\Omega X)$.

Fundamental groupoid

The fundamental groupoid is a variant of the fundamental group that is useful in situations where the choice of a base point is undesirable. It is defined by first considering the category of paths in $X$, i.e., continuous functions $\gamma : [0, r] \to X$, where $r$ is an arbitrary non-negative real number. Since the length $r$ is variable in this approach, such paths can be concatenated as is (i.e., not up to homotopy) and therefore yield a category. Two such paths with the same endpoints and lengths $r$, resp. $r'$, are considered equivalent if there exist real numbers $u, v \geq 0$ such that $r + u = r' + v$ and the two paths, extended to $[0, r + u]$ by letting each rest at its endpoint for the remaining time, are homotopic relative to their end points. (Some authors use a different definition, reparametrizing all paths to length 1.) The category of paths up to this equivalence relation is denoted $\Pi(X)$. Each morphism in $\Pi(X)$ is an isomorphism, with inverse given by the same path traversed in the opposite direction. Such a category is called a groupoid. It reproduces the fundamental group since $\pi_1(X, x_0) = \mathrm{Hom}_{\Pi(X)}(x_0, x_0)$. More generally, one can consider the fundamental groupoid on a set $A$ of base points, chosen according to the geometry of the situation; for example, in the case of the circle, which can be represented as the union of two connected open sets whose intersection has two components, one can choose one base point in each component. The van Kampen theorem admits a version for fundamental groupoids which gives, for example, another way to compute the fundamental group(oid) of $S^1$.

Local systems

Generally speaking, representations may serve to exhibit features of a group by its actions on other mathematical objects, often vector spaces. Representations of the fundamental group have a very geometric significance: any local system (i.e., a sheaf $\mathcal{F}$ on $X$ with the property that locally in a sufficiently small neighborhood $U$ of any point on $X$, the restriction of $\mathcal{F}$ is a constant sheaf of the form $\mathbb{Q}^n$) gives rise to the so-called monodromy representation, a representation of the fundamental group on an $n$-dimensional $\mathbb{Q}$-vector space. Conversely, any such representation on a path-connected space $X$ arises in this manner. This equivalence of categories between representations of $\pi_1(X)$ and local systems is used, for example, in the study of differential equations, such as the Knizhnik–Zamolodchikov equations.
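A classical rank-one illustration, sketched over $\mathbb{C}$ rather than the $\mathbb{Q}$ above: on $X = \mathbb{C}^*$, the local solutions of the differential equation below form a local system of rank 1, and the monodromy representation records how a solution changes after analytic continuation once around the origin:

% Local solutions on \mathbb{C}^* and their analytic continuation:
z\,\frac{df}{dz} = \alpha f, \qquad f(z) = c\, z^{\alpha}
% Continuation along a loop \gamma generating \pi_1(\mathbb{C}^*) \cong \mathbb{Z}
% multiplies each solution by the monodromy factor
\rho(\gamma) \;=\; e^{2\pi i \alpha} \;\in\; \mathrm{GL}_1(\mathbb{C})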
Étale fundamental group

In algebraic geometry, the so-called étale fundamental group is used as a replacement for the fundamental group. Since the Zariski topology on an algebraic variety or scheme $X$ is much coarser than, say, the topology of open subsets in $\mathbb{R}^n$, it is no longer meaningful to consider continuous maps from an interval to $X$. Instead, the approach developed by Grothendieck consists in constructing the étale fundamental group by considering all finite étale covers of $X$. These serve as an algebro-geometric analogue of coverings with finite fibers. This yields a theory applicable in situations where no classical topological intuition whatsoever is available, for example for varieties defined over a finite field. Also, the étale fundamental group of a field is its (absolute) Galois group. On the other hand, for smooth varieties $X$ over the complex numbers, the étale fundamental group retains much of the information inherent in the classical fundamental group: the former is the profinite completion of the latter.

Fundamental group of algebraic groups

The fundamental group of a root system is defined in analogy to the computation for Lie groups. This makes it possible to define and use the fundamental group of a semisimple linear algebraic group $G$, which is a useful basic tool in the classification of linear algebraic groups.

Fundamental group of simplicial sets

The homotopy relation between 1-simplices of a simplicial set $X$ is an equivalence relation if $X$ is a Kan complex, but not necessarily so in general. Thus, $\pi_1$ of a Kan complex can be defined as the set of homotopy classes of 1-simplices. The fundamental group of an arbitrary simplicial set $X$ is defined to be the fundamental group of its topological realization $|X|$, i.e., the topological space obtained by gluing topological simplices as prescribed by the simplicial set structure of $X$.
Mathematics
Algebra
null
11034
https://en.wikipedia.org/wiki/Fluid%20dynamics
Fluid dynamics
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Fluid dynamics offers a systematic structure, underlying these practical disciplines, that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.

Equations

The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations, a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: $p = \frac{\rho R T}{M}$, where $p$ is pressure, $\rho$ is density, and $T$ is the absolute temperature, while $R$ is the gas constant and $M$ is the molar mass for a particular gas. A constitutive relation may also be useful.
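A minimal numerical sketch of this equation of state, solved for density (the constants are standard reference values, rounded):

R = 8.314          # universal gas constant, J/(mol K)
M_AIR = 0.028964   # molar mass of dry air, kg/mol

def gas_density(p, T, molar_mass=M_AIR):
    # Perfect gas equation of state solved for density: rho = p M / (R T),
    # with p in pascals and T in kelvins; returns kg/m^3.
    return p * molar_mass / (R * T)

print(gas_density(101_325.0, 288.15))  # sea-level standard air: ~1.225 kg/m^3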
Conservation laws

Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.

Classifications

Compressible versus incompressible flow

All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density $\rho$ of a fluid parcel does not change as it moves in the flow field, that is, $\frac{\mathrm{D}\rho}{\mathrm{D}t} = 0$, where $\frac{\mathrm{D}}{\mathrm{D}t}$ is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density. For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.

Newtonian versus non-Newtonian fluids

All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions $T^{-1}$. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants.

Inviscid versus viscous versus Stokes flow

The dynamic of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects. The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number ($\mathrm{Re} \ll 1$) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
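The two dimensionless numbers used in these classifications are easy to evaluate; a small sketch with illustrative values for air:

def mach_number(u, a=343.0):
    # Flow speed over the speed of sound (343 m/s: air at about 20 C).
    return u / a

def reynolds_number(rho, u, L, mu):
    # Re = rho u L / mu: ratio of inertial to viscous effects at length scale L.
    return rho * u * L / mu

u = 50.0  # m/s
print(mach_number(u))  # ~0.146 < 0.3: compressibility may reasonably be neglected
print(reynolds_number(1.225, u, 1.0, 1.81e-5))  # ~3.4e6 >> 1: far from creeping flow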
In contrast, high Reynolds numbers ($\mathrm{Re} \gg 1$) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions.

Steady versus unsteady flow

A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.

Laminar versus turbulent flow

Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow – these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component $\bar{u}$ and a perturbation component $u'$, so that $u = \bar{u} + u'$. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations.
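A Reynolds decomposition of a sampled velocity record, as a minimal numpy sketch with synthetic data:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic velocity record: a 10 m/s mean flow plus random fluctuations.
u = 10.0 + 0.5 * rng.standard_normal(2000)

u_mean = u.mean()      # average component, u-bar
u_prime = u - u_mean   # perturbation (fluctuating) component, u'
print(u_mean)          # close to 10.0
print(u_prime.mean())  # zero (up to rounding), by construction of the decomposition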
Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human ($L > 3$ m), moving faster than 20 m/s, is well beyond the limit of DNS simulation ($\mathrm{Re} = 4$ million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such a modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES), a combination of LES and RANS turbulence modelling.

Other approximations

There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small. Lubrication theory and Hele–Shaw flow exploit the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected. Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid. The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small. Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths. In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. They are useful in the study of atmospheric dynamics.

Multidisciplinary types

Flows according to Mach regimes

While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of $M = 1$ (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately.
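A rough classifier for these regimes, as a sketch; the band boundaries below are conventional textbook values, not taken from this article, and vary between sources:

def mach_regime(M):
    # Conventional bands; transonic effects appear in a window around M = 1.
    if M < 0.8:
        return "subsonic"
    if M < 1.2:
        return "transonic"
    if M < 5.0:
        return "supersonic"
    return "hypersonic"

print([mach_regime(M) for M in (0.3, 0.95, 2.0, 7.0)])
# ['subsonic', 'transonic', 'supersonic', 'hypersonic']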
Reactive versus non-reactive flows

Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) needs to be satisfied, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics.

Magnetohydrodynamics

Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.

Relativistic fluid dynamics

Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime.

Fluctuating hydrodynamics

This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux.

Terminology

The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.

Characteristic numbers

Terminology in incompressible fluid dynamics

The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense – they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name – a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name – stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
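For incompressible flow, the relation between static, dynamic, and stagnation pressure follows directly from Bernoulli's equation; a minimal sketch with illustrative values:

def stagnation_pressure(p_static, rho, u):
    # Incompressible Bernoulli: p0 = p_static + (1/2) rho u^2,
    # i.e. static pressure plus dynamic pressure.
    return p_static + 0.5 * rho * u**2

# Air at sea level brought to rest from 50 m/s:
print(stagnation_pressure(101_325.0, 1.225, 50.0))  # 102856.25 Pa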
Terminology in compressible fluid dynamics

In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion. To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy".
Physical sciences
Fluid mechanics
null